From Kubernetes to Cloud Run: Chainguard's Journey
E12


Laura Santamaria: Welcome to Cloud
Native Compass, a podcast to help

you navigate the vast landscape
of the cloud native ecosystem.

David Flanagan: We're your hosts.

I'm David Flanagan, a technology
magpie that can't stop

playing with new shiny things.

Laura Santamaria: I'm Laura
Santamaria, a forever learner who

is constantly breaking production.

David Flanagan: So, apparently,
Cloud Run is better than BigQuery.

That is according to Principal
Engineer at Chainguard, Jason Hall.

Laura Santamaria: In this episode,
we get to chat with Jason about

Chainguard's migration from Kubernetes
and Knative to Cloud Run for a

pillar of their infrastructure.

David Flanagan: And in this episode,
we didn't even mention Rust.

Until we did, because Laura brought it up.

Laura Santamaria: Well,
you know, I had to do it.

Enjoy this episode and try
to spot the deepfake Jason.

Welcome to the episode!

Uh, we have a wonderful guest with us.

Jason, would you like
to introduce yourself?

Jason Hall: Yeah.

Hi, I'm, uh, I'm Jason Hall.

I work at Chainguard.

I'm a, I'm an engineer.

I do all kinds of stuff.

Uh, generally backend related.

also a lot of image building stuff.

That's kind of what we do there.

Uh, yeah.

Laura Santamaria: I mean, I've known
you for all of five minutes and

clearly you're an awesome person,
so we're very excited to have you.

Uh, you know, five minutes.

I can, I can learn about
somebody in five minutes.

Jason Hall: I've

David Flanagan: I've known him for
longer, so I know better, right?

Jason Hall: Yeah.

Yeah.

David Flanagan: Seven minutes.

No, I'm not joking.

Uh, no, I have, uh, we've followed
each other on socials for a long

time and I've always, always loved the
work that you're putting out, and what

prompted this conversation today is
an article that you published to the

Chainguard blog about migration, some
cloud adoption, containers, and Cloud

Run, and we're going to touch on all of
these different aspects, um, but I, I,

I love your, your simple introduction.

I'm an engineer that does stuff,
back end stuff, stuff, I mean,

Jason Hall: Generally, they don't
let me touch the front end.

Laura Santamaria: Oh, okay.

I know that feeling.

Usually if I touch prod,
everything breaks on purpose

David Flanagan: Hey,

Laura Santamaria: I tell

David Flanagan: I, I,

Laura Santamaria: know,

David Flanagan: I've done some
Tailwind last week, like, does

that make me full stack now?

Laura Santamaria: yes, yes, it does.

David Flanagan: All right, awesome.

So I think just to provide a little
bit of context before we dive into

like the Kubernetes, the containers
and the Cloud Run is that you have

a career behind you that required,
or I don't know if it required, you

had some time at Google along with
other Chainguard engineers, right?

So the context here is that you've been
working with containers and Kubernetes.

For a while, is that a safe assumption?

Jason Hall: Yeah.

Yeah, yeah.

Uh, I, while I was at Google, I worked
on the Google Cloud Build product.

I started that team with a couple
other folks, so that was sort of my

introduction to containers in the first
place, was how to build them as a service.

Um, a lot.

I made a lot of mistakes as everyone does.

Uh, and they hired me at Chainguard
anyway, so, um, a lot of, a lot of

folks from Google, uh, came over to
Chainguard and, and our focus there is

also building containers, you know, as
well as possible, as quickly as possible,

serving them as well as possible.

A bunch of folks were from the Google,
uh, container registry team also, so,

yeah, a bunch of folks there dealing
with both building containers and

running containers to serve them.

David Flanagan: Yeah, I mean, the reason
I'm bringing this up, right, and I

don't want to condense your article
down into one sentence, but I'm going to

do it anyway, is that Chainguard couldn't
be arsed with Kubernetes anymore.

I mean, that's what it
boiled down to, right?

Jason Hall: uh, I, I, I might, I might
temper that a little bit, but, but sure.

Right.

Like, uh, uh, we found that we were not
taking full advantage of all of the

features that Kubernetes provides, such
that it was worth the maintenance and,

and upkeep and, general stuff, you know,
like if it, if it gave us, if we were

using every feature of Kubernetes, it
would have been worth it to stay there.

We ended up mostly running a bunch of
stateless services, talking to, you know,

Cloud SQL and R2 for blob storage and
a bunch of like off cluster services.

We're basically just running a bunch of
stateless services inside of Kubernetes,

which can be nice, but we weren't, you
know, we weren't finding it to be useful

for all that Kubernetes brought to us.

So, um,

Laura Santamaria: I mean,

David Flanagan: Yeah.

I mean,

Laura Santamaria: that you had
a lot of spiking traffic and you

were running Knative really, right?

Jason Hall: yeah, yeah, yeah.

So the, the lineage of the, of
the engineering team is also a lot

of, you know, Matt and some other
folks, uh, at Chainguard started

Knative with many other
folks, not just, not just them.

Um, and so they were deeply, you
know, involved in it and knew how it

worked and knew how it worked well and
where it where it didn't work well.

and Knative wasn't really the problem.

I think that, you know, generally it was,
it was that, um, I think that Knative

is sort of like the nice deployment API.

It's

Laura Santamaria: right.

Jason Hall: just a good deployment API.

I mean, not just, but, So, Knative
was never the problem, uh, and Knative

actually made it easier for us to
move to Cloud Run because they had

the sort of same API and container
contract and, and, uh, all of that.

So, um, yeah, Knative was never
really the problem, and Kubernetes

wasn't really the problem, and GKE
wasn't really the problem, it was

that we were not using all the stuff
that, uh, Kubernetes would let us do,

Laura Santamaria: Mm hmm.

Jason Hall: we were just running
a bunch of stateless services.

Laura Santamaria: Yeah.

Jason Hall: and there's a, there's an
easier way to do that, it turns out.

Laura Santamaria: Right, and I also
saw in, in the article, uh, as David

eventually hopefully will return and
rejoin us, um, I also saw in the article

though that you all were basically,
instead of having spot instances,

you were having to maintain servers
running and maintain nodes running and

things like that because you had a lot
of spiking traffic, but you couldn't

Jason Hall: Yeah.

Laura Santamaria: so you kind of were
stuck with it, with all this, all

this infrastructure that you weren't
really using unless you had a major

spike in something, and that really
doesn't make much sense if you're

doing a lot of stateless systems.

So,

Jason Hall: So our, our
traffic is, um, is very spiky.

And, and like I mentioned in
the, in the article, um, a lot of

that traffic just comes from us.

Like we, we tried our, our main thing
is we're just trying to build images

as quickly and as fast as possible.

Right.

but in doing that, we also have
to push them to the registry and

then pull them again to scan them
and pull them again to test them.

And so when a new image is built, there's
like a little mini flurry of activity

around it the, on the backend, on the
infrastructure, um, multiply that by

an ever increasing number of images.

And basically we'd had like, you know,
over time, these spikes got bigger,

um, in order to absorb those spikes,
we, you know, did the normal

thing you do, which is to keep a warm
pool of nodes and instances ready.

Laura Santamaria: right.

Jason Hall: them, um, which,
you know, you can turn the dial

between like cost and reliability.

We, we turned it, we turned it
towards cost and reliability, sorry,

raising cost and raising reliability.

but we knew that if, uh, if sort
of that trend was going to keep

going, we would need to figure out a
cheaper way to do that sort of thing.

Laura Santamaria: Yeah.

Jason Hall: And, uh, we had, we
had a lot of folks that already,

you know, knew Cloud Run, worked on
Cloud Run, worked near Cloud Run.

And so we, we, from the start,
had sort of thought that was a

good idea, um, to migrate toward.

David Flanagan: I don't know if you
noticed, but I had to rejoin, um, my

Linux crashed as it always fucking does.

Um, so if my question now seems
to take us on a wild tangent,

it's not intentional, it's just

Laura Santamaria: Do it!

David Flanagan: 40,

Laura Santamaria: Do it!

David Flanagan: No,

Jason Hall: That's right.

David Flanagan: I was going to put my
serious face on for a moment, right?

Because.

I was very dismissive of Kubernetes at
the start, saying that you couldn't be

arsed, but, you know, something,
I mean, something I feel very guilty

of as I go to conferences and meet
people and say, you know, Kubernetes

gives you all this stuff for free, but
there is a very real tax to running

and operating a Kubernetes cluster.

And I feel like that
means, if you're not using the whole
thing, like you were saying, you know,

sometimes that tax isn't worth the
burden, and I think that's an important

lesson for people to walk away with,
is like, you know, know what you're

getting into, even the people that have
all of this experience in containers and

Kubernetes, sometimes just don't want
to pay that tax for what they need, I

Jason Hall: Yeah,

David Flanagan: I don't know if it was
mentioned, but Cloud Run is, you

know, it's a fantastic product.

I run a bunch of things on it
myself, and then, you know, we can

dive into that in more detail, but

Laura Santamaria: Makes sense.

Jason Hall: A lot of those features were hard,
hard for us to, like, know how to use correctly,

I think, or at least me.

So, um.

We didn't really use a lot of them.

We wanted a simpler, um, just run
this stateless service, scale it

up as you see fit type of thing.

And that was, that was exactly
what Cloud Run was for us.

David Flanagan: Yeah,
so I am curious, right?

Obviously, we're, while we're talking
about the migration from your blog,

and we'll put the link to this in the
show notes for anyone who's listening

in the description for anyone watching
on YouTube, but this is for your

Chainguard public registry, right?

This is the serving infrastructure
that delivers images to people.

Do you have, you know, is there
infrastructure that is still running

Kubernetes, or is it, you know, just
like your main operational project?

Jason Hall: Yeah.

I'd say there, there are, roughly
speaking, as of right now, three

infrastructure-y type things we run.

Um, one is, uh, serving, serving
images and, and everything

that it takes to serve images.

Um, auth, datastore, um, a lot of, like,
eventing and pub sub, uh, internal stuff,

uh, to, to do stuff when stuff happens.

Um, that's, that's the, um,
the first pillar, like,

sort of, I'll call it serving.

And that's what this, this
blog post was about and our

migration to Cloud Run was about.

Um, we also have a bundle of
infrastructure for building

packages at Chainguard.

We build every package
from source, ourselves.

Um, and so when a new, you know, Python
release drops, we build that Python

release and, and build an a PK that ends
up in an image, um, that infrastructure

does still run on Kubernetes, and I think
we get a lot of benefit from Kubernetes.

We use all those knobs, we use all
the, you know, we use all of the

features that, that, um, are in there.

So.

That second one, package
builds, is still in Kubernetes.

The third one is image builds.

So when we build an image from
these packages, um, we take, you

know, the Python image contains both
the Python package and, you know,

CA certs and libcrypto like that.

We take these distinct packages and
put them into an image and push them.

That infrastructure is mostly, at
some level, just HTTP and tar, like we

just fetch APKs, stitch them together
into images and then push them, and

then test them and then scan them
and do a bunch of stuff downstream.

So the three of them sort of, and
then when it pushes it, it pushes

it to the serving infrastructure.

So all three are sort of
living together next to each

other, dependent on each other.

But, uh, yeah, the package build
infrastructure still uses Kubernetes,

and is very, is very reliant on it.

It takes, you know, I think full
advantage of all of the stuff that

Kubernetes gives us, but the serving
infrastructure wasn't really.

And so, uh, we moved
that one to Cloud Run.

Laura Santamaria: Thanks.

David Flanagan: You did something with
Cloud Run, which I believe it now does

that out of the box, but I'm assuming
your efforts predate that, which is the

ability to do multi region deployment.

So maybe you could go into that and
tell us, you know, because you're

using R2 as like the blob storage that
powers the serving infrastructure.

That is,
um, powered by Cloudflare, distributed

around the world, lower latency
to first byte for all your customers.

So, having multi region
deployments makes a lot of sense.

So, how about you kind of dive into how
you did that architecture and deployed it?

Jason Hall: Yeah, so, um, the, the multi
region Cloud Run service module that we

have, um, is basically a wrapper around
run this Cloud Run service in three

regions, right?

Or n regions.

Um, as far as I know, and, and
that does predate the Cloud Run

providing this feature by itself.

Um, as far as I know, and, uh, if someone
from the Cloud Run team is, is screaming

at this, telling me that I'm wrong,
I think, uh, that the, the Cloud Run

feature of a global service is basically
like syntactic sugar around running three,

you know, regional versions of this,
and, um, you can stitch them together

with GCLB or load balance across them.

So, our Terraform module for this, and
Cloud Run's feature, I think, are more

or less the same architecture of like, I
want to run this, I want to run this in

three places, and I want GCLB to figure
out which one is closest to the user.

Um, we, a lot of our services, uh, the
registry, for instance, um, when you, when

you ask to pull an image, um, we, route
that to the closest region to you, so if

you're on the west coast, it will, or the
west coast of the US, it will hit the west

coast replica, it will do some sort of
auth check and say like, oh, you're David,

I know you, you're allowed to pull this
image, and then it will, as quickly as

possible, redirect you to R2 to get that
blob, we want you off of our LAN as quickly
as possible, we want you to be, you
as possible, we want you to be, you
know, talking to R2, pulling that blob,

without us doing anything, so, um, as
much as is possible, we'd like to handle

that request as close to you as possible,
do as little work as possible, in as

little time as possible, and then say,
your blob's in R2, go get it from them.

Um, but as far as I know, the Cloud Run
Global Service feature is, you know,

infrastructure and niceness around
this same thing in three places.
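The flow Jason describes — a thin service that authenticates the caller and then redirects straight to a signed blob URL — can be sketched in a few lines of Go with just the standard library. This is a hypothetical toy, not Chainguard's code: the bucket host, the `X-User` header, and the fake `sig=` token all stand in for real token auth and real S3-style presigning against R2.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// signedBlobURL stands in for real R2 presigning (which would use the
// S3-compatible SigV4 API); here it just appends a fake signature token.
func signedBlobURL(digest string) string {
	return "https://blobs.example.com/" + digest + "?sig=hypothetical"
}

// authorized stands in for the registry's real auth check.
func authorized(user string) bool { return user != "" }

// blobHandler does the minimal work — auth, then redirect — so the client
// fetches the blob straight from blob storage and gets off our LAN fast.
func blobHandler(w http.ResponseWriter, r *http.Request) {
	user := r.Header.Get("X-User") // placeholder for real token auth
	if !authorized(user) {
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}
	digest := r.URL.Path[len("/v2/blobs/"):]
	http.Redirect(w, r, signedBlobURL(digest), http.StatusTemporaryRedirect)
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(blobHandler))
	defer srv.Close()

	req, _ := http.NewRequest("GET", srv.URL+"/v2/blobs/sha256:abc", nil)
	req.Header.Set("X-User", "david")
	// Don't follow the redirect; we want to see the Location header itself.
	client := &http.Client{CheckRedirect: func(*http.Request, []*http.Request) error {
		return http.ErrUseLastResponse
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.StatusCode, resp.Header.Get("Location"))
}
```

Because the handler never proxies blob bytes, the serving side also pays no egress for the blob itself, which is the other half of the motivation discussed below.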

David Flanagan: Yeah, I actually
deployed my first native, um, multi

region cloud run a couple of weeks ago.

And, uh, I was disappointed to see that
I got three different URLs back for each

deployment and then had to manually go and
configure my own load balancing for it.

So I think it's still very early.

I mean, it's still, it's
either alpha or beta, it's very

early for what they're doing.

And I don't think all of the
niceties are quite there yet, but I
think you're right, it's the same module

that you've done with your Terraform
module, only probably a little bit

better, because you have an open source
Terraform module that other people can

go steal and do this themselves as well.

Jason Hall: Yeah, and if, if the, you
know, if the Cloud Run, as the Cloud

Run product improves, there's nothing
in it about our Terraform modules that

requires them to be done this way.

If there is a, you know, global Cloud
Run service resource, we would just

use that, you know, and, and I think, for the
most part, depending on a lot of

details, like things would just work.

I think, you know, the next time you
Terraform apply, it would just work,

become a global service
or whatever, but, um,

Laura Santamaria: Right.

David Flanagan: I'm assuming the
way that this works is like you

get through the networking layer.

It's probably like an anycast IP,
or maybe it does DNS to route you

into the lowest-latency region, right?

I think the serving
Cloud Run function is just a piece

of Go code that signs an R2 URL
and then you're handing off, right?

That's what you said.

You want to get out of that as soon
as possible with the, with the caller.

Jason Hall: yeah, yeah, we do, we
do, you know, auth checks and, and

whatever we need to do to make
sure that you're allowed to get

this R2 URL, and then we generate
a signed URL for you and redirect.

Um, that's also because, not just for
latency, not just to get you to the,

to the R2 as fast as possible.

Um, but if we serve from
R2, then it's zero egress.

Like egress is free.

we don't want to like proxy that blob
through our service because then we

would pay, you know, egress for that.

So we just say, You're good.

We know you.

Go, go talk to R2 and get
your, your blob from them.

David Flanagan: Yeah.

And the zero egress fees
is so appealing with R2.

I feel like they're going
to rug pull that at any moment.

Laura Santamaria: hmm.

Ha, ha, ha, ha, ha, ha.

David Flanagan: They're sitting there
like, yeah, we don't care.

As long

Jason Hall: I,

David Flanagan: through
R2 or their CDN, they, they,

they, they're, they don't care.

Uh,

Jason Hall: yeah, I, I definitely, uh,
agree that it feels too good to be true.

Uh, and maybe it will turn out to be too
good to be true, but for now, uh, I'm,

I'm very happy that it exists and, uh,

Laura Santamaria: Yeah,

Jason Hall: to, happy to
sit on this rug for now.

Let's see how it goes.

Laura Santamaria: yeah, it kind of
makes you wonder what exactly is going

on underneath the covers, or underneath
the rug, I guess you could say, that's

making this a good business decision
somewhere down the line, because, you
know, is it just because someone really,
know, is it just because someone really,
really, really, really, really wants to

twist the knife on like AWS or somebody
just saying like, ha ha, see, this is

what we can do that you can't do.

Or is it like, honestly,
they're going to use it to push

some other product out there.

I don't know.

It's just kind of an interesting
question how they're doing

it and why they're doing it.

Cause yeah, it doesn't make sense.

David Flanagan: when your CloudFlare
right, and you've got probably every

enterprise company in the world
paying the big money for the top end,

that, I mean, the egress transfer on
this and the storage means nothing.

I, I'm assuming, right.

I don't know anything about
CloudFlare internally, but I'm

assuming they're just like, yeah.

I mean, Google and AWS are probably paying
them a ton of money for other stuff.

So like,

Laura Santamaria: Yeah.

So if there's a Cloudflare

Jason Hall: of, of,

Laura Santamaria: go ahead.

Jason Hall: Oh, sorry.

My, my understanding of how like network
egress billing works anyway, is that

like Google and Amazon don't pay that
much to be able to shift bits around, right?

They just feel like
charging you a bunch for it.

And CloudFlare also doesn't get charged
that much to move the bits around.

They just don't like charging you
for it, which, you know, it's not

like they're losing money on it.

I think, I hope.

Laura Santamaria: Right.

Yeah.

David Flanagan: Some of their competition
are following suit as well.

You know, bunny.net is a CDN with
zero egress fees or

very, very competitive egress fees.

Civo Cloud started
doing zero egress fees.

So maybe it will have a knock-on
effect, but I don't want us going down

Jason Hall: Sorry.

Laura Santamaria: Well,

David Flanagan: about something

Laura Santamaria: just,
I will just say that

David Flanagan: to push this down.

Laura Santamaria: person who works on
this who's listening and feels like coming

on to a podcast, uh, maybe reach out.

We're kind of curious.

Anyway, um, but I kind of want to switch
a little bit because I noticed that

you had been talking a lot about, um,
basically about least access as you

were building out these clusters, well,
they're not clusters, uh, services,

I guess, for lack of a better term.

So your service account
is very, very minimal.

And then.

Everything is working and I'm kind of
reading the article as we're talking,

I, you're doing explicit authorization
and doing all of the really, really good

things that are good security practices.

Was this, was this a problem to kind
of figure out, as you moved off

of Kubernetes systems that are a
little more designed to work together?

Figuring out the architecture to
make that, that service system work?

I'm curious, mostly as someone who
hasn't really done a whole lot of going

back and forth between a Kubernetes
cluster and serverless functions.

Usually I'm just running serverless stuff.

I'm not doing both at the same
time, so I'm curious how that works.

Jason Hall: yeah.

Um, the, the first thing you mentioned,
that, like, in our service, um, module,

the service account
has no permissions by default.

You have to explicitly get it, you know,
every, every permission, um, I totally

understand why Cloud Run, by default, like,
the regular Cloud Run service comes

with a default service account that
has, I think, fairly broad permissions.

Like, for people onboarding to your
service, you don't want to say, like, in
the five minutes I have your attention,
the five minutes I have your attention,
first figure out IAM and what exact,

you know, permissions this thing needs.

Laura Santamaria: Right.

Jason Hall: Where are you going?

Why are you leaving?

You know, um, so they, they by
default give you a, I think a very

powerful service account to run
with by default, totally makes

sense for onboarding and new users.

Um, but then that's terrible when you,
generally the route that goes is, you

onboard one service with a bunch of
permissions, you onboard five more, you

onboard 30 more, and then you realize,
oh my god, everything has permission

to do everything, and this is, you
know, somebody in security notices and

makes you, uh, fix that after the fact.

Laura Santamaria: Right.

Jason Hall: uh, ask me how
I know about this, uh, this,

this route of how things go.

So, um, we very early on decided, like,
if we're going to migrate to this thing,

we want to do it right from the start.

We want to like, you know, uh, there
is no way to get this super powerful

service account on this service
without explicitly saying like, yeah,

this should be able to do everything.

which none of them ever do.

Nothing ever needs all those things.

So,

Laura Santamaria: Right.

Jason Hall: we, in the, in the list
of, in the modules in that repo,

there is a, um, a couple of things.

One is, uh, authorize a service
to call another service.

So, given two services, you can
say, like, this service here is

allowed to talk to this service here.

One really clever thing, which, which I
think, uh, Matt did, was the only way to

get an address for the destination service

is to call the authorize-this-service module.

So if

Laura Santamaria: okay.

Jason Hall: A wants to call B, there is a way around,
you can, like, you know, the easiest way

to get the address for B is to call the
authorize-service module, and as a result

the output of that is the address for B.

So if you're trying to call B,
and you're like, how do I find B?

You find it by authorizing the call,

Laura Santamaria: Right.

Okay.

Jason Hall: like, you know, as a, as
a fun, as a fun, like side effect,

you have also explicitly authorized A
to call B, um, which is nice and, uh,

Laura Santamaria: Yeah.

Jason Hall: which we take advantage of a lot.
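The "clever thing" above can be modeled in a few lines of Go (all names here are hypothetical, since the real mechanism is a Terraform module): the only convenient way to learn service B's address is a call that also records the grant, so wiring A up to B and authorizing A to call B become the same act.

```go
package main

import "fmt"

// Registry models the deployed services; the address is a placeholder.
type Registry struct {
	addrs  map[string]string
	grants map[string]bool // "A->B" means A may call B
}

func NewRegistry() *Registry {
	return &Registry{
		addrs:  map[string]string{"billing": "https://billing.internal.example"},
		grants: map[string]bool{},
	}
}

// Authorize mirrors the module's trick: granting caller access to target is
// the easiest way to get target's address, so every caller that knows an
// address has, as a side effect, been explicitly authorized to use it.
func (r *Registry) Authorize(caller, target string) (string, bool) {
	addr, ok := r.addrs[target]
	if !ok {
		return "", false
	}
	r.grants[caller+"->"+target] = true
	return addr, true
}

// Allowed reports whether a grant was recorded.
func (r *Registry) Allowed(caller, target string) bool {
	return r.grants[caller+"->"+target]
}

func main() {
	reg := NewRegistry()
	addr, _ := reg.Authorize("frontend", "billing")
	fmt.Println(addr, reg.Allowed("frontend", "billing"))
}
```

The design choice is the same one Jason credits to Matt: make the address an *output* of the authorization step, and the secure wiring happens for free.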

the other thing is like, so, so,
um, that's like service to service,

uh, communication authorization.

If there's a service that
needs to talk to KMS,

or to get a cloud secret or to,
you know, make a request to a

cluster or anything like that.

Um, because when you create the
service, you also have to create the

service account that runs in there.

You have the handle to say like,
okay, this thing that's running

this service is going to also need
permission to read a secret permission

Laura Santamaria: Right.

Jason Hall: you know, sign with KMS,
um, as opposed to create a service and

then try to figure out what service
account it got, like the auto-created one,

and, um, and twiddle its permissions.

Laura Santamaria: Mm hmm.

Jason Hall: I think that's just
sort of a nice, a lot of things

in there were thoughtfully designed to make
the default thing you do secure.

Like, not just for secure defaults,
but so that also, like, the easiest

path to get your job done is also,
happens to be the secure way to do it.

Laura Santamaria: Right.

Jason Hall: Which I think was really nice.

And we've definitely, like, taken
advantage of that all over the place.
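A rough Go sketch of that secure-by-default idea (again with hypothetical names, standing in for the Terraform modules): the only constructor attaches a service account with zero permissions, and every permission a service gets must be spelled out, reviewably, at creation time.

```go
package main

import "fmt"

// ServiceAccount starts with no permissions; every grant is explicit.
type ServiceAccount struct {
	perms map[string]bool
}

func (sa *ServiceAccount) Grant(perm string)    { sa.perms[perm] = true }
func (sa *ServiceAccount) Can(perm string) bool { return sa.perms[perm] }

// Service can only be built through NewService, which always attaches a
// fresh, empty account — there is no "default powerful" service account.
type Service struct {
	Name string
	SA   *ServiceAccount
}

func NewService(name string, perms ...string) *Service {
	sa := &ServiceAccount{perms: map[string]bool{}}
	for _, p := range perms {
		sa.Grant(p) // permissions are named at creation time, in reviewable code
	}
	return &Service{Name: name, SA: sa}
}

func main() {
	svc := NewService("signer", "kms.sign", "secrets.read")
	fmt.Println(svc.SA.Can("kms.sign"), svc.SA.Can("storage.admin"))
}
```

Nothing stops you from listing every permission, but you'd have to write them all out, which is exactly the kind of thing that stands out in code review.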

Laura Santamaria: Yeah.

Yeah.

When you make the easy path,
the secure path, that's

always a good design decision.

So

Jason Hall: Well, and

Laura Santamaria: you're right.

Jason Hall: in code review, right?

Like you can, you can see that somebody's,
you know, jumping through eight hoops

to get the address to B instead of just

Laura Santamaria: Yeah.

Jason Hall: it.

And you can see, you can sort of tell
like, this is an awkward way to do this.

Why didn't you just call the easy way?

Why did you walk around
the building to get there?

You know?

Laura Santamaria: Yeah.

Yeah.

Makes sense.

Makes sense.

David Flanagan: let's dig into the, the
authorization stuff a little bit more.

Now, I don't understand, um.

I don't know the commercial side of
the Chainguard business here, right?

Can anyone with a login pull
any image as many times as they want?

Or does the authorization and
authentication go a bit deeper than that?

Where certain organizations
can pull certain images

a certain number of times?

Like, are you using SpiceDB or
anything based on the Zanzibar paper?

Or is it like a flat model of
authorization across the board?

Jason Hall: great question.

We, um, don't have rate limits for
anything, um, for, for our public

free tier developer tier images.

For paying customers, we
have, we have no rate limits.

All of that comes with the
caveat that every service has a

rate limit imposed by physics.

So, um, you can't possibly,
you know, you can't send us

10 trillion requests a second.

Something will blow up.

But, um, we try not to impose
artificial rate limits on anything.

And a lot of that comes from, like we
were talking before, using R2, it's not

like you pulling our image, you know,
a thousand times is going to cost us a

thousand times more than the first time.

by all means, pull away.

we, uh, we don't use SpiceDB, we have
our own, um, uh, our own IAM model that

we, uh, based heavily on Kubernetes's and
GCP's IAM model, um, it's a lot, I think,

simpler or at least simpler for now.

We don't have hundreds of roles
or hundreds of permissions.

We have, I think, maybe
two dozen, a dozen.

Um, but, you know, ask again in
three years and see if we're up to

the 350,000 IAM roles
that, uh, GCP or AWS has.

David Flanagan: Okay.

Cool.

Laura Santamaria: Interesting.

Right.

Yeah.

David Flanagan: but you talked about
Google's default service account for like

cloud run functions being permissive and
it's, you know, you don't want to scare

people away in the first five minutes.

I always thought
it was a bit more sinister.

Sinister is a terrible word.

It's like, they want you to
be able to go and use all the

other cloud products, right?

They want you to use Cloud SQL.

They want you to use BigQuery.

They want you to necessarily, yeah, sure.

Yeah.

See, use whatever you want.

The bill just keeps
going up and up and up.

Like, woo, go nuts.

Um, you and the Chainguard as
a whole, like, like, you know.

The big you.

You do hook into more of GCP's services.

I've seen BigQuery in the article
and you mentioned Cloud SQL as well.

So like, um, is that again an
operational concern where you

don't want to take on the burden
of running your own Postgres, etc.?

Or were you always just going
to use cloud products first?

Like, I'm curious about
that decision process.

Jason Hall: Yeah, I think, uh, uh, in
general, I think it makes sense to think

whether, like, whether your expertise
is database maintenance, whether the

company's, whether the company, you
know, differentiates, the business

differentiates on how well it can
manage a SQL database.

And if that is your, if that is
your competitive advantage, and

there are, there are definitely,
uh, businesses and companies that

that is their competitive advantage,
then by all means run it yourself.

Right?

Um, Chainguard's not differentiating
on how well it can run a database.

So we just want one that works and
is reliable and, you know, relatively

low touch to get the job done.

I'm sure it's not optimized to
the, to the ends of the earth.

I'm sure there's stuff we could do that
if we, if we, um, wanted to make it,

um, a little better, but, you know,
ultimately that's, that's not where we,

where we differentiate from anybody else.

On the other side, like, building the best
images possible is where we differentiate.

And so we have invested a
lot on that side, right?

And if, uh, if we had spent a bunch
of time building our own database,

building our own, uh, uh, KMS, building
our own secret store, building all

of this stuff, which, you know, as an
infra nerd, I would love to, right?

Like, you know, please let me go do this.

But instead they tell me I have to,
like, build images or something.

Um, so, uh, uh, you know, it's, it's,
it's about, like, really trying to think

about where your time is most useful.

Um, and managing our own Postgres was not.

So we, we had Cloud SQL right there.

We pulled it off the shelf.

We use it, you know, if, if at some point
our database becomes our, our bottleneck,

if that's, if that, and sometimes it is.

Uh, but if, if it becomes our biggest
fire, like we will invest in, you know,

Making that better and tuning it better.

And if that means we run it ourselves,
then that means we run it ourselves.

But I don't think that
would be our first choice.

Yeah.

David Flanagan: Not to put
you on the spot, right?

But you did mention at the start that some
of this migration was also cost driven.

And like, maybe it's just a
Scottish person and me, right?

But, you know, we're known globally
as quite tight-fisted people.

We're cheap.

Um, I keep looking at the price of vCPUs
for managed Postgres and I'm like, nah.

No, I'm gonna, I'm, I'm going to make
this at home and I often will run

my databases on metal or a virtual
machine, even though the operational

costs are astronomical because I
have to do my own backups and stuff.

But, I mean, Cloud

Laura Santamaria: for it.

David Flanagan: No, I

Laura Santamaria: like David,
you have the knowledge for it.

Like it's, it's a
business decision, right?

Like in the end, this
is a business decision.

You're evaluating the cost of buying
versus the cost of managing a team.

Like having dedicated people
who may be busy or not busy.

They may just kind of be sitting there.

I don't know.

Like this is how I always looked at it.

You can look at it the same way as,
um, let's say if you have a lawn, if

you have grass, maybe you do, maybe
you don't, depends on where you live.

Do you mow it yourself?

How much is your time worth versus
do you pay for lawn care service?

Everybody has a different risk
question there, but how much

is your hourly pay, David?

Think about it that way.

And like, how much is that hourly pay
and maintaining all of the tooling and

maintaining all of those things versus

So, I look out at the lawn that we
have in the backyard of the house that

we rent, and sure, we could get like
a little push lawnmower and one of us

could spend time mowing that lawn, or
we could pay a service that has one

of those massive ride on lawnmowers,
which I think are really cool, and I

keep wanting to get on when they're
not looking, but they don't notice me.

And they come and they do it in like
10 minutes, what would take me at least

an hour, and then I have to also get
rid of all the stuff that's there.

It's the same idea of, do I run my own
Postgres server, and my own Postgres

database, knowing that maybe I don't
have the best understanding of how

it works, so therefore who knows
how secure I really am, because I'm

doing it myself, or, do I buy it?

Until I can get to the point
where I can hire the people and

have a team that can maintain it.

That's really the business question.

Maybe I'm wrong, Jason, you can
correct me, but this is like

how I've always looked at this.

Jason Hall: No, I agree.

And you mentioned at the end,
like hiring folks to do that.

There's also a recruiting and
expertise sort of question, right?

Like, and this is related to our, uh,
to what our differentiated, you know,

our value proposition is, but like we
haven't really hired database tuning experts.

We haven't really hired, you
know, Postgres operators.

Um, we could do it.

I'm sure we could, we could
muddle through and, you know, if

David can do it, then anybody can.

But, um, then,

Laura Santamaria: Oh, brutal.

Jason Hall: but, uh, you know, we
would, we would rather spend that,

that limited resource of time and, and
energy on, you know, the product and on,

Laura Santamaria: And

Jason Hall: and improving the, you know,
serving infrastructure that, that is

differentiated from, from other folks.

Laura Santamaria: there's
a hidden cost in that

David Flanagan: back up a

Laura Santamaria: recruiting.

David Flanagan: Hold on.

Laura Santamaria: Well, hold on.

David Flanagan: I'm cutting you

Jason Hall: I didn't

Laura Santamaria: Okay, fine.

Yeah, David's like, I have
to get my revenge here.

Hold on a minute.

David Flanagan: No, no,

Jason Hall: I'm sorry.

David Flanagan: no.

No, but the, the, the, what the
CNCF wants us all to believe is that

we just run a Kubernetes cluster.

We stick the CloudNativePG operator on
it, or any operator for any software.

And this is all done for us.

No expense here.

Right?

Open source is wonderful.

Like

Laura Santamaria: Okay, okay.

There is such a thing as a, you know,
free as in beer or free as in puppies.

Like,

Jason Hall: Yeah.

David Flanagan: where
can I get a free puppy?

Laura Santamaria: hey, I know
plenty of places to send you one.

But, uh, you know, free puppy
doesn't mean forever free.

It's an illusion.

Mm hmm.

Jason Hall: There's also a, uh, when it
works well, it's free. You know, I can,

in the five minute tutorial timeline, spin
up CloudNativePG and have it running

and do a hello world and insert, you
know, a thousand rows and feel good.

Um, the, uh, day two is another thing.

And then day, you know, 390 is another
one that like, Oh no, it was really

easy to insert a thousand rows.

It's also easy to insert 20 million rows.

And now what?

And, and, uh, not that we
wouldn't have this problem on a

managed database either, right?

We, we still do, but at least there's
a lot of operating stuff that we
don't have to care about.

Laura Santamaria: See, the worst
thing is, I know David is explicitly

nerd sniping a little bit.

He's doing it on purpose and
the I'm falling into it just

as much as anybody else.

David Flanagan: Hey, I'm
just enjoying us talking.

There's no sniping,

Jason Hall: I'm just

David Flanagan: say,

Jason Hall: questions.

David Flanagan: You mentioned something,
like, obviously, you know, people listening,

they're making these decisions every day.

Do I go and pay for the managed service?

Do I get an operator?

Do we run it ourselves?

Right.

And Jason's already given us great advice.

Focus on what your
company needs to excel at.

Right.

I think that's universal.

Everyone should take that away.

But we are missing something here, right?

It's easy for me to say I'm going
to run my own Postgres when it gets

like 10 queries a week or 10 queries
an hour or whatever that number is.

So maybe, you know, you don't mention
this in the article and I don't

know if I want to put you on the
spot and say give us numbers, right?

But I'm assuming your scale isn't
in the hundreds of requests per day,

probably not even in the
thousands of requests.

I'm assuming we're going higher.

I don't know how much you're willing
to share there, but you know,

any context you want to provide?

Jason Hall: Chainguard gets
more traffic than my personal

blog, if that is the question
you are asking.

yeah.

Uh, uh yeah, I think, I think
the, the answer to everything

is, it depends, right?

Like, I'm gonna put on my, my
senior engineer hat and say, it

Laura Santamaria: I knew that was coming.

Jason Hall: depends. This is, uh, if
you are receiving more traffic than

you can handle, you figure out why.

You figure out what the solution is.

If you are not.

If you're not currently experiencing
that problem, go fix the problem you

are currently experiencing, right?

It's never set in stone.

It's never, you know, we're never,
even this, you know, even while I

was writing the article about our
migration to Cloud Run, we were

making new changes to that stuff.

We were like adding new stuff and
tweaking stuff and moving stuff around.

So it's never, you know,
Even now, it's not done.

It was just sort of a birthday,
like, one year since we started this.

Um, you may look at this, if you're
reading this in the year 2026,

listener, uh, you may go back and
look at this blog post and say,

like, oh, it's all different now.

Right?

Like, it's all, you know, they said
that they did everything this way,

and now they're using magical global
Cloud Run services, and GCLB is

deprecated, and, you know, whatever.

So, um, yeah, it's never done.

David Flanagan: All right, no,

Laura Santamaria: I know, I
know we're getting to the end.

I just, I want to just
highlight one thing.

This was a lot of technical
debt that you had to convince

leadership to let you go fix that.

That's a lot to pay down that you
had to kind of catch up on to be

able to say, Hey, we need time
to be able to go do this in like.

30 seconds.

How did you do it?

Cause that is like the, the Holy grail
of like every infrastructure team ever

is convincing your leadership that, Hey,
we need to completely change what we're

doing now that we've scaled differently.

So

Jason Hall: uh, it's complicated.

Uh,

Laura Santamaria: thought so.

Jason Hall: Um,

Laura Santamaria: Right.

Right.

Jason Hall: start was helpful, right?
Like it was his idea.

It wasn't like it was our idea and
then we had to convince him.

Um, he, He did a fairly thorough
cost analysis of like, this is what

it costs now to run both the prod
environment, the staging environment,

and everyone's developer environments.

I think this is too much and I think
the trend, you know, going up means it

will get worse the longer we put it off.

So, um, that definitely helped.

Um, he, uh, also just sort of started
doing it. Not, not like migrating us

for real, but like demonstrating: hey,
this is how easy it is to run the service
in Cloud Run instead of, uh, on Knative.

Here's how easy it is to get GCLB to work.

It was like a weekend of, of hacking
for him, or maybe a week, to

get a demo environment that was entirely
in Cloud Run. Not production ready, not

ready to start migrating to, but like,
hey, it's not as hard as you think.

Laura Santamaria: Right.

Jason Hall: It doesn't have to be a nine
month, you know, gigantic engineering slog.

If we rip the bandaid relatively quickly,
we can, you know, get this done and

make the line go down instead of up.

Um, all of those are very helpful
ways to convince folks that these

are useful migrations and not just
engineers being nerd sniped and loving.

for migrating sake, not
that I've ever done that.

Um, but, uh, yeah, sort of like
forecasting the cost, understanding

the cost now and forecasting it and
saying, if we don't do anything,

Laura Santamaria: Yep.

Jason Hall: this will be the cost in a year.

If we do it now, then we upfront that
cost and have much less over time.
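[Editor's note: the forecasting argument Jason describes amounts to a simple break-even calculation. A sketch with entirely hypothetical numbers; none of these figures come from Chainguard.]

```python
# Back-of-the-envelope break-even for a cost-driven migration.
# All numbers are hypothetical, for illustration only.

def months_to_break_even(current_monthly, new_monthly, migration_cost):
    """Months until cumulative savings cover the one-time migration cost."""
    monthly_savings = current_monthly - new_monthly
    if monthly_savings <= 0:
        return None  # the migration never pays for itself
    return migration_cost / monthly_savings

# e.g. $10k/mo on per-developer clusters vs $2k/mo on Cloud Run,
# with roughly one engineer-month (~$20k) to do the migration:
print(months_to_break_even(10_000, 2_000, 20_000))  # 2.5 months
```

The "line going up" matters too: if current costs are trending upward, the savings term grows over time and the break-even point arrives even sooner than this flat estimate suggests.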

Laura Santamaria: Right.

All right.

David Flanagan: I've got
one more question for us.

Now, again, you just
mentioned dev environments.

Now, a common challenge when I'm,
you know, talking to people or

working with companies is the
more they move to the cloud.

Typically, their development environments
get a little bit harder, right?

It's not just running a Go binary that
speaks to local Postgres, which is an

okay stand-in for Cloud SQL. But when you're
using KMS for secrets, you're using

PubSub for, you know, Webhook receivers
and all these other bits and pieces.

Maybe you can talk about the developer
experience of working on this on your

own machine and how that compares
to what you ship to production.

Jason Hall: Yeah.

Um, every, every engineer that, you
know, touches the infrastructure

has their own developer environment.

In this case, developer environment means a
GCP project, a constellation of Cloud Run

services, Cloud SQL, KMS, etc., etc.

Um, if you want to run any of
this locally, most of the time you can.

If it talks to KMS, you need
to like, you know, set up the

auth to be able to do that.

If it talks to Cloud SQL,
whatever, you need to do that.

so, but if it's just, you know, two
services talking to each other or a

service talking to R2 to create a signed
URL or something, you can run it locally.

I think generally it's gotten
easy enough to run in Cloud Run.

That we just do that.

Like, if it's as easy to get the real
actual environment up as it is to run,

you know, two or three services locally
talking to each other, then just put

it in the real one and run it there.

Um, okay, it's not as easy as running it
locally a lot of times. But, but generally,

like, if you're about to do something that
integrates with another service or Cloud

SQL or, um, KMS or R2, like, just run
it in Cloud Run and it's, it's fine.

Uh,

Laura Santamaria: There you go.

Jason Hall: Actually, a big benefit of
this in the first place was that, like,

everyone's developer environment was a
significant cost when we had, you know,

clusters, multiples of them, in
each developer environment.

Not just in terms of, like, unused
infrastructure, but, like, okay,

it's been three months since I
last had to make an infra change.

I have to get my cluster back,
like, into a working state, right?

I'm going to spend a whole day just,
like, kind of getting the cluster back

into a workable state, and then I can
finally start working on my service.

Um, with Cloud Run, there's a lot less
bit rot happening underneath you.

Not none, but a little, you know. Most of
that service has already just been sort

of running there, and it costs nothing
while you're not using it. I think my

developer environment gets less traffic
than my personal blog, if that's possible.

David Flanagan: Jason's blog is a scapegoat
for all of our conversations.

It's like Jason's blog in

Jason Hall: Yeah.

David Flanagan: and it's less

Jason Hall: Is it bigger
or smaller than my blog?

I

Laura Santamaria: Oh.

David Flanagan: All right.

Well, thank you so much
for sharing all that.

I mean,

Laura Santamaria: Oh,

David Flanagan: I, we didn't even talk
about BigQuery. I don't really want to

keep you too much longer, but you know,

Jason Hall: I don't mind.

I got nowhere to be.

David Flanagan: I don't have a
real job, so I mean, I can sit

here for as long as you want.

I'll keep at it.

Jason Hall: All

David Flanagan: And yeah, Laura
got made unemployed, so I guess

we're sat here for a while.

Laura Santamaria: well, just for
another, what, 36 hours, I guess,

David Flanagan: Oh,

Laura Santamaria: I get that long.

Jason Hall: right.

So we're here for 36 more hours.

Keep it

Laura Santamaria: All right,
let's do this live stream time.

Um, anyway.

David Flanagan: There's this
ongoing meme online, right?

Like, I don't know if you've
ever searched for

the Cloud Run and BigQuery thing.

I do this regularly as I'm
exploring, but people often argue

over what is the best GCP service
between Cloud Run and BigQuery.

They're like, they're both exceptional

Laura Santamaria: Wait,

David Flanagan: products

Laura Santamaria: people are arguing over,

David Flanagan: Not arguing, but
there's just a, there's a debate

because Cloud Run and BigQuery,
they give you so much for free.

Like

Laura Santamaria: okay, okay.

David Flanagan: amazing products

Laura Santamaria: So, so which one's,

David Flanagan: you have it.

Laura Santamaria: which one's
more valuable is what is what

I guess people are arguing.

Now this is interesting because I
don't, I don't get involved in these,

um, meme arguments, meme argument.

Is that like a thing?

I just kind of exist and find out about
these things on the fringes of the

internet when you tell me about them.

So,

David Flanagan: Well, yes it is.

I mean, it's, it's not common
because that would be tragic, right?

If people are just going onto to
Google and go to which one's better.

But you know, when it comes down to all of
Gcps products, I think that the two Shine

and Tar Twos are Claron and BigQuery.

They both cost pennies
at really decent scale. Um, and

BigQuery is one of these things that
I've never personally had a use case

to really dive into properly, because,
again, I'd be, you know, throwing a tennis

ball down a large collider or whatever.

I don't, that's a

Jason Hall: Yeah, yeah.

David Flanagan: but I
didn't have anything.

Right.

Um, so yeah, I was just curious
about the BigQuery stuff.

Like at what point do you decide
that, yeah, we're going to use

BigQuery because I've never been there
personally and I'd love to know more.

Jason Hall: Yeah, we, uh, for
the record, I would say Cloud

Run is the better service and I
will fight you if you disagree.

I'm just, I'm just kidding.

Um,

Laura Santamaria: No, you're not.

Jason Hall: also don't use,
I'm not, I'm really not,

David Flanagan: That's the

Jason Hall: uh,

David Flanagan: now.

Jason says Cloud Run is
better than BigQuery.

Jason Hall: Hey man, I don't need, I don't
need people finding me on the internet.

No,

David Flanagan: scale.

Laura Santamaria: Yeah.

There we go.

Jason Hall: I'd have to run Postgres now.

Um, uh, what was I saying?

Oh, uh,

Laura Santamaria: even know.

Okay.

Jason Hall: We probably
also don't use BigQuery for, uh,

something that actually needs BigQuery.

There's, there's also the meme of
like, your dataset fits in memory.

You know, like, if, if you are, you don't
need, you know, BigQuery and managed

services for nearly everything you use it
for, we probably don't, um, but again, the

nice thing is that it's a managed service
that we just sort of like plop data into

and can query later, um, that could be
Cloud SQL, I think we just like having

the, uh, uh, uh, not having to care about
the scale of these things and just toss

it into BigQuery and, and, and it works.

One thing we use BigQuery heavily
for, which I do like a lot, is

our eventing infrastructure.

We didn't get to talk much about our
eventing infrastructure, but basically

like, you know, when a push happens,
when an image push happens, an event is

fired inside of our system and it bounces
around and things can subscribe to it.

That's built on PubSub.

One of the subscribers to that
PubSub is a recorder that records

every event into BigQuery.

Everything that happens, every time
anything happens, um, it is emitted

into the, the event pipe and recorded.
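[Editor's note: the recorder pattern Jason describes can be sketched locally. This is a toy in-memory version; the real system uses Pub/Sub and BigQuery, and all names here are illustrative.]

```python
# A minimal local sketch of an event bus with an append-only recorder:
# the recorder is just one more subscriber, and every published event
# lands in its immutable log, queryable later.
import json
import time


class EventBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event_type, payload):
        event = {"type": event_type, "payload": payload, "ts": time.time()}
        for handler in self.subscribers:
            handler(event)


class Recorder:
    """Append-only sink: records every event it sees, never mutates."""

    def __init__(self):
        self._log = []

    def __call__(self, event):
        self._log.append(json.dumps(event, sort_keys=True))

    def query(self, event_type):
        rows = [json.loads(row) for row in self._log]
        return [r for r in rows if r["type"] == event_type]


bus = EventBus()
recorder = Recorder()
bus.subscribe(recorder)  # the recorder is just another subscriber

bus.publish("image.push", {"image": "example/nginx:latest"})
bus.publish("pr.opened", {"repo": "example/repo", "number": 42})
print(len(recorder.query("image.push")))  # 1
```

In the real setup the `Recorder` role is a service subscribed to a Pub/Sub topic that streams rows into a BigQuery dataset, which is what makes the log cheap to keep and easy to query after the fact.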

Um, and so that is something that's
really nice, and we don't even think about

it, like, most of the time. It's just like,
there is a log of every event somewhere.

Um, but when you want to go find
that, and when you want to say, like:

oh, a GitHub pull request was opened,
that was approved, that did this,

that pushed this image, that did this.

Um, we have this sort of log of everything
to, um, quickly go through and visualize.

We also use these things for, like,
uh, you know, generating dashboards

and graphs of, like, which
customers are pulling which images

and which free tier images are most
popular and, and things like that.

So, um,

Laura Santamaria: I bet your security
team really loves that though, because

one of the biggest things about
logging that I always find people

don't realize is you need a read only
archive that no one can actually change.

Every single logging system should
have a read only archive of everything.

That way, if somebody does access
the system, cause usually what

they'll try to do is they'll go in
and then they'll change the logs

so that they hide their tracks.

But if it's pre logged into BigQuery,
they can't go in and just change it.

Hopefully, maybe they could

Jason Hall: They,

Laura Santamaria: if they have
the right permissions, but

Jason Hall: can do anything,

Laura Santamaria: I know,
I know, but like, I like to

dream that's a possibility.

I don't know, but

Jason Hall: To your, to your point
though, the, the recorder service that,

that, you know, listens to every event
and writes it to BigQuery is the only

service, or the only account, that
has access to write to that dataset.

Laura Santamaria: there you go.

Yeah.

Jason Hall: We have alerts set up for if
any user besides this, uh, the, the set,

edits that dataset, or, you know, reads
certain GCS buckets or, you know, does

certain things that they
shouldn't be doing.

We call them the lasers.
Matt calls them the lasers.

You will trip the laser and, like, uh,
fire an alert and, and, you know,

notify our Slack that, like, someone
was editing that data set in BigQuery.

That's weird, right?

You can also turn off

Laura Santamaria: It's your first flag.

Jason Hall: A, you know, smart person
or, or, uh, motivated person could,

you know, disable the lasers, but
it's nice to have another sort

Laura Santamaria: Yeah.

Jason Hall: layer of defense there.

Laura Santamaria: Yeah.

Always a good thing.
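[Editor's note: the "lasers" Jason describes can be sketched as a simple audit-log scan that flags any write to the protected dataset by a principal other than the recorder's account. Everything here, including the account name and field names, is illustrative, not Chainguard's actual setup.]

```python
# A toy version of the "lasers": scan audit entries and flag any write
# to a protected dataset by a principal that isn't the recorder.

ALLOWED_WRITERS = {"recorder@example-project.iam.gserviceaccount.com"}


def trip_lasers(audit_entries, protected_dataset="events"):
    """Return the audit entries that should fire an alert."""
    alerts = []
    for entry in audit_entries:
        is_write = entry["action"] in {"insert", "update", "delete"}
        if (entry["dataset"] == protected_dataset
                and is_write
                and entry["principal"] not in ALLOWED_WRITERS):
            alerts.append(entry)
    return alerts


entries = [
    # the recorder writing is expected, no alert:
    {"principal": "recorder@example-project.iam.gserviceaccount.com",
     "dataset": "events", "action": "insert"},
    # a human editing the dataset trips the laser:
    {"principal": "alice@example.com", "dataset": "events", "action": "update"},
    # reads don't trip this particular laser:
    {"principal": "alice@example.com", "dataset": "events", "action": "select"},
]
print([e["principal"] for e in trip_lasers(entries)])  # ['alice@example.com']
```

As Jason notes, a motivated insider could disable the check itself, so this is a defense-in-depth layer, not a guarantee; in practice the alert would post to Slack rather than print.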

David Flanagan: but the, the
attack vector here is internal.

So I guess there's also,

Laura Santamaria: Yeah.

David Flanagan: it's where you're not
looking for malicious activity on the

outside, trying to deactivate your lasers.

Like it's, you know, a robbery
or anything like that.

Jason Hall: Yeah.

Well, and it, it tends to show up
as a Slack alert that went off

because, like, some coworker was, you know,
accessing this GCS bucket or something.

And, and they'll respond to that in
Slack and say, yeah, that was me.

I was debugging this thing.

This is what, you know,

Laura Santamaria: Hmm.

Jason Hall: that laser
is overly sensitive.

We should, you know, detune it so that
it's not, you know, going to bug Slack

every time I check my email or something.

So,

Laura Santamaria: For some of my
co workers, I wish I could know

when they check their email though.

That would be nice.

Jason Hall: they're just avoiding us,

Laura Santamaria: I know.

David Flanagan: All right.

Laura Santamaria: Anyway.

David Flanagan: the GCS bucket thing
ties into your usage of Terraform as

well, which we didn't really talk about
other than the module, but I'm assuming

all your state files live in GCS.

So if anyone were to go in there
and start manipulating that,

there would be a laser as well.

I like

Jason Hall: yeah, yeah,

David Flanagan: I would have
to start building lasers.

Jason Hall: we have, uh, there's examples
in the repo of our, of our lasers around,

uh, certain things. If a, if a human
accesses a service account key, that is a

laserable event. That is, is something
that a person should never be doing,

Laura Santamaria: See.

Jason Hall: if they have permission
to, we don't want, we don't want

you to be unrecorded doing that.

Laura Santamaria: See, this is, this is
something that you, you've told David

this now, and this is how we get Skynet.

Is David's just going to go.

Jason Hall: about time.

David Flanagan: All

Laura Santamaria: Anyway.

David Flanagan: think we should
finish on something controversial.

So, um,

Laura Santamaria: Oh, we haven't
been being controversial.

Okay.

Sure.

David, go ahead.

Jason Hall: Cloud Run is better than BigQuery.

David Flanagan: I mean, uh, we've
already, we've ticked off the meme, right?
Now we

Laura Santamaria: Okay.

David Flanagan: argue about open source
OSI definitions, and talk about

why Terraform over OpenTofu.

Laura Santamaria: Have we, have
we argued about Go and Python yet?

We haven't done that yet this episode.

Go and Rust.

No, Rust, Rust.

It was Rust.

It was Rust that we need to talk about.

Right, right.

We almost have all the memes going.

Okay,

David Flanagan: We almost
made it one episode without

me having to say Rust, Laura.

And then, and then you
just, you poke the bear.

Laura Santamaria: poke,
poke, poke, poke, poke.

Anyway, so what were you thinking, David?

David Flanagan: No, back on track.

We're almost done with this thing.

Right.

Why not OpenTofu?

Was that even in consideration or

Laura Santamaria: Oh,

David Flanagan: like,

Laura Santamaria: decisions.

Jason Hall: It is a consideration.

I would say mainly we are
still floating on inertia of

having used Terraform so far.

Um, I don't think any of us have
any reason not to use OpenTofu.

A couple of, or at least one person
at Chainguard, John, has contributed

to OpenTofu to make it faster, to
make it more performant, and I think

we would, uh, love to migrate to it.

It's just another migration to, to
consider and prioritize with the rest.

We have, we have nothing but
love for OpenTofu, um, uh, and

we'll, we'll migrate to it.

Whenever we can.

That wasn't that controversial.

See, you need to come up with more

Laura Santamaria: Well, see, but I,
I guess you could say, so you have

nothing but love for OpenTofu.

But there's a but in there.

Jason Hall: okay.

Laura Santamaria: Um,

David Flanagan: we'll deep fake his
voice and put in something spicy.

Jason Hall: Good.

Good.

Laura Santamaria: There's
AI, we added AI into this.

Okay.

What else do we need to add
in that's going on right now?

If you said crypto,
I'm just gonna hang up.

David Flanagan: no.

This has been, has been really cool.

And, you know, it's a
really interesting project.

The blog is

Laura Santamaria: yeah.

David Flanagan: should go read it.

The open source module examples I
think are universally adoptable and

stealable to anyone listening that
just wants to go and do some of this

stuff, which is really awesome as well.

all.

I love you.

I'll finish.

I mean, I don't know if Laura has anything
else, but I'll say I'll finish with this.

I keep saying one more question
and then giving you like

another six, but, you know,

Jason Hall: Uh,

David Flanagan: in 12 months or
three months, it'll all be different.

I'm curious over the next three,
six and nine months, what is your

next, what's your next mission?

What do you think will change?

What are you wanting to experiment with?

Like what's on your roadmap?

Jason Hall: focus, focus, focus.

I have to only do one thing
for each of these things.

Um, I mentioned the three sort of
pillars of infrastructure we have.

Um, the serving infrastructure
I think is really solid.

We just recently are wrapping
up sort of an investment in our

package build infrastructure.

I think the image build infrastructure
is the next thing, the next oldest

pillar. Maybe we'll just bounce
between all three and just, like,

keep improving them in, in shifts.

But, uh, yeah, the image build
infrastructure is something I

think we'd, we'd, uh, I would
love to start improving more.

I don't think that would be a
three or six month timeline thing.

I would love for it to be.

Uh, what was the rest of your question?

Laura Santamaria: Focus.

Focus

Jason Hall: Focus.

Right, right.

There's so many, there's
so many shiny things to do.

David Flanagan: thing, because this is,

Jason Hall: Yeah.

David Flanagan: why you're shipping
and I'm sitting here. I've got

12 plates that you can't see off
camera that I have to go spin.

That's why I had to
reconnect and everything.

That's like, I just, I can't stop
messing with stuff. So it really always

amazes me when people are able to
say, we're going to do this one thing.
And I'm just like,

Jason Hall: I, I think, I think my
coworkers listening to this and hearing

you say that I'm very good at focus
will be laughing, uh, very, very much.

Laura Santamaria: Well then they
need to go work with David for a

little while and then see it all

Jason Hall: I hope

Laura Santamaria: in one place.

Jason Hall: yeah,

Laura Santamaria: have

David Flanagan: mean,

Jason Hall: I hope that I've
given everyone the impression.

Laura Santamaria: what?

Jason Hall: I haven't, I haven't.

David Flanagan: Have you bought a domain today?

I mean, come on.

Laura Santamaria: Yeah,

Jason Hall: it's early here.

There's plenty of time.

Laura Santamaria: yep,
there you go, there you go.

Well, my, my one thing is just,
hey, congratulations on getting

through, like, some technical debt
fixes for infrastructure that,

like, that is the holy grail for
every operations person ever.

So, uh, congratulations on surviving that.

Jason Hall: There's, there's

Laura Santamaria: know, that's,

Jason Hall: from too.

Laura Santamaria: yeah, it's just, you
know, I mean, every little bit of debt

that we can pay down is always quite nice.

Uh, so.

Jason Hall: All right.

David Flanagan: Well, thank

Laura Santamaria: kube?

David Flanagan: you so much for your time.

Any final departing words for the
audience before we say goodbye?

All

Jason Hall: No,

Laura Santamaria: Well now I've
known you for an hour, and I still

think you're a cool person, so
thank you so much for coming on.

Yeah, there you go.

Jason Hall: next time.

Laura Santamaria: Okay.

David Flanagan: right.

Jason Hall: All right.

David Flanagan: We'll check back
at 12 months to see what's changed.

Laura Santamaria: Woohoo.

David Flanagan: I hope
you all have a good day.

Laura Santamaria: Take care, y'all.

Jason Hall: Thanks, guys.

Laura Santamaria: Thanks for joining us.

David Flanagan: If you want to keep up
with us, consider subscribing to the

podcast on your favorite podcasting app,
or even go to cloudnativecompass.fm.

Laura Santamaria: And if you want
us to talk with someone specific or

cover a specific topic, reach out
to us on any social media platform

David Flanagan: And until next
time, when exploring the cloud

native landscape, on three,

Laura Santamaria: on three.

David Flanagan: 1, 2, 3.

Don't forget your compass.

Don't forget

Laura Santamaria: your compass.

Creators and Guests

David Flanagan (Host)
I teach people advanced Kubernetes & Cloud Native patterns and practices. I am the founder of the Rawkode Academy and KubeHuddle, and I co-organise Kubernetes London.

Laura Santamaria (Host)
🌻💙💛Developer Advocate 🥑 I ❤️ DevOps. Recovering Earth/Atmo Sci educator, cloud aficionado. Curator #AMinuteOnTheMic; cohost The Hallway Track, ex-PulumiTV.

Jason Hall (Guest)
World's Okayest Dad, pizza enthusiast, single-hyphenate, container nerd at Chainguard