Dapr: The Future of Microservices
E17


Welcome everyone.

Unfortunately, Laura
could not join us today.

Again, what does she even do?

Geez, sorry.

In today's episode, I'm joined by
Mark, the CEO of Diagrid and

co-creator of Dapr, a CNCF
project that helps make

microservice development almost possible.

Let's go dig in without me.

Enjoy the episode.

Hi Mark.

Welcome to Cloud Native Compass.

Could you please take a few moments
to introduce yourself to our audience?

Thank you, David.

It's fantastic to be here.

Yeah.

My name's Mark Fussell.

I'm the CEO of Diagrid.

Diagrid is a company where we build on
open source technologies, particularly

on Dapr, which we'll talk about a lot
today, and I'm excited to be here.

Awesome.

Thank you so much for joining us.

I'm really excited for this episode
because I have built distributed systems,

with microservices and workflows, and
pub sub event queues, and I love the

polarity, I don't know if that's even
the right word, but when we adopt

microservices, I often tell people it's
to make their lives simpler, and then

bam, they've now got this horrible, crazy
architecture with all of these things.

So maybe we can talk about the
adoption of microservices: are

we going in the right direction?

And how Dapr is hopefully gonna
make that easier for people.

Yeah, microservices is
not new now in any way.

If you look at a lot of the publications,
there's a great thing from

InfoQ where they published their
crossing-the-chasm technologies from

the architectural point of view.

And microservices is way over on
the right now; it's crossed the chasm

from a technology perspective, and people
have been talking about it for a

good 15 years now.

And you have to look back.

what does it actually mean?

You know, it was a driving factor from
the businesses, because the business

wanted to move faster; when you built this
large compiled piece of code that wasn't

distributed in some manner, it meant
that you couldn't iterate fast

and ship features fast.

And there's this requirement.

Actually, the two driving requirements
were: one, as we moved more services to

the cloud and ran things on the cloud and
kind of distributed them around, so they

ran on more machines; and two, coping with
the fact that the business wanted to ship

features faster, because I wanted to be
able to update my inventory application

faster than, sort of, my payment application.

That split things apart.

And so there's just a big debate,
which I think is a bit

strange myself, which is:
what's the size of a microservice?

You know, the answer
is it could be anything.

It could be a giant piece of
compiled code, or it could be,

millions of little small things.

And often it lies somewhere in between.

usually around, you know, domain-driven
design, where the domain and

bounded context is where you
sort of draw boundaries

around your business itself.

Where does my payment application
sit, and where does my kind

of other business process sit?

And you sort of draw
boundaries around those.

And then of course, as developers
do, you know, you sort of have

to make a choice of implementations.

So that's kind of really the
brief reason for microservices and,

you know, the history around
them of why they've developed.

Because, you know, we're often now
in this sort of Cloud Native world

where everyone runs things on top
of Kubernetes as the platform.

And what we just saw time and time
again is that people were reinventing

the same pieces of code themselves,
the same software patterns in order to

build these distributed architectures,
which run on multiple machines.

And so that's where,
you know, Dapr comes in.

Awesome.

yeah,

So with microservices, I often say to
people, and it's, I don't know if it's

like the best analogy I've ever had
in my life, or the worst, it's if we

think about a carrot on a stick, right?

And we're leading, or a horse,
we're leading the horse to water.

We're like, microservices are
gonna make your life better.

They're simpler.

You can rewrite them in a day.

And we've all heard these terms
before, but unfortunately we take all

these developers, we guide them to the
water, and actually it's an ocean, and

now they've got to boil it to some degree.

It is an ocean of complexity with
regards to the tools that they need.

Because when we simplify the
code itself, the application, the

microservice, that is really easy.

It could be 12 lines of code, it
could be a hundred lines of code.

it's not important.

But that complexity
doesn't really get removed.

It gets pushed to the platform
or to the infrastructure.

And I think, from my knowledge of Dapr,
you're solving that big ball

of complexity at the platform bit.

Exactly.

I mean, that's what it does: everyone
approaches this and you

jump in, you go, yeah, I can create,
you know, process A, process B, or

container A, container B, and I'm
just gonna talk between this front end

application, usually through some
typical gateway service that's deployed.

And then I have, you know, say we
built a fictitious application that had

a shopping cart service, a checkout
service, and maybe an inventory service.

And you deploy all these things,
but all of a sudden,

there's all these difficult problems
that emerge, and I think they usually

stem down to things like communication
between them, coordination across them,

and sort of state around them all.

And so the first thing that you
often run into is that you want to

be able to do communication between
some services because everything

is not running in isolation in
a giant compiled ball anymore.

And so you've really distributed
the problem to a networking problem

now, which means communication.

And so all of a sudden, you know, you've
gotta have service A talk to service

B, and the first thing you run
into is discoverability.

And so, you know, one of the
key tenets that Dapr brings to

a distributed systems platform
is this concept of identity,

where every piece of code, actually
every process or piece of running code,

has a name associated with it.

And it turns out that this is incredibly
important, because if you're in

infrastructure land, you don't really
care about named pieces of code.

You deploy these processes, but you
wanna say: I want to call my inventory

service, and I want to call the payment
method, or sort of the stock method,

I should say, on the inventory
service to see if it's there:

do I have stock inside of my inventory?

And so you have these named identities;
the first problem you run into is

discoverability, and then calling securely.

And then also you fall into the
concept of retries, because you've done

a call and it's failed and you have to
retry, 'cause there's network failures.

Yep.

On top of this, of course, you
want to have observability:

did my call succeed, and do
I have observability data?

So this whole thing turns out to be a
big challenge in the end, and

before you know it, people jump in and
they think, oh, I'll do a simple REST

call between these, and then I've gotta
make it secure, and then I go do retries.

And then there's this whole discovery
mechanism where you bring in something

like ZooKeeper or, what's it, uh,
the one from HashiCorp, the...

Consul.

Consul.

Yes.

in order to do name resolution around
those things, and you're sort of

building the whole platform up yourself
just to kind of talk between two

services and call a method on them.

and run this on some
distributed systems platform.

So that's exactly what Dapr does.

Dapr says, well, why are you reinventing
the wheel or reinventing the pattern?

We say, let's take advantage of
building a platform and a set of

consistent APIs that allow you to do
this communication, coordination, and sort

of state management, and also things
like secrets management, between things.

And so, you know, to take that into a
concrete example, Dapr has this concept

of a service invocation API, which
says: you create two services, service

A and service B; they both have names,
and those are registered into a

resolution service that can
discover where those services are.

And now as a developer, I can simply call
onto the invoke API that says: invoke

the stock method on the inventory
service, wherever it's running, and

call that method for me, and
that's all I have to do.

And Dapr does all the heavy lifting
for you: the discoverability, the

secure calls, the retries, the
observability around all that.
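As a rough illustration of one piece of that heavy lifting, here is a minimal plain-Python sketch of retrying a call on network failure. This is not the Dapr SDK; the function names and backoff values are made up for illustration.

```python
import time

def invoke_with_retries(call, max_attempts=3, base_delay=0.01):
    """Retry a service call with exponential backoff -- the kind of
    plumbing a runtime like Dapr handles for you on invocation."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # back off, then retry

# A flaky stand-in "service" that fails twice before succeeding.
attempts = {"n": 0}
def flaky_stock_service():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("network failure")
    return {"in_stock": True}

result = invoke_with_retries(flaky_stock_service)
print(result, attempts["n"])  # {'in_stock': True} 3
```

In Dapr this logic (plus discovery, mTLS, and tracing) lives in the sidecar, so application code stays a plain call.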

And basically what this is all about is:
developers have to write

business code in the end; that's
what you're trying to get to,

not build platforms the whole time.

And so, although it's fun
for developers to build those

platforms, you keep reinventing
this just for every application.

So, Dapr solves these hard
distributed systems problems and

we can go into each one of those
APIs and talk about it a bit more.

'cause each one of them
is fairly significant.

but that's what its underlying
concept is to accelerate this.

We've actually shown this. In fact,
we are just about to release a thing

called the State of Dapr report for
2025, where we interviewed 200-plus

developers who are using Dapr and
asked them: what was the number one

thing that you got from Dapr?

And, you know, the categorical thing
that came back was that it accelerated

the development of my microservices
application: anywhere between 30 and

50% productivity gains across them all.

You can't ask for a better
response than that, right?

That's exactly what you want to hear.

I'm assuming not to put words in your
mouth, but yeah, that's really cool.

So there's one thing I want to touch on
a little bit there: you talked about

where Dapr fits into the stack, with
handling retries to a certain degree. It

makes communication, or a request to other
services, a function call rather than a

generic HTTP call where you do the
plumbing and the retries and all that.

And people always say Kubernetes
gives you service discovery for free.

And it's like, you know, well, yeah,
kinda good luck with that, right?

There's a whole market here of
service meshes that said, no, it

doesn't, because you've gotta hook
in, again, all of this more stuff.

And what I like about where Dapr sits in
the stack versus the service mesh is that

Dapr is service and application aware.

It actually understands my
code to a certain degree.

Where a service mesh is more of this
generic middle bit of pipe or

plumbing that does give me value, but
Dapr sits slightly higher up the stack.

I'm assuming that was intentional.

So maybe we can talk about why.

Why is this important for developers
as they want to increase their velocity

building in microservice architecture?

That's a question that we often get on
the project: is Dapr a service mesh?

And the answer is kind of yes and no.

The answer is yes, it does the
same sort of functionality, but

service meshes work purely at the
networking layer and only at the

networking layer.

Yeah.

There's no understanding or concept
of an application, as it were.

And this goes back to that application
identity thing I just referred to.

Service meshes are coordinating network
packets and changing things on the

network layer, and, you know, if you
want to enforce that consistently

across every single thing that happens
inside your company, then great,

you can implement a service
mesh to do that.

But there's this idea of application identity.

You know, that a piece of code, a process,
a container actually has an identity

that I can reason about and discover
and call is the big difference in the

end between Dapr and service meshes.

And that kind of gets to the point
where, in its comparison with service

meshes, there's really only one
API that Dapr aligns with them on.

That's service invocation.

The other very important API that
gets used all the time with Dapr is

publish and subscribe, or pub sub, or

Yep.

messaging, where you publish a message and
you can subscribe to it on a topic.
And you can then sort of broadcast
or send it to everyone listening.

And asynchronous messaging and the
concept of, message brokers in between

is sort of key to a distributed system.

In fact, that's probably the most
common API that gets used inside Dapr.
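To make the publish/subscribe pattern being described concrete, here is a minimal in-process sketch (illustrative only, not Dapr's actual API or wire behavior): publishers send to a named topic and every subscriber on that topic receives the message.

```python
from collections import defaultdict

class TinyBroker:
    """A minimal in-process sketch of publish/subscribe:
    each topic has a list of handlers; publish fans out to all of them."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

broker = TinyBroker()
received = []
# Two independent "services" listening on the same topic.
broker.subscribe("orders", lambda msg: received.append(("email-service", msg)))
broker.subscribe("orders", lambda msg: received.append(("inventory-service", msg)))
broker.publish("orders", {"order_id": 42})
print(received)
```

A real broker adds durability, delivery guarantees, and consumer groups; the point here is only the decoupling between publisher and subscribers.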

And so, you know, at that point as well,
you're also sending messages between

two services or two
applications that are running.
And that's the big difference: when
Dapr does its concepts of end-to-end

security, and it does observability
and it does resiliency across these,

you get it across all those APIs.

So in a case where you decide to
do a service invocation call between

two services and then you do a pub
sub message, you get end-to-end

observability going through all these
APIs and you see that whole call chain.

Whereas with something like a service
mesh, you just see the call that happens

on the network, and you don't know it went
over this message broker all of a sudden

and then came back, because it doesn't
know this at the application level.

So one of the big, differences
is that whole concept.

And I think, this often comes
down to people who think about the

infrastructure layer of things and
in that space, and then people are

looking at the application layer.

And what's very unique about the Dapr
Project is, we solely talked about what

are the needs of an application developer.

how is it they need service
invocation, asynchronous calls

with pub sub, and then I think one of
the most important ones is workflow,

where you do coordination

Yep.

between services.

Yeah.

Service A has gotta call service
B and call service C, and if it

fails, how do I recover and do that?

Retry mechanisms.

And often what's referred
to now as durable execution:

the fact that I can keep a long
running workflow running, like a piece

of business logic, and in the event of
failure recover to where I was, start

from where I left off before, and keep
going, and do this in a coordinated

fashion, just like, you know, any
sort of classic workflow process.

But the unique thing that Dapr has is it's
a code-based workflow, and so you write

your code as a developer would write,
in your favorite language, be it Python

or C# or Java, and you can do this
coordination, or saga-type patterns,

as well as combining with orchestration
patterns as well for pub sub.

And this is sort of the powerful
concept of Dapr: it has pub

sub messaging for communication and
workflow for coordination.
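As a rough sketch of the code-based coordination and saga-style compensation being described (illustrative plain Python, not the Dapr workflow SDK; the step names are invented): each step pairs an action with a compensating action, and a failure unwinds the completed steps in reverse.

```python
def run_saga(steps):
    """Run each (action, compensate) pair in order; if an action fails,
    run the compensations for the completed steps in reverse order."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        return "rolled back"
    return "committed"

log = []

def reserve_stock():
    log.append("reserve stock")

def release_stock():
    log.append("release stock")

def charge_card():
    raise RuntimeError("payment declined")  # simulated failure

def refund_card():
    log.append("refund card")

result = run_saga([(reserve_stock, release_stock), (charge_card, refund_card)])
print(result, log)  # rolled back ['reserve stock', 'release stock']
```

A durable workflow engine adds persistence on top of this, so the saga survives process crashes rather than only in-process exceptions.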

So, you know, going back to your question
about service meshes and Dapr.

Yeah, service invocation:

that aligns with some of what
service meshes do, but

Dapr is much, much more than that.

and actually we see Dapr used side
by side with service meshes as well.

if people do want to use it with
a service mesh, that's good.

All right.

There's a lot we could
talk about from there.

I love that you go onto the kinda,
the, I dunno the term, the different

facets of Dapr, the different feature
sets, the APIs that are being provided.

Because you're right, durable
execution, as it's known

today, seems to be everywhere.

Right now everyone is talking
about how do we do this?

What are the tools that we're doing and
what are the language supports, et cetera.

And I think they solve
an important problem.

And this was possible before to a certain
degree, where you publish something

to Kafka, you have consumers, they do
the job, and sagas have been around

much longer than durable execution,
but it was completely event driven, and

the challenge there, at least from
back in the days when I was doing this

stuff, is the visibility across the
whole system was really difficult.

Yes.

What raised this event?

What is consuming it?

Then how does this chain work through,
to the point where you have, like,

whole tracer-based test systems that
verify end-to-end workloads work.

This new durable execution pattern of
just writing the workflows in code

and making sure that the responses are
memoized and continued and all that.

I think it's so cool.

I can remember the first time I've seen
it, I was like, wow, like this is amazing.

And I had no idea that Dapr even offered
this today until I was kinda looking

around on the website to see what was new.

And it kinda got me thinking: you offer
key-value storage, I think there's

the Pub/Sub stuff with Kafka, I don't
know how many supported alternative

backends, and there's now workflows.

Like you're essentially taking
all of the problems of a distributed

system or microservice system and
bringing out new APIs, climbing up

the stack, whatever you wanna call it.

And I think that is so important and
it brings me back, people always ask

when should I start with microservices?

When do we adopt microservices?

And the answer has always typically
been: oh, make it work as a big

monolith, try not to make it too much
of a ball of mud, and then refactor

to microservices with all these patterns.

And that's been the
status quo for decades.

Right.

And I feel like now with Dapr,
if people buy into it,

you could be a team of two and ship
hundreds of microservices and be really

successful because of all the plumbing and
hard work that you're removing from them.

Yeah, exactly.

That's it.

I think that you can go straight
into that design now and start to

use it; Dapr is very much designed
to be incrementally used.

You can just use one API.

And you can just do service
invocation between two services

and do a call like that.
and do a call like that.

And that's it.

And then you can add the pub sub
messaging API, and then you can

sort of adopt the long-running durable
execution workflows to do this.

And it's very incrementally adopted.

It's also very much designed
for when you do modernization;

you might have that big
bundled piece of code already.

In fact, we see this very frequently.

people have developed that sort of
monolith and then they just wanna

split out an individual piece of
code and break this thing out.

And just communicate with this first,
because they typically do pub sub

messaging with that and do sort of
asynchronous communication, or

they want to make it part of a workflow.

So it's very incrementally
adoptable, and that's a key element.

It's also about how you evolve;
we see this a lot now.

In fact, AWS talks a lot about this,
about the evolution of architecture

where you make decisions at the beginning
and then things evolve over time.

And you could have an adaptable
system around all this.

And I think this is where Dapr plays a key
role, because it's not only those APIs; I

think another thing that's really important
to draw out is the abstraction between

the APIs and the underlying infrastructure,
and Dapr does this through

what's called its component model.

So you have the pub sub API, and you
know what we see a lot of the time

when people go off and they start
building their first microservice:

they go, yeah, I'm gonna use Kafka.

Kafka's the best Pub
Sub thing in the world.

And they pull in the Kafka
SDK, and they embed it in their

application and they start using
Kafka, and then they realize that

It's not quite Pub Sub; it's actually
got a bunch of things that they've

got to do around it all, because it
really is, you know, a streaming platform.

And so they have to build sort
of a pub sub API around this.

And then they wanna add some other
additional features around all this.

And before you know it, they've embedded
the Kafka SDK into the application, and

they sort of do this across lots of them
all, and then further down the line,

someone goes, well, you know, I like Kafka,
but I really always wish I was using

MQTT as well, and all of a sudden you've
got this SDK embedded in your code and

you're trying to satisfy the
requirements of sort of two teams.

So, in the code evolution
world of things, Dapr actually

abstracts all of that, because there's
this concept of the component model, where

you may use the Pub Sub API, but
you can plug in Kafka or RabbitMQ

or MQTT or Azure Service Bus,

AWS SNS or Google Pub Sub
as your underlying message

broker and abstract that away.

So your code adapts and changes with time.
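As a concrete illustration, a Dapr pub/sub component is declared in YAML along these lines; the component name and broker address below are hypothetical, and swapping the broker means changing `spec.type` (for example from `pubsub.kafka` to `pubsub.rabbitmq`, with that broker's metadata) while the application keeps calling the same pub/sub API:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: orders-pubsub        # the name application code refers to
spec:
  type: pubsub.kafka         # swap the backend here, not in your code
  version: v1
  metadata:
  - name: brokers
    value: "kafka:9092"      # illustrative address
  - name: consumerGroup
    value: "orders-app"
```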

And if you decide that
Kafka wasn't the best one...

In fact, we've worked with one
client, a customer, that had

2000 microservices, and they had Kafka
built into every single one of them.

And they decided they didn't like
Kafka, and it literally took them

a year to re-engineer this,
because they wanted to change it.

The component model allows you
to swap out that underlying one, but

you still get all the benefits of that
particular piece of infrastructure.

So you can still take the
benefits of Kafka because of some

of its features around this.

Going back to your question: that
code evolution, the adoption of APIs,

is key to Dapr's success; it allows
you to build these microservice

architectures from the beginning.

It allows you to evolve their design,
improve and incrementally add other APIs.

For example, we see a lot of people
start with the pub sub messaging API.

And then they also start to, in time,
typically bring in sort of the secrets

API for managing their secrets.

So they communicate with their
underlying infrastructure.

And we're always looking at what's needed
in a distributed system architecture.

right now one of the things that's
emerged, of course, is a lot

of people like to talk to LLMs.

and before we jump into the AI thing
too quickly, we've introduced a

conversation, API that allows you to
abstract a consistent way for you to

talk to an underlying language model.

So you may have a Hugging Face model or
an Anthropic or an OpenAI model, and they

all have their own different clients
you'd want to pull in for things like this.

In this last release, the Dapr
1.15 release, it brings in a

conversation API that allows you to
have a consistent way to plug in the

component to talk to that underlying
service or piece of infrastructure,

yet keep the API the same, and
actually additionally layer

features on top of that.

So this API isn't just a wrapper
around the client; it actually does

PII obfuscation, so that if data comes
back, like social security numbers or

addresses, you know, it scrubs all those,
and it also does prompt caching as well,

so that if you retry a prompt and it's
a prompt you've already done, it can

serve the cached prompt for a period of time.

So it basically saves you money.
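A toy sketch of those two features, PII scrubbing and prompt caching, around a stand-in model client. This is illustrative Python, not the actual conversation API; the class, regex, and placeholder text are all invented for the example.

```python
import re

class ConversationWrapper:
    """Illustrative sketch of the two behaviors described:
    scrub SSN-like patterns from replies, and cache repeated prompts."""
    SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy PII pattern

    def __init__(self, llm):
        self.llm = llm
        self.cache = {}

    def converse(self, prompt):
        if prompt in self.cache:        # prompt caching: skip the model call
            return self.cache[prompt]
        reply = self.SSN.sub("<SSN>", self.llm(prompt))  # PII obfuscation
        self.cache[prompt] = reply
        return reply

calls = []
def fake_llm(prompt):
    """Stand-in model that leaks a fake SSN, to show the scrubbing."""
    calls.append(prompt)
    return "Your SSN 123-45-6789 is on file."

wrapper = ConversationWrapper(fake_llm)
first = wrapper.converse("lookup account")
second = wrapper.converse("lookup account")  # served from cache
print(first, len(calls))  # Your SSN <SSN> is on file. 1
```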

So that's the thing
we're always looking at.

How is it that there are APIs
in the world of microservices

and distributed development that
developers are starting to use a lot?

Workflow and durable execution is
like a key one, because practically

every application out there has
to do coordination between things.

Yeah.

You combine that, and the joy
of Dapr actually is that

it's a code-based workflow.

I can't stress how great that is.

There you are as a developer and you
can actually debug and run your business

logic code and set breakpoints in
it and see the calls, and combine that

workflow with the other Dapr APIs.

There's always been this big debate
of what is the difference between

orchestration versus coordination:
you know, saga coordination as opposed

to choreography, as opposed to
orchestration, and the event-driven

world as opposed to the
long-running workflow world.

And I think, basically, Dapr says you
don't have to choose; you can combine

those two and make them work together.

We see a lot of that usage of Dapr being
very strong in terms of how you design

and build your microservice architecture.

So first thing I'll say is the component
model that you've mentioned, I think

is a seriously understated feature.

Right off the top of my head, I
was like: being able to not care

about the implementation of that
pub sub simplifies a whole lot.

Like even just: oh, in my staging
environment, am I really gonna

run a big Kafka cluster? No.

Can I plug in something else?

Of course, that's nice.

And production, my constraints are
completely different and not having to

modify code is really important there.

And I'd love to ask a kinda
hands-on question now about

what that would look like.

First, let's talk about the kinda classic
dual write problem, and I think anyone

who's ever tried to write a distributed
system or microservice architecture

is fully aware of it.

And that is the challenge of I need
to write something to the local

database for that service and then
publish it to a queue, and it's

something that has to happen or
has to fail in an atomic fashion.

How do we do that with Dapr today?

Actually, that's a fantastic question.

As Dapr looks across these distributed
systems patterns and problems, one

of the patterns we actually built was
called the outbox pattern, which is

what you described: where I want
to transactionally write a piece of

state into a database and then send
a message, under the same transaction,

over a message broker.

Dapr has pluggable components
for 30-plus different state stores:

anything from AWS DynamoDB, Azure Cosmos
DB, to Postgres, to Redis, you name it.

It has more.

And many of those are transactional,
though not all of them are.

You can combine them with your favorite
message broker; for example, you can

combine AWS DynamoDB and AWS SNS and,
in a transaction, save a piece of state

into that state store and at the same
time publish a message onto, you know,

AWS SNS inside the same transaction.

And if the state fails to save, or
it doesn't save in some way, the

message doesn't get sent, and if it
does get saved, the message does

get sent in the same transaction.

So that's one of the
things that we built in.
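A minimal sketch of the outbox idea, using SQLite as a stand-in for the transactional state store: the state row and the outgoing message are written in one atomic transaction, so either both commit or neither does. (Illustrative only; the relay that delivers outbox rows to a real broker is omitted, and the table names are invented.)

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, topic TEXT, payload TEXT)")

def place_order(order_id, publish_ok=True):
    """Write the order AND its outgoing message in one transaction."""
    try:
        with db:  # commits on success, rolls back on exception
            db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "placed"))
            if not publish_ok:
                raise RuntimeError("simulated failure before commit")
            db.execute("INSERT INTO outbox (topic, payload) VALUES (?, ?)",
                       ("orders", f"order {order_id} placed"))
        return True
    except RuntimeError:
        return False  # neither the order nor the message was committed

place_order(1)
place_order(2, publish_ok=False)  # fails: both writes roll back together
orders = db.execute("SELECT id FROM orders").fetchall()
messages = db.execute("SELECT payload FROM outbox").fetchall()
print(orders, messages)  # [(1,)] [('order 1 placed',)]
```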

And it's a very complex
pattern to get right.

You really have to

Yeah.

make sure there's a transactional
guarantee around this.

It's a very common pattern that happens.

And so I'm glad you brought up this exact
example because we see that used a lot.

because it's very common for you to
say, I just want to, I want to save

the state of this order, and then
publish a message to the email service

that tells someone to send an email
and make sure that they don't get an

email saying, your order was successful
if their order wasn't successful.

So that's what you start to see:
we start to actually

combine these APIs together.

That's a combination of state and Pub Sub.

we actually sort of combine
other APIs like that as well.

I think one of my favorite ones
is where we actually combine

state and service invocation: Dapr
actually has this concept of actors,

and actors are long-running,
durable, stateful objects.

I think that is super cool.

If you go back and look at the
original paper from Carl Hewitt, who

published all about the actor model
in the seventies: he was a great

American computer scientist who came
up with and talked about the actor model.

He basically foresaw the whole
world of distributed computing

and things running into
millions and millions of

little pieces of code.

You know, what Dapr has as another
API is this actor API that brings

together state and service invocation.

So you can have lots and lots of
running instances of durable state

that you can call, and it
provides you with lots of guarantees,

like around concurrency, and prevents
multi-threaded calls inside all this.

And what we see quite frequently
is that people, for example, use

these to represent IoT devices.

So we work with a great customer
of ours that does lighting systems.

They do lighting systems for
clients, and every single one

of those lights is a little actor.

And if you imagine, the actor says,
here's the luminosity of the light.

Here's when it was turned on when it was
turned off these are all the methods,

And you have literally millions
of them, and each one has an identity,

and basically Dapr then takes all of
these, wherever you're running it on

your Kubernetes cluster, distributes
these millions of little light objects

all around the cluster, makes them
reliable, durable, safe, scales them

between machines; if a machine fails, it
recovers them elsewhere, and you as a

developer just go: there's this
durable object out there.

It's long running, it's secure, I get
all the telemetry from it; all that, I

think, is a great combination of
the state and invocation APIs.
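A toy sketch of the actor idea from the lighting example: named, stateful objects whose methods run one at a time. Illustrative only; Dapr's actual actor runtime also handles placement, activation on demand, and failover across machines, none of which is shown here.

```python
import threading

class LightActor:
    """A named, stateful object with serialized (single-threaded) access,
    standing in for one light in the example."""
    def __init__(self, actor_id):
        self.actor_id = actor_id
        self.on = False
        self.luminosity = 0
        self._lock = threading.Lock()  # turn-based access to the state

    def turn_on(self, luminosity):
        with self._lock:
            self.on = True
            self.luminosity = luminosity

    def status(self):
        with self._lock:
            return {"id": self.actor_id, "on": self.on,
                    "luminosity": self.luminosity}

# A registry standing in for the runtime that activates actors by identity.
actors = {}
def get_actor(actor_id):
    return actors.setdefault(actor_id, LightActor(actor_id))

get_actor("light-42").turn_on(80)
print(get_actor("light-42").status())
```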

So I know you started me on the
outbox pattern and I took you to

actors, but it's just another of
these distributed systems patterns.

And actually, what's emerging today
is that these concepts of

actors, and what they're about, is
really kind of the same thing as

this whole agentic framework thing.

So yeah, we'll get onto this topic,
because I think this is a super cool

area to take the conversation, but
these actors are like agents.

they have memory, and now you've
just gotta say, how do they

work with these language models?

What we've actually done inside the
Dapr framework is we've combined all

these actors and used them as part of
the durable execution workflow engine.

So the workflow engine that does its
coordination of task A, task B, task

C, and makes sure there's recovery, is
itself built on actors: each one of those

tasks is a little actor under the covers.

So if you think about a workflow,
it runs and it goes: I want to do

this task, or activity one, and then
activity two and activity three.

And you could do all sorts
of different patterns.

You could do chaining, you
can do parallel, you can

do wait for external input.

Um, but each one of those is backed by
durable state, and then the workflow

engine does all of the smarts,
basically coordinating those and making

sure that they run in your nice, friendly,
preferred language, whether that's

.NET or C#, and storing all the
state, so that if your application fails

and you wanna recover where your workflow
was running, it loads up all the activity

state, loads all the actors, gets back to
where you were running before with all of

the state variables, and off you go again.
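The recovery behavior described here can be sketched as replay over a journal of completed activity results: after a crash the workflow function re-runs from the top, but finished steps are served from the journal instead of executing again. (Illustrative plain Python, not the real engine; the step names and amounts are invented.)

```python
journal = {}     # persisted activity results; survives the "crash" here
executions = []  # records which activities actually ran

def activity(step, fn):
    """Run an activity once; on replay, return the journaled result."""
    if step in journal:
        return journal[step]
    result = fn()
    executions.append(step)
    journal[step] = result
    return result

def order_workflow(crash_after_step1=False):
    total = activity("price", lambda: 100)
    if crash_after_step1:
        raise RuntimeError("process crashed")
    tax = activity("tax", lambda: total * 0.2)
    return total + tax

try:
    order_workflow(crash_after_step1=True)   # first run dies mid-way
except RuntimeError:
    pass
result = order_workflow()  # recovery: replays "price", then continues
print(result, executions)  # 120.0 ['price', 'tax']
```

Each activity executed exactly once, even though the workflow function ran twice; that is the essence of durable execution.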

So I think the whole combination of
APIs, bringing them together, and

we're always looking for how to do
this. We've taken this down the path

of combining pub sub messaging with
state to create this outbox pattern.

We're also taking pub sub messaging
and combining it with a jobs API, and

the jobs API we introduced is a bit
like cron: it allows you to do cron

jobs, so you can actually do deferred messaging.

So I could do a cron job and say: when
this cron job or this jobs API triggers

off at this event, in five hours'
time, send this message to go with it.

So what's key about Dapr is it's
all about distributed systems patterns.

How is it you combine these
APIs and put them together:

for outbox, for long-running durable
actor state, for workflow; how you

then put together pub sub with the
jobs API for cron jobs, and just

make the developer's life beautiful.
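A toy sketch of the deferred-messaging idea behind a jobs API, with a simulated clock: schedule a message for a future time and deliver it once that time arrives. (Illustrative only; the class and topic names are invented, and a real scheduler would persist jobs and use wall-clock timers.)

```python
import heapq

class JobScheduler:
    """Minimal deferred-messaging sketch: jobs sit in a min-heap
    ordered by due time; tick() delivers everything that is due."""
    def __init__(self):
        self.jobs = []

    def schedule(self, due_time, topic, message):
        heapq.heappush(self.jobs, (due_time, topic, message))

    def tick(self, now):
        """Deliver every job whose due time has passed."""
        delivered = []
        while self.jobs and self.jobs[0][0] <= now:
            _, topic, message = heapq.heappop(self.jobs)
            delivered.append((topic, message))
        return delivered

sched = JobScheduler()
# "In five hours' time, send this message."
sched.schedule(due_time=5 * 3600, topic="orders",
               message="follow up on order 42")
early = sched.tick(now=3600)        # one hour in: nothing due yet
due = sched.tick(now=5 * 3600)      # five hours in: the job fires
print(early, due)
```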

Wow, that was amazing.

You don't know this, and this
isn't planned, but the actor

model is my favorite distributed pattern.

And so I even carried a
book with me everywhere.

Oh,

and I've,

yes.

I have given more copies of
this book away than I have

anything else in my entire life.

I always feel that everyone needs
to learn about the actor pattern.

Because the power, once you model your
applications in that way, is phenomenal.

And, yeah, again, you just went and chatted about that for, I don't even know how long; I was just sitting, listening and smiling.

I could spend a whole day talking about the actor patterns, because it's also one of my favorite things. I think it's the most underrepresented and misunderstood, because if you look at the world of, you know, serverless, as it became, and functions and that sort of thing.

Everyone got very excited by this, and for all the...

Yep.

...it's great, but those things are basically stateless. Yes, they store their state, but they themselves are a stateless function, and they run and you put 'em in the cloud.

Just think of actors as just like a function runtime. It is a function runtime, but in the case of the...

that one

Yeah.

It's stateful. So in memory, it's a combination of the state variables you have in it.

So going back to the light, it'd have state variables for its luminosity and the time it was switched on, and maybe the color of the light.

And then it has methods that act on it, like turn light on, turn light off, tell me its current temperature, and things like this, or luminosity.

And an actor really is a stateful, long-running function, and that turns out to be very useful.

and then

Yeah.

You're just able to do communication between 'em all.
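The light example above maps naturally onto a small class: state variables and the methods that act on them live together in one long-lived, identified object. This is a plain-Python sketch of the actor idea, not the Dapr actor SDK; the class and field names are invented.

```python
# Conceptual sketch of the light-switch actor described above: a stateful,
# long-lived object whose state variables (luminosity, color, on/off) live
# together with the methods that act on them.
class LightActor:
    def __init__(self, actor_id):
        self.actor_id = actor_id      # every actor has a unique identity
        self.is_on = False            # state variables live with the actor...
        self.luminosity = 0
        self.color = "white"

    def turn_on(self, luminosity=100):   # ...and methods act on that state
        self.is_on = True
        self.luminosity = luminosity

    def turn_off(self):
        self.is_on = False
        self.luminosity = 0

    def status(self):
        return {"id": self.actor_id, "on": self.is_on,
                "luminosity": self.luminosity, "color": self.color}

hall_light = LightActor("hallway-1")
hall_light.turn_on(luminosity=80)
```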

So, as we look towards our roadmap, the actor pattern inside Dapr gets heavily used, as I mentioned, in our workflow. And we're starting to introduce more features inside this, particularly improvements on sort of its messaging patterns.

So it can do the mailbox-like pattern in the original actor model, which allows you to send asynchronous messages between named actors. That's sort of one of the things that we've had on the roadmap; it's been an ask for many years. We're finally getting round to it.

But yes, actors as stateful, long-running objects are super cool.

Yeah, so I'll tell you some of my frustrations, 'cause I'm curious now if Dapr fixes this, right?

So I spent nearly 10 years in the Elixir ecosystem, or the Erlang ecosystem, depending on how masochistic I felt that day, because I wanted to write processes, actors, distributed things like this, and then ventured over to Orleans, the framework, because they brought the whole idea of virtual actors, which to me, I was like, this is really cool.

But the challenge with both of those approaches, and other approaches in Rust, et cetera, is you then get really locked in to only that language or ecosystem, because communicating with other actors involves you bringing in a new layer of communication such as HTTP or gRPC, and generating types across multiple languages gets very painful and complicated.

So with Dapr, if I want to go down the actor approach, I can do that in any of Dapr's supported languages, and it's just gonna work?

Yeah, the answer is, yes.

and there's a little
caveat inside all that.

Okay.

Dapr has actors written in Python, in C#, in Java, in JavaScript, and in Go. So it's got five major supported languages. We see the actor model used heavily across all of those, and people create actor types and start to use them more.

The one challenge that you do have to be aware of across these different languages, though, and this really comes down to the languages themselves, is that how they serialize their objects is slightly different between the languages.

Yeah.

And unfortunately, there's no agreement even on things like JSON serialization. And so the challenge you get is: even though people are serializing a JSON format, that might be inside .NET, that JSON format can't necessarily be reread back into your Python actor.

So it is totally possible; you've just gotta make sure that the serialization mechanism you plug in between your actors is consistent, so that it can be read between those two different actor types from different languages.

And if you're careful to choose that, and choose a JSON serialization format, or a JSON serializer that will, for example, do that, or you could use a binary format, it doesn't really matter; then you can achieve that out of the box.

The SDK owners in those particular repos have tended to use the preferred serialization format for that specific language. It doesn't happen just naturally, but it's not too hard a problem to solve if that's a key thing for you.

And so it is totally possible
as long as you're aware of

the serialization that you do.
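A small sketch of the kind of mismatch being described: two runtimes can both write valid JSON yet disagree on conventions such as key casing. Here a hypothetical PascalCase payload, as a .NET-style serializer might emit, is normalized into an agreed snake_case shape so another actor can reread it; the casing mismatch is an illustrative assumption, not how any specific Dapr SDK behaves.

```python
# Conceptual sketch: two SDKs can both emit valid JSON yet disagree on
# conventions (e.g., PascalCase vs snake_case keys). Agreeing on one
# shape, here snake_case, lets state written by one actor be reread
# by another.
import json
import re

def pascal_to_snake(name):
    # Insert an underscore before each interior capital, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def normalize(payload):
    """Re-key a decoded JSON object into the agreed snake_case convention."""
    return {pascal_to_snake(k): v for k, v in payload.items()}

# State as a hypothetical .NET-style serializer might have written it:
dotnet_json = '{"Luminosity": 80, "IsOn": true, "Color": "white"}'
state = normalize(json.loads(dotnet_json))
```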

So I have the choice to pick between using something string-based like JSON. But if I wanted to use FlatBuffers or Protobufs, that's possible, and I'm assuming that would give me some level of guarantees, 'cause I think those are almost ubiquitous across languages. Although people always say JSON is, and it isn't, but...

Exactly, JSON is and isn't.

You could have a serialization format put above that, that you write out and then read back in, and then achieve that sort of thing.

And this is one of the other goals that we really wanted Dapr to achieve. We see now that when we go into organizations, there is quite frequently a multitude of languages. There's one major language like .NET or Java, and there's a lot of Python coming in now.

And this was another key tenet that we wanted to address in the whole Dapr ecosystem: we saw lots of different microservices frameworks out there, but they were always very language specific. Spring Boot is a great example of this. It's a fabulous framework. It does sort of allow you to build the microservices concept, but it's all in the Java world, and you're bought into a Java world.

And so, you know, if you start bringing
the Python developers to work next

to them, there's a harder challenge
about, well, okay, how do I make my

Python code work with your Java code?

Dapr breaks down all of those language barriers between developers, because you can send pub/sub messages, you can work across workflows, you can use common SDKs across them all.

Particularly being able to call a service that's written, say, in Java, and it calls a Python service over pub/sub or service invocation.

You can do that, and then of course, as we talked about with actors, it's also possible if you think about that serialization.

So breaking down those barriers between different teams and different languages and different choices of tools was a core tenet. We've been very successful with that, particularly as you see more Python code emerge inside...

Yeah.

Application development today.

Nice.

I'm gonna ask one more question, and then we can move it back to the agent/LLM stuff and kind of get into maybe a practical workflow there, 'cause I do think it's just so on topic these days.

I don't know anyone who isn't trying
to integrate AI into whatever they're

working on, but I'm curious about
this now, polyglot support with Dapr,

because you're a company, right?

You're the founder of Diagrid, you're
supporting an open source project.

It must be tough to work out, okay, do we
go and support in new language and at what

level of demand do we decide to do that?

Versus are we building new value
for existing developers there?

I've seen this problem. I used to work at PMI, and people were always asking for languages. They wanted Rust, they wanted Java support, they wanted blah, whatever new language is out this week, and it is really difficult, because the level of commitment and maintenance required for these SDKs is not trivial. And we're seeing the same right now with Dagger, who are trying to do CI/CD as code.

And their ability to add languages is coming down to community support, and whether people are willing to take on that challenge, because they need to focus on their product, especially given how early they are at the moment.

So from your perspective, are you gonna add 12 more languages in the next 12 months? How is that? Is it as easy as it's all generated? Like, I'd love to know more there.

Yeah, we have five core SDKs. I think if you count, there are sort of seven in total: we have those five I talked about, plus Rust and C++, and PHP on top of all that.

But you're right, getting maintainers
to look after all the SDKs is a

little challenge around all of this.

We do

rely

Yeah.

on the community around those.

But we've also taken a philosophical approach with Dapr that its APIs you can use are HTTP or gRPC under the covers.

When you're working, for example, with JavaScript, the actual SDK on top of that is very thin, because JavaScript naturally is born into an HTTP world, and so you can call the HTTP APIs for Dapr very easily. You just have to make sure that you think through HTTP concepts, as opposed to maybe your language concepts, around these things.

So generally, you know, our SDKs are a language shim layer, as it were, on top of the HTTP or an underlying gRPC, and you can interact with them directly. I mean, we've shown multiple languages that we didn't have SDKs for just making an HTTP call, because they wanted to send a pub/sub message.
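Calling the sidecar without an SDK looks roughly like this: an HTTP POST against Dapr's documented v1.0 publish route. Port 3500 is the conventional sidecar default; the pub/sub component name and topic here are made up, and the final send is left commented out since it needs a running sidecar.

```python
# Sketch of calling the Dapr sidecar's HTTP publish endpoint directly,
# with no SDK at all. The path shape follows Dapr's documented v1.0
# publish route; the component name "pubsub" and topic "orders" are
# illustrative.
import json
import urllib.request

DAPR_PORT = 3500  # conventional sidecar default

def build_publish_request(pubsub_name, topic, payload):
    url = f"http://localhost:{DAPR_PORT}/v1.0/publish/{pubsub_name}/{topic}"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_publish_request("pubsub", "orders", {"order_id": 42})
# urllib.request.urlopen(req)  # would send it, if a sidecar is running
```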

But we do, you know; Dapr is driven by a large community. There's over 8,000 developers in the Discord community.

Amazing.

There are community maintainers and core maintainers for the .NET SDK, inside the Java SDK; in fact, in all the SDKs there are maintainers from the community, and so we rely a lot on that community effort and engagement. It's been built up over the years in order to make the SDKs successful.

Typically, we start with a new API that we put into the runtime; in this last release, the conversation API. And then we encourage the community to come along and add that in.

In fact, just this last week, a community member contributed into the Java SDK the conversation API, so that it can be used there directly, rather than you having to do an HTTP call from Java. It's a first-class language concept inside that. It was a contribution; it gets reviewed by the Java SDK maintainers.

So we're very community orientated. We strongly encourage the community to come along and add things into the SDKs. The runtime takes a little bit more to get used to; there's a lot more, let's just say. It takes time to learn core runtime features to be able to contribute to that. But we also get a lot of people who contribute to the components, the component API, with all these components.

In fact, to go along with the conversation API, also in this last week we've had contributions for, you know, other LLMs particularly, that have got added in as components that you can communicate with.

So, you know, going back to your question, community support is important. We recognize that our SDKs are critical to the community, and they do take time, maintaining all of these things.

We just keep pushing out there and encouraging the community to be part of the Dapr project.

I'm curious if, when you were putting together the idea of the conversation API and building out how it worked, did you have that definitive use case that you can tell a story about, where you're like, this is what we want to make work? And what would that be?

Yeah, so what we've seen is increasingly... I mean, I'm gonna take this onto one other exciting advancement, actually, that's coming out tomorrow from the CNCF: we're gonna be announcing what's called Dapr Agents. Right now, as we do this podcast, it's not known, but if this podcast goes out after the 12th of March, it'll be publicly known about.

It's a framework that's been donated by the community that allows you to build on top of the actor and the conversation and the workflow model, in order to create the first-class concept of an agent that allows you to have long-running durability.

And at the same time, combine that with the conversation API in order to talk to language models, so you know what's happening in the world.

All this excitement about agentic systems, we just see as another way of talking about distributed systems again. And so lots of people have created new terms for saying your agents and how they communicate and how they have memory and how they're durable and recoverable and they need workflows. Well, actually, it turns out that's all just a distributed systems problem with a new name around it all.

The thing that makes it different, of course, is the integration with language models, and that makes it very much more dynamic in nature.

So the way I look at it all is that you want to be able to write your very traditional, structured, and consistent workflows that go step A, step B, just like a state machine: I know the states to go through, and it does these things in a systematic way.

But also, you wanna be able to write these highly agentic ones, which allow you to take advantage of language models.

A language model, which is sort of non-deterministic in some ways, can come back with different answers and can help you achieve human-like tasks.

I mean, of course, the classic one is: I want to build an agentic system that helps me book an airline ticket, book a hotel, and a favorite event that I'm going to when I visit City X. And in that world, you'd have an agent that knows how to talk to airline ticketing systems, you'd have an agent that knows how to talk to hotel booking things, and an agent that would figure out how to go off into a particular city and find your events, and they would all be communicating with language models.

But in the end, it's still a distributed systems problem. You don't want, halfway through this, the whole thing to fail and give up, lose all your hotel bookings and forget everything. It has to be recoverable, and so it has to be durable in memory, and durable execution is all part of this.

What we've gone and done is we've built this agentic framework first class into Dapr, again built on all of these core APIs around actors and workflows and the conversation API, which is just a way of talking to any generic underlying language model, to bring into the agentic system.

So you have this concept of a workflow agent and a standalone agent. And by annotating these and saying, I want this to be a long-running, durable type of agent, and by the way, here's the tools it can use: this agent knows how to use the API to call onto the weather service or the airline booking system. You can then combine and build these agent systems.
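The tools-plus-agent idea can be sketched in a few lines of plain Python. This is not the Dapr Agents API; the `Agent` class, the stubbed `decide` function standing in for the language model, and the weather tool are all invented for illustration.

```python
# Conceptual sketch of a tool-using agent (NOT the Dapr Agents API):
# the agent holds a registry of tools, and a (stubbed) model decides
# which tool to call. All names here are illustrative assumptions.
class Agent:
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools            # tool name -> callable
        self.memory = []              # durable memory of past steps

    def run(self, task, decide):
        # `decide` stands in for the language model: given the task and
        # available tools, it returns (tool_name, argument).
        tool_name, arg = decide(task, list(self.tools))
        result = self.tools[tool_name](arg)
        self.memory.append((task, tool_name, result))
        return result

def get_weather(city):
    return f"sunny in {city}"        # stub for a real weather service

booking_agent = Agent("travel", {"weather": get_weather})
answer = booking_agent.run(
    "what's the weather for my trip?",
    decide=lambda task, tools: ("weather", "Paris"),
)
```

A real agent would loop, feeding each tool result back to the model until the task is done; the single step here just shows the dispatch shape.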

So I think what's exciting for us about this is that we see a lot of other frameworks out there starting to build these agentic distributed applications, but we're like, well, you know, Dapr was pretty much designed for this from the very beginning. Especially going back to the actor model, which is this durable state that keeps the memory of something and was recoverable.

You've now just gotta combine this with a language model, because the language model is coming back with some answers, and you can keep asking it and refining it, and that's why the conversation API with this language model, combined with basically actors and workflows, has been key for us to announce this Dapr Agents model. And one of my asks from this is: please go out and try out this Dapr Agents framework. It's only in Python right now.

We started there because Python was by far the most popular language. We are gonna bring it to Java and .NET and Go, and JavaScript, in time.

But we think it's a very exciting way of kind of bringing developers into the agentic AI world, which is a distributed systems problem with language models, in the end.

Wow, that's really awesome, and I can't believe it's happening tonight, tomorrow, whenever, as you're gonna release that. 'Cause I literally spent the last three weeks playing with every agentic LLM framework there is, trying to put together some real scenarios for the Rawkode Academy. Because when I have new videos that I put into an R2 bucket, you know, I want an agent that converts 'em into the HLS format that I use for distribution, and then I want it transcribed.

But I want an agent that can then go over the transcription from X API, and check it for common errors and terms that I have in the system. Because, let's face it, no LLM so far has been able to transcribe Rawkode, probably wouldn't get Dapr correct, but if we give them context, yeah.

They can look at text and say, we
actually think, given that this is a

Cloud Native podcast about a product
called Dapr, we probably wanna

make sure it's spelled correctly.

And even now to generating thumbnails, where I post it to my Slack and I can thumbs-up to approve it, and then it schedules the episode. Like, I believe I could do all this with agentic LLM systems.

Yes.

I hadn't really found the framework to make that happen yet. And it sounds like you've just solved my problem in a 50-minute podcast.

I'm pretty happy.

I hope that you'll find... I would love to hear your opinions. In fact, we could do a whole podcast in itself comparing the different agentic frameworks out there.

There's a few popular ones that have emerged, like LangChain and CrewAI and AutoGen, around these things, but I think the difference that we're expecting the Dapr one brings is that, we've...

You know, Dapr is used by tens of thousands of companies today.

We track 40,000 different companies through the open source project that are using Dapr today. It's heavily used in production scenarios. We've shown it can be scalable. And going back to the actor model, it's incredibly efficient.

Given that each one of these agents that runs will be an underlying actor, and the actor can start up in under 50 milliseconds, it's a super fast startup time.

you can pack tens of thousands of
these on a single core machine.

So it's very cost effective.

It's kind of production ready.

It's very well tested.

And it has all the needs, because a lot of these other agent systems haven't thought about the communication problem, about distributed messaging around these things. They haven't necessarily thought about, and had, a great workflow engine for doing all the coordination around them all.

And they certainly haven't even
thought about the component model,

which is abstracting, your code
from the underlying infrastructure.

They all tightly couple into some particular infrastructure around these things, because that wasn't a core tenet.

That's where Dapr Agents, we believe, kind of has an edge around all these things.

Of course, we've still
gotta prove it over time.

it's still new.

It's probably playing catch-up in the agent framework world, but hopefully it'll take off.

So please, I'd love you to try it out.

I would love to hear your feedback and
I would love you to do an entire session

of comparing all these frameworks.

'Cause I bet you the Dapr Agents one will come out pretty close to the top in my mind, in terms of ease of use, in terms of functionality, and particularly in terms of production readiness.

I definitely will, we'll
definitely make that happen.

But you did set yourself up for one more question, just with that amazing talk there. You said thousands of agents on a single node and 50 milliseconds starting time. So right off the top of my head, I was like: are they in a container? How are you running and orchestrating them things?

Those are actually done as individual pieces of code. So I mean, what happens is that Dapr runs on sort of Kubernetes as a platform. But you know, the way an actor actually works is it's just a stateful object. So just as you write a class object inside, say, an object-orientated language like .NET or Java or one of these things, it's just a piece of code that runs inside your system.

I mean, you can think of it in some ways as like a process, but actually it is at a lower level than a process. So if you imagine you've got processes, and inside those you've got object types; it's just an object type.

And so, you know, that's why it starts up fast. It's just a piece of code, but it's got an identity associated with it. So it's actor 123, or Mark's agent number one.

And the cleverness of Dapr is that it routes messages through to an individual piece of code with a named identity, of which you have tens of thousands, or even millions, of them running on your system, and it routes that message correctly.

It's at that level of granularity, and that's the thing that people fail to understand sometimes. You know, containers are large things, and even getting down to processes, great, a process, but this is sort of multiple actors inside a process, as it were.
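The identity-based routing just described can be sketched as a registry of named, in-process objects: the runtime activates an actor on first use and routes every later message for that identity to the same instance. The class and method names are illustrative, not the Dapr runtime's.

```python
# Sketch of identity-based routing: many named actors live inside one
# process, and the runtime routes each message to the instance matching
# the actor id, activating it on first use.
class CounterActor:
    def __init__(self, actor_id):
        self.actor_id = actor_id
        self.count = 0

    def receive(self, message):
        if message == "increment":
            self.count += 1
        return self.count

class ActorRuntime:
    def __init__(self, actor_cls):
        self.actor_cls = actor_cls
        self.instances = {}           # actor id -> live instance

    def send(self, actor_id, message):
        # Activate on demand; afterwards, messages for this identity
        # always reach the same stateful object.
        actor = self.instances.setdefault(actor_id, self.actor_cls(actor_id))
        return actor.receive(message)

runtime = ActorRuntime(CounterActor)
runtime.send("mark-agent-1", "increment")
runtime.send("mark-agent-1", "increment")
value = runtime.send("mark-agent-1", "get")   # this identity has counted to 2
other = runtime.send("actor-123", "get")      # a separate identity stays at 0
```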

does that answer your question?

Yeah, it definitely does.

I mean, in my head I thought, okay, an agent is a Dapr service that gets deployed to my cluster. So I need to take a look at the conversation API and actually see how this all pieces together.

But it definitely sounds very interesting, very exciting, and I've got so many things I want to try out already. This might be a late night in the office for me; we'll see how things go, but...

As we finish up here, I do want to just mention: how does Diagrid come into all of this?

Well, what do we do at Diagrid? At Diagrid, you know, we're very open source orientated, particularly around the Dapr ecosystem.

And what we focus on particularly here is that we want to make your solutions successful using Dapr, the open source project. So we have two services today. We have a service called Diagrid Conductor that allows you to manage and operate Dapr on top of Kubernetes.

So you imagine that Kubernetes is kind of a key platform for people running Dapr on. Conductor allows you to do monitoring and upgrades and kind of visualization of your applications, and all the ops inside that. And then we have a very core service called Diagrid Catalyst, which you can think of as a serverless offering of Dapr.

Think of it as taking away this Dapr sidecar. We never actually mentioned that, but Dapr runs as a sidecar for your application. Rather than you managing and operating it all, we simply host all the Dapr APIs for you, like the workflow API or the pub/sub API, on a service, and you can then literally call 'em from anywhere.

So you can basically build an application and deploy it onto a VM or a container service or even a function runtime, and then take advantage of, say, the Dapr workflow API in order to do coordination across these things.

As a random example: if you are an AWS developer and you're using Step Functions, and Step Functions can only be used with AWS functions, and you wanna take advantage of Dapr workflow instead, you could use Dapr workflow, for example, to work across an AWS Fargate service, to coordinate a set of containers if you wanted to, and do the coordination of service A and the equivalent service B across containers.

So Catalyst allows you an easy way to consume the Dapr APIs from anywhere, and it actually fits very well into the concept of platform engineering, which is also a very key area that we've seen emerge recently. In platform engineering, what's the contract between the platform team and the application developer?

Turns out that Dapr is an amazing contract between those two, because it allows the platform team to provide a set of different services.

And yet the developers don't have to change all their code around all these things if they're using, for example, the pub/sub API, but the underlying platform team wants to provide a new message broker, whether that's Kafka or RabbitMQ, without, going back to our previous conversation, ripping out the SDKs of everything.

With the platform engineering trend that's happening in this world as well, particularly in sort of the Cloud Native ecosystem, that contractual or interface understanding between those two is where Dapr shines beautifully around these things.

Awesome.

Thank you for sharing all that as well.

Will Diagrid have a booth at KubeCon?

We will do, yes. We will be at KubeCon Europe, coming up on the 1st of April, so please come along and meet us there.

We would love to hear all your questions, whatever you wanna hear about Dapr: talk about agentic systems, talk about platform engineering, talk about how you build applications, distributed runtimes. We're game for everything.

Yeah, please come find us.

Yeah, go to the booth, get a demo,
and definitely enjoy the conversation.

Thank you so much for joining me.

I just loved everything we talked about there. That's completely up my street, and I hope that everyone listening at home enjoyed that too.

thank you for having me today.

It has been great.

Awesome.

Thanks for joining us.

If you want to keep up with us, consider subscribing to the podcast on your favorite podcasting app, or even go to cloudnativecompass.fm. And if you want us to talk with someone specific or cover a specific topic, reach out to us on any social media platform.

And till next time, when exploring the cloud native landscape: on three. On three.

1, 2, 3. Don't forget your compass.

Don't forget

your compass.


Creators and Guests

David Flanagan
Host
David Flanagan
I teach people advanced Kubernetes & Cloud Native patterns and practices. I am the founder of the Rawkode Academy and KubeHuddle, and I co-organise Kubernetes London.
Laura Santamaria
Host
Laura Santamaria
🌻💙💛Developer Advocate 🥑 I ❤️ DevOps. Recovering Earth/Atmo Sci educator, cloud aficionado. Curator #AMinuteOnTheMic; cohost The Hallway Track, ex-PulumiTV.
Mark Fussell
Guest
Mark Fussell
Founder and CEO of Diagrid, Dapr co-creator