Trust and Validation in AI
Welcome to Cloud Native Compass, a podcast to help you
navigate the vast landscape of the cloud native ecosystem. We're
your hosts. I'm David Flanagan, a technology magpie that can't stop
playing with new, shiny things. I'm Laura Santamaria, a forever
learner who is constantly breaking production. All right, this
episode of Cloud Native Compass is all about AI, so David decided to input a request into ChatGPT for a sonnet about Kubernetes. David, why don't you start us out? What did ChatGPT say? In digital realms where servers sway, a symphony of nodes in rhythmic dance, there lies a master orchestrating play: Kubernetes, the captain of advance. Its containers, like actors on a stage, hold microcosms of code's vibrant art; with Docker's grace they come alive, engage, a ballet where technology takes part. Scaling with grace, like stars in boundless skies, replicas bloom or wither at its will; resilient, it adheres to failure's cry, yet presses on, each challenge to fulfill. Oh, Kubernetes, conductor of the cloud, you orchestrate our dreams, forever proud.
I can't believe I just read that.
So we have a guest on. So, JJ, can
you tell us who you are and what you do and
how it all works? The whole thing? All
of it? Yeah. Hi, JJ.
Asghar. I'm a developer advocate for IBM now, and that means a lot; I wear a lot of hats. That's what it boils down to. I kind of engage with different organizations in different spaces and try to represent developers and their persona, I think, is the project management term. I'm trying to learn the vernacular, what can I say? And right now I'm focusing on a product called watsonx, which is an AI platform that's built off of open source, and I feel like your audience would be interested in hearing a little bit about that.
Yeah, it's kind of an interesting topic to really bring
up. Now, I know that there's the Linux Foundation's AI and Data Foundation, and then there's obviously the CNCF, which is another sub-organization of the Linux Foundation. So is watsonx part of the AI and Data one? Is it the cloud native one? I want to dig
all the way down into this. Let's go for it.
Yeah, well, first of all, I've got to be careful, because I am an IBMer and I have to get the branding correct. Yeah, watsonx.ai is the official name of the AI platform within watsonx. We also have a couple of other products coming down the line that do other portions that are required for a real AI platform to work. I don't think I'm allowed to publicly say those names quite yet; I don't actually know. But the one that your community in general would most likely be focusing on is watsonx.ai. Now, when you tie that up to the foundations and organizations that our industry uses, like LF AI and things like that, what we have done as IBM is kind of step into LF AI to help build the ecosystem for an enterprise-grade AI platform that allows enterprises to be comfortable with using AI. I'm going
to just go into a quick little tirade about why, because you're probably like
JJ, why wouldn't any enterprise be happy with
this? Right? Yeah. Tell us a bit about why enterprises
maybe don't like AI. That's an interesting one. So there's this whole world right now of people being like,
AI is going to take our jobs,
right? Everyone knows this. Well, we can make fun of it,
but the truth of it is, as much as people think ChatGPT or all the other major disruptors out there have come into the ecosystem and started giving you sonnets about Kubernetes (which was actually surprisingly good; yeah, seriously, ask it for a song later; okay, anyway), as much as people think that's entertaining and interesting, when you start looking at how business is actually done, there isn't a very good, safe environment for this information.
So everything ChatGPT gets, it learns off of. So if you've looked at the Samsung issue, for instance, with people giving out proprietary information and then all of a sudden someone else getting that proprietary information back out (hopefully we can find the link to that), that is just a microcosm of the problems when it comes to business. Because right now, ChatGPT is a complete and utter black box. There is no way that the owners of ChatGPT, or OpenAI, or Microsoft, who has invested billions into it, will ever give us the data that ChatGPT has been built off of. And if you start going down that train of thought, all of a sudden you recognize you can't give it your proprietary information, whether that's a PDF of how your HR system works or the schematics of an F-15. You can't use ChatGPT with it, because you don't know who's going to get it on the other side. This is where watsonx comes into play. A really great analogy for watsonx versus the industry right now: ChatGPT is Napster; watsonx is iTunes. If you put it in that paradigm, all of a sudden things
start making a lot more sense on the enterprise level because now
you can go to banks and say, hey, we can give you a foundational model, and we can give you all the data it was built off of. Obviously with some money, business-y stuff happening, right? We're not going to just give it to you; of course we need to build the relationship, et cetera. But we can give you the training on top of it, and then you can put your proprietary stuff on top of that, so you can have the chatbot that knows, say, that Aetna is your insurance and gives you all the information you need about Aetna, or whatever. Right. Which is, believe
it or not, something that's insanely
powerful. Right. So anyway, when you start
going down that track, you start seeing more things in that space.
You can go, David. I mean, I get asked more questions about open source and
AI, but you go next. Yeah, I just want to make sure
I understand the proposition correctly. Right, so you've done
this comparison, right? You're saying Napster versus iTunes. People use, say, ChatGPT and OpenAI to go and ask any question, because it's a model that is trained on multiple billions of parameters, right. And all that information, like you
said, is a black box. We don't understand it. It doesn't really allow you
to, I don't know, dig into sort of niche subjects with
high cardinality. Yeah,
exactly. With watsonx, I get to decide what that data is, and I get to then query it through a similar style of interface. Is that what watsonx would offer me as a developer? Like a model where I say, here's all of my data, half my life, right? And then I can ask
it questions and it's going to give me answers. But if
that's true, which is cool, right? I have this little question where I'm like,
okay, is there a bias where it's only going to
confirm the stuff that I've given it in my data?
Is it going to be able to be slightly more objective? Can you feed outside
properties to augment and enrich? How does that all
work? That is a great question, and a natural progression, right? So let's go a little bit deeper than the CIO/CTO level, and let's go down to the senior engineer level. You like that?
Yes. It's a very valid question. What is AI? A
good friend of mine, Carl, actually mentioned this the other day. What is
AI? It's just a yes man. Right. When you actually look at it, what is
AI? It takes percentages over the possible answers to the question you're asking, finds the highest-percentage match for what you were looking for, and gives
you that answer. That is what AI is. In a nutshell, it's a yes man.
It's a crony. Right? So what we need to do
is give accuracy to the crony to get the answers you were looking for very
specifically. Now, we have something called foundational models, of which, I think, there are four. I don't know exactly what I'm supposed to say publicly right now, so I'm just going to say there are four foundational models, and they're all built off of data that we have agreed upon as safe to use. Now let's pivot quickly and talk about models in the open source ecosystem.
There's something called Hugging Face. If your audience doesn't know about it, the easiest way to describe Hugging Face is as the GitHub of AI. It's a place to put models up so that you can build off of previous models. They also give some really nice, I think the term is shims, on top of AI development work, so you don't have to do all the stuff around it. You can write some very simple Python to be able to leverage stuff from Hugging Face.
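(For listeners who want to see what that looks like, here is a minimal sketch using the Hugging Face transformers library; the model name is just an illustrative pick of a small open model, not anything IBM-specific.)

# A minimal sketch of the kind of "shim" JJ describes: the pipeline API
# wraps tokenization, model download, and inference in a single call.
from transformers import pipeline

# Fetches the model from the Hugging Face Hub on first use; "gpt2" is
# just an example of a small open model.
generator = pipeline("text-generation", model="gpt2")

result = generator("Kubernetes is", max_new_tokens=30)
print(result[0]["generated_text"])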
Now, this is not hyperbole: 95% of those open source models are built off of something called, literally, The Pile. If you Google "the Pile AI", it comes up; it's a massive data set out there, I think around 800 gigs last time I looked, that is all just text and information from the web that people shoved together.
So LLMs, or... linguist language models? Language learning models? Large language... Yeah, large language models. Laura's language model. None of us actually know. None of us know what this means. Exactly. So 95% of these LLMs out there are actually trained on The Pile. And trust me, Google it; it comes up in a Wikipedia article. It's this whole big data set problem that they have. You've said
something right there, right? We have this open source, governed, centralized knowledge base in Wikipedia.
Why is that not the base for these
models? Great question. Before we get on that branch, let me finish what I was trying to talk about. Yes, sir. The Pile problem, which is that The Pile isn't clean. There is a lot of proprietary, pirated material inside of The Pile data. Inside of it there are books that are completely ripped off. So you start looking at copyright law, right? And all of a sudden people are finding out that their books are in The Pile, and the LLM was trained on their book without permission, and it creates something off of that. That brings up some really interesting conversations about how our copyright law works, and then, with that, how patent law works, and then, with that, licensing, and all of a sudden... what? The phrase you're looking for: pyramid scheme? Not only a pyramid scheme, but the house of cards that is our industry all of a sudden starts falling apart. Because frankly
speaking, right now, at this exact moment in time when we're recording, a federal judge, and at least the US government, has said that anything created by an AI cannot be copyrighted, right? That is this exact moment in time; I think that was literally yesterday. But again, that's only here in the US, right? And being a US-centric company, at IBM we obviously are paying very close attention to this. This all brings up a whole conversation
of how do we keep going down this
path without possibly destroying our
industry, right? People joke about how we're barreling
forward to the end times, leveraging
AI and all that jazz, but when you actually look at
how we protect ideas and can use the entities that are governments to enforce, yes, I have not stolen your idea, and it is my right to sell this to people for goods and services: if all of a sudden AI can create all that stuff, create something very close to it, and start selling it as its own proprietary thing, all of a sudden businesses don't function. Right. As soon as we get a real AI that creates the next,
I mean, if we call it
Taylor Swift bot, I don't know. You get what I'm saying? You
see, all of a sudden, the onion gets scarier and scarier and
scarier. So we have to build this wall to make sure that the world we know right now, which is going to change, because we're going to try to create avenues to be successful... but if we're not, it's scary, people. It's scary. Yeah.
So why not Wikipedia? Great question.
Yeah, I'm going to bring it back to a little bit of levity because otherwise
we're just going to go screaming down the black hole. We've already
passed the VP level. We've already passed the director. We're already into the thought
leadership engineering people, right? Thought leaders,
yeah. There you go. I do want to
say, well, at least from my understanding of how Wikipedia works, if you parallelize it... parallelize it? Sure. ...with The Pile: the amount of errors and conflicting information inside of Wikipedia, on top of the difficulty of actually getting the information out of Wikipedia and training on it usefully, means there really isn't as much useful information to get. It goes back to my simple example of a bunch of PDFs you can query for suggested areas to find stuff. In my mind, that is the easiest path for most people to grasp when it comes to understanding the power of AI. Right? Now,
we've passed the whole if-then-statements era of AI. We've passed the whole ecosystem of, give me avocado chairs or whatever, from DALL-E. Right now, we need to look at how we
can make our lives a little bit better.
There are LLMs, or not even LLMs, I think they're considered classification models. But the point being, you can shove a bunch of PDFs into a model and then say, hey, I'm looking for information about our growth over the last two quarters. And maybe that's only in a chart inside the PDFs of your business logic. There are models that can read all those charts and figure out: hey, turns out growth over the last quarter is in this graph, and it's over the last four years, and this is the fourth spot right here, okay, it looks like it was 50%. It can respond back: it looks like your growth was about 50%, referenced in this diagram, in this PDF. Right. So it becomes like a really great librarian, if you think of it that way. And
then if you can tell that story to an enterprise.
Every enterprise, right, has massive amounts of PDFs of all their policies,
procedures, and everything like that. If you can create a librarian (think of whatever dystopian sci-fi that has a librarian in it, which is always a thing) and give that to an enterprise and say, I can give you a way to do that with your trusted data, which I know will not leave our borders because that is core to our business... maybe you should look into that.
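(A rough sketch of that librarian pattern, for the curious: pull the text out of a PDF and ask a question-answering model about it. The libraries, model, and filename here are illustrative stand-ins, not what watsonx uses, and plain text extraction won't read charts the way JJ describes.)

# Illustrative "librarian": extract the PDF's text, then run extractive QA.
from pypdf import PdfReader
from transformers import pipeline

# Flatten the PDF into plain text; "report.pdf" is a hypothetical file.
reader = PdfReader("report.pdf")
text = " ".join(page.extract_text() or "" for page in reader.pages)

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
answer = qa(question="What was our growth over the last two quarters?", context=text)
print(answer["answer"], f"(score: {answer['score']:.2f})")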
But that also assumes that the librarian is not
making up data. I don't know if you heard about the lawyer who asked an AI to go through all of the legal history for something, and it turned out the LLM, the AI, made up a fake court filing to prove a point, and the judge caught it, and it was a big deal. That's in the US again, but still, I'm a little worried about that librarian not quite being correct. But does this come down to
a misunderstanding of where people think AI is today?
When I speak to my wife, my family, my friends, they're talking about ChatGPT, right? I mean, they're all trying it, they're all
playing with it because it's getting so much coverage. They don't understand the
generative part of it. They think it's giving them
knowledge. They don't know that it's just all made up. Now, we do
because we're in this industry, we follow the news, we read these stories,
but to most people it's a fact machine that's artificially
intelligent and is going to give you the correct answer.
Was the lawyer liable? Did they know it was fake? Did they not
verify it? Sure. But what did they expect from the AI?
And that's one reason why we have to get AI to cite the stuff, right?
That's not too far away: being able to ask, where did you get this information from? The natural progression is citation and trust-but-verify. But again, that also depends on the model you've created, which model you've chosen, which black box you've decided to use, and where the data it's trained off of actually came from. Again, this all goes
back to the other problem that we don't talk about as an industry. And
as soon as you start playing more in the space, you recognize
that it's not the LLM or the model. I
know we're interchanging LLM and model.
I want to acknowledge first of all that that's wrong, right? But most people
have exposure to LLMs, so that's why it's
becoming like the Kleenex conversation. Again, I want to
acknowledge that that's wrong. There are very specific terms here, but just to keep the conversation moving, I want to make sure that that's clear. I know
what I'm talking about, but I'm even making this mistake.
The core value, the core problem of this whole narrative, is that we're talking about the compute, we're talking about the compiler, and we're not talking about the source code, which is the data, right? And the data is what actually gives the compiler, which gives you the answers of the AI, its information. We need to figure out a way to have trusted data in this space, data we know won't create that fake court filing, or at least where we can have that conversation and trust what the AI does. But again, it goes back to The Pile, which is a
bunch of untrusted data. Where, again, I'm going to be an IBM shill for a second and say: we have engineers who are dedicated to cleaning The Pile. We actually have humans going through The Pile, making sure that it is trusted data to build our foundational models off of,
so we can pull from the open source ecosystem. That's the other part of IBM's whole model, and why we have such a good relationship with Hugging Face specifically, if you look into it. We've built partnerships directly with Hugging Face to pull from the open source ecosystem, because, as IBMers know, the open source ecosystem is going to drive the wind in the sails of the ship of the industry, or whatever enterprise statement you want to make there. And
we acknowledge that. So we are spending a lot of time and effort building bridges to Hugging Face, taking the open source stuff from Hugging Face, and finding a way to make it safe for enterprises to work with. We have
whole teams of people inside of IBM Research whose whole job is cleaning data. I don't envy them at all, because that's got to be boring, but we do have these people who do it. And I was at a training relatively recently about this, and they were proud because they were able to say that, for this sliver of knowledge inside of this open source data set, we can now talk to our downstream clients and say: we can provide this level of confidence with this level of information. And you put your layer on top of it, and you build some prompt engineering on top of that, and then all of a sudden, you're getting the answers you're looking
for. Is that sanitized data, now verified, pushed back to Hugging Face for other people to... Well, because we're
investing so much money at this exact moment in time, that is going to be one of our selling points: that we can give you the core data. We can give you the data set that we have built our model from. And, of course, the model that we're going to be offering to you through the watsonx platform will be so large and so processor-intensive that we need to run it on our machines. I mean, what does IBM have? It's literally Business Machines, right? So we have a lot of compute power to do this stuff. So we can run that model, but we can give you the actual data set, and we can say, with legal penalties, that this is the data set that built this model. So you can actually push this out and put your layer of information on top of it to get the information you're looking for. Okay.
I want to make sure we take all the things we covered, right? So what's next? We've got clean, trusted data.
People can come along and say, okay, I'm going to use this for my organization
because I have this level of trust right now. I'd like to understand a real
use case that any developer listening to this episode would be like, yeah, that
sounds really sweet. Right. And I'm wondering,
could we take Kubernetes events? Could we take metrics-server data? Could we push this all into watsonx? Can we derive insights and predictability into our workloads on a Kubernetes
cluster as an example? Right. Is that something you're seeing people do?
Well, the answer is yes, with a lot of asterisks behind it. Yeah. Sorry, I'm a senior engineer: it depends. For as much as we've talked about watsonx, the best part of this whole narrative, and I use this term in a lot of conversations I have, but it is valid in this one: we want to be the Red Solo Cups of AI. If you don't know what a Red Solo Cup is, watch any US college movie, right? The beer pong cups, right? That's correct. Yes, exactly. Perfect. I'm not in the US, but that's all I know them as. That's it. Yeah, but Solo makes an insane amount of money on those things because they're everywhere, right? They're just the way you do that stuff. IBM wants to be that for AI. We do not want you to interface directly with watsonx. We do not want to be B2C; we want to be B2B. We want to give you an API with a trusted environment
to be able to work off of. Now, the reason why I'm saying this is because, for the developers that are listening to your podcast right now, what I am offering you is a simple, non-crazy REST API with watsonx that you can just literally use. Use requests from Python to do a simple POST to the backend with an API key and a couple of parameters, like which model you want to push it through.
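(For listeners following along, a minimal sketch of the kind of call JJ describes. The endpoint URL, payload fields, and environment variable here are hypothetical placeholders; check the watsonx documentation for the real API.)

# Hypothetical sketch of a simple POST to a hosted model endpoint.
import os

import requests

api_key = os.environ["WATSONX_API_KEY"]  # hypothetical variable name

response = requests.post(
    "https://example.com/watsonx/v1/generate",  # placeholder, not the real endpoint
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model_id": "some-foundation-model",  # which model to push the prompt through
        "input": "Summarize our growth over the last two quarters.",
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # a blob of JSON comes back with what you want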
It's really simple JSON, and then it comes back with what you want. So you do all the heavy lifting inside of watsonx, but your application can just leverage it a little bit higher up. Right. You just add one little request, and it comes back with a nice little blob of information. So the practicality of it is, again, it depends, right. But the idea is we're trying to build the railroad for you here, so you can get the information you're looking for and pull it back intelligently. Did that answer your question? It
did, yes. Thank you. I was going to say, so basically what you're trying to
do is you're trying to say, okay, see all these booths at KubeCon? Imagine a number of these companies actually running on top of watsonx underneath, but you just don't know it yet, you as a consumer wandering around. Okay. We're building phone lines, building the railroad, whatever you want to call it, whatever
major infrastructure change you're thinking of. In essence, we're trying to do that
for AI. You're trying to be Bell Labs, is basically what you're telling me. You're trying to be the old-school Bell Labs that's building
out the original stuff that eventually everybody builds on
top of. Yep. And then on top of that, like, I
haven't even mentioned quantum inside of this space. Right. There's a
spur that goes into that ecosystem. Buzzwords. Don't just throw
buzzwords, JJ. We have a bunch of PhD physicists, and all day, every day, all they do is look at this weird-ass computer that's going to take all our lives or something. Yeah, exactly. That
makes sense. I guess my question always comes down
to this: there is a people problem under all of this to me, right, under AI in general, no matter where it ends up, whether it gets integrated into the cloud native ecosystem or stays in the AI ecosystem, for lack of a better term. But
we can get the AI to start saying, here's the citations. We can get people
to try to verify everything, but we have that trust, but
verify. But how many people sit there and actually look at the citations?
Right. That always was a thing in science, I remember, was
people don't always check the citations. You have to learn to be very good about
checking those. It doesn't come to you naturally most of the time.
So how do we like, if I think about, let's say
that some company decides, okay, I'm going to build
a monitoring tool that goes through all of my live metrics
and analyzes everything using AI and comes out with an
idea of, okay, this system probably is going to fail, has like
a 20% chance of failing in the next 24 hours.
Right. Let's say that's an eventual thing because that's an
evaluation a person might do, but it gives you all the citations.
But who's going to go back and look at their logs to verify it
right now? Right. If I'm told like, oh, it works,
my reaction is, okay, it works. Right. So
how do you deal with that people problem when it comes to this AI question
of, well, we do have to verify it somewhere. Otherwise we
get pages that wake us up at 3:00 a.m. for nothing, because nothing actually went wrong, because the AI got it wrong, because it analyzed some other incident from somebody else that just happened to have a correlation here or something, right? How do we get there? I don't know if that makes any sense, but how do we fix the chicken-and-egg problem? Can I try and
broaden that question? Because I think you're both thinking something very similar,
right? And I don't know if I'm trying to get there. Yeah, David, you might
be able to say this better than I can, so go for it. I don't
think so, but I had a question and then you asked something really similar and
I'm like, oh... so let's try this: we've got a world of AIs. Every developer is out there talking to a different AI. I'm just going to call them singularities, AIs, right, I know there's deeper terminology. Now, we're all asking these questions,
we're all getting answers. And we mentioned licensing, we mentioned copyright.
Derivative works are obviously a huge challenge when it comes to people
using these answers to then put something out into the world.
So there's this ethical dilemma as well, which I think ties into what Laura was asking: if ten people go and ask AI how to write a good song, or a catchy song, or a number one hit, and then they all start to use it, that's
ten different AIs that then somehow need to answer the question of backtracking. Do we need another AI to answer the question of, was this written by an AI? Do these AI companies need to work together to provide a transparency log of inputs and outputs to prove that something was an artifact from
their algorithm? What's
the future look like? This is clearly a tough question or a tough
predicament. So how does IBM tackle that? How does watsonx tackle that? Are there conversations with other companies? This was actually a wonderful working session at FOSSY; we had this whole discussion, and I can link to the Etherpad that captured all the notes for it. If you don't know what FOSSY is, it's basically what OSCON used to be; FOSSY has broken off from it. It's the same group of people, but not under the O'Reilly banner; it's under the Software Freedom Conservancy. They're the ones who go after GPL violations, right? They have the GPL lawyers, and they are an interesting breed, let's just put it that way. Anyway,
the interesting conversation that happened was: how can you verify and say that this data is not stolen? That's what it boiled down to, which is, I think, kind of where you were going. And it always went back to this end entity, this tarball. Sure, let's call it a tarball: make sure that this tarball, which most likely is a binary, isn't stolen. It always went back to the data. It kept coming back to: if I can prove to you that this tarball came out of this data blob, through the model that I've created, and this data is verifiable, that
is the only way that you can backtrack through it. The problem is, models don't have a SHA, right? They don't have some way to say, this was the exact model I used to produce this, because the technology just doesn't exist, right?
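(To make that gap concrete: you can checksum a weights file today, as in the sketch below with a hypothetical filename, but that only proves you have the same artifact. It says nothing about which data trained it, which is the provenance problem JJ is describing.)

# Checksumming a model artifact proves file identity, not training provenance.
import hashlib

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

print(file_sha256("model.safetensors"))  # hypothetical weights file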
And as much as we want to bolt all of that on, to be able to say, yes, this exists, the ecosystem is already so large and has moved forward so rapidly ahead of us. You can get an older version of ChatGPT on Hugging Face; did you know they've open sourced those models? The models, not the data, onto Hugging Face. So you can play with ChatGPT on your local laptop. They're shit, but they do exist.
The challenge is, we have no answer for that in the space. And the way that IBM is answering it is through doing The Pile cleaning: being able to show you the models, and, with legal penalties, literally pen to paper, saying, we can show you exactly what's going on. But ChatGPT, Microsoft, I can tell you they will never do that. Right? As
much as we want to say that, when you go to the disruptors, and I'm just using that as a term to cover the, I say this with love, normies, when they think about ChatGPT, right? Those are the disruptors, the ones that are going to be using it to write the paper for their history class or whatever, right?
The models and the data that exist in that space, the companies behind them, like OpenAI, will never give us those data sets. They will never do that. And then there's Microsoft investing their time and effort in Copilot, right? That's one I kind of just skipped over, but it falls in the same space as ChatGPT. As much as they claim that they didn't take code from GitHub, as much as they claim you can have Copilot only look in your own repositories... we're computer professionals, we know that's never true, right? And as soon as you get that data slurped into the model, you have no way to pull that data out. People forget that about models: there's no reverting. Once you've trained something, there's no way to revert it and pull the data out. Not easily.
You could destroy the model and recreate it with the data removed, but then again, that model is already out. Again, it's a compiler; you've got to think of it as a compiler, and then all of a sudden a lot more things start making sense. So Microsoft will never give us a legal affidavit saying that, no, I did not take any private repos from GitHub, and yes, I looked through all of GitHub for all the license files to make sure the attribution is correct on all of the open source projects. That alone is a Herculean effort. Did you get
trained off of this code that somebody just put up on GitHub with no default license? Legally speaking, the Software Freedom Conservancy couldn't represent them in court going after Microsoft because of that, right? And again, it goes back to where we started at the very beginning of this conversation: we have no visibility into this space, because the technology moves so quickly, without checks and balances, that we are now at a point where... okay, bad
analogy. And it just kind of hit me right now. Stick with me for a
second. You know, I'm the queen of bad analogies, so go for it.
In essence, we've created a bunch of printing presses, right? All of a sudden, we've created printing presses all over the world that you don't have to organize anymore, that can just start printing out information for you. And now, what's stopping you from selling those books? Right? Because there's no validation, as long as you've got that initial printing press with the plates in it to shove the books out. That is what AI has done: it's given us the ability to send out that information very quickly. At least when the Internet first hit, and people were worried about pirated books and stuff like that, the ecosystem created secure, validated pathways to getting these things, the iTunes of the world, if you will. But here, that horse is already out of the barn and on to the next town, right? We've got a lot of catching up to
do. And the other problem is that this is global, right? This is not just us; our friends in China could create LLMs that do all of that, that create patents for them, and US law means nothing over there. They have their own set of laws and their own ways of doing technology, and they have a lot of computing power. I'm not trying to be doom and gloom; I'm just trying
to express this to the audience we're talking to here. As much as you think, hey, the joke of, hey, ChatGPT, give me an application, and it gives you an application in five minutes... it does, blah. But you're going to be spending 24 hours debugging what the hell that application does, because you can't trust it. And there's this whole ecosystem around that that people don't recognize, and it spurs out into a lot of other stuff. Anyway, sorry, I got on a roll there. It's okay.
All right, so the TLDR is we're all doomed. It's all fucked. Go
hug your loved ones. Hugging face. Just turn off
the computer. Yeah, hugging face. Hugging the face. Hugging
face. Turn it off. And the next podcast will be coming to you generated by AI. Our faces will be moving, but we will not be the ones talking. I guess we're at the end. There's a ton more to dig into, and, uh, if there's more you want to hear on this topic, by the way, there is a Discord that you can join.
It's called Rawkode Academy, and there's a channel specifically for Cloud Native Compass. I'm calling it out right now because I have it open in another tab. But if you want to ask more questions, maybe we can do another episode on AI someday in the future, you can see. But yeah,
let me throw out a positive thing, right? Or a positive, either one. Sorry. For me, right, I'm not in the ML space, I'm not in the AI space; this was all new to me as ChatGPT and OpenAI came out, and Google Bard and all that sort of stuff. To me, those were the only options. But I think we're in a really fortunate position now, because there are other options. There's a lot of movement; not volatility, exactly, but there are new ideas coming all the time. And there's a lot more open source happening, too, and there are a lot more trusted execution environments, like we're hearing about with watsonx. I think there are a lot of positive things going to come from AI, even though it's easy to point out the scary bits, the negative bits. Right. But I'm still extremely hopeful, based on everything that I've seen in the past and that I've heard today from JJ, that it's going to have a net positive impact on my life and hopefully other people's lives. I'm excited for the people working on this.
Okay? I'm excited. And at the same time,
I have more history in ML and
things like that from Python, but also from science. Just
thinking through that, to me, I'll be honest: all of this is stats all the way down, and stats and I have a hate-hate relationship for the rest of my life. Yeah, I just echo that, and I'm just like, doing my homework, stop it. Lies, damned lies, and statistics, I mean, come on. That's exactly how it works. I think Mark Twain was being a little facetious in that sense anyway. But who
knows? Maybe we'll get really lucky and it works out. But
JJ, are there any last thoughts, comments,
whatever? Because we're already longer than we normally are, but this is a really
interesting conversation, and maybe you have any last things,
last plugs, last whatever. Where can we find you online? All
that fun stuff? Yeah, well, first of all, I'm pretty easy to find online: jjasghar most places. If you are interested in watsonx, I do need to plug the URL: ibm.biz/dev-watsonx. The dash is the actual dash character, not D-A-S-H spelled out: dev-watsonx. And
I do want to acknowledge that it is hard.
You think Kubernetes is hard? You think our cloud native ecosystem is hard? And it is. When you start playing in the AI space, be prepared to be confused, to question your ethics and morality, and never to ask an AI what a developer advocate is, because I did
once, and it told me I was, in
essence, a CIA agent killing people. That was a little weird.
What? Okay, yeah, there's a reason I have not created a ChatGPT account or any AI account. This was just an open source LLM; it was even better. It was just some generic open source LLM that basically described developer advocates as being sent out by Obama to kill Russian entities or something like that. It was really weird. It was really weird. A far more exciting life than I realized. On that note,
thanks for coming out, JJ. I hope it was fun. I hope you all
enjoyed it. David, last thoughts? I know you're lagging all of a sudden. Yeah. I guess that's the answer: AI.
No, it's probably good. But on that note, thanks, y'all, for listening.
Thanks for joining us. If you want to keep up with us, consider subscribing
to the podcast on your favorite podcasting app, or even go to cloudnativecompass.fm. And if you want us to talk with someone specific or cover a specific topic, reach out to us on any social media platform. Until next time, when exploring the cloud native landscape... on three. One, two, three: don't forget your compass! Don't forget your compass! Bye.