Episode 1

Published on: 18th Mar 2025

What Is Emergence, and Why Does It Matter for AI?

In this inaugural episode of The Emergence Podcast, hosts Justin Harnish and Nick Baguley dive into the concept of emergence — a fascinating phenomenon where complex systems arise from simple interactions. They explore how emergence shapes everything from natural systems like flocks of birds to modern AI systems that learn and adapt beyond their initial programming.

🔑 Key Topics Discussed:

  • What emergence is, and why it's a game-changer for understanding AI.
  • How selection pressures shape complex systems in both nature and technology.
  • The evolution of artificial intelligence from basic algorithms to generative AI that surprises even its creators.
  • Why culture, society, and even human intelligence are forms of emergent behavior.
  • Ethical considerations and the alignment problem in AI development.

📚 Books Mentioned:

  • The Stuff of Thought by Steven Pinker – Discusses how language reflects human nature and shapes thought, which ties into understanding the emergent properties of AI.
  • The Fabric of Reality by David Deutsch – Explores the theory of knowledge and how scientific progress emerges through explanation and understanding.
  • Programming the Universe by Seth Lloyd – A groundbreaking look at the universe as a quantum computer and how information processes shape reality.
  • The Beginning of Infinity by David Deutsch – Explains how solving problems leads to infinite progress and the emergence of new knowledge.
  • The Survival of the Friendliest by Brian Hare and Vanessa Woods – Highlights how friendliness and cooperation drive evolution and human success.

🛠 Apps and Tools Mentioned:

  • Crew AI – A platform that allows users to create virtual agents, build organizations with AI managers, and establish collaborative AI workforces.
  • OpenAI – Mentioned in the context of its innovations in generative AI, including text-to-video models and multimodal systems.

💡 Key Quotes:

  • “Emergence happens when simple systems interact to create something more complex than the sum of their parts.”
  • “AI systems are starting to exhibit emergent behaviors that even their creators can’t predict. This is both fascinating and a little terrifying.”
  • “Culture itself is an emergent behavior — it arises from millions of interactions across societies and evolves over time.”

🔎 What’s Next?

In the next episode, we’ll explore human-AI collaboration and how emergent systems can unlock unprecedented innovation. Subscribe and stay tuned!

Transcript

What if I told you that some of the most incredible innovations in AI aren't programmed? They emerge. That's right. Systems that evolve and adapt just like nature. They're shaping the future faster than we can imagine. Today we'll be deep diving into the world of complexity and emergence, the invisible forces behind some of the most powerful technologies driving our future. What does it mean for AI and for you? Welcome to the Emergent, where we uncover how humans and AI are shaping the future of work, decision making, and innovation. Let's get started.

Welcome listeners. I'm Justin Harnish, your guide to the fascinating frontier of artificial intelligence and complexity science.

And I'm Nick Baguley, here to connect the dots between cutting-edge technology and the practical challenges we face in industries ranging from healthcare to finance and education.

Together, we bring decades of experience across Fortune 500 companies, government, startups, and academia.

But enough about us.

Let's talk about the big idea for today's episode.

Perfect.

We're kicking off the Emergent by tackling one of the most intriguing questions.

What is emergence?

And why is it revolutionizing the way we think about artificial intelligence?

That's right.

We'll explore how emergent systems, like a flock of birds or the neurons in your brain, are inspiring the next generation of AI.

And we'll explain why this matters, not just for tech geeks, but for anyone navigating our increasingly complex world.

Yeah.

In fact, I think we should maybe geek out a little bit ourselves for a minute, Justin.

How many times over the years have you talked to me about murmuration and flocks of birds?

Tell me some of the fun things we've talked about with emergence in the past.

Yeah, it really does.

It comes down to the nature of the world around us.

Life really kicked it off.

The nature of what life is and how it evolved to be intelligent, how it evolved from genes doing their own things to that murmuration that you talked about, right?

To organisms doing things that were phenotypic behaviors.

That's a word that we can get into that did something based upon their behaviors and some next level of emergence from just what their genes wanted them to do, just what that organization needed to do to survive, to thrive, or to maintain homeostasis and just get better and better at living a longer and more fulfilling life.

And again, being influenced by their genes to do that in order to reproduce and overcome the selection pressures at that lower level of emergence, which drives the whole show.

Some of the most intriguing ways that emergence is still talked about to this day and certainly its foundation in science started in living systems and how they emerged to improve on their lifespans and to improve upon the planet as a whole.

We've spent decades at this point, but definitely many years ourselves talking together about systems, about complex systems, about simple systems, about how to solve for a given problem, how to really narrow it down to the most simple tasks that we can try to solve for and accomplish, but this idea that emergence leads to life, that these things are coupled together in a way that we really don't even fully understand at this point.

It's just fascinating.

And I think there are very few topics that could get us going a bit more than this one.

Maybe tell us a little bit about the basics of this emergence.

Yeah.

I think an important thing to do is to define some terms or roughly, loosely talk about defining some terms.

And so when we're talking about emergence, we're really talking about complexity theory.

And complexity is something that gets confused often with the related word of complex.

So it's not that it's difficult.

What complexity is really getting towards is think of the brain.

The brain is likely the most complex system.

It's the most complex system in the known universe.

So when we're talking about complexity, we're not talking about something that's ordered or highly structured.

Order is as much the death of complexity as chaos, as pure randomness is.

So it's neither the perfectly ordered solid nor the very diffuse and unordered chaotic gas system, but something that lives on the edge of both of those.

And a complex system then is something that is this borderland between order and what we want from that and chaos and randomness and what we glean from that in terms of possibilities.

And so on top of that, we have this component of complexity that is also self-organizing.

So think of the ant hill.

Think of the neuron in the brain.

That neuron by itself is not a complex system, right?

But together in self-organizing way, when they arrive at some more complex behavior, when they're able to go beyond what just the individual in that self-organizing system is able to do, that also is part of a complex system.

And part of complexity theory.

And then just sort of the last thing that is really where we've spent a lot of time is in what drives complexity to go in one direction or another.

And David Deutsch, in what is still my favorite book, The Fabric of Reality, talks about the algorithm of selection.

And the complex systems are driven by selection pressures, same way as natural selection and the complex system of evolution has been driven by the environment.

The classic example there being the end of the dinosaurs or some of the dinosaurs and the emergence of organisms that were able to burrow or were deep enough in the sea to avoid the cataclysm of the meteor strike.

And so these selection pressures aren't always as profound as a meteor strike, but they are certainly something every complex system comes up against and carries in its memory, through most of its individuals either surviving or dying.

And that can be an actual physical death or some sort of digital death that will be part of that.

These selection pressures drive the eventual emergence; the way that complex systems change and morph into something new is through the process of emergence.

But each complex system is driven by what we'll deem selection pressures: the environmental factors that give rise to new and evolving, emerging complex systems.

Yeah, absolutely.

I want to take just a very simple example and then I want to bring it to AI.

A very simple example of emergence is a beard.

Right?

So each individual hair really on its own is a very, very simple system.

Same way that you described the neuron, the way that you've described these other systems, even thinking about one single starling, one bird in that overall murmuration.

Right?

But when it becomes more than the sum of its parts, that's really that collective behavior.

That's really emergence.

Right?

Now there are a couple of kinds of emergence and I think today we'll focus on the ones that relate to AI.

But those types of emergence are where even though this system, like with the birds, for example, is coming up with a whole new collective behavior.

The flock is moving together in a different way than you can ever see them move on their own.

They're each acting and reacting the same thing with like a school of fish, for example, as they try to dive away from a shark or from some other predator.

In this type of behavior, the individual fish is still truly its own individual.
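
The flocking Justin and Nick keep coming back to can be sketched with two local rules per bird. This is a minimal illustrative simulation, not a full starling model: the rule weights and flock size here are arbitrary assumptions, and real boids models add separation and limited neighborhoods.

```python
import random

# Each bird follows two purely local rules: drift toward the flock's
# average position (cohesion) and match the average velocity (alignment).
# No bird "knows" the flock's shape, yet the group tightens and aligns.

def step(positions, velocities, cohesion=0.05, alignment=0.1):
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    avx = sum(v[0] for v in velocities) / n
    avy = sum(v[1] for v in velocities) / n
    new_pos, new_vel = [], []
    for (x, y), (vx, vy) in zip(positions, velocities):
        vx += cohesion * (cx - x) + alignment * (avx - vx)
        vy += cohesion * (cy - y) + alignment * (avy - vy)
        new_pos.append((x + vx, y + vy))
        new_vel.append((vx, vy))
    return new_pos, new_vel

def spread(positions):
    # Sum of squared distances from the flock's center.
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in positions)

random.seed(42)
pos = [(random.uniform(-50, 50), random.uniform(-50, 50)) for _ in range(30)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
before = spread(pos)
for _ in range(100):
    pos, vel = step(pos, vel)
after = spread(pos)  # far smaller: the flock has pulled together
```

The collective tightening appears nowhere in the rules themselves; it only emerges when the simple agents interact.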

Another type of emergence is when something new is completely formed that again is more than the sum of its parts, but now cannot be separated any longer.

As we look into AI and what's changing today, we really need to trace it back to some of the steps of what existed before.

Early days of computing, the Turing machine, even the Turing test itself: these all outlined what was happening with our systems in the early days, as we eventually focused on a binary system where things were really on or off.

And now complexity has been added more and more as time has gone on, to where eventually, in data science, we were able to take a data set and create code and an algorithm that could determine what the individual parts of this data set are, how they combine into attributes and eventually into features that are really important for determining an outcome, for predicting an outcome, for prescribing a solution, for creating a recommendation engine.

Many of the things that have really changed what we've done within data science have also changed what we've done across the internet and really across everything that you use today that's tied back to anything software related.

But now we've reached something new.

Artificial intelligence, unlike traditional systems, has now gone beyond explicit programming.

We've been able to feed in huge, just massive, massive amounts of data.

And now we're creating new, emerging systems, systems that do things that we don't even understand.

They evolve, they adapt, they surprise us.

They even have reached the point where we don't actually know what they're capable of doing for many, many of these AI systems and models that exist out there today.

It's absolutely fascinating and it's so cool to watch what's happening.

But as it becomes something new, we want to discuss what is it?

What are the potential concerns?

What are the potential opportunities?

And where can we go from here?

Yeah, to take it back to your simple example and to get to where you ended with the modern generative AI.

So the example of the beard is very interesting, right?

Because there's a level of emergence where you get to a beard.

But we're social creatures and oftentimes a beard means more than a beard.

A beard's more than a beard, right?

And I'm not sure who you're referencing here.

So one example that comes to mind is in Islam, right?

A beard is required, right?

A beard is part of showing your level of sophistication in the religion.

In another context, a beard is sort of a member of a more hipster society in this country, right?

And maybe something that brings out a certain type of individual and recognizing that in society is another form of emergence, right?

It's the emergence of the importance of that symbol in the society you're in.

And that recognition by that society of what that thing is.

Similarly, in a love relationship, a beard might mean the presence of a change, or of something new happening in that relationship that the wife now likes, right?

That it's something that really is centered in that relationship.

And so it's part of this further real emotional emergence of love and of cherishing another individual and all of the feelings that go into that.

The conscious result of that love feeling.

And so as it is with any particular compound emergence or new emergence in a different realm, so it is with AI.

So you have all of these very technical aspects.

You have transformers and you have tokenization and you have these very highly complex algorithms in their own right.

The creation of the data sets, and the real ability to do inference and pay attention in a data-science context: all complex in their own right.

And layered onto the top of that is this societal wrecking ball of language, this very complex system that we use to in terms that Steven Pinker writes about, become the stuff of thought.

It is widely held that we don't think purely in terms of language, but it is our best proxy for relating our thoughts, relating our cognitive world, including our conscious cognitive world, to other humans most often, and for describing our world.

There are very few things that we can't transcribe into language that we don't hope to convey as thoughts.

And for those that we can't convey into human language, we convey into mathematics.

And we describe the universe very thoroughly and very accurately in terms of mathematics, which again, these LLMs are very capable of.

And certainly they're as capable in an encoded computer language, Python, as they are in every human language that's written, that's known.

They are more than capable of translating some language that you don't know, conveying meaning in that language to somebody who speaks it.

Yeah, but just like a child, they may get it wrong.

So there's some really fascinating things that you're bringing up there.

I think one of them is that when we think about the mathematical side and we think about how that relates to the universe and how it functions, we really like to describe it as discovering.

You're discovering an algorithm and you're understanding E equals MC squared and how that really relates to the rest of the universe.

I think as we look to see what these algorithms are capable of, a big part of that is discovering.

So when we talk about large language model, being able to process things in Python, for example, oftentimes it needs additional tools.

You can go in and add REPL and add other things that allow it to teach itself Python for your particular use case.

You have to provide prompts that not only describe what you want as your actual output, but start giving details to help that LLM understand what its role is, what its backstory is, what the things are that it took in order to understand how to create that code, understanding best practices and going in and describing that you want to use solid principles, for example, and you want to go in and use the best practices like encapsulation and other things.

Start creating the guidance for that communication in Python.
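
That role-plus-backstory guidance can be sketched as a prompt structure. This is an illustrative assumption of how one might phrase it, using the common chat-message format: the persona wording and the example task are made up, and the commented-out API call just shows where such messages would typically be sent.

```python
# Sketch of a role-plus-backstory prompt for Python code generation.
# The persona, backstory, and coding guidance are illustrative; adapt
# them to your own use case.

def build_codegen_prompt(task: str) -> list:
    system = (
        "You are a senior Python engineer with years of production "
        "experience. Backstory: you have built data-science pipelines "
        "and care deeply about maintainability. Follow SOLID principles, "
        "favor encapsulation, and explain your design choices."
    )
    return [
        {"role": "system", "content": system},  # who the model should be
        {"role": "user", "content": task},      # what you actually want
    ]

messages = build_codegen_prompt(
    "Write a function that deduplicates a list while preserving order."
)
# These messages would then go to a chat-completion endpoint, e.g.
# client.chat.completions.create(model="...", messages=messages)
```

The system message carries the role, backstory, and best-practice constraints; the user message carries only the task itself.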

But now the knowledge coming from business, from an individual who has done programming for years, can help you discover something brand new, something that's never really existed before, such as a whole new algorithm.

This is something that would have taken me months to be able to create in the past.

And I created two last week and it took me about eight hours total, completely new algorithms that I think can change many of the ways that I think about the world.

That's absolutely fascinating.

You also touched a little bit on society, culture.

As you talk about beards, there's so many things about society and culture that really are emergent behaviors themselves.

Even culture itself is truly an emergent behavior.

And it comes not from going in and teaching everyone the exact same rule, not from guiding every child in the same way, not from creating laws and regulations.

It comes and goes with trends and fads.

It reaches these selection pressures.

It finds these scenarios that look chaotic or even become chaotic or become overly ordered at times.

But as they come back together, you eventually create something new.

And as that culture emerges, you now can build things on top of it.

Many of those have led to the empires of the past.

Many of the most successful empires have really been guided by their culture.

And even today in business, we talk about how culture really can eat strategy for breakfast.

We can build out our culture and really understand it and have it guide who we are and what we do and why we do it and really the how of our daily decisions.

And it starts changing not just that overall culture, but it helps emerge something new from the tasks and the work that we do each day.

A few of the things that I really love that you touched on there.

Based on the discoverable universe, the way that the selection pressure algorithm and overall complexity arose in the universe, I think is really well described in Seth Lloyd's book, Programming the Universe.

He's a terrific physicist and he really, I mean, he starts with the idea that the universe is a quantum computer.

And I think that that almost goes without saying.

It started from a quantum fluctuation.

The places where complexity resides now are these groupings of the inflationary ripples that exist from that primordial quantum fluctuation.

And the selection pressures on that compute are akin to the million monkeys trying to type the sonnet or trying to type Shakespeare, trying to type Hamlet.

And so a million monkeys on typewriters, according to Professor Lloyd, is an impossibility.

They will never get there; they probably won't even produce a sentence or two.

But if they are working on computers, even classical computers, it's been simulated that a million random monkeys just typing can actually start by building at first some sort of genetic code structure, then a selection structure, then a language, and eventually, given enough time and computational power, those tools become enough for them to write Hamlet.
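
That monkeys-with-selection result is essentially cumulative selection, as in Dawkins's classic "weasel" demonstration. A minimal sketch, with a short target phrase standing in for Hamlet, and population size and mutation rate chosen arbitrarily:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Number of characters already matching the target: the "memory"
    # that pure random typing lacks.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c for c in s
    )

random.seed(1)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
start = fitness(best)
for _ in range(500):
    # Selection pressure: breed 100 mutated copies, keep the fittest.
    # Including the current best guarantees progress is never lost.
    best = max([mutate(best) for _ in range(100)] + [best], key=fitness)
end = fitness(best)
```

With selection retaining partial matches, the phrase converges in hundreds of generations; purely random typing of a 28-character string over this alphabet would need on the order of 27^28 attempts.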

And so furthermore, both Deutsch and Lloyd have argued that in order for the universe to be where it's at, it likely has to be computational, and we don't have wires and copper running through the universe.

The only computational machine that it could be is a quantum computer.

And that helps us to arrive at why it's discoverable, why we can or I think a better term for it is decodable.

We can actually decode the nature of the universe, including all of the digital realms that we're in, simply by paying attention to the outcomes of certain causal actions and decoding those.

Look long enough at quantum particles and you don't discover, you decode, the Schrödinger equation for the wave function.

And so similarly, I love this idea of have we discovered something or have we actually decoded it?

Have we arrived at the algorithm that, through this progression from early bit flips, then simple AND/OR/NOT subroutines, then complexity, then life, then complex acculturation, produced the compounding emergences in a place like here on Earth?

I also love this idea of culture as an emergent and of course it is, right?

It's this collection of millions and now billions of us into societies that actually work, that actually function really well as an organizing principle for the unit of the planet and are just getting better and better at including more of us and including new thoughts all the time.

And so, and to me, one of the selection pressures that I often think about around morality is this wouldn't be possible if humans weren't mostly good, right?

If we had a society where even 25% of people were sociopaths, that would drive the selection pressure away from complexity towards chaos, towards randomness.

And certainly in a regime where every other person that you come across in society is a horrible human being, a sociopath, we wouldn't have society at all.

We wouldn't have culture, right?

These things wouldn't arise out of morality and civil society wouldn't arise out of a place where there's so many individual actors acting against the best parts of compound emergence.

And then finally, I really like where this takes us in terms again of AI in that so much of this is coming about because we're questioning what it is to understand, what it is to be a moral actor, what it is to be in relationship with something that is now more intelligent than us or eventually more intelligent than us, but doesn't have the same capabilities as we do maybe in terms of thoughts, feelings, emotions, suffering, pain, pleasure, and you name it.

And so I think that the desire to put another podcast into the world (one more of which the world sorely needs, I'm sure) is to talk about these things that are arising because of this new societal factor: this apparent end to our intellectual hegemony, and the ability for something else to be so capable in ways and means of describing the universe, informing on our own intelligence and informing on our own humanity.

Yeah, it's absolutely fascinating.

And I think it probably requires an entirely new episode, but thinking about intelligence itself as another form of emergence is really central to what we're talking about when it comes to artificial intelligence, right?

I love what you said about culture and about how really our humanity is what helps us continue to succeed.

That is, this culture emerges when the sum of its parts is not made up of many sociopaths, or of a significant percentage of the population that would detract from our overall success.

My mother-in-law gave me a book a little while ago that I just absolutely loved.

I wasn't expecting it.

I didn't hear about it.

It's one of my favorite things that happens.

It's when a book finds its way to you.

And this one was called The Survival of the Friendliest.

It was subtitled Understanding Our Origins and Rediscovering Our Common Humanity.

It's by Brian Hare and Vanessa Woods.

Really, really an incredible book.

And it talks about how not just for humans, but for many, many species around the earth, we can actually tell now that the friendlier portions of the species tend to be more successful.

There are very few obvious reasons why Homo sapiens ended up being the most successful group out there.

You could look at many of the other groups that existed at the time and you could say that they were bigger.

They were smarter.

They were stronger.

They were faster.

All right.

So many things that should have given them the advantage overall.

But when we were able to come together as cultures and create friendships, we could now share technologies that we were creating, weapons, things that would allow us to be able to hunt birds for the first time for any kind of Homo species, really be able to find ways to succeed not only within a small group, but across the other groups that were like us.

Today that same dynamic drives many of the things that we consider negative within our culture, where we create these us-versus-them mentalities, where we see the negative things that happen around the world.

And when you talk about our culture really emerging to become billions of people, rather than those that you interact with on a daily basis or those that you would consider your friends, really the internet has changed that for us.

And this mass learning across cultures, across societies, across people around the world starts to normalize for many of those friendliest behaviors.

And even though we may see the negatives and we may see different times of chaos seem to be coming around or different forms of complexity, really those cultures emerge and continue to emerge to the better for our overall good.

And I know that feels overly optimistic and incredibly hopeful, but I think it's a powerful message that is real.

Absolutely.

Again, I've certainly been accused of being overly positive. One of the interesting thought experiments here is Nick Bostrom's black orb urn.

And so inside of this urn are orbs that are white, gray or black, and humanity is constantly picking out these orbs.

And these orbs represent technology.

So a white orb is a technology that's purely good and there are very few white orbs in the urn.

There are mostly gray orbs, which like the internet, like social media, for example, is both good and bad.

We've created something that allows you to instantaneously be a part of global society and of what's happening on both sides of the planet, which has reduced our ability to entangle ourselves with just our tribe.

But it's been very negative.

It's caused massive amounts of online bullying.

There have been problems because it's not in these networks' nature to always bring out the truth.

It is just telling the story that we are telling.

It's driving a narrative that's not necessarily driving towards truth.

And so we have these gray orbs and then we have a pure black orb, which if we draw it, it means existential risk and humanity put back into a much, much lower place on the food chain, horrible outcomes.

And we have to deal with these gray orbs.

They're mostly going to be gray orbs, and we have to figure out how to overcome them.

As David Deutsch says in his book The Beginning of Infinity, there will always be problems, but problems are soluble.

So we will always come across problems, but we have to be mindful of the fact that we have been able to manipulate, compute, and quanta in order to overcome the things that we've put in front of us.

We are able to, like they say in The Martian, just science the shit out of it.

And that's how he survived, right?

And that's what we've done to survive is science the shit out of it.

And so I am broadly positive, and I'm broadly positive because where we haven't yet figured out how to become the selection pressures for driving emergence, we can really understand how those pressures work because of this new tool.

We can understand how emergence works because of these new LLMs.

We can make them our first tool to understand things like the rise of intelligence from nascent data, the rise of intelligence from monolingualism to bilingualism.

We can toggle all of these factors now in this tool that gives us reporting and error correction like that.

And so we're in this time where we're able to investigate some of these questions in theory of mind that have been philosophically studied by the most important thinkers of every age.

We're actually able to experiment on them and get to iterative error correction on our ideas.

Absolutely.

Absolutely.

And right at the core of it, we're finding that many of the things that we didn't understand at all like synesthesia are becoming functions or emergent behaviors that come directly out of the systems themselves.

So we're starting to see multimodal models that can process text and image and video and audio can really take in the senses that we have.

They can even take sensors from your car, for example, and really be able to interpret that data, understand it, and process in every direction.

We just recently had the launch from OpenAI of their text-to-video generation.

This ability to generate is something completely new.

Really the emergent behaviors underneath, like neural networks themselves, are giving way to something that we never could have imagined ever before.

And really the kicker on this is we're still actually trying to understand how and why these systems do what they do and the way that they do them.

It's a bit like magic.

It is.

And to me, I mean, it's all the further cause for calling it an emergent property of artificial intelligence.

What we used to call artificial intelligence with GenAI is an emergent, is a new emergent entity from the combination of two emergent entities, which are two complex entities, I should say.

Yeah.

Right.

And so, we're seeing neural networks, and the way that they'd always worked heretofore, and language.

Our ability and our desire to construct that video, to construct that picture on Midjourney, to make a textual document and drive that creativity, but for somebody like me who doesn't have a wit of graphic artist ability, doesn't have any capabilities as a videographer and only exists in this realm of human language, one of them, right?

I don't even speak another one.

But to be able to put that into a prompt and come up with video, audio, just these epic pictures that are being created, code, yeah, news stories, just completely being able to create from language.

I mean, it is my understanding of what an emergent entity would look like.

And it's happening in real time.

I'm completely at peace with calling it an emergent entity and calling this podcast emergent.

Well, fantastic.

And one of the most exciting things on those lines, in fact, I think it's mind-boggling, right, is this new solution that is an open-ended AI system.

Really, these systems are designed to never stop learning.

They grow and they continue to adapt forever, really, much like biological evolution.

Really we are creating systems now that function the way that nature does.

It's absolutely fascinating.

Yeah, I'm very interested in, you know, as people plug these things into experiments, like how children learn.

Right.

Yeah.

Let's open this thing up and let's work on how children learn, how ants learn.

Right.

Let's embody this, as they're already doing in the phenomenal work at a place like Boston Dynamics.

Right.

Let's embody this technology with sensors that are better and similar to the ones that we already have.

Right.

And then there's vision, hearing, sight, sounds.

And let's experiment with all of these modes to see what it does to intelligence, what it does to understanding, what it does to cognition, what it does to interpersonal, interagent relations, and what it does to the arising of subjective experiences, consciousness.

Right.

Like all of these things are, I think, discoverable now, decodable, given the fact that this technology has so much capability to just through a prompt help us develop it, help us to expand its capabilities.

Yeah.

Fascinating.

We actually use the words encode and decode in the models, in their attention mechanisms.

They go out, they find the information, and encode it to make it easier to process.

And then we have to decode it and help the AI be able to decide, is it predicting the right thing?

Is it able to understand what the next word should be?

And this decoding process is really what's helped it to understand and learn English.
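
That encode, predict, decode loop can be shown with a toy bigram counter. This is a deliberate simplification standing in for transformer attention, and the tiny corpus is made up, but the round trip of words to ids and back has the same shape:

```python
from collections import defaultdict

# Toy next-word predictor: encode words to ids, count which id follows
# which, predict the most likely next id, and decode it back to a word.
corpus = ("the flock turns together the flock dives together "
          "the shark turns away").split()

vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}  # encode table
decode = {i: w for w, i in vocab.items()}                    # decode table
ids = [vocab[w] for w in corpus]

counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(ids, ids[1:]):
    counts[a][b] += 1  # "training": remember observed continuations

def predict_next(word):
    followers = counts[vocab[word]]
    best_id = max(followers, key=followers.get)  # most frequent next id
    return decode[best_id]                       # decode back to a word
```

Here `predict_next("the")` returns "flock", since "flock" follows "the" more often than "shark" does in this corpus; a real LLM does the same kind of next-token prediction over vastly richer context.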

Right.

It's just fascinating.
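The encode, predict, decode loop described above can be sketched at toy scale. This bigram model is a drastic simplification of real attention mechanisms and transformer decoding, but it shows the same shape: encode words as IDs, learn which token follows which, then decode a prediction for the next word.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Encode: map each word to an integer ID (a stand-in for real tokenization).
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# Learn from context: count which ID follows which (a one-token context window).
follows = defaultdict(Counter)
for prev, nxt in zip(ids, ids[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Decode: pick the most frequent successor ID and map it back to text."""
    inverse = {i: w for w, i in vocab.items()}
    best_id, _ = follows[vocab[word]].most_common(1)[0]
    return inverse[best_id]
```

Here `predict_next("the")` returns `"cat"`, because "cat" follows "the" more often than "mat" does in the toy corpus; real models replace the counting with learned attention over much longer contexts.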

So I think we should shift gears and bring in some of our listeners.

We asked you to share your burning questions about AI and emergence.

And wow, did you deliver?

James asks: if emergent AI is so unpredictable, how do we make sure it's safe and ethical?

Great question, James.

Yeah.

So this is certainly top of everybody's mind as what's called the control problem or the alignment problem.

Yeah, it goes by both.

And it's the idea of aligning to human well-being.

It's the reason OpenAI was originally set up as a nonprofit (it's now a partial nonprofit), and the reason there was some discourse this past summer around the head of OpenAI.

And there continues to be loads of literature written on this by folks like Mustafa Suleyman, who co-founded DeepMind, because this is a concern for folks.

And it's a concern that we relinquish our position as the intellectual hegemon on the planet.

That's gotten us where we are today at the top of the food chain.

That's why we continue to progress and are able to shoot ourselves into the stars.

That's why we're able to overcome these gray orbs and become really kings and queens of our domain.

And so it's very important that we continue to be in conversation with any agent that becomes more intelligent than us and we don't allow them to go without aligning to our well-being.

The scariest thing in my mind is that there are very few technical alignment capabilities out there.

They're only written on paper.

They're very far behind, in my opinion, the progress toward superintelligence that these machines are making every day.

And they certainly just get more and more capable in different venues.

There are plenty of legal, regulatory, corporate, and supply-chain governors on the process.

But those are much, much less capable in the long run, I think.

And certainly, the market is pressuring AI to go fast and is a guiding form of intelligence.

I think there's some interesting counterintuitive claims that I could make about alignment.

One of the things that I wrote in my book, Meaning in the Multiverse, is that I think we should desire conscious machines, that we should promote research toward conscious machines. One fairly good way to align a non-human agent to our well-being is to give it well-being of its own: to have it feel what well-being feels like, rather than just knowing abstractly, intelligently, what well-being is; to know what that feels like first-hand, in the first person.

And so that's one of the maybe more counterintuitive claims that I would make.

But I would factor that into the equation, especially if these technological governors on AI alignment and superintelligence are too distant in the future.

Yeah, I've actually seen some really practical applications of this recently.

So I've been building out a company that is me and agents, just AI.

And as I'm doing that, I'm using Crew AI.

And one of the systems that they have available is the ability to create a manager LLM inside of your crew.

So you have a handful of agents, and I've found it works best to create agents that are responsible for key roles within an organization, the CEO, the CMO, the CTO, and then have them manage the crew that's underneath them.

And I make sure that they have certain things that they care about in their collaboration.

I build culture and strategy and vision right into each of those agents.

And then as they manage the other agents, they care not only about collaborating and ensuring that the expected output is provided, but that it matches up and aligns with each of those things as well as mission statements.

It's really, really a fascinating thing to watch as not only do the agents provide their thoughts and their output, but they actually go through and say, "Hey, let's go ahead and adjust that information to match a little bit closer to who we are and what we do as a company." They're building their own company.

And it's really, really a cool thing.
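The manager-and-crew pattern described here can be sketched in plain Python. This is an illustrative toy, not the actual Crew AI API: a manager agent reviews each worker's output against the values built into it, the way culture and mission statements are described as being built into the manager LLM above.

```python
class WorkerAgent:
    """A worker that produces a draft for a task (a stand-in for an LLM agent)."""
    def __init__(self, role):
        self.role = role

    def produce(self, task):
        return f"{self.role} draft for: {task}"

class ManagerAgent:
    """A manager whose culture, strategy, and vision are baked in as values."""
    def __init__(self, values):
        self.values = values  # e.g. mission-statement keywords output must reflect

    def review(self, output):
        # Approve output that reflects every core value; otherwise flag a revision.
        missing = [v for v in self.values if v not in output]
        if not missing:
            return output
        return output + " [revised to reflect: " + ", ".join(missing) + "]"

crew = [WorkerAgent("CMO"), WorkerAgent("CTO")]
manager = ManagerAgent(values=["customers"])
results = [manager.review(w.produce("launch plan for customers")) for w in crew]
```

The design point is that alignment lives in the managing layer: workers optimize for their task, while the manager checks every output against who the company is and what it does.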

I think all of this is such an important topic.

And when we think about the ethical implications, the potential outcomes, and where some of this emergence may hit that chaotic period, or become too orderly to be useful for some use cases, the things that will really drive success are transparency and collaboration.

And just like we talked about with culture, or when we think about survival of the friendliest, it is really about expanding out so that we make sure that these systems are built for all people and for all the things they want to accomplish, not for the few.

That collaboration is what's really going to drive success.

We need to have diverse voices.

We need to have diverse use cases, diverse thoughts, diverse needs.

Really we need to create a cultural emergence around what AI is and where it's going to go.

We need ethicists.

We need engineers.

We need everyone involved to guide AI development and where it's heading in the future.

Absolutely.

I couldn't agree more.

We have a class taught here at the University of Utah called AI and the Law.

It's really important that all domains start to tackle and wrestle with the profound implications this is going to have across the board.

If you're a new person coming out of school, I actually think it looks great for you.

You're going to be tackling some of the hardest problems.

You will have an interesting, prosperous career if you're trying to regulate how these things are used.

If you are working with government, with industry, with tech, or, like Nick said, as an ethicist, you'll be somebody trying to figure out what this means: how we can make the supply chain robust and how we can avoid some of the worst impacts of our technology on our workforce, on our supply chains, on our diplomacy.

The recent book that's just a tour de force by Yuval Noah Harari talks about how a silicon curtain could be set up between the democratic West and the more tyrannical East, and how those two information networks may never be in communication with each other again, which would be really disastrous for much of the planet and would probably mean constant conflict.

There's something to be done, and something to be said, for all of these domains, and for a diverse set of folks, as you said, to come into the picture and learn their domain from the perspective of this new emergent technology.

When I think about the students going out into the workforce, when I think about the change that's happening to jobs and what we do, when I think about the tasks that each of us complete on a daily basis that are clerical in nature, that are really the things that maybe are not the highest and best use of our time, there's a lot of concern.

I have concerns about what my career looks like going forward, about the team members and people I've worked with in the past, and about where my kids will be in the future.

AI will change everything just like power changed everything.

But when I spoke to the Neumont University students in a commencement speech recently, my final piece of advice to them was that you should consider your brain an uncertainty calculator.

So as we go forward into the future, we really need to be able to welcome change.

You need to be able to look at what's emerging around us and how AI is changing things.

And when habit and instinct are not enough, we need to be able to calculate certainty and uncertainty and find the most probable solutions for whatever outcomes we want to optimize for.

Everything will change around us, but if we work together and if we think about how to guide these systems and what it is that we want as outcomes, we can create things that we never could have imagined before and we can create new things that were just enveloped in uncertainty in the past.
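One way to make the "brain as an uncertainty calculator" image concrete is a single Bayesian belief update. The hypotheses and numbers below are invented purely for illustration; the point is the mechanism of revising probabilities as evidence arrives.

```python
def bayes_update(prior, likelihoods, evidence):
    """Revise beliefs over hypotheses when a new piece of evidence arrives."""
    # Unnormalized posterior: prior belief times how strongly each hypothesis
    # predicted the observed evidence.
    posterior = {h: prior[h] * likelihoods[h][evidence] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two hypotheses about how a job will change, with made-up probabilities.
prior = {"role automated": 0.3, "role augmented": 0.7}
likelihoods = {
    "role automated": {"new AI tool adopted": 0.9},
    "role augmented": {"new AI tool adopted": 0.4},
}
posterior = bayes_update(prior, likelihoods, "new AI tool adopted")
```

Seeing the tool adopted shifts weight toward the "automated" hypothesis, but doesn't flip the belief outright; calculating with uncertainty rather than certainty is exactly the habit being recommended.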

So it's an exciting future.

It may come with lots of challenges and there may be things that none of us will know how to handle or will even see coming.

But again, working together, we will end up turning it all into a much greater positive in the long run.

Yeah, it's not an insoluble problem to be sure.

Right.

And we now have a tool that adds to our capability.

We also better understand what technology can do to our society if we're not reliably conscious of how different selection pressures drive it to be less than desirable.

So the instance here is, again, social media, which has done everything from increasing young women's body dysmorphia to fueling violence and conflict in places like Myanmar, where one group was discriminated against in the most violent ways.

And so we have to be able to see around corners, predict where we are going to have these negative byproducts, and try to limit them. As a society, we have to come together around the best use cases, and as these corporations become the new technological leaders, we really look to them for their leadership, not only in developing the best technology, but the best technology for humanity as a whole.

Absolutely.

Yeah.

And it's really in the long run emergence that's driving all of that change, things that again, we can't necessarily create on our own, but by creating those individual parts, we're creating something new that we never could have expected.

Absolutely.

No.

And what may happen is something like when we went through the period of Enlightenment: the emergence of real scientific-method thinking, not only in science and technology at the time but in art and in politics, was a wellspring of human liberty, generating true principles that drove the whole society and, in a virtuous cycle, drove those scientific discoveries.

We have come into a time where we're going to see that happen again.

The emergence is going to drive not just in this domain, not just in zeros and ones of data science.

It's going to drive emergent behaviors in politics, in what it means to be a nation state, what it means to resource a synergy with your fellow man.

How do we do that with finance? Is it cryptocurrency?

How do we secure the next level of financial insights that we're making?

How do we make them more equitable?

So it'll drive economics, it'll drive jobs and occupational opportunities, it'll drive science and technology as it already has, and it'll drive our further understanding, which is the most exciting thing to me, of the human mind, of what it means to live well, of what it means to be of service, of what it means to be this tiny ant doing your part, and what that means for the whole ant hill.

What it means when I turn left with all y'all, and we make this beautiful murmuration for all to see that just so happens to guide us to the right place where all the bugs are at.

Right.

Absolutely.

And if you're as intrigued as we are about solving complex problems, about driving forward the future of AI, hit that subscribe button and join us for the next episode, where we'll dive into how humans and AI can collaborate to create something greater than the sum of their parts.


About the Podcast

The Emergent AI
From Simple Rules to Complex Intelligence
Welcome to The Emergent, the podcast where two seasoned AI executives unravel the complexities of Artificial Intelligence as a transformative force reshaping our world. Each episode bridges the gap between cutting-edge AI advancements, human adaptability, and the philosophical frameworks that drive them.

Join us for high-level insights, thought-provoking readings, and stories of collaboration between humans and AI. Whether you’re an industry leader, educator, or curious thinker, The Emergent is your guide to understanding and thriving in an AI-powered world.

About your hosts

Justin Harnish

Justin A. Harnish is a multifaceted professional whose career spans engineering, data science, authorship, and humanitarian efforts. With a foundation in chemical engineering, Justin has contributed significantly to semiconductor research and development, holding patents and publications.

At Mastercard Open Banking, Justin plays a pivotal role in leading the development of machine learning and artificial intelligence products for fraud reduction, payments, and GenAI. His expertise encompasses strategic planning, AI product management, data analysis, and IP protection.

Beyond his corporate achievements, Justin is an author and speaker. His book, “Meaning in the Multiverse,” delves into the intersections of philosophy and physics, exploring universal meaning. He also offers talks on the meaningfulness of human existence, characterized by novel insights and a compassionate approach.  

Dedicated to community development, Justin has been recognized by the United Nations High Commissioner for Refugees for his service to refugee communities. He is actively involved with Women of the World, a nonprofit organization in Salt Lake City that he founded alongside his wife, that empowers forcibly displaced women to achieve self-reliance and economic success.

Justin’s personal philosophy centers on leveraging science, mindfulness, and storytelling to create desired futures. As a lifelong learner, he is passionate about fostering communities and enhancing learning. 

For collaborations or inquiries, please reach out via email.

Nick Baguley
