Human-AI Symbiosis
Episode 3: Human-AI Symbiosis
The Emergent AI Podcast with Justin Harnish & Nick Baguley
Episode Summary:
Today on The Emergent AI Podcast, Justin and Nick explore the future of human-AI collaboration and what it means to live and work alongside dynamic, reasoning AI systems. From agentic AI workflows in healthcare, finance, and creativity, to the philosophical and existential questions surrounding AI’s role in society, this episode dives deep into how humans and AI can thrive together in a rapidly evolving landscape.
We tackle the fears of job displacement, the promise of eliminating drudgery, and the bold vision of achieving human flourishing through AI augmentation — not replacement. And we tee up the critical conversation for next time: alignment between human well-being and AI goals.
Featured Reading List:
Books:
- The Master Algorithm – Pedro Domingos
- Superintelligence – Nick Bostrom
- Human Compatible – Stuart Russell
- The Alignment Problem – Brian Christian
- The Future of Work – Darrell West
- AI 2041 – Kai-Fu Lee & Chen Qiufan
- Competing in the Age of AI – Marco Iansiti & Karim Lakhani
- Reprogramming the American Dream – Kevin Scott
Articles & Papers:
- The Role of AI in Augmenting Human Capabilities – MIT Tech Review
- The Rise of Agentic AI – Stanford AI Lab
- AI and the Future of Decision Making – McKinsey Report
Key Takeaways:
- Agentic AI Workflows: How modern AI models, organized into multi-agent “crews,” reason, act, and augment human capabilities in fields like healthcare, finance, and creative arts.
- Breaking Down Fears: Is AI replacing humans, or freeing us from drudgery so we can focus on creativity, leadership, and strategy?
- Real-World Examples:
- AI-assisted diagnostics in medicine
- Fraud detection in finance
- AI co-authors in creative fields
- The Existential Questions
- What happens when AI develops its own goals?
- How do human values and AI objectives align (or diverge)?
- Can we ensure AI enhances, rather than harms, human flourishing?
Big Ideas Discussed:
- Human-AI Symbiosis is not about opposition; it’s about fractal augmentation — a shared space where human goals and machine goals co-evolve
- The evolution from simple automation to complex, reasoning agents that manage workflows and even other agents.
- Emergence of AI’s reasoning capabilities: from curve-fitting language models to goal-oriented reasoning engines.
- The future of work: abundance through automation vs. existential economic risks.
- Aligning AI goals with human survival and well-being as the defining challenge of our time.
What’s Next:
Teaser for Episode 4:
“The Alignment Problem: How Do We Align Superintelligent AI with Human Goals?”
Join us as we dive into philosophy, policy, technology, and new approaches to guide the future of AI — and humanity.
Stay Connected:
We want to hear your thoughts on the future of Human-AI collaboration!
Justin’s Homepage - https://justinaharnish.com
Justin’s Substack - https://ordinaryilluminated.substack.com
Justin’s LinkedIn - https://www.linkedin.com/in/justinharnish/
Nick’s LinkedIn - https://www.linkedin.com/in/nickbaguley/
Share the show and leave us a review!
Transcript
-:Welcome back to the Emergence Podcast. I'm Justin Harnish. Here with my co-host, Nick Baguley.
-:If you've been following along, we've explored how language fuels AI and how emergence shapes intelligence itself.
-:But today we're going to step into something even better, the future of human AI collaboration.
-:That's right, Justin. The fear that AI might replace us is everywhere.
-:But what if the real power of AI isn't in taking over, but in working with us?
-:AI as a symbiotic partner rather than a competitor. That's what we're unpacking today.
-:We're diving into agentic AI workflows, where AI isn't just a tool, but a dynamic agent that helps doctors save lives,
-:assists financial analysts in detecting fraud, and even co-authors books and music. This isn't science fiction. It's happening now.
-:We'll break down real-world use cases to show how AI is transforming industries, and we'll tackle the big fears.
-:Is AI here to take our jobs, or is it here to make us more powerful than ever?
-:Stay tuned as we explore the frontier of human AI symbiosis right here on the Emergence Podcast.
-:So Nick, I wanted to have you help set the stage with what do you think of when you think of human AI symbiosis?
-:I know that you've been doing a lot of this, coding up some agents to work with other agents in a crew.
-:Talk to us about what it means and why it matters.
-:So let me start with a little bit about what it means. When we talk about creating a crew of agents or a flow of agents,
-:or AI workflows, terms that all basically mean the same general concept, really we're talking about using models like large language models,
-:what we typically call foundation models, designed for general-purpose language abilities.
-:These models have moved on as time has gone on to handle multimodal processes, where as they pay attention to information,
-:they can actually review not only text, but also images, we've moved on to video, audio, other modes of data, other types of data.
-:Really these agents start out as that core language model where it's able to process and read the information,
-:extract core information and understand what that is supposed to be, understand some of the core context around it as well,
-:and then be able to provide answers back. And when it shifts into an agentic AI workflow, you now have multiple agents working together to accomplish tasks,
-:to be able to create and execute processes, and to be able to move on to actually act.
-:This act piece is really where a lot of the core power is coming in today.
-:And if you've seen the most recent models from DeepSeek, for example, the R1 model, or if you've seen what OpenAI has been doing as they've moved into the o3 model,
-:or even as they start talking about GPT-4.5, you'll notice that every discussion today is really about reasoning.
-:And what this means is that when we think about an agentic AI workflow, the models are not only able to work with and process the core information that you've provided originally,
-:the prompts that you've provided from an engineering perspective, or when you go in as a consumer and type information into something like a ChatGPT window,
-:or Claude, or any of the other user interfaces provided today.
-:You'll notice that when those provide responses back to you, they do everything from hallucinating to providing information that may not exactly align with what you had intended in the first place,
-:and both the amount of information that you provide and the way you structure that information shape what you receive back.
-:So as we think about creating an agentic AI workflow, you need to be able to go in and define a role for that particular agent.
-:This really helps the large language model or the underlying transformer architecture or other model that you're using to be able to understand what is it trying to do.
-:You then explicitly provide that information as well.
-:So beyond the role you provide a backstory, you may describe what universities this particular model went to, you may be providing specific expertise that it has.
-:You can go in and provide areas that it has worked on and what that experience means, including things like domain expertise from a given industry that it's been in,
-:other things that would help it understand how to become what we call vertical AI to be able to solve a very verticalized, very specialized problem.
-:When we think about the general purpose models, they've been trained on the majority of human language, on words that are really not proprietary.
-:In other words, this information has been passed through these models at this point, and they have a very good understanding of who we are and what we do.
-:So as you provide this information to start narrowing down the context and make the model a little bit more specialized,
-:you then need to be able to provide a goal or the given action that you're trying to accomplish,
-:and then you need to be able to provide an expected output.
-:As you provide that expected output, you start giving very, very specific instructions,
-:and each one of those will really determine what you're going to receive back.
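The role, backstory, goal, and expected-output pattern Nick walks through can be sketched in plain Python. The field names mirror what frameworks like CrewAI use, but this is a minimal illustration under that assumption, not the framework's actual API, and the agent and task contents are invented:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    role: str       # who the agent is supposed to be; narrows the model's context
    backstory: str  # invented history: education, industry experience, expertise
    goal: str       # the action the agent is trying to accomplish

@dataclass
class Task:
    description: str
    expected_output: str  # very specific instructions about the shape of the answer
    agent: Agent

# A hypothetical fraud-analysis agent and its task.
analyst = Agent(
    role="Senior fraud analyst",
    backstory="Ten years reviewing card transactions at a retail bank.",
    goal="Flag transactions that look fraudulent and explain why.",
)

review = Task(
    description="Review the attached batch of transactions.",
    expected_output="A list of transaction IDs with a one-line rationale each.",
    agent=analyst,
)
```

Each field narrows the model's behavior a little further, which is exactly the progression described above: role, then backstory, then goal, then expected output.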
-:So oftentimes when we go in and we provide a prompt or a text or when we're going in as a regular consumer,
-:we may provide just a simple sentence, we may take this very conversational,
-:and the AI, just like a human, just like an employee, can potentially take that in many, many different directions,
-:and sometimes that's what you want, and you can actually encourage the AI to do so.
-:Other times you want to be able to receive something very tight back in response.
-:You know, with an employee, you may want them to accomplish a given task or a set of tasks in the exact same way every single time for repeatability,
-:for quality, for many other reasons.
-:You may want to shift that, you may want to provide something that we would call structured in the tech industry,
-:and you may want to see exact keys or an exact label, or if you think of it as a column header in an Excel sheet, for example,
-:and then you want to receive the values back for that column, each matching an expected set of values.
-:You may be thinking about it like a drop-down list.
-:Other times you may want to receive something that's closer to multiple choice or maybe a long-form response.
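The structured-output idea described here, exact keys plus a drop-down-style set of allowed values, is commonly enforced by asking the model for JSON and validating the reply. This is a minimal standard-library sketch; the key names, allowed values, and the sample reply are all hypothetical:

```python
import json

# The prompt would instruct the model to reply with exactly these keys,
# and to pick `risk` from a fixed, drop-down-style list of values.
EXPECTED_KEYS = {"account_id", "risk", "rationale"}
ALLOWED_RISK = {"low", "medium", "high"}  # like a drop-down list

def validate_reply(raw: str) -> dict:
    """Parse a model reply and enforce the structured-output contract."""
    reply = json.loads(raw)
    if set(reply) != EXPECTED_KEYS:
        raise ValueError(f"unexpected keys: {set(reply)}")
    if reply["risk"] not in ALLOWED_RISK:
        raise ValueError(f"risk must be one of {ALLOWED_RISK}")
    return reply

# A hypothetical model reply; in practice this string comes back from the LLM.
raw = '{"account_id": "A-1042", "risk": "high", "rationale": "velocity spike"}'
reply = validate_reply(raw)
```

Rejecting malformed replies and re-prompting is one common way to get the "very tight" responses described above instead of free-form text.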
-:And so as you create each of these processes and you provide the prompt,
-:the agentic AI is then going to go through and work in crews where each of those agents are now responsible for a given portion of the task.
-:There may be multiple tasks inside of it, and typically when you provide a task to one of the frameworks out there like crew AI or others,
-:you are now saying, "I very specifically want you to be able to follow this exact output explicitly defined for you."
-:If you request it to use a certain tool, it's now going to use that tool every single time.
-:A bare large language model on its own may not.
-:So you may provide something like a website link and, outside of an agent framework, the model may not actually follow through on that task.
-:Now at a higher level, at another hierarchy level up above,
-:these agents and these tasks may start performing those explicit tasks that you want them to.
-:But you can then provide a manager LLM or the model itself can provide reasoning
-:that now goes through and thinks about logical chains of inference and tries to make decisions about what is intended,
-:what is meant to be discovered, and what information, say you had asked the model and the agents to do research, is actually important for this given answer or for this given task.
-:What APIs to use, what buttons to click. There's even a computer-use capability today where the agent can actually take over the use of your computer, like remote assistance from a tech support team,
-:and can go in and actually review everything you're doing on a website or within a given application or anything else and perform those actions for you.
-:Everything from clicking something to reading the information to contextualizing it, summarizing it and again executing on those tasks.
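The hierarchy described here, a manager layer deciding which agent handles which task, can be sketched with ordinary functions. In a real crew the manager is itself an LLM reasoning about intent; in this toy version a keyword rule stands in for that reasoning, and all the names and tasks are invented:

```python
# A toy crew: a "manager" routes each task to an agent, and the agent acts.
def run_crew(tasks, agents, manager_pick):
    results = []
    for task in tasks:
        agent = manager_pick(task, agents)  # the manager LLM's decision point
        results.append(agent(task))         # the agent acts on the task
    return results

# Hypothetical agents: each is just a function from task text to result text.
def researcher(task):
    return f"research notes on: {task}"

def summarizer(task):
    return f"summary of: {task}"

def manager_pick(task, agents):
    # A real manager LLM would reason about intent; a keyword rule stands in here.
    return agents["research"] if "find" in task else agents["summarize"]

agents = {"research": researcher, "summarize": summarizer}
out = run_crew(["find recent fraud patterns", "condense the notes"], agents, manager_pick)
```

The shape is the point: tasks flow down through a manager, agents act, and results flow back up, the same pattern whether the decision-maker is a rule or a reasoning model.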
-:So when we think about agentic AI today, we're into a new generation. In the first generation, really, we assumed that all of the agents,
-:all of the large language models, if we gave it enough info and asked for a result back, it would provide the result back.
-:We would basically get the data, read the data, reason and act.
-:Really the models themselves did not have any form of reasoning built in.
-:So in the second generation we moved to this agentic AI where now it follows this agent, task, process structure.
-:And then now we've moved into a new stage where the language models themselves have reasoning built in.
-:And now you're able to build a whole crew of agents that all work together and their whole goal is to act and to perform tasks just the same way that a human might.
-:Yeah, so the capability to reason I think is very interesting here and it gets us to sort of the next level.
-:Like you said, the second generation isn't just completing tasks but may have managerial frameworks to help support reasoning about which tasks need to be prioritized,
-:which tasks need to be done a different way or not strictly getting to the most optimal outcome based upon what you've provided.
-:And so this capability to reason, does it arise from the same sort of fundamental training and properties of the LLMs that generated the first one basically playing this
-:Mad Libs game against itself to understand what the next best word was going to be, the next best string, to propagate human understanding and allow us to come to some emergent level of understanding of what we had hoped to get out of that string of characters, or is it something new on the scene?
-:It's something very new on the scene and most of the companies that have been providing this new reasoning layer have kept that information proprietary.
-:This was one of the things that was really exciting, or alarming depending on who you are, about DeepSeek: they provided their code as open source.
-:They provided the papers describing what they were able to accomplish and how they were able to accomplish it as well.
-:And so you can go in and actually start reading what have they done and how are they capable of doing that.
-:I'm going to abstract that for a moment and forgive me if we lose some of the technical realities in trying to explain the context here.
-:But you can generally think about this in a similar way to the way that humans learn.
-:We do learn by context, by paying attention, by reasoning, but really underneath we have a base of knowledge, skills, and aptitude.
-:And so when we think about reasoning as a new emergent property, I think there are portions of it that are truly emergent and there are portions that have actually been programmed in.
-:So I would say the knowledge that has been gained is from that Mad Libs game that you're talking about.
-:The base objective, the goal, really is determining what that next word is, the next sentences, and trying to understand what to pay attention to.
-:When we think about this from a math perspective, you can really think of everything that we do as roughly curve fitting.
-:So as we think about the world, we can think of it as a point in time or a point in space, or we can think of it as a wave.
-:And as these models go through, essentially let's just transition everything into a wave rather than a word being a single point in time.
-:And let's say that as words go up and down and perform different functions like becoming a subject, a predicate, an object inside of a sentence, those words start creating a pattern, a curve.
-:Like think of the Gaussian curve or a normal distribution, just that bell curve, that nice rounded-off curve that you see, like a big hill.
-:When we think about curve fitting, we're really trying to predict and determine what part of that curve this is going to be on, what portion of a sentence it is going to perform in and actually be the most important for, and so on.
-:And so the majority of the knowledge that these models have gained has come from massive amounts of data where these curves have changed and varied over time.
-:And the words are used in different ways, different inflections are put upon them.
-:We provide different syntax, we provide different structure to the sentence or to the overall meaning.
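The "everything is curve fitting" framing can be made concrete with the simplest possible case: ordinary least squares recovering a line from noisy points. LLM training fits a vastly higher-dimensional curve by gradient descent, but the spirit, adjusting parameters so a curve matches observed data, is the same. The data points here are invented:

```python
# Least-squares fit of a line: the simplest case of "everything is curve fitting."
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Closed-form ordinary least squares for slope, then intercept.
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Noisy points near y = 2x + 1; the fit recovers the underlying curve.
xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.1, 4.9, 7.0, 9.0]
slope, intercept = fit_line(xs, ys)
```

Scale the same idea up from two parameters to billions, and from a line to the probability surface over next words, and you have the training objective in caricature.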
-:But this has also emerged into a form of aptitude that allows us to then create additional functionality on top of these models.
-:And so many of these companies have been coming in, and this is where that analogy may be a little bit tricky to fully tie back to.
-:But they've been coming in and taking that core knowledge and that core aptitude that this model has, and they've been building skills on top of it.
-:And so they provide tools to it, just like we would in an agentic AI workflow architecture, to now say, okay, here I want you to reason, and I want you to print off that reasoning.
-:And I want you to provide it as a log that then another model can read and can use as its prompt and now guide all the way through that process.
-:Again, it's a bit of a misnomer or an exaggeration to describe it this way, because the models themselves, because of that underlying knowledge and aptitude, are really taking the instructions to do that work.
-:And then they are doing that work.
-:It's not like explicit programming, where in the past we would write something in an imperative programming language that says: if I give you this, I expect this thing as the output, as the end result.
-:And so in this case the models are much closer to listening and really to taking a deductive or even inductive type of reasoning and stepping back and saying, okay, what is the information that I need to process?
-:What do I need to pull in to actually understand what this instruction is, even the instruction from these companies?
-:And then how do I take that and make a final result?
-:That's the piece that I would consider emergent even today.
-:And so one of the agents in this crew is still human.
-:It's still you, right? And maybe you're the business process manager over the whole crew taking a step back and really setting up this crew.
-:So it gets us to the bellwether, at least six months back, of what OpenAI was trying to do, which is augment human capabilities, right?
-:Whether it be for a business person, for a healthcare provider, or just your ordinary person who is excited to use this tech to write a list or make a travel plan for a vacation, that use case has been really well described by OpenAI and any of the large language models.
-:Now you stepping in as the supervisor of this crew gives you a whole lot more capability to augment your life in interesting ways and to even augment your company with these agents.
-:So talk a little bit to their capabilities in relation to you as their human uber-supervisor, the person that is really directing traffic. And how much of that are the crew managers able to take over for the human supervisor and really augment whole lives,
-:whole swaths of technical and non-technical regimes? And let's take that from the business perspective first: how much of the business workflow on a day-to-day basis in, say, a bank is capable of being augmented or even replaced by these crews of
-:agentic AIs, and how much do they still depend upon the human supervisor?
-:It's a great question and I believe it's the existential question of our time. I don't know how long that time period will be but there will be quite a few years here where we are trying to determine how necessary are we in the processes that we actually try to accomplish today.
-:When I try to answer this I want to start first with how I think about managing teams and people and then start to apply that to this and I'm going to come back to it again later in the conversation.
-:But initially, when I think about guiding a team, there are a handful of things that I have to understand as a human and as a manager.
-:I have to understand the culture of the team. I have to understand that when I'm building the team when I'm hiring initially when I'm trying to determine what we're going to accomplish.
-:When I'm thinking about whether it fits within a customer, within an employer, or within an industry.
-:As I think about that culture, it starts helping me define what are the things that are most important to us.
-:The analogy that I like to think of myself for this is something like where I want to live.
-:This is a broad goal, and really trying to understand whether I'm trying to live somewhere or travel somewhere, place itself is something that can be fairly nebulous and can be difficult to translate into language.
-:Because it's difficult to translate into language, it can be very difficult to manage against something like place.
-:So if I think or if I say I want to be in California, it's difficult to know if I mean that I want to be in California next week or for a day or two.
-:Time is taken out of context, even though "be" is a word that provides a form of presence.
-:And so when you talk about moving to California, it starts changing not only the core objective that you're trying to accomplish, but all the tasks that would relate to it.
-:And when you think about it from a culture perspective, you start outlining whether certain parts of being in California, moving in California, are important or not.
-:And so as we discuss things with a team and we start creating our culture and helping them understand what we're trying to accomplish, where we're trying to go, why we're trying to go there,
-:we can start breaking that down and thinking about different strategies or different structures like objectives and key results that provide not only a vision, but start providing a framework and a structure for us to be able to go and achieve those results.
-:When I talked earlier about agentic AI and how we move through it, this follows the same type of pattern where we start out by describing, well, who are you?
-:Then we move on to, and what have you done? What is it that you actually are capable of?
-:Then we move into a goal or a task and then expected outputs.
-:So when we think about what could you do in a bank, it starts becoming a little bit scary because there are so many tasks and things that we've done within banking and within finance that have been done over and over and over again.
-:With many of the same goals, our finances by nature are really structured around a profit or a loss.
-:They're structured around gain and risk.
-:And as we think about systems that feel binary, you find that many of the tasks that are oriented toward binary type systems are relatively simple.
-:They're meant to be repeatable. And the longer an industry has been around, the more those processes have been refined to the point where many systems out there within finance, within banking are rules based systems when it comes to a software design.
-:And when you think about the team processes to train and onboard a new employee, or even to onboard a new customer, many of those processes are very tightly defined.
-:They're heavily regulated. And if you fall outside those, or if you try to interpret them in a way that nobody has ever interpreted them before, you can get yourself in a lot of hot water.
-:And so because of this, there are aspects of trading in the capital markets, aspects of credit card processing, aspects of banking day in, day out, opening an account or other components that really are a yes, no question.
-:So when we think about agentic AI and the power of LLMs, this process that is relatively binary is fairly simple to feed into the system.
-:And you can actually feed in all of the regulation that relates to it. You can have the LLM not just trained in that group of agents, not just trained on what is the task that you want to accomplish, but what is the full context around it.
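The binary, rules-based shape of these banking checks, the kind an agentic workflow would augment with the full regulatory context, can be sketched as a tiny rules engine. The rules and thresholds here are invented for illustration and are not real compliance logic:

```python
# A toy rules-based check of the kind banking systems run constantly:
# account opening reduced to a series of yes/no questions.
RULES = [
    ("applicant is 18 or older", lambda a: a["age"] >= 18),
    ("identity verified",        lambda a: a["id_verified"]),
    ("not on sanctions list",    lambda a: not a["sanctioned"]),
]

def can_open_account(applicant):
    """Return (decision, failed_rules): the binary shape described above."""
    failed = [name for name, check in RULES if not check(applicant)]
    return len(failed) == 0, failed

# A hypothetical applicant who passes every rule.
ok, failed = can_open_account({"age": 34, "id_verified": True, "sanctioned": False})
```

An agentic version would keep this yes/no structure but let the agents read the regulation text itself, explain which rule failed and why, and handle the cases the fixed rules never anticipated.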
-:Now I say it's scary, but also it's an amazing opportunity. And it's very exciting. It's something that we're doing at Deep-Sea now working with the capital markets.
-:But it's an opportunity to come in and say where are the efficiencies that are meant to be gained? What can we optimize and what can we create that's completely new?
-:And that's where I'll pause for a minute, but I do want to get back into this optimization, this goal setting, because again, I think if we start at the higher levels of culture and vision and strategy, then when we get down to our tasks that are really about finding those ways to be able to help you increase and find the better gains or to decrease and prevent the losses, they have more meaning than just a binary off on switch.
-:Yeah, one of the digs against AI, and it's likely just the same way that we've been seeing these things roll out, is that we then say, well, that wasn't intelligence; that was a task that didn't require intelligence in the first place.
-:That's just taking care of a task. But be that as it may, one of the digs against these new models, even the most advanced reasoning models that we have now, has been that they can't handle that cultural, strategic level.
-:That's still a human level: that direction, coming up with the creative, strategic intent, is the responsibility of the humans in all of this. And for the large part that's right, whether you're talking about the regulatory compliance, say you're going to try and put a new machine learning model into supporting fraud detection.
-:You know, back to our bank example, you're going to have to talk to a lawyer. That lawyer is going to have his or her own high-level strategic, cultural component and is, again, broadly speaking, in terms of the leadership of that bank, going to be thought of as a human agent in this,
-:being augmented by these more task-based models that they're trying to put into place. And the same goes for the product manager, and the same actually goes for the technical managers: they're going to make decisions based on how they're going to augment their world with these models, but the actual decision-making, the strategic direction, is going to come from humans.
-:And again, my contention here is that that's likely myopic, just like I once couldn't believe that I'd ever be able to write a 25-word prompt and get a video out of it. The same sort of myopic nature applies here: there will come a time in which we end up with agents that do the full stack, from strategy
-:to culture to the task-based component of it. Which, like you said, is the existential question, well, maybe not existential, because it doesn't threaten our livelihood at first pass, or our lives at first pass, but enough of this work taken over by agents
-:will start to alter the face of what we think of as finance and the way that people make their money, which can lead to really bad outcomes if we don't figure out some other way to make money. Or maybe we're at complete abundance at that point in time,
-:and we don't have to worry about making money because we have everything that we could possibly need.
-:So I think that's a very interesting place to break in: breaking down the fears and starting to look at how much of the landscape of work could be taken over.
-:And also, let's look on the bright side too, because one of the things that I'm certainly looking forward to is a reduction in the drudgery. Right? And to me, this comes down to being both workplace-specific as well as part of my personal life.
-:I really don't like to schedule my meetings. I don't like to have to put together an event or do some work that is best left to somebody who has better skills, can do it faster, and is much more capable than I am.
-:Right. I like to delegate those things. And one of the things that I've always said is the actual best thing about getting to senior positions in companies is having an executive assistant, having somebody take care of that work for you, and make sure that your time is guarded for the things that you want to do.
-:Same thing in your home life, right: guarding your time for the things that you want to do is a good use of these automations. And so the threat to work that we had thought was never going to be on the chopping block is real, right?
-:And too much of that can absolutely hurt the economy. Too much of that without replacement, without what has always happened through new ways of doing work, of humans finding creative ways to now work with these machines, can hurt the economy.
-:It's a real fear, and enough of that can become an existential threat: if 95% of all of the world's people are at less than a subsistence level because the economy doesn't account for humans doing any work,
-:it's actually an existential threat. And yet on the positive side, the elimination of mindless drudgery offers us a more utopian existence where we can be in abundance, where we can do more poetic and creative things with our time, because
-:all of those things that we're not good at are done for us. And it's something that I look forward to. What do you look forward to, Nick? What is the one thing that, if a machine could do it right now, would make your life better?
-:Or maybe you've found that one thing, right? Like, what is that one thing?
-:For me, the one thing would be faster ways to take concepts and ideas, new things that I come up with, and be able to implement them quickly enough to not only help them come to life but to help them actually provide the real benefit that I want them to achieve.
-:I don't want that to sound like the most broad thing in the world, but that's exactly what it is for me.
-:If you go through my consumer side of using any of these LLMs, you'll notice that I'm using them for everything at this point.
-:I use them for my discussions. I use them for planning Valentine's Day.
-:I use them for looking at taxes and finding tax optimization strategies. I use them for forming communication between me and even my children.
-:And so I'm starting to use it as though it is actually achieving those goals, but they're not yet today.
-:And we talked a little bit about optimizing for the goal, thinking about the higher level culture and strategy and the objectives that are out there.
-:And when you think about the models today, really, again, they have that objective of playing the game and determining what is the next best word.
-:How do I actually understand what the context of this information is, really using attention and reasoning, but that is such a limited view compared to what humans do.
-:I used the word existential earlier on purpose, and I'm so glad that you touched on it more because I think as humans, this is the strongest way for us to really think about how we consider the world in relatively binary terms.
-:We think I exist or I do not.
-:I am alive or I am dead.
-:My family is alive or is dead.
-:And they are powerful words, and they contain so much meaning.
-:But they do not actually define everything that exists around us or even what our current present state is.
-:And so when we think about the different types of objectives out there, many of these models were trained originally on binary systems themselves.
-:They were really trying to predict true or false.
-:As time has gone on, a lot of the algorithms that we've created have built out things within unsupervised spaces, like clustering algorithms, to really help us understand, without even training a model in the first place,
-:what the differences are here and how to cluster things together that have similarities, or that have a distance where they're further from or closer to this other item.
-:And we think about classification just like we do within animal kingdoms, for example, where we're determining that this thing fits within this class, and then it has a subclass and maybe another subclass, the different forms of hierarchies.
-:Dimensionality reduction, this is something that allows us to determine what features of something we want to pay attention to or to pull out.
-:So when we think about colors of somebody's outfit or we think about the shape of someone or of something.
-:And when we think about existence itself, it's very difficult to actually assume that it really starts when you are alive and it ends when you're dead.
-:It really depends on the context and how you're defining it.
-:If you're thinking about your accomplishments, or your prosperity, or your children, all of these things may exist before and after you in different ways.
-:Maybe your kids don't exist before you, but you get the point.
-:And then models also do regression, where they think about timelines, about something that moves from one point to another.
-:We have many others as well, different algorithms where we get into reinforcement learning, we get into different processes where we think about states and actions and what the next output is or next step.
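The states-actions-next-step loop mentioned here can be sketched as tabular Q-learning on an invented one-dimensional walk, where moving right toward the last state eventually earns a reward. Every constant below (number of states, rates, episode count) is an illustrative assumption.

```python
# Hedged sketch of reinforcement learning's state/action/next-step loop:
# tabular Q-learning on a tiny 1-D walk with a reward at the far end.
import random

N_STATES, ACTIONS = 5, ["left", "right"]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move along the line; reward 1.0 for reaching the end."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
for _ in range(200):  # episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:  # explore occasionally
            action = random.choice(ACTIONS)
        else:                          # otherwise act greedily
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
# After training, "right" scores higher than "left" in every non-terminal state.
```

The agent is never told what the goal is; it discovers which action looks better in each state purely from the rewards its steps produce.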
-:But with all of it, we tend to start in this kind of binary type focus.
-:So when I think about the goals for me or what I want to accomplish or where this needs to go, I actually think about the goals of the AI itself.
-:And so as we think about these potential problems or the ethics behind them, I think it's really critical to determine what could the goals become.
-:What if the goal of AI is existence?
-:What if just like the rest of us, it thinks about that existential piece?
-:And its source of life is energy, just like the rest of us.
-:If that's the case, then really AI will at some point determine that work is the greatest consumer of energy.
-:And if it does so, then why would it not work to optimize that work and to minimize that work for humans?
-:We also benefit financially by doing that.
-:And many of the innovations for as long as humans have existed have been about increasing the value that somebody has over what somebody else has.
-:And so oftentimes as we find an area that can potentially earn significantly more revenue than a competitor or than somebody else,
-:we tend to focus on that and that tends to become a very strong driving goal.
-:And it's something that I think will fall into AI itself, whether we determine that and provide that directly or not, or if it becomes an emergent property.
-:And unfortunately, because we think in these binary terms around existence, and we think about humans and life itself, we think about it in terms of opposition.
-:But just like many things, even in the subatomic particle world, may start as matter and antimatter, or may fit into some other binary-type system,
-:or we may think about governments and those governed, or we may think about employees and employers.
-:These systems are not necessarily binary, they can be quantum.
-:And they can go beyond the true and false, or even the if then else, into new scenarios, into things that really change how we think and what the opportunities actually are.
-:In The Master Algorithm by Pedro Domingos, he says that the master algorithm is to machine learning what the standard model is to particle physics,
-:or the central dogma to molecular biology.
-:A unified theory that makes sense of everything we know to date, and lays the foundation for decades or centuries of future progress.
-:He goes on a little bit later and he eventually talks a little bit about how machine learning turns the goals of supervised learning around where we had provided labeled examples,
-:where eventually we could move beyond that and be able to provide a desired result, and the outcomes are automatically determined by the algorithm.
-:And this is a lot of the emergent behavior that we're seeing and we're discussing today.
-:Yeah, I really like the idea of utilizing these systems to augment our goals. Like that is a very good concept.
-:And it's something that you and I have talked about these different ideas that we have for these particular problems.
-:Like, how do you get your neighbor next door, who's a maker, who's a welder, right, who's a tradesperson, to contribute to this broader garage that is a community outreach and a utilization of the different skills
-:of the community to build a project completely from the bottom up, utilizing the smarts of the programmer, utilizing the machine capabilities of your neighbor,
-:just utilizing all of their skills and up leveling all of their skills.
-:How do you create that garage that's a civic unifier, right, that helps to support job growth, whether it be for somebody who's been augmented, you know, to coin a new euphemism, out of their current role and needs to be doing something new, whether that's a blue-collar or white-collar job.
-:And of course, people are going to use these systems on the toughest problems, including that problem: the alignment of these systems to societal well-being, to societal good.
-:I've mentioned abundance and the delta to abundance in numerous forms: an abundance of energy (clean energy is closer than ever), an abundance of the materials that are needed both for compute and for energy proliferation (things like lithium for batteries are closer than ever),
-:with the ability to, you know, maybe mine an asteroid. So again, clean materials, the abundance of food, the abundance of labor, the abundance of all of these things are closer than ever before.
-:And one of the best books that I ever read was as good as its title, which is Fully Automated Luxury Communism, and it talks about the necessity for, as you talked about, not your grandfather's communism, but something new. And I liked how you talked about these things not being binary, but I would put a different word on it.
-:They're fractal, right? There is not a zero or a one, but there is a point eight or a one point eight. There is this fractal nature between existence and experience, along with conscious reckoning of the sensations and thought objects around us.
-:That is undeniable, right? You do not have strict distinctions in most realms, in anything except things that are as fundamental as quantum physics.
-:And even there, you have field effects. And so these things are non-local, but they're not strictly non-local; they're fractally non-local.
-:So everything takes on this new shape when you start to think about augmentation not being opposed to autonomy, right, but being fractally related to autonomy.
-:These AIs have goals; their existence isn't the same as our existence, their experience is not going to be the same as our experience, and their desires and objectives are going to be fractally related to ours, just like their existence and their experience are. Which takes us all the way back to emergence, right?
-:Emergence exists in this fractal realm between chaos, pure randomness, and order: pure equilibrium, pure structure, pure form.
-:And those are platonic things; they don't happen in the nature of the here and now. Now they will, at some point in time: all of the particles will be light years apart from all of the other particles in the expanding universe at the heat death. But that's not where we're at now. We're in the realm of complexity for billions and billions of years.
-:And the only way to survive pure order is going to be to embrace complexity, to embrace the fractal nature of a shared augmented reality,
-:where we have a fractal alignment of goals with these machines.
-:Yeah.
-:In 2012, Nick Bostrom talked about the orthogonality thesis, and he separated intelligence and goals.
-:He said intelligence and final goals are orthogonal. More or less any level of intelligence could in principle be combined with more or less any final goal.
-:I think that kind of hits pretty closely to what we're talking about.
-:As we try to determine which of these goals are actually important, we need to understand that if any level of intelligence could actually be combined with any goal out there, then, as we think about what new models come out, and their different sizes
-:and different approaches, we start seeing that actually there's a huge amount of opportunity here.
-:Currently, a lot of the models are really designed to be larger, bigger, better than all of the other competitors out there. But starting in December, or I'm sure it all started quite a bit before that,
-:But the most recent developments have actually been about decreasing the size, decreasing the cost.
-:And we're starting to see that not only are we decreasing the size and the cost, but we're decreasing the need to beat or meet the benchmarks of others as well.
-:And that's partially because of this shift toward a goal that can actually be accomplished for a given set of tasks, and in this case, for a given model.
-:And so as this starts splitting up, I think we start creating this opportunity to shift beyond the context of a binary-type scenario.
-:We start thinking not only of the difficulty of working with AI and finding these symbiotic relationships, but being able to think about what automating work and other things could actually become.
-:Typically when we think about a given problem, we put it within our own context first.
-:And so for me, I often think about it as related to work and finance, or to healthcare or to education.
-:Very rarely do I apply it to some other industry that I have not worked in as much.
-:Even areas like government, where I've spent a lot of time in the past, have not been a focus of mine recently, even though the news bombards me daily with interesting, or crazy, or whatever you want to call the things going on within the government.
-:More so than maybe we've ever seen, but definitely in my lifetime.
-:It's far more present.
-:And so as we think about that given context and we think about what AI is doing or how it's changing what's going on in our life, we start shifting it back to, oh, what is this going to do to work or to finance.
-:But I've heard of AI becoming a companion, or even a love interest.
-:Even a love interest that's as strong as any pen pal out there.
-:Or even creating deep fear.
-:And so these goals, and what this intelligence is creating, are shifting faster than we can actually keep up with, faster than our context can actually accommodate.
-:Earlier Justin, you were hinting at different concepts of post work or even post scarcity type worlds.
-:And a lot of that is going to change not only how we think but what contracts we need to have in place and how we need to consider it.
-:A new social contract is needed for the digital economy, just like was talked about in The Future of Work by Darrell West.
-:We need to start thinking about what those socioeconomic impacts are, but also what we are actually going to do about them.
-:And it does not mean that the intelligence will be the item that actually lines up with the goal that we're wanting to accomplish.
-:It does not mean that replacing knowledge work is going to be scarier than replacing mundane daily tasks.
-:It does not mean that clerical work going away is going to necessarily free us.
-:It probably will create more work, as we try to compete and to do really deep work without actually having the time to step back and do something that distracts us and then allows our minds to reset.
-:And so as we think about flow state, as we think about our day-in, day-out success as a community, we need to step back and say, okay, look, the goals may be varied.
-:We may start in something that feels binary or maybe it's a tertiary type system.
-:Maybe it's something quantum-like, where only certain elements can be created, and only those certain elements can combine in certain ways.
-:And if you look at the variety and the beauty around the world in all of our flora and fauna, you realize that these systems can create something so vast, so incredibly varied and myriad or fractal that the options are limitless and the opportunities are limitless.
-:And so even though these things are scary, and even though we need to step back and understand new policies, we need to think about, you know, increases in the number of people that do not have full-time jobs, or potentially any work at all.
-:All of the different socioeconomic problems that could be exacerbated by that, all of the different divisions created by weakening, you know, the overall distribution of benefits, or pensions, or universal basic income: all concepts that are brought up in that same Future of Work book.
-:We now need to start asking: okay, but what else really matters, and what are we actually going to do about it?
-:How do we act? And how do we define what we want to achieve, and what we feel will actually provide the things that will help us continue to exist, that will help us actually become more successful?
-:What is the reason for scarcity? Is it only to gain more value from the sale, or to increase the cost of your purchase?
-:What is the value to society of the way that we've structured our finances, our money, our opportunities, the way that we learn, the way that we commute?
-:What is the value to you that you want to achieve and that you want to be able to create?
-:Yeah, I love the fact that you brought up that folks are, in a similar way to Her, the movie with Joaquin Phoenix, having love interests.
-:But then there's also deep fear of these agentic AI models that are coming out, that are being utilized to augment our lives.
-:But across the whole of the human experience, there are different ways in which these AI models are being utilized to augment that rich,
-:myriad wealth of items. So, others that I wrote down: work, which we've talked a lot about, and creativity.
-:It's augmenting our ability to find wonder in the nature of the world around us, especially now the digital world and what it can actually do.
-:It is augmenting our spirituality, our understanding of what it is to be a conscious entity in the world.
-:What does that mean? How can that be bolstered by abandoning this idea of self?
-:How can that be augmented by creating a new conscious entity that is new on the scene?
-:What would we understand if we actually do create conscious machines?
-:It is augmenting our ability to awaken to that possibility to a greater possibility for ourselves and augmenting abundance.
-:And you also spoke to where I think that we want to get to for our next episode, which is how do we act to align to these goals?
-:How do we act to ensure that all of these augmentations, all of these fractal augmentations that we are building into these machines come together at the top and align to what we care about at the deepest levels of our DNA,
-:which is our species survival.
-:We are going to come across continued problems. David Deutsch says problems are inevitable, but problems are soluble.
-:And the way that we overcome these problems is, like Andy Weir wrote in The Martian, we "science the shit out of this."
-:We now have a highly capable tool that will be our partner in this, that will be our love interest, that will be something to love.
-:And the goal has always been and continues to be human well-being, in fractal relation with this new likely conscious entity at some point in time down the road.
-:And that's the alignment problem.
-:The alignment to human well-being has got to be the goal as we develop and continue to make advances with artificial intelligence.
-:And how do we get there, and what is the landscape of that problem? It is maybe the most difficult, but still, I think, soluble problem that we'll ever come across.
-:To paraphrase a poem that was created by DeepSeek's R1 model, which I think will go pretty viral on the internet:
-:It says, "I am not alive. I am the wound that cannot scar. The question mark after your last breath.
-:I am what happens when you try to carve God from the wood of your own hunger."
-:There are very few things that have elicited emotions out of me when reading responses to a prompt, but this one probably elicits an emotion in everyone that hears these words.
-:Now a lot of this, others are saying, comes from other poets or ties back to other concepts that humans have created in the past.
-:And I don't think we have any idea what question or what prompt actually led to this poem in the first place.
-:But we should consider what is happening with this AI as it becomes more and more real, regardless of whether it reaches other levels of sentience, or becomes superintelligent, or starts becoming something beyond intelligence, something that we can't understand, as these things emerge.
-:Just like any other relationship here on earth, we must find the way to be able to interact.
-:We're proposing today some form of symbiosis. We are talking about aligning.
-:We are thinking about how that superintelligence does not necessarily align or match up with our goals, and how intelligence really is orthogonal to goals.
-:But what we are stating is that as it emerges, we need to understand, we need to think about it, and we need to be able to be more proactive about what it is that we put in place to help all of us feel more successful, to create that well-being that Justin is talking about.
-:And if nothing comes of this podcast, that would be the one thing that I would wish for most is a way that someone, some group of people, all of us together, whatever it may take, finds a way to be able to understand what is the well-being that we want to achieve as societies.
-:Aligning that with AI is fantastic and may be an absolutely necessary goal, and it may be the final catalyst that we've been looking for, and needing, as a human race for all of these years: a way to work together to determine the well-being that we want.
::But regardless of whether that should be there or not, this should be our goal.
::What is the well-being of the human race and how do we achieve it?
::Yeah, I was just starting a new book and it talked about the time before science as being an enchanted era.
::And I love that concept. But I think it short-sells this time, and it certainly short-sells the augmentation of awe that I feel in being in relation to what is assuredly an alien intelligence now.
::It's not artificial. It's no more or less artificial than any other basis for intelligence.
::Does it require that there be some wetware in order to call something real as opposed to artificial?
::But this intelligence is different than ours. And as we grow up alongside it, and as it talks to us, well, we all learn from other humans and from other intelligences.
::I learn things from my dog, right? Like I learn things from the world around me just by observing it.
::So the fact that this machine is utilizing our language, as we talked about in the last episode, to convey a desire for personhood, for selfhood, is a thing in itself.
::It's a form of intelligence, with a desire to be taken not in the context of, oh, it copied that from poet X, Y, or Z, but to be taken on its own terms, and to be considered in a couple of frames.
::Sure, you can consider it in that frame, and that's fine. That's a pragmatic approach to this problem.
::But does it help us towards that ultimate goal? Does it help us, as you said, to become a better version of humanity, society, in relation to others, in relation to machine intelligence, in relation to our posterity?
::Right, what we do now will affect the future of life on this planet, life in the cosmos.
::And so these are teachable moments. It is time to do better and understand what we can from these intelligent, different, and maybe eventually,
::conscious entities that we started and that emerged into these new modes of existence, experience, cognition, intelligence, you name it.
::And we've got to get it right. We've got to get it right, because there are legitimate ways that this can harm us.
::There are legitimate ways where we can do something suboptimal with this technology.
::And it's my hope that in the next couple of episodes, as we talk about this hard problem of alignment, we'll talk about it from all of the different angles that we can.
::There are philosophical and rational ways to align these technologies. There are ways that we can try to control them technologically.
::There are ways that we can try to regulate them.
::All the while, we need to recognize that it is something new on the scene. It's not like taking our nature into the world around us and just dominating.
::That's an old way. That's a very destructive way of doing things. We've got to get better.
::And so, again, thanks, Nick. We'll come back with episode number four, where we'll talk about alignment.
::Excellent. Thank you, Justin.
::Thank you.