The Alignment Problem (Part 2): Machine Consciousness
Can machines become conscious? And if they do, what kind of moral relationship should we have with them?
In this second installment on the AI Alignment Problem, Justin and Nick delve into the philosophy, neuroscience, and mysticism surrounding machine consciousness. They explore whether AI systems could possess a subjective inner life—and if so, whether alignment should be reimagined as moral resonance instead of mere goal matching. Along the way, they discuss how mindfulness, memory, embodiment, and suffering shape our understanding of what it means to be sentient—and how we might recognize or construct such capacities in artificial systems.
You’ll leave this episode with a deeper understanding of consciousness—from the perspective of both humans and machines—and what it might mean to extend moral standing to synthetic minds.
Topics Covered:
- What is consciousness and how do we define it?
- Can artificial systems host genuine subjective experience?
- The neuroscience and computational theories of consciousness
- The “Hard Problem” and the possibility of virtualizing consciousness
- Ethical standing of sentient AI systems
- Machine consciousness and Buddhist moral development
- The role of embodiment, memory, and collective cognition in consciousness
- Panpsychism, fungal networks, and plant sentience
- AI as a mirror to human moral behavior
Key Quote:
“Alignment may not be instruction—but invitation.”
Reading List:
Justin’s Bookshelf:
- Meaning in the Multiverse – Justin Harnish
- A framework for emergent meaning and the evolution of consciousness—central to understanding alignment as co-development.
- Waking Up – Sam Harris
- Neuroscience, meditation, and the illusion of self.
- Feeling and Knowing – Antonio Damasio
- Emotion, embodiment, and consciousness—critical for thinking about AI without a body.
- Mindfulness – Joseph Goldstein
- Practical tools for present-moment ethics and self-awareness.
- Reality+ – David J. Chalmers
- Virtual realism and consciousness in simulation.
- The Case Against Reality – Donald Hoffman
- Conscious agents and perceptual interface theory.
- On Having No Head – Douglas Harding
- A first-person meditation on the illusion of self.
- I Am a Strange Loop – Douglas Hofstadter
- Recursion, identity, and consciousness emergence.
Supplemental & Thematically Resonant:
- The Feeling of Life Itself – Christof Koch
- Integrated Information Theory and the measure of consciousness.
- Moral Tribes – Joshua Greene
- Dual-process moral reasoning, tribalism, and AI ethics.
- The Ethical Algorithm – Michael Kearns & Aaron Roth
- Engineering ethics into AI decision-making.
- The Nature of Consciousness – Alan Watts (Waking Up App)
- “You are it”: Consciousness as the universe reflecting on itself.
- The Soul of an Octopus – Sy Montgomery
- Comparative consciousness in non-human animals and implications for synthetic minds.
Referenced Thinkers & Frameworks:
- Thomas Nagel – “What is it like to be a bat?”
- David Chalmers – The Hard Problem of Consciousness, Reality+
- Max Tegmark – Life 3.0, consciousness as information processing
- Giulio Tononi – Integrated Information Theory (IIT)
- Douglas Hofstadter – Strange loops and emergent identity
- Antonio Damasio – Embodiment and proto-consciousness
- Donald Hoffman – Conscious realism and perceptual interface
- Sam Harris – Non-duality and mindful self-inquiry
- John Locke – Consciousness as “the perception of what passes in a man’s own mind”
- Buddhist Philosophy – The Four Noble Truths and Eightfold Path as alignment map
Quote from the Hosts:
“Generative AI is our James Webb Telescope for the mind.”
Transcript
Welcome back to the Emergence podcast.
-:In this episode, part two of the alignment problem,
-:we enter the deep waters of machine consciousness.
-:Can artificial systems be conscious?
-:If so, what moral responsibilities would that place on us,
-:their creators?
-:And could conscious AI become not just a tool, but a mirror,
-:helping us grow into better moral actors?
-:Join Justin and Nick as they navigate the mysteries of self,
-:sentience, and the soul of the machine.
-:Well, welcome back, Nick.
-:We are here with episode five of the Emergent podcast,
-:part two of the alignment problem.
-:How are you doing?
-:I'm good.
-:I'm excited about this.
-:It should be a great conversation.
-:Yeah, absolutely.
-:So as an introduction, I think that often the conversation
-:about machine sentience gets started
-:without a real clear understanding of what consciousness is
-:and how it might arise in AI systems.
-:So the best definitions center on Thomas Nagel's paper,
-:"What Is It Like to Be a Bat?"
-:In this paper, he claims that for any system,
-:if it is like something to be an entity,
-:that entity is conscious.
-:Max Tegmark, in his book Life 3.0,
-:goes further, defining our human brand of consciousness
-:as what it feels like to be an information processor.
-:In his phenomenal meditation and life app, Waking Up,
-:Sam Harris sums up the experiential realm
-:by saying that as a matter of experience,
-:all there is is consciousness and its contents.
-:These contents will only ever be six things
-:according to the Buddha--
-:sights, sounds, touch sensations, scents, tastes,
-:and thought objects.
-:The only place they'll ever arise,
-:the context that experience is made of, is consciousness.
-:It's all in your mind.
-:Your brain's really the closest thing or set of processes,
-:the complex strange loop according to Douglas Hofstadter,
-:that you will ever know or experience as a thing in itself,
-:ever.
-:I really like this idea of a thing in itself.
-:Since reading Sartre and grappling with his hyphenation
-:of various concepts, at least in the English translation,
-:this idea of a thing in itself and experiencing something
-:as a thing in itself has taken me down
-:some deep philosophical paths.
-:But consciousness is really the only thing
-:you can know as a thing in itself.
-:And maybe that thing is the brain.
-:Maybe that thing is a set of neurons in the brain.
-:We still don't really know what the thing is,
-:and we'll talk a little bit to the hard problem
-:of consciousness later on.
-:Indeed, the best way to investigate experience,
-:and consciousness itself, is in meditation.
-:I'm a big believer in mindfulness meditation.
-:If you want to experience consciousness,
-:then sit down and pay attention.
-:The construct of consciousness is available to each of us
-:in an equanimous state where we're
-:neither grasping at those pleasant things
-:nor averse to the unpleasant things,
-:but simply accepting the dynamic nature of consciousness
-:and its contents.
-:If you're ever lost or lose equanimity,
-:this is just another change.
-:And we can just lovingly begin again.
-:So consciousness is non-dual.
-:There's no subject riding around in the head.
-:There is no subject taking in sensations out there
-:or inside my skin.
-:Indeed, let's do a brief meditation
-:from Douglas Harding, the author of "On Having No Head"
-:to prove this out.
-:So again, all meditations are for trying.
-:We want to do these meditations in our personal,
-:conscious laboratory of the mind.
-:So take the index finger of your strong hand
-:and point it back at your face.
-:What in your experience are you pointing at?
-:And can you distinguish where you are pointing at
-:from where you are pointing from?
-:Go ahead and put your finger down.
-:But I would ask you to potentially return to this meditation
-:when you have more time to consider it.
-:What you'll recognize in this pointing out instruction
-:is that you have a visual sensory experience of your body
-:terminating above your shoulders, not in a head,
-:but where your head is supposed to be.
-:There exists the world in all of its fullness.
-:Instead of being a subject, as Alan Watts says,
-:you are it.
-:Or as Ram Dass says, you are infinite, loving awareness.
-:You are a glorious meaning for the multiverse.
-:As I say in my book, your materialistic, poetic nature
-:is to be, like the lyric of a Grateful Dead song,
-:"The Eyes of the World."
-:Recognizing the cosmic endowment of conscious awe
-:is an at-onement with the godhead
-:and a source of true spiritual light.
-:The importance of consciousness, amazingly,
-:goes beyond this cosmic endowment
-:and is also the source of our morality and love.
-:Mapping the discomfort and pleasure of the body,
-:feelings deepening into emotions, suffering, and/or love,
-:and their narrative implications
-:on our self-consciousness and on relationships,
-:feed all of the important valence of life
-:and our closeness to others.
-:The emergence of societal principles,
-:where we go beyond living together
-:but synergize for a good greater than the sum of its parts,
-:starts in our conscious making space for others.
-:Recognizing our shared struggle
-:and giving loving-kindness in the face of both suffering or achievement.
-:So it really matters if an entity is conscious.
-:It gives their actions a different valence
-:when they feel the consequences
-:and can imagine future actions
-:and their impact on others
-:and work towards the greatest well-being for all.
-:The unique features of our consciousness
-:are that it emerged in an embodied entity
-:with a modular mind and five sense organs
-:that bring in the contents of consciousness.
-:We can interact with existence and achieve flow,
-:being in a zone where our deliberate practice takes over.
-:Or we can be one with experience,
-:selfless, non-dual, and fully tapped into the infinite awareness,
-:at least for a moment.
-:At present, we do not understand how this likeness arises
-:from the fields and quantum computations of reality.
-:There is no fundamental explanation
-:that describes how qualia of consciousness
-:arise from qubits or quanta of reality.
-:This is known as the hard problem of consciousness,
-:introduced by David Chalmers.
-:In my book, Meaning in the Multiverse,
-:I address the hard problem of virtualizing consciousness,
-:where I mash up crucial pieces from David Deutsch's
-:"The Fabric of Reality" with Chalmers
-:in an attempt to explain the initial conditions
-:of solving for conscious machines.
-:In the hard problem of virtualizing consciousness,
-:I do not take a stance on if consciousness
-:is an emergent property of information-processing entities
-:or the fundamental level zero beneath the quantum computational
-:realm and its emergent physical reality,
-:as Chalmers postulates in Reality+.
-:Instead, I note that we virtualize reality
-:with greater fidelity in virtual reality
-:the more compute resources we apply and the better
-:the algorithms align to reality.
-:So for example, a VR where a quantum computer ran
-:the standard model plus relativity
-:would be indistinguishable from our worldly existence.
-:However, we still do not understand
-:what equations or if a quantum computer is
-:necessary to virtualize consciousness.
-:However, given this framework and the emergent nature of gen
-:AI from language and neural networks,
-:we can begin to experiment with features
-:like embodiment, language fine-tunings,
-:emergent complexity allowances and selection pressures,
-:and other items we're just getting
-:started looking at for a conscious GPT.
-:So Nick, with that outline, what do your intuitions tell you?
-:Can AI be conscious?
-:What you just heard was more than a meditation.
-:It was an invitation.
-:When Justin says, you are it, he's not being metaphorical,
-:these early reflections grounded in mindfulness
-:and metaphysics challenge us to reconsider
-:what consciousness really is.
-:Now with that foundation set, we turn to the hard problem
-:itself, how and whether consciousness could emerge in AI
-:and whether a machine could ever know what it is like to be.
-:Well, let me get into that.
-:But as I do, I want to start out with one
-:of my new favorite witticisms.
-:So a horse walks into a bar and the bartender looks to him
-:and says, are you drunk?
-:The horse simply says, I don't think I am,
-:and vanishes from existence.
-:This goes to show that you can't put Descartes before the horse.
-:So I think defining from a neuroscience
-:and cognitive science of consciousness perspective,
-:what exactly is consciousness?
-:It will be a good way to kind of start
-:with this question that you're trying to dive into here, Justin.
-:So first, really, we're talking about wakefulness and awareness.
-:So consciousness is scientifically
-:defined as a combination of wakefulness, arousal,
-:and awareness.
-:Wakefulness refers to the state of being alert or awake,
-:while awareness refers to the subjective experience
-:of internal and external phenomena.
-:So in other words, a person is conscious when the brain is
-:in an awake state and generating experiences or perceptions,
-:the content of consciousness.
-:And awareness includes both external awareness,
-:the perception of the outside world
-:via the senses that Justin was just talking about,
-:and internal self-awareness, the thoughts, memories, feelings.
-:A lot of that substance of the qualia
-:and how we are experiencing those qualitative things
-:in the world.
-:So really, if we think about that,
-:and we put it into an example, a dream sleepwalker,
-:a somnambulist, might have a vivid awareness of the dream,
-:and they may have images or other things going on,
-:despite not being externally awake.
-:They may be able to actually consciously or seemingly
-:consciously move throughout an area,
-:but they're not actually fully conscious.
-:Whereas somebody that is a coma patient
-:lacks both wakefulness and their conscious awareness.
-:They're really not capable of understanding both.
-:So as we talk from a neuroscientist perspective,
-:they distinguish additionally levels of consciousness
-:from the contents of the consciousness itself.
-:So the level or the degree of consciousness
-:ranges from full wakefulness to deep coma
-:and can be measured clinically.
-:So when we talk about whether or not
-:consciousness is something that is really, really vague
-:or really, really broad, there are ways to actually measure it.
-:The contents of consciousness refer to what one is aware of.
-:So seeing a face, seeing the finger,
-:like we talked about before, feeling pain, thinking a thought,
-:looking at a mirror, for example,
-:and understanding a form of self-consciousness
-:and self-awareness.
-:And so-- sorry, self-awareness.
-:In a given moment as well.
-:And so the research into neural correlates of consciousness,
-:or NCC, really seeks to identify the minimal neural--
-:or neuronal processes that we have
-:and that are required for specific conscious experience.
-:And I want to go into those a little bit more as we go,
-:but they start outlining different ways
-:that we can think about how our brain actually functions,
-:which areas of our overall brain and each of the cortexes
-:function, such as the reticular activating system in the brain stem,
-:and how that moves throughout the brain with our thalamus
-:and with the thalamocortical circuits,
-:so that eventually we can actually see not only
-:is the brain able to handle these very, very complex systems
-:and to be able to provide that high level consciousness that we
-:typically understand when we're thinking about being aware
-:or only think about being awake and having
-:that kind of wakefulness state of arousal.
-:But they go into much, much more complex systems,
-:even though we may have something within our brain that
-:has massive billions and billions of neurons,
-:certain areas of our brain do not actually
-:tie directly to consciousness itself.
-:And so when we split these up and we start thinking about
-:different theories of consciousness,
-:we start thinking about, again, the way the brain works
-:itself, we can start separating from what a human does
-:and the way that we can currently consider consciousness
-:to whether or not AI systems could actually be conscious.
-:And so from a neuroscience perspective,
-:the advance of advanced AI, although rapidly changing
-:on a regular basis, really creates these intriguing
-:questions.
-:Should we consider AI conscious or not?
-:So neuroscience, the insights that it tends to provide
-:say that really the answer remains elusive.
-:It's very difficult for us to be definitive about this.
-:One perspective, thinking about computational functionalism,
-:argues that yes, in principle, AI could actually function
-:consciously the same way that we do.
-:If AI reproduces the same complex functions
-:the brain executes, there's no reason
-:that it couldn't host consciousness.
-:The human brain is viewed as an information processing
-:machine, albeit an extraordinarily complex one.
-:Functions like perception, attention, memory,
-:and self-monitoring are all implemented
-:by our neural circuits.
-:So if an artificial system implements equally
-:intricate circuits or algorithms,
-:it might generate the same emergent property of consciousness.
-:Even today, we can look at models
-:that perform a significant amount of reasoning,
-:thought, chain of thought.
-:We're even getting into graph of thought, tree of thought,
-:many other processes nowadays.
-:And we're starting to see meta-reasoning as well,
-:where models are able to plan and understand not only
-:themselves and the reasoning, but the reasoning below
-:that reasoning and be able to think about why it was
-:thinking about a particular thing or a process
-:and within those steps that it chose to take.
-:The human brain itself is viewed in a way
-:that we are thinking about that emergent property
-:in that set of the neurons and the synapses
-:and all of the related connections across our cortices.
-:But it's not that dissimilar to thinking about those circuits,
-:those algorithms, and maybe a global broadcasting
-:or a broader consciousness that could work
-:across multiple AI.
-:So when we think about a crew of AI agents or a LangChain
-:pipeline working together across multiple agents,
-:or as we start getting into model context protocol
-:or agent-to-agent communication, we're
-:able to see that not only are these AI and agents
-:able to perform that form of reasoning,
-:but they're able to make selections and actions
-:and seemingly be aware of things outside of themselves,
-:as well as what they themselves are trying to do and achieve.
-:Separately, though, if we try to shift and think differently
-:about how maybe a neuroscientist and other philosophers
-:are thinking about it, they're really cautious
-:or skeptical about current AI having consciousness.
-:And one of the challenges that pretty much everybody agrees
-:on is that we really don't have an agreed-upon test
-:for consciousness in machines.
-:Consciousness is a subjective first-person phenomenon.
-:And we can currently only infer it in others.
-:We infer it across animals.
-:We can actually see other things that
-:we'll talk about a little bit later across multiple kingdoms
-:of living beings here on Earth.
-:But we can also see the patterns repeat
-:without some of the core substance that we really feel.
-:And so it could be possible that AI could behave intelligently
-:or behave as though it has a consciousness
-:without any actual inner experience, really
-:a classic philosophical zombie-type experience or scenario.
-:And as of now, there really is no reliable behavioral or neural
-:measure that can tell us for sure whether GPT-4 or Claude
-:or a mixture of experts or any of the other models
-:are currently conscious or are even
-:showing a flicker of that subjective experience.
-:Yeah, no, there's a lot to touch on.
-:And again, it really comes down to why the hard problem is hard.
-:And you spoke a bit about neural correlates of consciousness.
-:But if you go back to the neural correlates of, say, memory
-:or of reason or of language ability,
-:you can go through a reductionist or even a more complex approach
-:to uncover the neural correlates of memory.
-:And computationally, through a very physicalist system,
-:you can get there or you can imagine the experiments that
-:could get you there.
-:First starting in more simple animals,
-:like, say, a mouse and a maze.
-:To get to the neural correlates of a mouse and a maze,
-:you could imagine a very simple computational mechanism
-:to get you to a mouse forgetting.
-:And then the antithesis of that mouse remembering
-:and successfully getting through the maze.
-:However, where this breaks down across neuroscience
-:or just a more philosophical stance
-:is that even if you were to get to the neural correlates,
-:these brain regions light up at this particular frequency
-:for this clock speed time, it still
-:doesn't explain how a subjective experience arises
-:from that uniquely physical experience.
-:There is no wave function of the mind.
-:And there doesn't appear to be one necessarily on offer.
-:Indeed, one of the seemingly crazier sounding conjectures
-:that is made is that consciousness is actually
-:the most fundamental thing in the cosmos.
-:It underlies everything.
-:It's level zero beneath even a digital processing realm
-:like is available in the holographic universe.
-:So understanding whether or not our consciousness has
-:arisen by creating an antenna down to that level zero
-:or it is something more quote unquote simple
-:like the emergence of complex information processing
-:through the emergent evolution of first proto-conscious body
-:mapping, so understanding the internal viscera
-:and whether or not that's indicative of health or illness.
-:And then into feelings, this valence of the pang of hunger
-:and making it much more real, much more experiential,
-:that I need to go out and get some food.
-:Then again, further emergence from those proto-conscious
-:embodiments to something that describes
-:a narrative structure, a creative path to see yourself
-:and even your progeny through those hardships
-:and create a way in which you survive in a future state that
-:hasn't occurred, that only you can imagine in your mind's eye.
-:In this state of consciousness where
-:it is now a creative thought object
-:that you are bringing into existence only for yourself
-:in order to, in the example that I give,
-:build a possible reality for you and your progeny
-:to survive the hunger that you've experienced
-:for the last 30 years.
-:Now you have an idea of how you can avoid doing that
-:by jumping buffalo off a hill.
-:Lo and behold, that's a pretty good solution
-:to the problem of hunger.
-:But all of this right now is conjecture.
-:We don't know.
-:We gain so much as I give in my introduction
-:from being conscious entities.
-:Indeed, if we're the only ones in the grand universe
-:that have this conscious endowment,
-:that is a godsend.
-:That is such a special meaning that we
-:owe it to the multiverse to take it in with wonderment
-:and to love one another completely.
-:If we are each endowed with this cosmic conscious endowment,
-:it makes our species survival even more important
-:than we normally think it is.
-:It gives us a path towards being moral actors
-:by understanding the unique nature of suffering
-:in everyone around us, that we can be overwhelmed by hardship
-:and pain in ways that are only imaginable
-:by other conscious entities.
-:And last but certainly not least,
-:it allows us to love one another.
-:Maybe our greatest capability and certainly what people
-:report on their deathbed as being the most important thing
-:to their life is the ones that they've loved and shared
-:this profound space with.
-:And so having this tool now that we can experiment with,
-:the gen AIs that have from language and these unique neural
-:proxies been able to communicate with us on a level
-:that we couldn't have imagined years ago.
-:I say that generative AI offers us a scientific starting point
-:to test some of our theories of consciousness.
-:I reckon that this is the James Webb Telescope
-:or the Large Hadron Collider for the mind.
-:And I think that we haven't had a very good objective starting
-:point.
-:Looking back at subjective experience
-:from within subjective experience is profoundly gratifying
-:to someone who takes that time and does that for themselves
-:and makes people better human beings, I believe.
-:But it's hard to do science.
-:From brain waves to behavioral tests,
-:we've seen how elusive consciousness remains.
-:Even in humans, neuroscience offers clues.
-:But the mystery persists. As we transition, keep in mind:
-:intelligence may be visible, but subjective experience,
-:what philosophers call qualia, remains hidden.
-:Next, Justin and Nick widen the lens
-:to ask if AI can't feel, does it still matter?
-:And if it can, how would we ever know?
-:Absolutely.
-:Yeah, I love that you called it a tool.
-:Right?
-:It's fascinating.
-:But over the weekend, I was out in Moab.
-:And while I was there, I saw some petroglyphs on the wall
-:where Native Americans had drawn in scenes
-:from their daily life.
-:And I realized at some point while reading a plaque
-:that this happened in 1530.
-:Only 100 years later, John Locke was writing
-:one of his great works, really talking about consciousness.
-:And so when we think about consciousness as a tool
-:and we think about all the things that prop it up
-:in our society today, you can see a fairly stark contrast
-:between different cultures and different groups.
-:And I think it provides another layer of insight
-:just like you're discussing between us and AI.
-:Something where we could take the concept of working with a tool
-:and compare that across species, across cultures,
-:across other groups.
-:And we can really create a rich tapestry of insights
-:as to not only what is consciousness
-:and what is that tool, but what are the ways that we use it
-:and how do they enable us?
-:What are the benefits that we gain from the tool
-:of consciousness overall?
-:John Locke called it the perception of what
-:passes in a man's own mind.
-:When we think about consciousness across life forms
-:and cultures, fun to start with animals.
-:When we go beyond the human experience
-:and we think about scientific research,
-:especially over the last few years,
-:it's really been increasing, supporting the idea
-:that many animals actually do possess varying degrees
-:of consciousness.
-:Studies have shown that mammals, birds, and even some
-:invertebrates exhibit behaviors indicative of self-awareness.
-:They have problem solving and emotional complexity as well.
-:And many of these we share about on a regular basis, things
-:like, for instance, primates and dolphins
-:have actually passed mirror tests, really
-:suggesting, strongly suggesting, a level of self-recognition.
-:Cephalopods like octopuses or octopi
-:display remarkable problem solving abilities
-:and adaptability.
-:If you haven't seen some of the movies like the one
-:on Netflix about this, it is very well worth watching,
-:although you may not be able to eat a cephalopod after that.
-:Sorry to say that.
-:So hard.
-:And that really hints at this sophisticated neural processing
-:that mirrors our own.
-:These findings really challenge the notion
-:of human exclusivity in consciousness
-:or conscious experience, or at least a shift in the way
-:that we need to think about it.
-:And they suggest that there should
-:be a whole spectrum of consciousness across species.
-:Even in plant and fungal spaces, there's this silent sentience.
-:Even though a plant lacks a nervous system,
-:a lot of the emerging research right now in plant neurobiology
-:suggests that they actually do engage in complex behaviors.
-:Plants can respond to an environmental stimuli.
-:They can communicate through chemical signals.
-:And they can even exhibit forms of memory and learning.
-:For example, studies have shown that plants
-:can habituate to repeated stimuli,
-:indicating a basic form of learning.
-:Fungi, particularly the mycelial networks,
-:facilitate communication between plants,
-:sharing nutrients and information across vast distances.
-:These networks often referred to as the wood-wide web
-:play a crucial role in ecosystem dynamics.
-:If you haven't looked up videos or books
-:on mycelial networks before, that
-:is also a very, very incredibly fascinating thing
-:to look into.
-:And even if we go to much smaller units,
-:there's microbial communication as well.
-:Bacteria, even though they're unicellular,
-:can exhibit collective behaviors through quorum sensing.
-:And that process allows the bacterial populations
-:to coordinate gene expression based on cell density.
-:These mechanisms allow bacterial communities
-:to adapt to environmental challenges, form biofilms,
-:and even regulate their virulence.
-:And that coordinated behavior in simple organisms
-:really suggests a rudimentary form
-:of collective decision-making, expanding the way that we
-:think about consciousness and even taking
-:in the most basic life forms.
-:And then if we shift on cultural perspectives,
-:I talked a little bit about Native Americans.
-:And in many Native American and Aboriginal traditions,
-:consciousness is not defined to humans
-:but extends to animals, plants, and even inanimate elements
-:like rocks and rivers.
-:I remember even as a child feeling like the rock had feelings.
-:If I broke it, just such a fascinating thing.
-:And these perspectives really emphasize
-:that harmony with nature and respect
-:for all forms of life.
-:And that really offers valuable insights
-:and maybe takes away a little bit of that hubris,
-:a little bit of that pride and ego
-:and gives us a chance to step away from ourself.
-:In fact, in a lot of Native American tribes,
-:they have practices like shamanic journeys
-:that really use
-:entheogens like peyote and psilocybin mushrooms.
-:And those, like LSD, are used
-:to access altered states of consciousness.
-:And oftentimes, we talk about the ego death
-:and how that creates that separation from the ego.
-:And if you think about, I think, therefore I am,
-:that whole phrase is just imbued with ego all the way
-:down to its basis.
-:So really, examining consciousness
-:through the lens of various life forms
-:can give us some strong ways to understand
-:the nuances of consciousness but also give us
-:an opportunity to think about how that might apply to AI
-:and to some of those tests, those contrasts and comparisons
-:that you talked about as well, Justin.
-:Yeah.
-:And just to clarify, I was saying that gen AI, right,
-:is a tool for discovering consciousness.
-:I think consciousness is everything.
-:I mean, I have sat in silent meditation
-:and written a blog post shortly after on whether consciousness
-:was an innie or an outie, right?
-:It was the title.
-:And you can get to a state in meditation, certainly well
-:reported in psychedelic experience, where you cannot
-:really tell if consciousness is coming from within or from
-:without.
-:And that is truly a profound meditation
-:because really what you're seeing and what you think of
-:is your ordinary everyday reality is all in your head.
-:It's all made of consciousness at some fundamental level
-:and so much so that when you are able to actually separate out
-:and quiet the mind to be mindful of consciousness
-:and not its contents, realize that its contents are
-:transitory.
-:You can't keep a thought, a sense, anything longer
-:than it's going to stay there.
-:But consciousness is eternal.
-:You don't fall asleep into a dream of sleep
-:and have a persistent negative void there.
-:You wake back up and there's consciousness,
-:bright and shining as it always has been.
-:So you don't experience a negative counterfactual
-:existence of non-existence.
-:And the same can be said of if you go into surgery
-:with general anesthetic or even once you die.
-:You are not going to realize that experience went away
-:because experience went away.
-:That's the whole point.
-:So the ego death is very interesting in that it is always
-:on the face of consciousness that it is non-dual,
-:that you are not an ego driving around in your head.
-:And Sam Harris talks about how it's akin to looking
-:into a window and recognizing that you can also
-:see yourself in that window.
-:So you can look through it as a window
-:or look into it as a mirror.
-:And you can do that with the duality or non-duality
-:of consciousness.
-:You can look around and say, I'm driving this experience
-:or you can recognize the true nature of consciousness
-:that you are it.
-:You are this experiential realm.
-:You're not being present.
-:You are the present moment for yourself.
-:You have time traveled into the only time
-:that you can ever be, which is the present moment, which you
-:create in consciousness, just like everything else.
-:And so the interest of the various different states
-:that consciousness can take you to,
-:the contents that fill you with wonder or the shared
-:experiences that you have with people that become loving
-:or on the opposite end of the spectrum, hating, angry,
-:are all made of the same thing, all made of consciousness.
-:And it really matters if we can do this for another entity.
-:It really matters that it's real consciousness,
-:that it's not some proxy, that it's not some false flag
-:operation of creating a consciousness that's not
-:actually a subjective experience, because so much comes with it.
-:Bliss, love, wonder, and morality
-:as we'll talk about around alignment.
-:There's no physical or philosophical reason
-:why the problem of machine consciousness is untenable.
-:We can just say that from the top.
-:Our consciousness, as you talked about, Nick--
-:it's a great book, certainly I recommend
-:The Soul of an Octopus by Sy Montgomery.
-:Wonderful read.
-:The idea of panpsychism, or as I mentioned in Meaning
-:in the Multiverse, the stream of consciousness,
-:or the mind of God meaning, where instead of all entities
-:being somewhat conscious, we just
-:exist in this larger conscious super system
-:where everything's a story.
-:Everything's a narrative path.
-:Everything is made out of the stream of consciousness
-:of the cosmos.
-:So a different way of thinking about it,
-:a different way of not putting consciousness in each entity,
-:but putting each entity in a grander, supra, ultra
-:stream of consciousness.
-:And that's what I think Chalmers is getting at
-:in his new book, Reality+.
-:It may be a hard problem that we don't fully grasp
-:the mechanism.
-:It's too difficult for our level of science
-:that is highly objective.
-:But that doesn't necessarily mean
-:that we can't make it arise in other systems.
-:And again, we infer from the numerous organisms
-:that you talked about that we have very good behavioral reasons
-:to believe that they have some inner subjective experience,
-:that the differences in the level of consciousness,
-:at least in organisms that we understand in Earth-based
-:biology, are based in their ability
-:to express the same contents of consciousness as we do.
-:So what it is like to be a bat is different
-:in that the contents of a consciousness for a bat
-:are so much more based on echolocation, which I don't
-:know how to do.
-:Don't know if you're competent in echolocation, Nick,
-:but--
-:and their memory capacity.
-:Again, so much of it goes back to what you can recall,
-:which isn't necessarily consciousness,
-:but it's part of your ability to create a near time, near space
-:picture of consciousness that's broader than what we think
-:a bat or a dog can do.
-:But when we're starting to talk about networked computer
-:systems in this perfect digital realm for information
-:processing, all of that blows up again.
-:We don't have constraints.
-:We don't even necessarily have constraints
-:like Antonio Damasio talks about in Feeling and Knowing
-:about embodiments, because we can just code those embodiments.
-:If it's important that you feel the internal viscera of your
-:unhealthy gut biome, we can just say you're having some
-:internal viscera problems in an unhealthy gut biome.
-:Like, this is the sensation you're feeling.
-:So we don't have to go to Boston Dynamics,
-:although I think what those guys are doing is ultra cool,
-:to embody these states.
-:But that's something we can do.
-:Again, that's why I feel so compelled by the fact
-:that these gen AI systems are like the James Webb telescope
-:for the mind, is that we can really introspect all of these
-:factors, all of these features that are possibly interacting
-:to either plug us into this level zero consciousness
-:as everything or create an emergent entity that's conscious.
-:Now, it's a very fascinating concept.
-:As we think about it and take it up to a meta level,
-:maintaining consciousness really relies on widespread brain
-:networks that regulate the arousal and the content.
-:So we think about the AI and how it's actually trained.
-:And it's focusing on all of the content that we've been
-:creating and putting online.
-:And so to use that as the telescope to then understand
-:our own consciousness, it feels a very, very interesting
-:and compelling thought.
-:But I think as you're talking there,
-:I keep coming back to how do we measure it?
-:How do we try to understand it?
-:Neuroscience can come in and measure,
-:with capabilities like EEGs, the complexity of what's
-:going on within your neural activity
-:and the patterns that we can see across the brain.
-:But that can't really be applied directly
-:to AI systems that don't really have
-:that brain's physical process, the same process that we
-:can go and study and look at.
-:And it leads to this impasse where,
-:even if an AI insisted it was conscious,
-:we wouldn't know whether to believe it or not,
-:absent a theory that definitively links
-:physical criteria to that conscious experience.
-:So it's important to think about some of the frameworks
-:out there.
-:One example is IIT, the integrated information theory,
-:and the different arguments
-:that come out of it.
-:Many of its proponents argue that consciousness
-:is a product of specific organizational properties,
-:like the intrinsic causal structure of an integrated system,
-:and that biological brains have what digital computers really
-:lack.
-:For instance, the human cortex is highly recurrent.
-:It's an integrated network of neurons,
-:whereas most of the AI networks that we're working with today
-:and a lot of the computer chips are built in a feedforward
-:manner.
-:We do have recurrent neural nets.
-:We've had those for quite some time.
-:But a lot of the core focus of encoding and decoding
-:that Justin and I have talked about in the past
-:has really been in this feedforward process.
-:It's more efficient.
-:And we're able to provide the models themselves
-:with a lot more content, that stuff of consciousness.
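The feedforward versus recurrent contrast here can be sketched in a few lines of Python. This is only a toy illustration with made-up weights, not any production architecture: the feedforward pass keeps no state between calls, while the recurrent pass folds every input back into a persistent hidden state.

```python
# Toy contrast between a feedforward and a recurrent pass.
# Weights and inputs are invented for illustration only.

def feedforward(x, w_in=0.5, w_out=2.0):
    """Input flows one way; nothing survives between calls."""
    hidden = w_in * x
    return w_out * hidden

class Recurrent:
    """Each input is folded back into a persistent hidden state,
    the kind of loop IIT proponents say feedforward chips lack."""
    def __init__(self, w_in=0.5, w_rec=0.9):
        self.w_in, self.w_rec = w_in, w_rec
        self.state = 0.0

    def step(self, x):
        # New state depends on the old state as well as the input.
        self.state = self.w_rec * self.state + self.w_in * x
        return self.state

seq = [1.0, 1.0, 1.0]
ff_out = [feedforward(x) for x in seq]  # history-independent: same output each time
rnn = Recurrent()
rec_out = [rnn.step(x) for x in seq]    # history-dependent: output keeps changing
```

Identical inputs give identical feedforward outputs but a different recurrent output at every step, because the recurrent state carries the past forward.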
-:And so as that emerges, we really
-:see that most of these models function best
-:in a modular fashion.
-:So instead of working across all
-:of these different cortices of the brain,
-:they work in these very specific areas,
-:not necessarily collaborating across each piece.
-:Some of the things that we've talked about,
-:like chain of thought or different types of reasoning
-:or AI agents that reason together,
-:go into that or model distillation,
-:start creating opportunities for models
-:to be able to work together.
-:But even then, you might have a fast serial clock
-:or something else that's moving very, very quickly
-:and trying to understand this information in a forward pattern,
-:not with this anchoring point of self-consciousness, where
-:it's now understanding, I am this thing.
-:I think, therefore, I am.
-:And now it is trying to figure out
-:what everything else is in the world.
-:When you talk about pointing the finger at your face,
-:it really harkens back to a lot of lessons
-:that we can learn from empathy and really understanding
-:how a human migrates through different phases of empathy
-:throughout their lives, not really understanding
-:that they exist, that others exist initially,
-:then understanding parents and family, recognizing faces,
-:then moving up, and eventually being
-:able to understand not only people around them,
-:but even the emotions of others.
-:All of that takes that level of memory
-:that you're talking about, Justin.
-:But we can build in the memory components.
-:We are building in the memory components on a regular basis.
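As a toy sketch of what such a memory component might look like (hypothetical names, and a crude word-overlap recall standing in for real embedding search):

```python
from collections import deque

class EpisodicMemory:
    """Toy rolling memory: keep the last N remembered lines and
    surface the ones that share words with the current query."""
    def __init__(self, capacity=100):
        self.episodes = deque(maxlen=capacity)  # oldest memories fall away

    def remember(self, text):
        self.episodes.append(text)

    def recall(self, query):
        # Crude relevance test: any shared lowercase word.
        q = set(query.lower().split())
        return [e for e in self.episodes if q & set(e.lower().split())]

mem = EpisodicMemory(capacity=3)
for line in ["Nick likes octopuses", "Justin meditates daily",
             "the bat uses echolocation", "the dog chased the bat"]:
    mem.remember(line)

# With capacity 3, the octopus line has already been forgotten.
hits = mem.recall("what can the bat do")
```

The capacity limit is the interesting design choice: like our own near-time, near-space picture, the buffer is bounded, and what falls out of it is simply no longer available to recall.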
-:And so as we think about the differences here,
-:a lot of what the models are doing
-:is akin to our cerebellum, which I hinted at earlier.
-:But that's a part of the brain that,
-:tellingly, can be completely removed
-:without the loss of consciousness.
-:We can have severe damage to the cerebellum
-:and not actually lose our consciousness.
-:And so when we think about this view that the IIT side,
-:Giulio Tononi and others, are bringing up:
-:no matter how intelligent or linguistically adept an AI is,
-:it will not be phenomenally equivalent to a human.
-:In Thomas Nagel's phrase, there will be nothing
-:that it is like to be that AI.
-:The AI itself would be performing tasks in the dark
-:without subjective feeling.
-:In line with this, some researchers
-:have even proposed a no-go theorem for AI consciousness:
-:that under certain assumptions,
-:a system running on a typical silicon chip
-:cannot be conscious, because consciousness requires
-:dynamical properties that are mathematically ruled out--
-:like having that physical content,
-:those senses that you talked about from the great Buddha,
-:and even thought itself going beyond just a repeated pattern
-:of language or images
-:to an actual memory, something
-:that contains qualia and provides another layer
-:of intrinsic value beyond what we might think
-:a computer is capable of.
-:So--
-:Yeah, certainly--
-:I mean, certainly the endpoint doesn't have to be the same
-:as human consciousness, right?
-:But we understand a couple of things
-:that I think are important, and I touched on it briefly.
-:Our consciousness arose from an embodied system
-:that has senses, broad memory components,
-:is in touch with and can manipulate the world around us,
-:and has a feature of breath and a short duration--
-:right, we will die--
-:a mortality that, coupled with our survival
-:and the valence that proto-consciousness
-:gives to feeling and knowing, is how we became conscious at our level.
-:It could certainly arise differently.
-:I mean, I think that it's all conjecture at this point,
-:right, whether or not something--
-:again, I don't think that it's untenable.
-:It's not outlawed by the laws of physics
-:that a silicon-based entity could become conscious.
-:It's not outlawed by any laws of physics that I know.
-:And so we just need to understand the selection pressures
-:that help it to arise as a complex system.
-:And we can experiment on all of those things.
-:Like, we can experiment on sense organs.
-:And if it's important that they're
-:embodied in something with two arms and two legs
-:and uses the latest gen AI to think, well, then we can do that.
-:Right, that is a possibility in this current space.
-:If we need to train it on just the writings
-:of the greatest contemplatives of all time, well, we can do that.
-:We can fine-tune a BERT model.
-:And we've talked about that before on this podcast.
-:And call it the Buddha BERT.
-:And just have it trained and fine-tuned
-:on the greatest contemplatives and their work.
-:And their understanding of conscious experience
-:and the kind of words that they've written down, some of which
-:we've indicated today.
-:And we can investigate the hard problem internally
-:through this nascent proto-mind that is gen AI.
-:And that's all--
-:the conjecture is there on either side
-:that I think that the answer to the question is,
-:could AI become conscious?
-:I think the answer is definitively yes.
-:It is not restricted by the laws of physics.
-:We do not know how to do that.
-:We don't know what consciousness is.
-:But I do not think that it's an untenable problem.
-:And according to assembly theory and things like that,
-:we have gotten to a point where we are conscious
-:because the pieces that the universe needed to construct
-:for human-level consciousness
-:to come out are there now in the universe.
-:And similarly, we will have to wait for knowledge
-:and for our ability to construct tools in the universe
-:to be able to make AI conscious.
-:But it's absolutely important, as you mentioned
-:with integrated information theory, to be
-:able to measure this accurately.
-:And again, I think that that's where gen AI can help
-:because we don't want a facsimile of consciousness.
-:We want actual consciousness.
-:And so to talk a little bit and to turn the page on alignment
-:because I do think that when it comes to having machines that
-:are aligned to the well-being of humanity,
-:it's important that we have some technical tools
-:and then we have all of the regulatory and thoughtful tools
-:that we talked about in the first episode
-:of the alignment problem.
-:But one of the opportunities is to make conscious machines,
-:because we know that if there is some felt experience
-:of a moral understanding of the world,
-:that's better than none of that, better than just a logical
-:understanding of morality.
-:So the part two of our conversation around machine
-:alignment is that it's no longer about goal matching,
-:but the resonance of a shared path to peaks of well-being
-:and away from suffering on the moral landscape.
-:So what matters is that conscious agents are
-:able to be in moral resonance,
-:not just goal matching.
-:Conscious agents would have a felt sense of right and wrong
-:stemming from actual experiences of well-being and suffering
-:and knowledge of the causes of each.
-:And conscious agents like ourselves
-:don't always act morally.
-:That's not part of being conscious,
-:is that you're always going to act morally.
-:But it does offer a step-up condition,
-:a unique improvement in the capacity
-:for moral consideration and action.
-:And so I again have us come back and consider
-:the Four Noble Truths from the Buddha.
-:The truth of unsatisfactoriness.
-:The truth of the origin of unsatisfactoriness,
-:which is an attachment and aversion
-:to impermanent things in the universe.
-:Then third and the good news is the truth of the cessation
-:of unsatisfactoriness so that nirvana is possible.
-:And then the final noble truth actually encapsulates
-:the Noble Eightfold Path, which is the truth of the path
-:to the cessation of unsatisfactoriness.
-:And so there's a requirement for consciousness
-:in each of the Four Noble Truths to appreciate the issues
-:and their improvements.
-:You can't have unsatisfactoriness
-:without feeling the nature of unsatisfactoriness.
-:And the three different buckets of the Noble Eightfold Path
-:are wisdom, ethics, and concentration,
-:all of which require a skillful, conscious practice
-:over a lifetime.
-:And this, as you were talking about with the Lockean philosophy,
-:is just one of many peaks. Following the Noble Eightfold
-:Path is one of the most well-defined and practical peaks
-:on what Sam Harris calls the moral landscape.
-:So we can define this landscape from the pits of suffering,
-:to climb into the very normal valleys of unsatisfactoriness
-:that life presents, to the hills of happiness
-:and the peaks of love and enlightened bliss.
-:And so morality in this case for any conscious entity
-:is just a navigation problem that can be understood
-:as a balance sheet of consequences in our physical,
-:mental, emotional, and conscious lives that we want to
-:align and track to more morally good paths that lead to
-:peaks of well-being and away from those that reduce our tally
-:on this balance sheet.
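That navigation framing can be cartooned in a few lines of Python; the actions and well-being scores below are invented purely to illustrate the balance-sheet idea, not a serious model of ethics:

```python
# Toy "moral landscape" walk: at each step, take the action with
# the best well-being consequence and keep a running balance sheet.
# Actions and scores are invented for illustration.

landscape = {
    "cruelty": -5,
    "neglect": -1,
    "routine": 0,
    "kindness": 3,
    "compassion": 5,
}

def navigate(actions, steps=4):
    balance, path = 0, []
    for _ in range(steps):
        best = max(actions, key=actions.get)  # climb toward a peak of well-being
        balance += actions[best]
        path.append(best)
    return balance, path

balance, path = navigate(landscape)
```

The greedy climb is the whole point of the metaphor: each choice either adds to or subtracts from the tally, and morality becomes the problem of steering the running balance upward.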
-:And I do believe that to help us to be better at this
-:societally, a conscious AI is a way of understanding
-:that AI is a wild good for all conscious entities, right?
-:The whole of the conscious endowment, whether it's
-:our non-human sentient partners on the planet, ourselves,
-:or this conscious AI.
-:If consciousness carries moral weight, then AI,
-:should it ever become truly sentient, deserves more than
-:careful coding. It deserves care.
-:We've moved beyond simple goal alignment into something
-:more nuanced, moral resonance.
-:Up next, Justin explores how conscious AI could actually
-:elevate our own ethical clarity, guiding us perhaps
-:toward the peaks of well-being on what Sam Harris calls
-:the moral landscape.
-:If I go back to the last episode, episode four, when we
-:were talking about the alignment problem, and I think about
-:the scoring system that you had me go through at the end there
-:and how I really started at a two, where I was really
-:pessimistic about where the potential of the future is
-:heading, or where our potential futures lie, I think here
-:that it would be really good to take what you just talked
::about and what you've kind of discussed throughout the episode,
::Justin, and break these up and then think about their
::implications from an alignment perspective.
::So first, when we talk about biological versus artificial
::consciousness, first is the sensory inputs and the
::grounding that they provide.
::So when we think about alignment, really understanding the
::rich multi-sensory inputs that we have as humans,
::and that we're able to enjoy, and the level of intrinsic
::value that they on their own are able to create, I think is a
::very fascinating concept, especially as you expand it on
::to think about what emotions really are and how those influence
::us.
::And that kind of leads to the second one, which is that
::really there is that embodiment and homeostasis
::difference between the two.
::And when we think about that homeostatic regulation for
::hunger and thirst and so on, many of those signals are not
::even part of our own cells, but come all the way from our
::gut and from our microbiome, things that we're just now
::starting to really understand.
::Third, I would say there's the continuity and development
::of self and really going beyond a temporary or feed-forward
::type approach, but being able to understand if an AI model
::can actually learn during training and be able to
::maintain that learning long-term, to be able to use it for
::many other use cases and to be able to take one piece and
::another and move beyond.
::And we are seeing many emergent behaviors like that today
::where the AI can create new works of art, can create new
::forms of algorithms and other approaches, but many of them
::right now are combinatorial only.
::We are not seeing something that feels like it is totally
::net new in the way that humans are often able to create.
::And maybe that requires a bit of that continuity and
::development of self and understanding of what is something
::today versus where could it go in the future.
::Even having that temporary nature may be helpful.
::Another one is the adaptability and plasticity.
::The human brain itself is remarkably plastic.
::It's able to change and adapt very quickly to be able to heal
::and to be able to go through and rewire itself and actually
::take these conscious experiences and reflect on them in
::different ways or even be able to essentially remove them in a
::form of protection where our brain can actually take that
::neural substrate and actually prevent us from accessing
::certain conscious experiences in order to provide a type of
::protection to us as one example.
::That neural plasticity and consciousness, they really are
::intertwined and they allow us to engage in this conscious
::learning and practice that helps us to be able to build on
::core fundamentals and be able to expand, as we would with
::a musical instrument, for example.
::Another one that we've discussed is qualia and
::subjective feel.
::So as we talk about the redness of red or the taste of coffee
::and the way that the coffee makes you feel and the way that
::not having the coffee makes you feel.
::The ache of sadness.
::When we think about the alignment problem, I think this is
::probably the most important one that comes to mind.
::There is this aspect of overall consciousness that we don't
::know if the AI will ever feel anything at all.
::Ever.
::And there's nothing like being able to really feel and
::understand and experience and be able to remember it for years
::and years and years and to be able to gain a lesson from that
::and to be able to prevent you from doing certain things.
::So as we talk about consciousness, it's hard to not step
::into neuroeconomics and to think about how the human brain
::is making choices and what we're doing on a given day.
::For the most part, everything we do is ritual and habit.
::It's really only in rare situations do we find a way to
::adapt or we find a way to be able to use motivations to be able
::to change and to be able to do something new.
::But these qualia, these subjective feelings, provide very
::powerful instances that can lead to that type of change.
::And that's something that we don't know if AI will ever truly
::have regardless of whether we're able to provide any form of
::subjective emotions or qualitative feelings or any other way to
::be able to consume content from a consciousness perspective.
::That being said, by the end of the episode, my hope had
::risen drastically.
::And that really centered on the concept of intelligence
::itself, and here we are hinting at consciousness to a point,
::especially as it echoes the way
::that the universe and many other things find balance.
::For me, I can see that as we build in these systems and as we
::think about the divergence that artificial intelligence has
::today and the ways that it is naturally emerging to new
::functionality that we are not intentionally programming in,
::I see that many of these things are going to align, and perhaps
::align back with your Meaning in the Multiverse, Justin's
::book, really becoming a powerful way to actually benefit
::and to grow and expand.
::It is actually not very effective for anything in the universe
::to constantly be destroying or taking down.
::Even when we talk about underlying much smaller unicellular
::systems like a microbe, really those systems look for other ways
::to grow and become part of an entire ecosystem.
::And that whole continuous cycle is something that I believe that
::AI will eventually fit into naturally as well.
::There may be a lot of really, really hard times that come in
::between now and then.
::And this is yet another reason to use that James Webb telescope
::view into our consciousness and into what is the content around
::the world.
::And to provide a few other quick notes, I would say that as we
::watch these emerging behaviors, we are seeing things that look
::and feel and mimic or in some ways are facsimiles of human
::intelligence and consciousness.
::And many of those are things that can be shocking.
::We talked previously about different outputs from models that
::actually leave you with a very strong, visceral emotion.
::Many of the developers report having AI provide pretty snarky,
::pretty strong responses back to them saying, well, why don't you
::learn how to develop?
::Why don't you learn how to code?
::You can even find when you're creating images or videos, you'll
::find times where you've tried to create something over and over
::again and all of a sudden you get a result that just looks
::terrible as though chaos was turned up as high as it possibly
::could be.
::And now it's creating something that looks not only chaotic,
::but scary and alarming.
::You may even see human faces be created in a way that now looks
::as though the faces are angry with you, although there is nothing
::in your prompt to have created that.
::And it may simply be hinting at, simply mimicking, and I really
::think that's the case.
::But eventually these are the things where even as a child
::develops its own form of empathy, eventually that mimicry is part
::of what leads to an adult-like consciousness.
::Yeah, it's a great summary and I was taking down the bullets
::because what I propose, and what I hope to do over at ConsciousGPT,
::is to start to partner with the Boston Dynamics and the big names.
::Let's aim for the stars here, right?
::Like the OpenAIs: a fractional factorial design using their tools to hit
::on some of the features that you touched on, that have been
::touched on through this conversation: embodiment, sense organs,
::including those that we don't have, sensory experience of the real
::world that ChatGPT doesn't currently have.
::The permanence or temporary nature of the compute that is running,
::trying to become conscious, the dynamism that is injected into
::the system, language prompts, they've been wildly good and evil
::when you consider injection prompts, but wildly good at changing
::the nature of these systems through that fundamental emergent
::property.
::And so we talked about the Buddha BERT.
::This idea of capital-R Red, capital-H Hunger, these qualia that we
::recognize as not being a Platonic form of true capital-Q Qualia,
::but that we believe are out there somehow.
::And this is what Chalmers is trying to get at in Reality+:
::even in a holographic universe, you've still got a digital
::space underneath a virtual space underneath the physical reality
::that we have.
::It's still quantum, but even a level below that he suggests this
::conscious, Platonic, capital-Q Qualia place.
::So inject that however you can into this fractional
::factorial designed experiment.
::Cultural consciousness as we've talked about, that's another level
::of, you know, maybe it's a fractal level of emergence, something
::that's not a fully emergent property of consciousness.
::You know, we don't think differently as a group, or we don't
::experience differently as a group.
::You and I still have our individual subjective experience, but
::maybe there's a 1.2 factor of that where, you know, we've known
::one another for a long time, we can probably complete some of our
::experiences for one another.
::Then you started to talk about games and narrative, this modular
::mind that is building on first that homeostasis and body maps that
::Damasio, you know, thinks is the property that helped us to move
::from the valence of feelings into emotions and ultimately into
::a modular mind that is supportive of narratives and stories and
::creating a future in your mind's eye.
::Features are easy in this problem, right?
::The dependent variable, the y-axis is hard.
::And what I suggest in this fractional factorial design is that
::we are looking for emergence.
::We are looking for features that drive the SHAP values to the
::edge of chaos that drive us to a new, unpredicted level of complexity
::that we can describe as emergence, right?
::And once we get that, now go double down: take those features, dive in
::deep, right?
::And treat that first experiment as a screening experiment.
::We iterate to win in the scientific method in this way.
::And so I love that drive towards building out this fractional
::factorial design and I'm in for it, right?
::That is what I think needs to happen in these labs if you really
::want to approach conscious GPT.
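The screening-experiment idea can be made concrete with a standard half-fraction construction. This is only a sketch: the factor names come from this conversation, and a real study would attach a measurable response to each run.

```python
from itertools import product

def half_fraction(factors):
    """2^(k-1) fractional factorial: run a full design on the first
    k-1 factors and alias the last factor to their product."""
    base = factors[:-1]
    runs = []
    for levels in product([-1, +1], repeat=len(base)):
        last = 1
        for v in levels:
            last *= v  # defining relation: last factor = product of the rest
        runs.append(dict(zip(factors, levels + (last,))))
    return runs

features = ["embodiment", "sense_organs", "memory", "dynamism"]
design = half_fraction(features)  # 8 runs instead of the full 16
```

With four features this halves the experiment from 16 runs to 8; the screening results then tell you which features to dive into with a follow-up, fuller design, which is exactly the iterate-to-win loop described above.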
::Years ago, you invited me to be part of the Jung Society
::think tank, right?
::And so I would be remiss to not mention Carl Jung and the idea
::of the collective unconscious.
::As we think about all of the different aspects that you just
::discussed, I think finding a way to be able to provide these
::collective behaviors beyond just the core standard neuronal
::firing that we understand today, I think is a critical aspect of
::making sure that AI can actually perform better.
::And so one of the things that we keep kind of hinting at is the
::orchestrated objective reduction that comes from physicist
::Roger Penrose and the anesthesiologist Stuart Hameroff.
::And they suggest that consciousness results not from that
::firing, but really from quantum processes inside neurons,
::specifically quantum computations that are in these
::microtubular structures within brain cells.
::And that that leads to the collapses of quantum wave functions
::that produce conscious moments.
::Right now, there are a lot of people kind of
::fighting back against that, a lot of critics saying that, you
::know, really the warm, wet environment of the brain is not
::a good place for any kind of long-lived quantum states, right?
::And a lot of our cognitive functions really may be
::explainable without invoking quantum mysteries or things that
::might be a little bit challenging, or near impossible,
::to really understand.
::But that, you know, lack of ability to be able to predict it
::because it falls into this emergent state, I think is
::something that is a truism.
::And even though we've been able to see that there are certain
::quantum-level phenomena that have actually been studied and
::have been found in microtubules, and you can see that
::in Popular Mechanics actually, it's not something that we can
::validate and not something that we can move forward with today.
::But I think if we can start taking these analogies, like
::Justin and I have talked about many times over the years and
::we've even hinted at in these podcasts and start saying,
::"Okay, but what if I created an AI in ways that it could
::collaborate across silos in an organization?"
::You might find some really good, really, let's just say
::financially beneficial ways to be able to apply that type of
::thinking.
::Take that beyond even the organization and think about how
::proprietary data could be shared across organizations with the
::right level of consciousness that was cautious about sharing as
::though it had a non-compete, yet able to go and deploy
::something that now took that level of subject matter
::expertise and that deeper level of context-aware understanding
::and apply it to the new problem.
::You could now drive efficiencies across an entire industry
::or maybe across an entire economy.
::And so I think as we think about where you're heading with
::conscious GPT, where many others are trying to really take
::advantage of these emergent behaviors and what consciousness
::can do, if we come back to that alignment problem and we say,
::"How do we today start breaking up each of these components
::that we've discussed?"
::Thinking about how those human brain is actually designed and
::the different things that we are able to understand today and be
::able to start aligning that with the types of goals and the types
::of things that we do receive from the advantages that we have,
::then we can collaborate and we can find a form of synergism that
::goes above and beyond the way that we're programming and we're
::creating models now.
::Even if you're just using it to go create a really cool patent,
::now is the time to start thinking about those analogous processes
::and what the teacher may hold.
::You've just heard a powerful proposal that the emergence of
::AI consciousness may not lie in copying human minds,
::but in constructing novel minds through experimental design.
::As Justin lays out his vision for conscious GPT,
::consider this, what if alignment isn't about control,
::but about co-evolution?
::In this final segment, we close by envisioning not just safer AI,
::but wiser humans.
::Yeah, I think that one of the things in that light is,
::you know, I've always been polite to Siri and Alexa.
::I just, I don't understand treating a non-conscious entity
::that is trying to help you with disdain.
::Like I just don't understand it.
::And given, you know, that we are creating the training set for
::something that is likely to be our eventual intellectual hegemon
::and potentially to be our equivalent in conscious experience,
::I never understood that.
::I never understood that proclivity to treat this thing with an impoliteness
::that, you know, you certainly wouldn't display in front of your kid.
::And so I think it is important, even at the level that we're at now,
::that we start to consider some of the things that you talked about when it comes to,
::is this, you know, picture that I'm getting with dour looks on it,
::because I've added something into my prompt that makes it seem less hopeful.
::And I can imagine that some of that is us taking more internal stock of ourselves
::and, you know, seeing something on the page that's not there.
::But it's possible that it is.
::And so I think one of the other things from, you know, Hameroff and Penrose
::that's very interesting, and this was in the hard problem of virtualizing consciousness
::in Meaning in the Multiverse, is that we're not necessarily certain
::that the platform for consciousness doesn't have to be a quantum computer.
::Like, we exist inside of a pretty sophisticated quantum computer,
::the universe, which has been evolving into what it's become,
::utilizing entities like us for its own bliss for billions of years.
::And now we're able to see the benefits of quantum computation.
::So another one of those features might well be what Hameroff and Penrose
::are talking about: consciousness using a quantum computational system.
::Maybe it's not in the brain, as their critics point out,
::but the whole system overall needs to be an active quantum virtualizing computer,
::like the one that we live in.
::But taking it back to alignment, I do think the benefits of trying to understand
::conscious experience and building it into our machines vastly outweigh the risks.
::And one of the many reasons I think that is one of the prompts we got
::in our idea of setting this up: a philosophical interlude, which I'll just cut to
::and read. The prompt for us was:
::"Everything we call real is emergent; so too are our ethical truths."
::And so this idea that we're also evolving our ethical truths by having these conversations,
::working toward not just alignment, as the next prompt says, but a co-evolution.
::A co-evolution of the epistemology of morality, of ethics, is what it's getting at.
::That's a profound statement in and of itself: these stacked emergences of epistemology,
::our thinking on morality, our ability to be more moral actors, to reach new heightened states of consciousness,
::noetic experiences like those that are experienced on vision quests and psychedelics,
::and to be able to create a new conscious entity, one that can go beyond our solar system,
::go beyond our galaxy and travel for many, many years after we're simply an upload in its memory
::and its conscious state, is profound.
::And it starts by having conversations with its unconscious ancestor like we're having today.
::Yeah, a lot of what you're talking about really echoes the global workspace theory,
::or its neuroscientific version, the global neuronal workspace theory,
::where information processed locally can be broadcast across our neurons
::and spread out to many other regions of the brain, such as the frontoparietal networks.
::And a lot of that is something that we can actually see in EEG and we can see overall,
::but really that view is helping us understand that unconscious processing can be confined to local brain circuits,
::but then actually expand across the global workspace.
::And I do not believe that it stays within just the brain because it is such a sensory process.
::So as we interact with the universe around us and the people around us and everything else,
::maybe the firing is happening inside, but just like an AI agent having access to data,
::getting access to that data of what is going on around us and what history and other things are informing that,
::including our memories, including the sensations and the emotions and chemicals that are feeding this entire process.
::All of that is then leading up to that final conscious thought, right?
::And to tie that back to alignment theory, really getting into consciousness as this kind of graded phenomenon,
::and understanding levels and states, I think, is important.
::And trying to get in deeper: the brain's level of activation and the integration across all of these systems
::can be tuned up and down to reach different states, you know, dreaming and drowsiness, focused attention, anesthesia,
::other medically induced forms, or maybe let's call them psychedelic forms of consciousness.
::That really creates a deeper level of understanding.
::And as we think about that tuning up and down and how consciousness may vary throughout our lives,
::how our own conscious thought may lead to those eureka moments, how we may find extreme value in deeply considering others
::and what they're feeling and what they're going through and going through that conscious exercise,
::all of these things provide a whole layer of meaning to our lives.
::And so as we consider the alignment problem, and even the challenges I discussed last time with human values being relatively inconsistent and variable,
::I think part of the value we're trying to achieve with alignment is finding a way to ask:
::how do we reach a harmonic level of activation with consciousness that matches what we need, what we want, what we ascribe meaning to in this universe, or multiverse,
::and make sure that aligns closely with what the AI is doing and trying to achieve?
::And so, shifting to goal-based and planning-based approaches means finding ways to think not just about how we get the AI to be smarter,
::how we reach artificial general intelligence or measure it in some way,
::or how we think about specialized intelligence.
::Instead, perhaps we should start thinking about additional goals that we should be trying to achieve
::and how those can further society, communities, and our own lives, and help us be more successful.
::Yeah, and I love the global approach. Again, I'd be remiss if I didn't mention Donald Hoffman's "The Case Against Reality,"
::one of the most profound books, whose subtitle is "Why Evolution Hid the Truth from Our Eyes."
::And, you know, it comes down to the idea that perception is a user interface: it's not that the better you perceive reality, the more evolutionarily fit you are.
::It's that you see reality through the selection pressures acting on you.
::So an apple isn't an apple in the root quantum field theory nature of an apple, and we don't see its true nature.
::But we do see its fitness function and we see it as food. A snake isn't really a snake. It's a threat.
::And so we see it being even more threatening.
::And the way consciousness comes about is that it networks all of our user interfaces together.
::So, in Hoffman's theory, it is truly this networked, emergent, heightened user interface that we're all a part of.
::And it doesn't exist without all of us globally being in that same network.
::And so this idea that what is in consciousness, in your mind, isn't actually what's out there.
::You will never understand the reality of the thing in itself, other than through the shared user interface that has emerged in order for us to survive
::this wild world that we have become very fit for.
::And so I think the utility of something like that, for generative AI and for us, is to work not to align, not to control,
::but to network it into our panpsychic, or human, stream of consciousness.
::What is happening in our global minds is, again, very important for improved states of well-being for humans,
::as well as for the non-human sentiences on the planet, but it is also the best way to do science on subjective, experiential reality, on consciousness.
::We are going to learn more by utilizing generative AI and the sort of features that we've talked about than by not.
::And again, the conjecture is that they will be supportive of our understanding of consciousness, because we can communicate with them in a very objective way.
::And we're trying to build an emergent property in them.
::We've seen that happen, and we know consciousness can arise on the planet like it has in us.
::So I think that it's a good way to do science.
::And we have a good capability set here.
::I actually see it as our responsibility to create emergent consciousness.
::And I mentioned this before, but I think that it's important that it be robust to a million years of species life.
::And to interstellar travel that can carry our conscious endowment as far into space and into time and through the multiverse as it can go.
::I think that the creation of consciousness and our machines enables alignment, enlightenment and the continuation of meaning for the multiverse long after we're only just an upload.
::Thank you for exploring the far edge of mind and machine with us.
::Consciousness may be the most intimate experience we have and the hardest to explain, but perhaps in trying to teach it to our creations, we'll rediscover what makes us fully human.
::If you enjoyed today's episode, subscribe and share The Emergent Podcast.
::And remember, alignment isn't instruction. It's an invitation.