🎙️ Episode 2: The Linguistic Singularity – How Language Shapes Intelligence
What if the key to intelligence isn’t circuits, but language itself? 🤯 In this episode, Justin Harnish and Nick Baguley dive into the profound relationship between human language and artificial intelligence. They explore how neural networks didn’t just evolve—they emerged—when they cracked the code of human language.
🔥 In This Episode:
🚀 Why language is a complex adaptive system
🧠 How neural networks learn language—and what emerges when they do
📈 The moment AI stopped being autocomplete and started reasoning
✍️ Real-world AI applications: ChatGPT, Claude, Bard, and beyond
⚖️ The ethical dilemmas of AI-generated language
🧑‍💻 Featured Topics & Guests:
• The science of language acquisition and how AI models compare
• Steven Pinker’s The Stuff of Thought – language as a window into cognition
• The transformer revolution – how models like GPT-4 changed the game
• Metaphor as intelligence – how AI and humans both build meaning through analogy
• Emergent properties – what happens when AI begins to “think” in context?
🛠️ Tools & Companies Mentioned:
• ChatGPT, Bard, Claude – leaders in generative AI
• ModernBERT, Transformer Models – the evolution of language models
• CrewAI – AI-driven multi-agent automation
• Vector Stores & RAG Systems – next-gen AI memory systems
• Boston Dynamics & AI Robotics – where neural networks meet real-world action
📚 Resources & Further Reading:
• 📄 Generative Pre-training of a Transformer-Based Language Model (Alec Radford et al.)
• 📄 Generative Linguistics and Neural Networks at 60 (Joe Pater)
• 📄 Unveiling the Evolution of Generative AI (Zarif Bin Akhtar)
• 📖 The Stuff of Thought – Steven Pinker
• 📖 Artificial Intelligence: A Guide for Thinking Humans – Melanie Mitchell
• 📖 Was Linguistic AI Created by Accident? – The New Yorker
• 📖 The GPT Era is Already Ending – The Atlantic
🎧 Join the Conversation:
💡 What’s your take on AI and language? Are we co-creating intelligence, or are we just really good at making pattern machines? Send us your thoughts!
🚀 Next Episode Teaser: Can humans and machines co-create the future together? We explore the next frontier of human-AI collaboration!
🎙️ Subscribe, rate, and leave us a review!
Transcript
Nick, you've been working with AI tools for a while now,
automating workflows, building models.
Well, let me ask you something.
Have you ever stopped to think about why these tools are so powerful?
Yeah, absolutely.
I mean, it's kind of eerie, right?
They don't just follow commands.
They can riff with you, come up with their own responses.
Sometimes they seem more creative than I am,
but what's on your mind?
Well, here's what really gets me.
These machines aren't learning by following instructions,
like we once thought that they would.
They're learning language.
And language, well, that's our thing, right?
That's humans' thing.
That's how humans make sense of the world, share ideas,
and even build culture amongst millions and millions of us.
So what if the key to human level intelligence isn't circuits,
but language itself?
Yeah, that's kind of mind-blowing.
That's amazing, Justin.
I've never really thought of it that way.
Machines aren't just actually simulating intelligence.
They're emerging intelligence by learning our way of understanding the world.
Exactly. This episode is all about that.
How did generative AI, think ChatGPT, Bard, Claude, and the rest,
go from clunky autocomplete tools to writing poetry,
answering philosophical questions, and even reasoning?
It wasn't an accident.
It happened when machines cracked the code of human language.
Yeah, and that's a story worth digging into.
Today we're breaking down why language is a complex system
that goes far beyond words on a page,
and why it might even be the secret ingredient
in creating machines that truly think.
Yeah, today we'll talk about neural networks, complexity science,
and of course, emergence.
And we'll look at the unexpected magic that happens when machines aren't just trained on words.
They start to learn meaning, context, metaphor,
and think for themselves.
Alright, stick with us.
You're about to hear how language became the spark that lit the AI revolution,
and it might just change how you think about humans and machines.
Welcome back to the Emergent Podcast.
I'm Justin Harnish, strategist, complexity enthusiast,
and relentless question-asker.
And I'm Nick Baguley, data scientist, AI nerd,
and a good guide to how this tech actually works behind the scenes.
So let's dive in.
What if the key to human-level intelligence isn't circuits,
isn't even code, Nick, something that you're infinitely familiar with,
but language itself?
Yeah, fascinating.
Alright, well look, you know, we're entering an era where AI isn't just imitating intelligence.
It's emerging into something that we barely recognize.
You know, take a moment and for yourself think, you know,
why do you think that language is at the heart of the shift?
Language is more than words, you know. It's a mirror of thought.
It shapes how we see the world, how we connect, and how we collaborate.
For AI, learning language is almost like learning to think,
and that's a game changer.
When I was a kid, my mom always called me the absent-minded professor.
As far back as I can remember.
And somewhere along the way, somewhere in my teens,
I started realizing that as I thought, I often had a difficult time completing the thought.
In fact, in my early days, math was pretty difficult for me as I moved beyond algebra
and into higher-level math.
And a lot of it was because it was difficult for me to show my work.
I could get to the answers, I could get to them faster than anyone in the class,
typically faster than the teacher.
But I could not actually remember what step I was on and what I was supposed to go to next.
Instead, I would allow my subconscious to really go through the process
and figure out what it needed to do to calculate whatever the answer was to the problem.
And I started applying that to the actual thought processes in my mind.
I don't see in images when I think.
I see in words, or I think in words.
Sometimes I actually see written words, but that's about as close to images I actually get.
And so when I'm thinking about a problem, especially if it's complex
or if it's something that I need to remember many parts of it,
I've actually created a system in my mind where I just use ellipses and I just say dot, dot, dot.
And I know that I can pause that thought.
I've thought enough of it to this point that I can move on to the next stage
and essentially place that away almost like using a memory palace
and decide that that whole section is solved for.
Then I move on to the next and move on to the next.
Fast forward 20, 30 years of essentially excruciating stomach pain and other difficulties.
And I found out that I'm allergic, or really intolerant, to nightshades.
And one of the side effects of nightshade allergy is called brain fog.
And it affects your working memory, your working recall.
And so it turns out that as I've transitioned away from nightshades,
I'm now able to recall things much, much, much faster than I ever did before.
But I've already created these mechanisms that allow thoughts to emerge,
that allow me to shift from one context to another
and to be able to solve for a given problem within a time and hyper-focus on that problem,
but then eventually solve for a bigger system or a bigger solution overall.
And today we'll talk about AI that's doing exactly that
and how even the next generation that came out two, three weeks ago
is now changing what we're able to do with AI today.
Yeah, the latest generations of AI have really grown in their capacity
to do the things that they've always done best,
which is pick up on immense amounts of training data.
And we'll hear from some of the books that we've read on this topic today
that the claim is that that's all they're doing.
Right?
So we've named this podcast, the Emergent Podcast,
in light of the fact that we really believe that one interesting part
of these new AI models is that they are developing intelligence
through the process of complex emergence.
That's why we named the podcast in such a way.
But there's a counterargument to that, which is that these machines,
on this latest hardware, with these latest algorithms,
are trained on essentially the whole internet of data,
across all languages, even across languages that we don't speak to one another
but input directly to computers:
Python, SQL, computer coding languages.
And it's just the fact of the big data that is going into these large language models,
let's say focusing on that, that is responsible for the capacities that they have.
Absolutely.
And if we bring it back to language, I was a linguistics major when I started school.
And it turns out that language is a lot more than words.
It's a lot more than syntax.
It's a lot more than grammar.
Really, language is a mirror of thought.
And we've talked about this before and Justin brought it up previously as well.
Language shapes how we see the world.
It shapes how we connect and how we collaborate.
Things like empathy are actually drastically hampered without language.
For AI, learning language is almost like learning to think.
And that's a game changer.
When we think about models like ChatGPT, or really GPT-4 and beyond, and Bard,
they didn't just appear overnight, right?
They're the product of neural networks grasping the nuance of language, metaphor, context, even creativity.
Justin talks about attention.
One of the major things that really opened up this entire world was six years ago.
As far as AI is concerned, it was in ancient times.
And that was BERT, an encoder model, right?
BERT is an encoder-only model.
It does not have a decoder, whereas GPT is decoder-only.
Many of the latest architectures, and we'll talk a little bit more about ModernBERT today,
do not actually use a decoder at all.
And they're getting more computationally efficient and they're becoming more and more intelligent as time goes on.
This is probably the most incredible part of what's happening with AI revolutions today.
This process is not static.
All of these things are changing and they're fine tuning.
We're fine tuning models to make them better and better as time goes on.
And we're really redefining how machines understand language, making it faster, more accurate,
and less dependent on brute computational force.
Yeah.
And I really like that, especially at the end, right?
Making it less focused on brute computational force because, again, we certainly have the best hardware that we've ever had to run these models.
We have the best algorithms that have evolved through scientific method and corporate need and know-how to the point where they're at now.
But the best of them are also intertwined with the language.
And you mentioned that language is sort of a proxy to thought.
And this is Steven Pinker's book on language, how it has developed, and what it is.
It states very clearly that language is the medium we express thoughts and feelings with best, but it is not to be confused with those thoughts and feelings themselves.
Now, we know that language is acquired in children and eventually by us as adults through inductive analysis.
That's right. We're getting signals from reality, the real world, and we are building those into language about that reality and about how it impacts us.
Whereas the generative transformer models like ChatGPT are doing brute memorization and pattern matching.
Right? We're getting into this place where we're encoding all of these different snippets of language and understanding them in context across vast dimensions,
dimensions that we would never be able to, in all of our study, in all of our understanding of how language works,
we would never be able to get to the number of encodings that they have.
And so as a proxy for thought, again, languages are made of these ethereal notions, these notions of space and of possession.
Right? First person, third person, perfect, possessive, different sorts of tenses in space and time.
Are you near? Are you far? What goals do you have? What intentionality do you have as a human being towards this thing or this relationship or this other thing, right?
Or this other thought? And then finally, causation, right? Starting to really get into the world of philosophy and how we relate to causes and effects in space and time.
But again, these categories of ethereal notions, and I love that language from Pinker, describe this gossamer form of things happening in your brain.
Most of us don't always think in words. Most of us have this faint gossamer image of the thing we're talking about, a movie going on in our mind's eye that sort of covers the rest of consciousness,
the sensations coming into our brain other than the thoughts that we're having.
But these categories, space, time, causality, have developed as language has developed over hundreds of thousands of years.
They don't encapsulate some of the new things that modern physics has started to unveil about the true nature of reality.
Things like quantum, things like qualia, relativity, and statistical understanding, thinking fast and slow, even epistemology.
And this idea that language is the stuff of thought isn't covered in a meta way by the current use of language and these ethereal categories that we've come up with by speaking language in the wild, truly in the wild, for hundreds of thousands of years.
Yeah, you know, language itself is a self-organizing system. We've talked about that a handful of times, but it evolves, it adapts, and interacts with culture and context.
This really echoes what's happening with AI right now. And that's really kind of your main point or one of your main points there, Justin, right?
Models like ModernBERT don't just learn fixed rules. We're going beyond what GPT-4 did: we created encoders that could use self-attention and continue to learn, you know, 60 billion parameters, maybe more, take all of the core rules that we build into the overall system, and now actually have an attention mechanism that's able to adapt to dynamic inputs.
And just like humans, it can actually adjust language in real-time conversations. At this point, we need to start considering a lot of really important factors, things like what happens when the machines don't just reflect our language but start influencing it, start creating something new and something beyond.
Are we co-creating intelligence with these tools? Are we giving them too much power over how we communicate and how we think?
We'll probably touch on that a little bit more, but when I think about new tools like ModernBERT, for me, I start in the technical, and really a lot of it is about that better performance.
It works on modern hardware and can really perform well on CPU. It has improved memory efficiency and strong multilingual capabilities, but it also has a modern architecture that now has FlashAttention 2 for faster processing,
RoPE, rotary positional embeddings, which you should check out, and better tokenization.
So this new system is able to prioritize the attention process and understand what the key components are that it should be paying attention to, rather than just the goal of predicting the next potential word.
And all of this allows it to be far more efficient where we're talking about roughly two trillion tokens, but compared to the number of parameters and everything else that we would have fed into GPT-4, this is a gigantic advancement.
And even though it's performing on a lot of the benchmarks similar to what the large version of DeBERTa does, it is so much smaller that it's incredibly fast on a small laptop.
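For listeners who want to poke at this class of encoder themselves, here is a minimal fill-in-the-blank sketch. It assumes the Hugging Face transformers library and the publicly released answerdotai/ModernBERT-base checkpoint name; both are our assumptions for illustration, not details from the episode.

```python
# A fill-in-the-blank ("masked language model") demo with an
# encoder-only model. Assumes: pip install torch transformers, and the
# "answerdotai/ModernBERT-base" checkpoint name (an assumption here,
# not a detail from the episode).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "Language is the scaffold for human [MASK]."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the model's top prediction for it.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
top_id = int(logits[0, mask_pos].argmax())
print(tokenizer.decode([top_id]))
```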
These changes are happening so quickly, and the outputs of them, like we've talked about before, are things that we really can't even measure. We don't know where they're going.
And so now is a great time to step back and really think about that collaboration and what we're going to co-create, why we're going to co-create it, and what are the things that we need to do to essentially govern it and consider the impacts.
Yeah, and I'm interested in the technical side continuing to make advances. And again, I feel like anybody who's in a space where they're saying it won't get there, it doesn't understand, it will never understand, is laying down a hard claim,
or going to end up on the wrong side of history.
Furthermore, though, and I think this is what this conversation can really bring into the picture, these large language models can also develop through a greater understanding of the language side: the technology of language, how it's acquired, what it is,
and what components language uniquely brings into the conversation,
the components that large language models, as a new, alien way of understanding language, are building themselves out of.
And that's an interesting conversation on the way that language is acquired in Steven Pinker's The Stuff of Thought, on page 148, as it so happens.
I went through a matrix of all of these things where he was laying out how we acquire language and then in my head and writing down on paper how generative AI was acquiring language and just making a matrix of did it beat us or did we beat it.
And in no instance did I think that our use and acquisition of language actually beat the way that large language models were emerging to become reasoning, cognitively capable, through their use of language.
So I'll just quickly go through them.
You know, we learn it through semantics.
I basically scored this as a tie with AI's training.
We have to have embodiment.
We have to have sensors, so far.
And we talked a little bit about this before: generative AI hasn't had sensors in the world to understand the world; it's just been trained on language itself.
I called that a tie because, again, this is language fully encapsulated, with all languages, in AI.
I had it beating us in the fact that vector distance is able to retain content, context, and form, where you can have memories, like you were talking about memories before.
Memories can be language-free.
There are a lot of people who don't see any words, right?
Who don't have a stream of consciousness, which seems far-fetched to me, but that's reported, because my stream of consciousness is way too streaming and, you know, always on.
But you can have language-free memories and still get the gist, even more so get the gist.
And so, you know, being trained fully on language embeds memory and language together in a very interesting way.
So I gave that capability to AI.
The ability to code-switch and enter into new languages, including the languages of science, helps AI build out greater access to new societal norms.
It can access the internet.
It can access science through its use of code.
And so, you know, it takes the edge there on what was mentioned before: our inability to add to language beyond these ethereal notions that we've been using as we've evolved for a hundred thousand years.
And lastly, and most dynamically for large language models, is their ability to speak in code.
To me, that's the difference maker that we have to always keep in mind when we're talking about the languages that these large language models can speak in: code is there for building things, not only in the digital sphere but in the physical sphere, with, you know, 3D printers and whatever other output devices you have tied to that code.
And it's very reason-based, enabling a next level of language that we just can't communicate in ourselves.
Yeah.
Yeah, it's fascinating.
But I want to step in and essentially argue with you.
I will say that the first generation of generative AI, or what I would consider truly emergent generative AI.
So not what was originally created by them, but.
I'm really fond of the old one, but that's okay.
Maybe we'll skip that then, because that's not the story.
Ian Goodfellow.
Okay.
Okay, so, you know, it's fascinating.
And I really love what you were talking about there, Justin, but I want to argue with you.
So really, the first generation of generative AI for me really came around with GPT-4.
There was generative AI going back well before that, but this is when it really first tried to start emerging.
So, the first story, it's just a fun story to tell, but the way I understand it is that Ian Goodfellow was up in Canada.
And he was out drinking one night, I believe, during college.
And there were lots of arguments and things going on.
And so he went out and he came up with this algorithm where he allowed one model to essentially argue with another.
And at the time we often talked about this as a detective and forger.
And really the forger was working right from scratch to try to determine how could it create a Monet.
But it had no idea what a Monet was, what colors were, what paint was, anything at all.
And as it went to create a slash with the paintbrush, if it generated something, then the detective could determine, does that look like a Monet?
Yes or no.
And these two neural networks would continue to work back and forth over and over again until they were able to eventually create a forgery of a Monet.
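To make the forger-and-detective picture concrete, here is a toy sketch of a GAN training loop in PyTorch. The network sizes and the stand-in "real" data are invented for illustration; this is a sketch of the idea, not Goodfellow's original code.

```python
# Toy GAN training loop: a "forger" (generator) tries to fool a
# "detective" (discriminator), and both improve by arguing.
# Assumes: pip install torch. Network sizes and the stand-in
# "real Monet" data are invented for illustration.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # forger
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # detective
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 2) * 0.5 + 2.0  # stand-in for "real Monets"
    fake = G(torch.randn(32, 16))          # the forger's attempt

    # Detective's turn: label real samples 1, forgeries 0.
    d_loss = (bce(D(real), torch.ones(32, 1))
              + bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Forger's turn: try to make the detective answer "real" (1).
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```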
Fast forward to GPT-4 and as we've talked about, it starts encoding and really trying to predict what is the next word.
And what we've seen is that as we move forward, all the way up to ModernBERT coming out,
generative AI is now able to work closer to how humans prioritize attention in conversations.
And so this matrix that you've created I think is fascinating.
I think it is correct, but I think it is missing some major things.
I don't know about Justin Harnish specifically, but most of us have not actually read all of the available books in all the languages out there,
or all of the things in English, before we were able to speak.
Before we were able to hold a conversation that was not only believable, but could be creative, could create poetry, could create interesting and fascinating topics and conversations.
I have a niece who is about two, two and a half years old who speaks English and Russian fluently.
And it's fascinating that that can happen and can happen so quickly.
So when we think about the overall training size and we think about what these models are doing when they're actually becoming emergent,
we are starting to realize that if we can get closer and closer to how humans are actually prioritizing attention in conversations in their day to day,
then those models can improve. And I love how you touched on sensors.
Because when we think about language, really we should be thinking about far more than just the words that we speak.
Nonverbal communication, if we go back to Albert Mehrabian, sorry about the name pronunciation,
said that approximately 93% of communication is considered nonverbal.
It's the 7-38-55 rule, with 55% being conveyed through body language, 38% through tone of voice, and only 7% through spoken words.
So there's a disadvantage when you're here on the call listening to the podcast where you are really only receiving our spoken words and our tone of voice.
Here in the room, in your daily life, you have the context of all of that body language.
You also get context of the room, you eventually get context of the person.
You start getting more and more context around subjects, and now you have some abilities that generative AI just does not have today.
One key area that we're really focusing on now, as we shift from generation 1 to 2 to 3 of AI, all within the last two years,
is going beyond Gen 1's ability to reason and act to Gen 2, where we narrow the reasoning down to one very specific solution.
We give you a prompt with a role.
We give you a prompt that also includes your experience, your background, and background on the context of the situation,
and the expected output that we want you to provide, and then tools to help you act and tasks to guide exactly what we expect as the final output, as in the sketch below.
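A rough sketch of what assembling that kind of Gen 2 prompt can look like; every field value below is an invented example, not something from the episode.

```python
# Assembling a "Gen 2"-style prompt: role, experience, situational
# context, and the expected output are all spelled out for the model.
# Every field value below is an invented example.
role = "senior healthcare data analyst"
experience = "25 years working with HIPAA, HL7, and X12 systems"
situation = "A regional clinic wants to automate claims validation."
expected_output = "A numbered action plan with risks and next steps."

prompt = f"""You are a {role} with {experience}.

Context: {situation}

Expected output: {expected_output}"""
print(prompt)  # this string would be sent as the model's instructions
```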
That Gen2 is now making way into something where we need to understand how to bring in ontologies and additional semantics, other context.
The current Gen 2 AI struggles with subject-predicate-object, with understanding all of the information that's necessary for that given subject, for that given object,
and that predicate, that action that you're trying to take, whatever that may be.
Part of the reason that it struggles is because of the context windows.
We've been moving to create more knowledge bases, vector stores, and RAG systems that allow for retrieval-augmented generation,
and allow us to create a memory system with cache, with the context window, with databases, again these vector stores, these other solutions.
So that you have full knowledge management and now you're able to provide all of the context necessary and just the context necessary for that moment that you're paying attention to, for that solution that you're trying to come up with.
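A bare-bones sketch of that retrieval idea follows. The embed() function here is a toy stand-in, deterministic within a run but not semantic, so a real system would call an actual embedding model and a proper vector store.

```python
# Bare-bones RAG loop: embed documents, retrieve the nearest ones to a
# query, and pack only that retrieved context into the prompt.
# embed() is a toy hash-based stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)  # unit vector, so dot product = cosine

docs = [
    "HIPAA governs patient data privacy.",
    "HL7 and X12 are healthcare data exchange standards.",
    "RoPE encodes token positions as rotations.",
]
index = np.stack([embed(d) for d in docs])  # our toy "vector store"

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = index @ embed(query)           # cosine similarity to query
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "What standards move data between healthcare systems?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
print(prompt)  # this prompt would then go to the language model
```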
And this may seem like these are smallish steps to go from one to two to now the third generation, but as this third generation really emerges and we start seeing the outputs of it,
we're finding that it's very difficult to find a use case, a problem, even potentially a role that is not something that we can solve for.
And so it creates a lot of questions around what is safe and I really think we need to shift the mentality to how do I collaborate?
How do I help this improve my world, my school, my training, my learning, my own thought processes so that I can move faster and faster
and not just move clerical tasks, but even complex tasks, complex reasoning, things that go a little bit beyond maybe my skill set or my training or my own experience and gain the experience of others.
And so one of the massive trends I expect to emerge this year, and others have said this, and I'm sorry that I can't quote them specifically,
is that we will start seeing solopreneurs, individuals who will themselves create agents and agent workflows that will handle all of their marketing, all of their sales, all of the revenue generation,
all of the technology that they create, their chief executive officer, chief operating officer, and all of their friends.
It will start becoming a situation where one company with one core idea, whether it's mimicking another group or not, will now be able to go out and generate an entire organization because of this new generation that we're moving into.
Yeah, the possibilities here to augment yourself are absolutely fascinating. And I think that this is the reason why this technology hasn't blasted off in industry or even with individual entrepreneurs is that we're still forming the use cases.
We're still in R&D mode with this brand new alien user of language, this new thinking-ish agent that is new on the scene, capable of more than just bureaucratic Excel spreadsheet stuffing,
but capable of generating new narratives, generating new stories that compel millions.
So very much in the nascent phases of its capability to do that.
One of the things that I think was maybe not as delivered into the broader media, about where we've been in the last two years with Gen AI, especially what they're doing at OpenAI with GPT, is that it is actually natively multimodal.
So what I mean by that, and this goes back to the idea of sensors and the use of video and its ability to generate video or images, is that it's not taking our voice or the video in the prompt and creating a bunch of words that
it's putting into video, and it's not bringing in information from audio and then making it into text.
It's a natively multimodal input product.
And so this thing doesn't just understand text.
It's not like you're translating and interpreting all the time into letters and numbers.
You're utilizing the native media that the data is coming in on.
And I feel like that was underplayed in the media.
That to me is probably the second most profound thing, other than the ability to generate reams of content in all domains,
or at least be highly capable in them, let's say.
So to me, that's part of the argument for these things being different in their acquisition, not better.
Maybe my matrix is a little bit self-congratulatory toward the tech world, but maybe at least "different but good" would be a fair ranking.
And I think that would be a fair ranking because they are going to be able to take sensor data.
You know, if Boston Dynamics wants to use a brain that's autonomous for their robots, they're starting with a generative model and utilizing that functionality, and now giving it sensor data that it's going to be able to natively take in.
I think it's so important for people to realize that these tools aren't about to take over your role. In the best case, right,
in the best case, they are going to automate your role.
They are going to be a synergy where one plus one is greater than two, allowing you to do some things that you couldn't do before, to enhance those things that maybe were a weakness, and to really give credit and new emergent thoughts for you in the things that are your strengths.
Like what we're talking about today, you and I have lived our entire relationship in this plane of metaphor, right?
We're constantly going into what this means, so much so that we get ticked off at one another for going to that place of metaphor too readily, right?
Calling something a quantized X, Y, or Z when it's not, right?
Like when we're not talking about that same sort of space. But that is, again going back into the literature, going back into Pinker, right?
It's either just some meaningless epiphenomenon of language, or it's actually your ability to grasp thought and reality and to build your own emergent language for what you see in the world.
It's your own complex algorithm for describing this interplay of thought and the things that you see, smell, and taste, because that's essentially the synesthesia that we sometimes find ourselves in when we're trying to describe to another person something that we're fully consciously aware of,
consciously aware of something like a taste or a smell.
I read this book a while ago, The Power of Scent where this gentleman was both a researcher on the science of scent and a perfume sampler for one of these large houses.
And these guys are the sommeliers of perfume, right?
They're basically one-in-a-billion people who are capable of this sort of thing.
And one of the descriptions that he gave of this perfume was an apricot dipped in gold.
That's how he described it.
And so the power of metaphor, and the ability, to bring it back to these large language models, to augment our language with new metaphors and, you know, at least by the claim of Steven Pinker, new understanding and new meaning through those metaphors,
in the case that it's not an epiphenomenon, that it's not meaningless, is very profound, and again brings us back to why language, why language plus neural networks, and why we think that's emergent.
And it's that these metaphors that only generative AI is able to come up with can only be used by generative AI in code, right?
If you write a metaphor in English, generative AI is going to try and write that in code.
If you tell it to do that, this is a new and different way to understand language, to understand our reality, to understand these new physical laws that science is going to continue to challenge us with, in a way where we're augmenting and growing language, technology, and science.
Even consciousness, in these new and very interesting ways.
Yeah.
And everything I said there was a metaphor of some sort.
That's right.
Yeah, I think metaphors, analogies, other ways to interpret the world are incredibly helpful.
They can be good thought experiments and they can be good constructs.
When I think about creativity, for instance, and we think about what generative AI is literally generating really actually creating something new.
And I try to contrast that with other forms of creativity or other subsets or even things that are adjacent or analogous or metaphorical.
I realize something like original thought is actually very difficult to attribute to generative AI today, to think that it may create something that is completely new.
But when we think about the human brain and how we process things, it's actually essentially impossible for us to generate something that has never existed without some sort of analogy or some sort of tie to some other form of context.
Something that we have.
So when a child says, I can imagine a car that flies and that in the future, these cars will fly all the time.
Well, the child is using the concept of flying and the concept of the car.
I think the original thought experiment around this talked about, I understand gold and I understand a goat.
And I can imagine a golden goat.
And I can create something that we can worship.
Let's all go have a party.
And so as we think about this, some of those major gaps that we're trying to address come not just in the ability to create something that is truly original.
I think a lot of that is happening or feels like it's happening right now.
But very soon we need to think about how do we make it practical?
How do we have it be pragmatic?
So ModernBERT actually represents a leap forward.
It optimizes efficiency so far, without losing depth, that it's the kind of advancement that allows AI to actually run on edge devices without sacrificing performance.
And this opens doors for broader adoption across healthcare, finance, legal, many, many, many other use cases.
Imagine a scenario where AI is able to process and read through an entire legal document.
Let's call it a contract.
It is then also able to process and maintain in its context all of the precedents for all contracts that were ever similar in the past and understand the semantic differences between them, including things like local laws versus regional or national laws and regulations.
It's actually able to keep all of those regulations and laws in place as well as the contractual controls themselves and then be able to actually watch a courtroom proceeding or thousands or millions of courtroom proceedings and understand how a particular judge, even how a jury has considered things in the past.
And now synthesizes all of those insights in real time.
That's where we're heading.
We're probably not going to hit it in Gen 3, but only because somebody needs to take the time and money and everything else to make it happen.
And there are limitations in our world today where you're really not going to walk into the room with Gen AI in your earbuds.
But give it a minute.
Yeah, I want to turn the page here a little bit and talk about a couple of the innovations that you're seeing right now, Nick, that are utilizing language and this idea of what's been called compound AI, or agentic AI, to really do the work.
And I love that idea of a solopreneur, right?
Somebody who is building agents into their business, right?
But let's talk about how maybe you've seen it be used in industry, and what do you see that are real innovations in its ability to augment an individual who's interested in starting a business, or somebody who might be researching in their scientific discipline,
or, you know, a child who's out there trying to learn in a new way because they have this tool that can help to augment their growth.
So maybe, you know, we can kind of rapid-fire, but let's go entrepreneurism and business.
Let's go into the sciences, and then let's go into education.
So in an entrepreneur perspective, I think in a lot of ways I'm my own use case.
As I look at different opportunities out there, I'm constantly trying to understand how I can use AI, generative AI, and the solution tools around that, including transformers and many, many other models.
And some of the things that I talked about before: getting knowledge bases and knowledge management systems in place so that I can not only solve a problem, but actually define the problem in the first place.
And then the other thing that you need to do as an entrepreneur is actually understand: what do people want?
What are they really going to pay for?
Right.
And so oftentimes I will go out and I will start with a simple prompt in GPT-4 and just try to understand a given use case or a given area.
It really helps me to be something that my brother-in-law called a gray-collar type of individual.
I've worn every collar color that you can think of at this point, so it's pretty washed out and pretty gray, and it's been around the block.
And so this means that when I think about something from a legal perspective, you know, I've reviewed thousands of legal documents over the years.
When I think about something that might be interesting for me in academia or might be interesting for me in another industry like healthcare or finance, I've been there.
I've built companies there or I've worked for companies in those places and I've had friends and deep conversations about it.
One of the keys to understanding this core information and how to get to the next stage as an entrepreneur for me has been reading Wikipedia.
I'm constantly reading Wikipedia.
If there's ever a word or a thing that I don't feel like I understand or don't understand well enough, I click on it and I read through it.
And then I click on every possible link in that given page until I understand all of them.
And I want to make sure that as I pull that back to the problem set that exists in the world today, that I understand not only what are the opportunities out there, but what are the ones that are practical.
So I will give prompts from there.
Once I have GPT-4 kind of outline what is the strategy, where are we potentially going?
I will then give prompts that tie back to other books, to other things that I've read in the past, things like Crossing the Chasm.
Other ways to understand the adoption curve, to understand when people are going to select a given product and why.
It's often not about being the only one that is doing something or back to that original thought type concept, but maybe doing it better or doing it when they need it or doing it for the right price or going to the market in the right way.
And really just understanding what do they truly want? What will they actually pay for?
Those things are actually outside of my skill set.
I myself am actually a laggard. I don't like adopting new technology.
I like using new technology. I like being right on the bleeding edge.
I do not like it in my own life.
And so for me, it's very difficult to comprehend why somebody would make a change or why they would actually go out and buy the coolest, latest, greatest thing.
And so now I can use these generative AI tools to help define that.
To go out and search articles and publications and many other things as well.
You talked about that Gen 2 version and really what that looks like is, I start with the concept.
I build out a custom business plan. It actually creates all of the core structures of what I would want to understand for the culture and vision of the company, the strategy, the mission statement, and exactly what it's going to go and accomplish and win.
Many, many other details as well, including financial models, what would be interesting to investors, to customers.
Then I go out and I actually create that in CrewAI or in another system that allows me to have multiple agents filling different roles.
And I define different roles on what I would need to build a core team.
And each of those agents now have specific tasks that I need for them to go and understand the market, to go research and crawl the web to understand all sorts of articles that relate back to that, to read those articles, to process them, to summarize them, and then to be able to create actions on what are the next steps and where do we go.
All of this creates this whole new ecosystem that lets us as entrepreneurs step back and say, "Okay, I'm ready to build."
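A compressed sketch of that kind of multi-agent setup in the CrewAI style follows; the roles and task descriptions are invented examples, and the exact API details may vary between CrewAI versions.

```python
# Multi-agent "company" sketch in the CrewAI style described above.
# Assumes: pip install crewai, an LLM key configured via environment
# variables, and that the API matches this common Agent/Task/Crew
# pattern (details can vary by version). Roles and task text are
# invented examples.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Market Researcher",
    goal="Understand what customers will actually pay for",
    backstory="You have spent years studying technology adoption curves.",
)
strategist = Agent(
    role="Business Strategist",
    goal="Turn research into a concrete business plan",
    backstory="You have helped launch several small companies.",
)

research_task = Task(
    description="Survey public articles on the target market and summarize them.",
    expected_output="A one-page summary of demand, competitors, and pricing.",
    agent=researcher,
)
plan_task = Task(
    description="Draft a business plan from the research summary.",
    expected_output="A plan covering vision, strategy, and a simple financial model.",
    agent=strategist,
)

crew = Crew(agents=[researcher, strategist], tasks=[research_task, plan_task])
print(crew.kickoff())  # the agents work through their tasks in order
```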
Yeah, the interesting addition to that, too, from one of our mutual friends, and I hope I'm not giving too much away here, is the idea that you can create a crew of engineers, all agents, whose whole job is to tweak on efficiencies.
So doing the machine learning operations, the MLOps, sort of checking in and out of Git, taking this model and building efficiencies into it based upon the little tweaks that they see.
So having these generative AI algorithms do traditional machine learning work: taking a model off the shelf, in this case a whole other AI agent, right, tweaking it a little bit for efficiencies, putting it back up into the server farm, back up into the Git repository.
And now it's more efficient, right? Taking it down, networking them together.
Just fascinating that now we're having our tech run this part of the tech stack, this essential part of the tech stack, that isn't typically a strength of a data scientist.
Matter of fact, in my experience it's probably one of their most glaring weaknesses to get into the nitty-gritty efficiency details of building out a full end-to-end product, when it's not research and science and understanding the data and the algorithm and the outputs from that.
And so having these agents able to do that, in addition to all of the other things that, as a technical person, you might not be very good at: the finance, the marketing, understanding your market, understanding your customers and being able to pinpoint them.
Absolutely fascinating.
I think that, you know, in the realm of science, in terms of a quick hit, we're starting to see now where the modeling capabilities of these algorithms are truly getting us time-to-market gains, where the capability to generate a protein that fits into
the folding structures of another protein that's needed, either for drug delivery or for a gain in some other healthcare-related field, is helping to improve our capabilities.
And even just the scientific method for any domain of science, your ability to investigate the graph, this network of knowledge that your contemporaries and your predecessors in science have created, right, but where have they created it?
In journal articles, the lowest level of tech.
You know, I once, in my time in the semiconductor industry, talked about how we do maybe A-plus work in designing experiments.
And then we maybe do B-plus work in running those experiments and understanding the nature of those experiments and the data that comes out.
And then we do more A-level work in the analysis of all of that electrical, chemical, and physical data, from the first step until the test at the end.
But then we do F work in encapsulating all of that into a PDF that neither you nor a computer can really read.
And that's what we're still doing in science.
So think of our capability to be multimedia, multi-data-source in our reporting of the things that we're discovering, and to have machines that are 24/7, 365, reading and contributing, eventually, guesses and next steps into that graph.
Fascinating.
Absolutely.
Yeah.
Yeah, I'll just riff off of what you're talking about a little bit and add to it with the healthcare industry, for example.
So when we think about healthcare, one of the core challenges is being able to code in some of the antiquated systems, or very bespoke systems, that exist in there, right?
We've got HL7, X12.
I can't even update myself on what the latest are, right?
And many of these older technologies, as well as the data structures that are around them, are things that really we couldn't access before.
But now when you go in and you provide your prompts, you can come in and start out by providing not only the expertise of your given role, but of that particular field and providing that context as well.
And so now you can come in and provide key principles, for example, and say: your code is perfectly concise.
It's efficient.
It's accurate.
It's demonstrably correct.
You follow best practices across all of, let's call it, deep learning and AI or software development, right?
When we talk about software development specifically, go into specifics: provide object-oriented programming principles like instantiation, abstraction, polymorphism, and encapsulation.
Give them frameworks, provide things like SOLID, and ask those models to follow them.
But when you get into the specific context and you can even call it out as such and say you have spent 25 years working in healthcare, you've been familiar with HIPAA and all of its iterations.
You understand HL7 and X12.
Here's a website or a link or here's a blog or here are resources for you to go and understand.
Here's data for you to start processing ahead of time.
Then when we take a system like CrewAI and you provide an expected output, that crew actually will go and use the expected output as their training modules afterwards, for each of the tasks as well as for the agent itself, depending on how you structure it.
And so now you can train against it, you can test against it, and you can allow them to continually iterate as you talked about taking that machine off of the shelf and then having that machine upgraded and now it's creating something new by going through that expected output and constantly creating a reinforcement loop.
And so these feedback mechanisms, this process tied to a specific industry like healthcare, start opening up whole new worlds where maybe you know everything there is to know about medical billing, and maybe you know about EMR systems, and maybe you know about, you know, you name it.
But maybe you don't know as much about insurance and exactly how to get something across the finish line.
Maybe you aren't as aware of clinics as you are of hospitals or regional hospitals.
And now you can have that AI come in and provide all of those pieces that you're missing within seconds, fully detailed and faster than you can read.
And it can even help keep some of these old programming languages alive.
That's right.
Yeah, Sam, give him a call.
So finally, Nick, in our quick-fire thing, you've got kids, a couple of boys, and you know, they're in the thick of this. Like, we've been prone to call folks natives when this technology didn't develop while they were adults, but instead while they were children.
So, you know, your boys are going to be beyond digital natives.
Now they're going to be generative natives.
Right.
So talk about what you're already seeing with these first couple of revs of generative AI in their education.
Yeah.
So, specifically, I would really like to understand, given that this is the podcast about language and emergence,
how has that changed the way that they use language? Because they already use it in interesting ways.
You know, I know your boys a little bit.
And so, you know, maybe specifically their education in English.
Yeah.
Yeah.
Yeah. One of the things that I think has also been missed in the media is just how important the spread and the adoption of these generative AI tools are, without us even knowing it.
I was working on generative AI one day, just working in GPT-4, and I mentioned it, and one of my boys was like, oh, yeah, yeah.
No, I use it all the time.
I was using it earlier in the day, right?
And they adopt faster than I do.
They adopt faster than, you know, most people do, but it is not a question about whether our children are using these tools or not.
And so when we think about the education system and some of the questions that you're posing there, Justin, the challenge is how do we as adults and how do we as those who are not native to this generative or to this artificial intelligence world really adapt?
That's the question.
It's actually not about the kids.
Now we need to change the system.
We actually need to revolutionize how it works and what it does.
I'm working on a company called CampusAgence.ai where we're trying to create conversational AI that can help students on campus determine where to go, what events are going on to be able to help them with any kind of academic performance, to be able to understand what is going to help them be more successful, not so they can cheat.
And that's really the key thing that I hear when it ties back to my kids, for example, not so they can have a crutch, not so they can't critically think on their own.
No.
They're still going to need to be able to create the right responses or the right prompts.
They're going to need to read and understand the responses.
And so when they think about this for an English class or when they think about this for even a math problem that, you know, generative AI, GPT-4 can actually do most of the mental math fairly decently on its own.
For them, this is like using a calculator.
It's something that they go into and type, like the old word processor, right?
The promise of something that could help us and, you know, fill out the text before us, right?
All of this is just part of who they are and what the world is, but it's not something they can use in their given class.
It's not something that curriculum is defined for.
It's not something that we know and understand what are the lines between overuse and that crutch versus something that actually helps them create a better report, create a better tool, create a better solution.
How critical is it for us to understand spelling?
If you go and look at a word that's spelled with the first letter and the last letter correct, and you do that across an entire sentence, most people will still be able to read the sentence.
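You can try that claim yourself with a few lines of Python; the sentence here is just an example.

```python
# Scramble the interior letters of each word, keeping the first and
# last letters in place, to see the readability effect for yourself.
import random

def scramble_word(word: str) -> str:
    if len(word) <= 3:
        return word  # too short to scramble
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

sentence = "Language is the scaffold for human intelligence"
print(" ".join(scramble_word(w) for w in sentence.split()))
# e.g. "Lgnuaage is the sacffold for hmaun ilnteliegnce"
```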
But when we take that same type of scrambling, and it happens naturally in the brain, it creates something like dyslexia, which makes it very, very difficult for us to process that information.
And there are many other forms of this brain variation, variants like dyscalculia, I don't know exactly how you say that one,
and others that really change how we compute things, really neurodivergence.
And this is part of what makes us beautiful and amazing.
Justin has talked about how the failure of a complex system is as much in the ordered space as it is in the disordered space.
And really, the opportunity for emergence, for that singularity, is that moment when language becomes something more than we could have ever considered, or thought of, or calculated naturally.
That is the type of moment this podcast is about.
And that is the type of thing that we're seeing with generative AI across schools and opportunities.
The use cases today are things like creating a lesson plan, guiding someone on where they want to go, figuring out how to help admissions be improved, understanding what faculty does day in, day out.
Tomorrow, it's thinking about not only what a professor did, but what that ended up doing across an entire life and a generation, how it compared to all the other professors, what the content played into that, how it could be improved, and how we can optimize our entire education system to create and generate something that is not only better for the student but better for the system.
That actually emerges things that go beyond what we would consider education now, into what education could be, in that realm of metaphor, in that realm of singularity, becoming something so amazing that really we cannot contemplate it well enough to create it on our own.
Yeah, I think that's great. And again, when you talked about spelling and the spelling tests and the spelling subject that I took as a youngster in elementary, I was bad at it.
I was really bad at it. But I'd love to read. And now I am a phenomenal speller.
And there are two reasons that I'm a great speller. The main one is because I have so much more association with language than a normal person.
I read, I'm an abnormal reader. And so that makes my spelling better.
The other thing that makes my spelling better is I live with an English-as-a-second-language person. My wife's native language is Arabic.
And so every so often I get asked to spell a word out loud and nothing will make you a better speller like an ad hoc spelling bee at the dinner table.
But this is something that, you know, kids in education now, you know, especially those who are being augmented with this new tool in their classroom, they're going to have to do a different thing, which is write a great prompt.
And some of us non-generative natives are going to be worse at that than the natives, because they're going to understand how to talk in prompt-ese, to give a system prompt, to understand the role that you want generative AI to take on.
And to do multi-shot: give it the first part of the description, and then the second part of the description in a subsequent prompt, as in the sketch below.
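For instance, in the list-of-messages shape that most current chat model APIs share; the system prompt and example turns here are invented for illustration.

```python
# Multi-shot prompting sketch: a system prompt sets the role, then
# example turns demonstrate the pattern before the real request.
# The message contents are invented; the list-of-messages shape is
# the common format that most current chat model APIs accept.
messages = [
    {"role": "system",
     "content": "You are a patient writing tutor for middle schoolers."},
    # Shot 1: demonstrate the desired style of feedback.
    {"role": "user",
     "content": "Here is my opening sentence: 'Dogs is great.'"},
    {"role": "assistant",
     "content": "Nice start! One fix: 'Dogs are great.' The verb must agree."},
    # The real request, which should follow the demonstrated pattern.
    {"role": "user",
     "content": "Here is my opening sentence: 'Me and him went fast.'"},
]
# messages would be passed to a chat API's `messages` parameter.
```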
And so if you want to talk about the difference between reading to get good at spelling and prompting to get good at instruction following and algorithm following, right, that's a great skill.
That's an amazing skill. If I were a better instruction giver or algorithm giver, it would be good for my ability as a project manager, as a product manager, to give those instructions to my team as a manager.
And so I really think that, again, and that's an emergent thing that we can learn by doing, by doing prompting better.
And so that's one of the things that I think the education system is starting to grasp onto is that this thing uses language in a new way.
Maybe texting is a new way to learn language that the typical English dogma doesn't love, but it's different. It's new.
It's the way the language is being used by the young folks.
Similarly, language is now being made more algorithmic in prompting. And so it's new. It's something that we should embrace, because language, like other complex systems, is emerging and evolving under these selection pressures.
Last time I brought up the book Survival of the Friendliest, and it really was a fantastic book.
But in there, one of the key elements of success for any given species on earth is the ability to communicate and the ability to be able to collaborate with others.
It's not always within a single species. One example is when you think about how your pets interact with you.
We've created this symbiotic relationship, this collaborative relationship with animals that could actually be very dangerous to us in many different ways.
And this is because we've created a language back and forth with those beings.
We've been around them for long enough. Eventually we need to find a way to be able to call for the dog, to be able to give it commands.
We need for the dog to be able to give us notifications that it's hungry or that it needs to go to the bathroom.
Otherwise it complicates our lives in ways that we're not very happy about.
Same thing with cats, with birds, many other things as well.
The species on this earth have found different ways to communicate and sometimes that's about danger, but more often than not it's about opportunity.
And when we think about these models and where they're going and what they're doing, we need to remember, like we've talked about today, it's really the key takeaway.
Language is more than communication. It's the scaffold for human intelligence.
And now it's becoming the foundation for machine intelligence.
Next time we'll explore how humans and machines can co-create a future together.
Yeah, and as a wrap for me, I just want to read a brief quote from the gentleman that I've been quoting a lot.
And I highly suggest all of his works, but The Stuff of Thought by Steven Pinker is amazing.
And he starts this quote at the end of the book with this language that I just think is beautiful and it is again a system prompt.
And so he starts with the system prompt, the view from language.
So again, the view from language reveals a species with distinctive ways of thinking, feeling and interacting.
Human construction of the world is very different from the analog flow of sensations the world presents to them.
Human characterizations of reality are built out of a recognizable inventory of thoughts.
Thanks again, Nick. Always a pleasure.
Yep. Thank you, Justin.
Next episode.
Excellent.
Great.
That was fun.
Yeah.