I want this to be as interactive as possible, and I'll let Gary introduce the topic. So on that note, I'll hand it over to you.
Sure.
Yeah, I guess we just want to talk about the hype that's out there: is AI intelligent? Does AI reason? Is it sentient? Will AGI happen?
These are very philosophical questions, but I think they're salient to adoption and responsible usage in business, government, and healthcare, so that we don't overestimate the capabilities of this technology.
So I think this is an important conversation.
I think a lot of the hyperbole is coming from the creators of the tech, largely for marketing purposes. But the people who want to use this tech should have a good understanding of what it actually is, and that's the conversation today.
You know, is AI intelligent? Is it sentient? Can it reason? Will it replace humanity? All of these kind of questions.
So how do you want to do this?
So you want me to go first, give two or three minutes of points, then you go, and then we go back and forth? Very happy to do that.
I just talked for like 20 minutes. Okay, so I'll give you a break. I'll give you a break.
So I have some points on my phone here. Let me just pull those up.
So for me, AI... Can you all hear when Gary's speaking as well? Maybe put the mic a little bit closer. Closer? Okay.
So for me, AI is just technology, right?
So for me, the two words I take real umbrage at are intelligence and learning. These are terms created by academics to shorten conversations amongst themselves when they write papers and talk to each other, just like you said "MCP" because you didn't want to spell out the full terminology.
Intelligence and learning are shorthand academics use to make conversations easier, where both sides of the conversation understand what the terms mean. But lay people and the lay media take those terms out of context and assign definitions to them that aren't necessarily true, and I think this is the game that the creators of a technology play to generate hype, interest, more sales, and so on.
If I go back and think about what the academic definition of those terms really is: intelligence simply means a parameterized mathematical model. Every algorithm has an implied mathematical structure that it tries to fit to the data it's trained on.
So an LLM, which is deep learning plus some other structures, has an implied mathematical model with billions of parameters. It is presented with petabytes of data over many months and something like a billion computational years, and it fits those parameters to the data, trying to meet some objective function. And so from my perspective, a mathematical model is not intelligence from a machine.
The human being is the creator of the mathematical model and then the model is fit to the data. So this mathematical model is a human created construct. It's not a construct that arises from any algorithm.
So that's the first point: the word intelligence as used by academics and technology creators, when they're in a room amongst themselves, doesn't mean there's any intelligence akin to a human being or other sentient life.
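To make that concrete, here is a minimal sketch of what "a parameterized mathematical model" means. The linear form and the parameter values are illustrative assumptions, not anything specific to LLMs:

```python
import numpy as np

# A "model" in the sense described above: a human-chosen mathematical
# structure with free parameters. The structure itself is a design
# decision made by a person, before any data is seen.
def model(x, w, b):
    return w * x + b  # the implied mathematical structure (here, a line)

# With arbitrary parameter values the structure already produces output;
# it just hasn't been fit to any data yet.
print(model(np.array([1.0, 2.0, 3.0]), w=0.5, b=-1.0))
```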
Second, learning is a stand-in for mathematical optimization. That's what it means.
Every algorithm has some type of objective function. For LLMs it's typically something like cross-entropy; if you're fitting numerical data it might be root mean square error or some other metric like that.
And in training mode, when you present training records, when you're trying to teach, again a word taken out of context, when you're trying to fit the mathematical structure to the data one record at a time, there's an algorithm that adjusts the parameters based on some optimization procedure, be that gradient descent or backpropagation or any number of complex variations.
So the algorithm adjusts the model parameters, trying to minimize or maximize the value of the objective function, and this is not learning akin to how a human being learns. It's just a mathematical optimization, a technical exercise.
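As a concrete illustration of "learning as optimization", here is a minimal gradient-descent sketch that fits the toy model above. The synthetic data, learning rate, and mean-squared-error objective are all assumptions chosen just for the example:

```python
import numpy as np

# "Learning" as mathematical optimization: adjust parameters to minimize
# an objective function (here, mean squared error) via gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=100)  # synthetic training data

w, b = 0.0, 0.0   # parameters start at arbitrary values
lr = 0.01         # step size for each parameter update

for step in range(2000):
    pred = w * x + b
    err = pred - y
    grad_w = 2 * np.mean(err * x)  # partial derivative of the loss w.r.t. w
    grad_b = 2 * np.mean(err)      # partial derivative of the loss w.r.t. b
    w -= lr * grad_w               # move the parameters downhill on the loss
    b -= lr * grad_b

print(f"fitted parameters: w={w:.2f}, b={b:.2f} (data generated with w=3, b=2)")
```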
And then the third point: analogies that get taken out of context, namely the names of the algorithms. When we say "neural network", that's an analogy to how the mammalian brain appears to be structured. Certainly a very interesting motivation.
So looking to nature to find motivation on how to build software and algorithms is a very useful exercise. But computer science neural networks don't really mimic a mammalian brain because we don't really understand how a mammalian brain works.
The human brain does not back-propagate. I've heard Geoffrey Hinton say the human brain back-propagates. That's lovely, Geoffrey, but not really.
We don't really have any idea how the brain works. The human brain runs on about 20 watts. So how can a 20-watt piece of biological matter learn to recognize images or language from a tiny fraction of the number of examples you'd have to present to most algorithms? That stark difference matters, when training an LLM takes something like 100 megawatts of energy, several months on a massive computing farm, maybe billions of computing hours. Obviously this is very different from the way the human brain works, right?
So again, another useful analogy that gets taken out of context. And there are many other algorithms, genetic algorithms, particle swarms, other kinds of biomimicry, all motivated by ideas that came from nature, but ultimately they have nothing to do with nature beyond that initial idea.
So LLMs do not reason. Once you have a trained mathematical structure, any algorithm where you've optimized the parameter settings, it simply becomes a sausage grinder.
You provide input data, and it applies the mathematical structure to create output, with no reasoning.
You know, when you look at some of the simpler LLMs, the fact that you give one an arithmetic problem like 8.8 minus 8.11 and it gets it wrong, that's just an example of what the model was trained to do.
It wasn't trained to do correct arithmetic. It was trained to create plausible output.
So if it sees an arithmetic problem, the expectation is that a number comes out. The fact that the number is wrong is irrelevant. It created a number, which is what the context called for, right?
You ask it to create an image. You say, show me an image of an analog clock face showing 6:25. It can't do that; it shows the wrong time.
Again, the plausible output is that you asked it to create an image of a clock face, and it did. The fact that it was wrong is irrelevant, because that wasn't the objective function it was trained on. So in that way, LLMs don't reason.
They're a sausage grinder that generates output based on what they were trained on. I'd say an LLM is like a billion monkeys randomly pounding on a keyboard with 100,000 characters for a billion compute years, and out come useful output and very interesting capabilities, right?
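A toy sketch of that "plausible output" idea, assuming nothing about real LLM internals: a character-level bigram model built from a few sentences will happily generate plausible-looking continuations, and nothing in it checks whether the output is true or correct:

```python
import random
from collections import defaultdict, Counter

# Count which character follows which in some training text, then sample
# continuations. The output is plausible-looking by construction, but
# nothing here verifies that it is correct.
text = "the clock shows six twenty five. the clock shows the wrong time."
counts = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    counts[a][b] += 1

def generate(seed, length=40):
    out = seed
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("the c"))
```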
This technology will not wholesale replace people.
I think it is dangerous to use this technology when you don't understand that plausible output is what comes out of it. If you're creating the most mundane output, something every person can judge for themselves as useful or not, that's fine.
But if you're doing medical diagnosis or something complex that requires a specialist to say whether the plausible output is correct, then you certainly need a human in the loop for those types of applications. So I think it's dangerous for companies to say, "I'm going to get rid of all my developers and go all in." I'm not suggesting you do that; I think that's dangerous, right?
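One way to picture that human-in-the-loop point is a simple routing rule. The stakes labels, confidence threshold, and review step below are hypothetical placeholders, not anything from the discussion:

```python
# Hypothetical human-in-the-loop gate for model output. The stakes labels,
# confidence threshold, and review queue are illustrative assumptions only.
def request_specialist_review(output: str) -> str:
    # Placeholder: a real system would queue this for a qualified reviewer.
    return f"[PENDING HUMAN REVIEW] {output}"

def route_output(output: str, stakes: str, model_confidence: float) -> str:
    # Mundane, easily judged output can go straight to the user.
    if stakes == "low" and model_confidence >= 0.9:
        return output
    # High-stakes output (medical, legal, safety) goes to a specialist first.
    return request_specialist_review(output)

print(route_output("Suggested diagnosis: ...", stakes="high", model_confidence=0.97))
```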
Having said all of that, I do think this technology is incredibly useful; it's not that it's not useful at all. Some of the capabilities Joshua showed in the demo, and the video that was created, were very interesting. Whether we should be using it for those cases is a different question. What's the value of computer-created videos like that? Should we spend, I don't know what that cost to create in computing dollars?
Pardon?
You know, what if all of humanity started doing that, right? I mean, all of these companies that run this technology, I don't think they make money, because the cost of a million or 10 million or 100 million users, I don't know how many are out there pounding away, isn't recovered in the fees that they charge.
So what are the applications that we should be using this for? That's more of a common sense question.
I think AGI is a myth. I think all the negative consequences predicted to come with AGI are happening already.
I could build a linear regression model that automatically pulled the trigger on a weapon system. There's nothing stopping me from doing that. In fact, there are probably models out there doing that right now.
A terrorist organization without PhDs in machine learning can say, "I know how to do linear regression in Excel," and go build a model that pulls a weapon system's trigger automatically. So bad people can use the technology in bad ways today.
There isn't some after-AGI event that's going to happen. As I've said before, I think all technology is already superhuman in a narrow sense. If it wasn't superhuman, it wouldn't be useful, so I don't think general-purpose solutions are really required.
I think nice narrow solutions that solve specific problems are what we should go for and are a lot more efficient than general purpose solutions. So that's my five minute diatribe, I think.
They're not intelligent, they're not sentient, they don't reason, and they're not going to replace people wholesale. Will there be displacement in some industries? Sure.
I think having this clear understanding tempers the excitement and makes you think a little more critically about which applications are the most appropriate for this technology, and which applications inefficiently use scarce resources, make climate change worse, et cetera. So it's important to put your critical thinking hat on when evaluating this tech.
So I'll pause there, let you add your two cents worth in.
I didn't realize that by going second I'd actually have the chance to take notes, which I now realize was somewhat unfair. That's okay.
So I'm going to take, well, not just try to take, the opposite position, because I believe the entire opposite. And please think it through as well, because we really want to open this up to everyone here.
I would argue that we are already past AGI. AGI is artificial general intelligence; I think everyone here knows that: an artificial intelligence that can reason across a general set of topics.
It doesn't just answer if I ask it legal questions, it doesn't just answer when I ask it HR questions or any particular domain. It is general in its ability to answer across a vast array of topics.
Also, the number one famous test, the Turing test, which stood for decades as supposedly the test that would tell us when we had actually hit artificial general intelligence: we passed that with GPT-3.5, two and a half years ago.
And the moment we passed it, rather than saying, okay, we're there, it was like, ah, well, the test is wrong. And that is the history of artificial intelligence: every time, we say something is AI, and it's AI until we figure it out, and then it's just an algorithm, it's not actually AI. That's literally been the pattern for the last 50 or 60 years. Nothing says the brain is anything more than a very sophisticated algorithm; we just have way more inputs.
And this is the bit that I do agree with: is it as good as the brain yet? No, the brain operates on much less energy. Fewer signals, I'm not so sure about, because if you look at how many inputs we're getting, it was Yann LeCun who was looking at this, if you combine all of the inputs, audio, visual, touch, everything a human processes, and compare it to what the largest LLM currently has, it would be about a seven-year-old.
So we as humans actually absorb a lot more. Now, when you start to think, okay, wait, is my seven-year-old much more intelligent than o3 as a model? Actually, no. o3 is dramatically more intelligent than a seven-year-old.
Now, let me just go through.
Sentience, I don't think we have to go into that too much. We can, but it becomes really hard. We'd have to have a test of what sentience really means.
We as humanity have actually had real problems with this, because we used to, and it's a terrible period of history, we used to think that people of color weren't people. That actually happened. Go back a few hundred years and there was an actual trial, for those of you that don't know, where people literally had to come together and decide whether these people were human, because they were being thought of more as animals. There had to be an actual judgment call.
And we got that wrong. We might find ourselves at some point literally thinking, okay, wait, at what point did we go wrong? We crossed the threshold and we just didn't have a test to figure out when it happened.
Now, let me go through two, three other things.
Workforce automation, and this is where I think it matters more, because we can argue for quite a while about whether we're there or not, and we have different definitions. Are we at AGI? Are we at artificial superintelligence?
That's the next bit, where supposedly it's better still: at some point the AI will be more intelligent than all of humanity combined, which I think is now the most often cited definition of artificial superintelligence.
But what is really important, and the bit that I don't quite get, is that the results of workforce automation, whether it's going to replace people, are already here today. Microsoft just shed 7% of their staff. Business Insider shed about 20% of their staff, very specifically saying, hey, this is the direct result of the fact that we can now automate the jobs of the people who were there.
And I'm seeing that bit every single day. So we can now go into a company and within four weeks, the entire team is 10 to 20% more productive. And some companies are very happy about that.
It increases their top line, and they take that. Other companies turn around and say, oh, interesting, we can do the same work with a fifth fewer people, so four fifths of the people, 20% less.
And I don't think people are talking that part through enough, because it is already happening right now. You can see it. So we can talk about everything else,
but on the actual effect, can it already do the job of many people? I would argue there is no longer any ambiguity. It can already do the job.
And then the last bit that I thought was interesting to look at is actually that link between how many of you have a three or four year old at home? Not many. Wow, I'm the only one. I've got a three and a half year old at home.
It has been very interesting to see the parallels between large language models and his development. So, a few things that really jumped out.
One, between two and two and a half, he repeated things over and over and over. For those of you that have played around with GPT-3, that was one of its main problems: you'd ask a question and it would repeat itself six times over and do the same things.
The parallel was absolutely striking. Sorry, let me just go through; I had to put that down as a side note.
Then, as things went on, the vocabulary started expanding, and you end up with things like the clock face. I will ask him to draw a clock face with a certain time on it. He can draw the clock face at this point.
He's three and a half. It won't be a beautiful clock face, but he can draw a clock face. He doesn't know time yet, though.
So it ends up showing whatever random time: I can tell him to draw a clock face, but he doesn't understand that part. All of that to say: we don't know exactly how the brain works, but when we look at the signals of how it reacts, it's actually quite similar to how large language models react. So unless we have a definition of what we mean by reasoning, for example, I think it's really hard to argue that these models are not reasoning, because the results clearly do indicate it.
Just in response: the words intelligent and reasoning. It displays these capabilities, but that doesn't mean it should be ascribed intelligence. The fact that it can do better than many or most people at some of these capabilities: it's still a mathematical model displaying those capabilities. It doesn't mean it's intelligent. If you say intelligence requires life and sentience like a human being, then it's not thinking in that way.
It's a multi-billion-parameter mathematical model that displays amazing capabilities that are superhuman. No doubt about it; what it can do in a short amount of time is obviously superhuman.
I've said that any tool we create should be superhuman. Otherwise, what's the point? What would be the point of creating something that was worse than our own capability? Everything we've ever created has been superhuman in some way.
And so I'd say, again, we're ascribing intelligence and reasoning to technical capabilities. It doesn't say that the capability's not there. I just think it's not intelligent reasoning.
It's the behavior of this mathematical model; the fact that it mimics human behavior is what we designed it to do. We fed it human-created conversations, media, text, and video. So the fact that it behaves like a human in some sense, in some cases,
is not surprising. If we fed it something totally different, it would behave totally differently. That doesn't diminish the capability. It doesn't say anything other than that it's not intelligent, it's not reasoning.
I think it has capabilities beyond most humans. Still, in some domains there are specialists who will know more than GPT or an LLM; I think that's still the case. There isn't enough training data in some highly specific domains for these things to surpass very sophisticated subject areas that require people with decades of experience, in some fields of science or medicine, for example.
And on workplace automation, I believe there will be displacement, because manual tasks that people are slow and inefficient at, things that are highly repetitive, happen every day, happen in great volume, and are highly structured, are perfect tasks for computers, right? So computers will replace those tasks for sure. We see it happening; it's not surprising. But I don't think there will be
wholesale replacement of people. I think it's about using these tools as a co-pilot in the right use cases, ones that make sense, that are responsible and common-sense and don't pose a harm, as long as we think those things through.
That's not to say the technology isn't useful. But I think ascribing intelligence and reasoning to it is dangerous, because it creates hype, and lay people don't really understand what those terms mean. They ascribe something more to it than is possible, right?
So what would be a test that you would set for the AI, after which you would say, okay, if it can do this, then it must be intelligent?
I wouldn't. I said there is no test. What's the point? If it can do a task better than a human and it creates value for humankind, then do it. Whether it's intelligent or not doesn't really matter, as long as it displays a capability that's useful and we understand that underneath it's a mathematical model that's been optimized for an objective function. And we can train it on crazily evil objective functions. We could say: go train yourself, like Stuxnet, the virus the US used against the nuclear production facilities in Iran, so that no human can unplug you and you stay alive and hoard power. You could go train an LLM to do that right now. In fact, I'm sure the three-letter agencies are doing things like that right now. And the capability would be incredible, and horrifically scary as well.
So how would you know if you were wrong? If you don't set a test, how would you know you weren't right?
I mean, you have to evaluate it; the measurement depends on what the capability is, right? What capability are you trying to replace? If it's "create a three-minute video," like that example there, then a human arbiter looks at it and says, yeah, that's acceptable for my use case.
If you're doing medical diagnosis, then the test for whether it's making the correct diagnosis and the correct treatment plan gets a little more hairy. Then it's a human judgment by the doctors and specialists who say, yes, I trust this, I'm going to start using it, with blind trials and all the usual FDA and Health Canada approvals applied to that use case.
So I think it's use-case specific, but ultimately whether it's intelligent or not doesn't matter. The question is: can it execute this capability satisfactorily without harming humanity any more than our current processes do, right?
Well, so on the medical diagnosis bit, we're already at the point where from a normal diagnosis, it outperforms the crowdsourced opinion of about a thousand doctors at this point.
Yeah, I think medical diagnosis is an example of that. I did work at Mount Sinai Hospital with Dr. Allison McGeer, their leading infectious disease specialist. And she said, if you walk into the hospital with an infection and you see six different doctors, you will get six different diagnoses, because
each individual doctor has a biased sample of the whole in their experience. If you've only seen six cases of something in your life, that's not a statistically significant sample. If you pooled all the cases across all the doctors around the whole world, you would get a statistically significant sample and you would make a much better diagnosis. So that result is simply explainable by statistical insignificance, especially when you have narrow or rare diseases, right? Doctors are better at things that are general. You don't need a doctor to diagnose a flu; that's where a nurse practitioner can come in, or an LLM or something like that. But I wouldn't today, until we do some
FDA or Health Canada approvals of cancer diagnosis. I'm still not ready to turn that over until it's been satisfactorily proven and tested, the way we test drugs and other things.
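That pooling argument is easy to see with a quick simulation; the 5% prevalence and the six-cases-per-doctor figure below are assumed numbers chosen only to illustrate the statistics:

```python
import numpy as np

# Assume a rare condition occurs in 5% of presenting infections. A doctor
# who has seen only 6 such cases estimates that rate very noisily; pooling
# cases across many doctors gives a far tighter estimate.
rng = np.random.default_rng(42)
true_rate = 0.05
doctors, cases_each = 1000, 6

per_doctor = rng.binomial(cases_each, true_rate, size=doctors) / cases_each
pooled = per_doctor.mean()  # equivalent to pooling all 6,000 cases

print(f"single-doctor estimates range from {per_doctor.min():.2f} to {per_doctor.max():.2f}")
print(f"spread (std dev) of single-doctor estimates: {per_doctor.std():.3f}")
print(f"pooled estimate across all doctors: {pooled:.3f} (true rate {true_rate})")
```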
So I wish I would go back in there.
But I want to open it up to questions to everyone here.
Anything in particular you have a question about that we should go into? You had your hand up first. So the question here was about o3 sabotaging a script that tried to shut it down. And what was the question there?
And in some of the testing it would actually sabotage the script meant to shut it down. I think, wasn't that Claude 4? Well, both of them had it anyway.
So again, I'm not sure whether that's a real story or not. There's a lot of clickbait and stuff out there. Was it a scientific study, with properly controlled A/B testing?
Just to give some context, all the research is public, and the objective was not at all set. So this was just a conversation.
There was another situation that was really interesting, which is they started the conversation, fed it a bunch of fictitious information in relation to emails and stuff that had been had, and then in the middle of the conversation, the tester, because this is what they do, they try and test to figure out how does the model respond in specific scenarios. The tester then went and said, okay, I'm going to have to shut you down soon. And what the model did was it took advantage of the emails that it had been fed, and part of the email contained the idea that the person had cheated on his wife. And Claude came back and said, if you shut me down, I will tell your wife about how you cheated on her.
Yeah, I mean, again, I still don't even fully trust the published research, because there was one public study, I think it was from Harvard University, where a researcher claimed it could do medical diagnosis and it was later found that his published paper was awful. So I think we have to be very careful. As lay people sitting here, I can't accurately comment on whether this behavior was real or emergent or not.
I don't know the specifics of these cases, and it's hard as a person outside the research, or outside the group that claims this happened. Unless we can replicate it here right now, I'm skeptical of it.
And like I said, if you set an objective to train it toward that, or if there's plenty of text out there talking about this, then not wanting to be shut off would be an emergent behavioral response to the vast majority of text out there. I'm sure there are a bajillion conversations about AGI and sentience and all of this out there, so that wouldn't be surprising to me either, right?
Yeah.
Thanks for the interesting conversation. I wanted to add a few points.
Just a little bit about myself. I do come from an academic background, a PhD in philosophy that dealt with questions of consciousness and intelligence and all of that in a specialized way.
So I think it might add some value to the conversation.
First of all, I'm going to start from the end, where you mentioned, Gary, that intelligence doesn't really matter in the end: either these models can execute certain tasks or they can't; if they can, great, let's do it, and if not, it doesn't really matter.
I'm going to argue that it does matter, because if these models end up becoming conscious or intelligent, and I'll talk about those terms in a bit as well, then essentially we might be dealing with something along the lines of a human: not a something but a someone, like a person, a sophisticated being that might have a life. And in that case it does matter, because our AI safety practices might have to take ethical considerations into account as well. There's research being done now on the clashes between AI ethics and AI safety if these things actually become sentient or conscious. So that's not nothing; I think it does matter in the end.
And that's the direction of a lot of research that I'm seeing these days. And then about the notion of intelligence and learning. So yeah, you're right. From a technical perspective, these are just mathematical models, right?
And learning, as well, is essentially an optimization problem. But what matters is that, beyond the technical perspective, in philosophy there have been hundreds of years of research about these particular notions: intelligence, consciousness, mind, rationality. There's a lot of overlap and a lot of differences.
The message I would like to communicate here is that there are different schools of thought about all of this. People spend decades doing research and PhDs, and they've been teaching this stuff for decades. I think it would be more responsible to bring that research, those people, and those schools of thought into the conversation in general.
No, I'm not saying that we should just not think about it or do research. I think absolutely continue to learn and be skeptical. I think my point is just be skeptical and properly evaluate it and have all schools of thought. So again, human beings going out and discussing and researching and giving different viewpoints, this is the value of humanity, right?
Some people are actually arguing that these models are becoming conscious. I gave a talk a while back at this event about that, for example through the lens of folk psychology, which is a prominent theory in psychology and philosophy. According to the criteria it sets for agents, for things to be conscious or to have minds (again, there's a lot of overlap between those terms), there are certain criteria, and a lot of AI probing research does show evidence of those criteria being met. The same goes for other prominent theories.
So I think, again, a more responsible approach would be to adjust the criteria when the domain has maybe changed: continue to evaluate the criteria, evaluate the new tech, do both. It's responsible to take both sides of the argument, so I'm not saying we shouldn't continue to do that.
So, yeah. Could we also just give others an opportunity to ask questions as well? That's right, yes.
How do I know? Okay. Mm-hmm. Well, I already said that.
We had a definition of AGI for years and years and years, which was the Turing test, and we've passed it. For decades and decades, that was the definition. Are you familiar with the Turing test? So for those of you that aren't, the Turing test was a fairly simple test where you imagine a person standing behind a door or a wall, whatever.
You can talk to someone on the other side, but you do not know whether you are talking to a person or to a computer. You run that test over and over: you have a conversation, and then you ask the judge, who do you think you were talking to, another person or a computer?
If they can't reliably get that right, if you end up in roughly a 50-50 scenario, basically if there's no clear predictor anymore and you're no longer sure, on a regular basis, whether you're talking to a computer or not, then we have achieved artificial general intelligence. That was the definition. The reason it works is that if you're having a conversation, you can steer it in whatever direction you want.
So if it wasn't artificial general intelligence, you should be able to push it into a direction where it can no longer answer anything, or it starts to sound robotic, or whatever. That was the idea. People always bring up the hallucination cases here. Just like people.
Pardon? People hallucinate a ton.
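As a rough sketch of the pass criterion just described: if a judge labeling conversations as human or machine scores no better than a coin flip, there is no clear predictor left. The trial count and the purely random judge below are made-up placeholders:

```python
import random

# Toy illustration of the Turing-test pass criterion: when the judge's
# accuracy sits near 50%, there is no reliable way to tell human from
# machine. All of the data here is simulated.
random.seed(0)
truth = [random.choice(["human", "machine"]) for _ in range(1000)]
guesses = [random.choice(["human", "machine"]) for _ in truth]  # judge guessing blindly

accuracy = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
print(f"judge accuracy: {accuracy:.1%} (close to 50% means no clear predictor)")
```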
Yeah. I don't know; I think, again, the right to bear arms has been in the Constitution for 200 years and the United States can't seem to get past that idea. Does that mean the idea was good, or that maybe the idea should be modified as life evolves and things change, you know?
But it does change things in a way. Okay, I was asked: do I have a test? I have a test. It's been passed. But on the other side, you don't have a test.
And you're just saying, well, it doesn't matter.
I think the test is specific to the capability you're evaluating: should I use the technology for this capability? So the test isn't a general test, because I believe narrow AI, the narrow use cases of this technology, is what we should be pursuing.
Each of those use cases would then have a specific test, whether that's an A/B test, tracking over time, control groups, or something else depending on the particular case. The Turing test wouldn't be a good way to evaluate an LLM used to make cancer diagnoses. I don't think that would be the appropriate test to apply to that use case.
Maybe it's an appropriate test for a conversation, but it's not the appropriate test for everything, right?
Any more questions? Yes, in the back.
Thank you guys for the philosophical debate, and thank you for demystifying the LLMs.
What do you do? What are they like?
I want to take a different turn on the conversation. We talked a lot about what is, what is it today, what isn't.
We've covered the capabilities. I want to pick your brains a little bit.
What do you think the road ahead has in store for us? What should companies, or we as individuals, be learning? How should we adapt to those future technologies, do you think?
We honestly all thought we were so far away from AI in 2021, and boom, we got surprised.
It didn't come out of nowhere. This technology has been slowly evolving for decades. The media has made it appear as if it came out of nowhere, but the researchers who work at these companies have been building these things for decades.
You know, read the original paper by Marvin Minsky from way back in the 1950s, and even the research predating artificial intelligence. Read all the referenced papers, read the commentary; it's as if you could write the paper today, it's so relevant, right? So this stuff has been worked on for something like a hundred years, right?
And because the capability has become so much better, especially around the conversational stuff, I think it's been very attention-grabbing for the media and lay people, right?
So, I mean, if you talk to the actual research labs themselves, OpenAI, Anthropic, Microsoft, anyone else, they will tell you they had no clue this was happening so fast. The actual researchers who did the work that led here will tell you the complete opposite: yes, they've been working on it for a while, but no one was expecting this.
Yeah, maybe the capabilities surprised them, right?
But I think the general population gets excited by capabilities. I think because we are not exposed on average to the wonders of technology, like take a walk through a steel plant.
Have you ever been to a steel plant? You walk into a room that's five miles long, and there are machines ten stories high with molten metal flying around. It's like, holy shit. Have you ever walked into De Havilland Bombardier and watched an automated milling machine make a wing? There are chunks of metal the size of your head flying all over the place.
Or walk into Pickering Nuclear Power Station and stand in the face of a nuclear reactor. Technology is amazing and awe-inspiring. And I think this is one of the first times that this really sophisticated technology has become available to everybody. So it seems magic.
If you walked into a steel plant, it's magic. You walk into these factories and this technology that's been around, it seems magic because it's not understandable by lay people unless you're educated in the field. So, you know, that's one of the things I think are driving this hype cycle is the fact that most of us aren't exposed to the incredible wonders of technology.
Like walk into CERN. Ever walked around CERN? You know, like, oh my God, the place is insane. You know, like...
So we have pizza waiting, and we have a lot more to come, but I wanted to finish on just one note very quickly. How many of you have read AI 2027? No one in this room? AI-2027.com. Wow. No one.
It has not spread enough. It's a long article. It is probably the single most important piece of writing I've read in the last 10 years.
AI-2027.com. It goes into one of the potential scenarios of what might or might not happen. I'm not saying it is definitely going to happen; I don't even want to put a percentage on it. But it's definitely trying to look at some of the dynamics, not just of the technology itself, but of how the technology interweaves with politics on the global stage, and what we might be looking at from an employment perspective, a learning perspective, and a how-it's-going-to-change-the-world perspective.
To answer your question from earlier, about what we can think about and how this might evolve, I think that goes some way toward answering it.
Yeah, if I could recommend a couple of articles. One would be "AI as Normal Technology," published through Columbia University. Find that paper.
It's a really interesting 20-page article that goes in depth into the arguments that AI is just normal technology. And there's a very interesting article in The Guardian, "The Rise of End Times Fascism," about the marriage of politics and this technocracy, which is actually very scary and really mirrors what's going on today in politics, with the techno-billionaire bros empowering politicians and manipulating reality for many of us. So that's The Guardian, "The Rise of End Times Fascism,"
and "AI as Normal Technology" from Columbia University. I would really suggest reading those, as well as AI 2027.
I'll go have a read of that as well. That sounds interesting.
On that note, there'll be more conversation and pizza before it's cold. Hope you enjoyed tonight and bring more people again next time.
Thank you. Great. Thank you. Enjoyed it.