So, I'm Manuel Gustavo Isaac. I'm basically a philosopher, but I work as a Senior Program Manager in Science Anticipation at the Geneva Science and Diplomacy Anticipator, and I'm a lecturer in Emerging Tech Ethics at the Swiss Federal Institute of Technology in Lausanne, the EPFL. And I'm delighted to have the opportunity to share with you my thoughts about AGI, trying to move a bit beyond the buzzwords.
So if you've followed the AI hype over the last few weeks, this might be a landscape you're familiar with. The idea is that from tech-guru manifestos — see "The Intelligence Age" by Sam Altman — to Nobel Prizes, via vocal critics, basically not a single week goes by without AGI making the headlines. And it's not easy, depending on your familiarity with the topic, to navigate the risks and opportunities,
ranging from long-term human extinction to very short-term concerns — we'll get to that; it's a core aspect of the talk as well.
Just a bit of background. AGI — we'll come back to it in more detail in a minute — is typically treated as a very far-future topic in some sort of sci-fi scenario. What I would like to do in this short presentation is bring it back to present-day considerations. I'll do that in a "what, so what, now what" approach.
First, what is AGI? A rough coverage of the topic. Then, so what? What is the situation, and what can we learn from it? And eventually, now what? How can we act on what we've covered?
What is AGI? Trying to move a bit beyond the buzzwords.
A good way to dive into that critical and basic notion is to start by wondering what might be missing with regard to reaching AGI — and here you might wish to ask the experts.
I've picked three leading AI figures in the field whom I would consider pretty reasonable in their expectations as to AGI — Yann LeCun, the Meta guy, among them — three people who are very specifically not into AGI, right?
Well, they are all not into the hype — especially Gary Marcus, obviously. But still, I would deem them reasonably reasonable. They are the most vocal people very specifically on the side of saying that we are nowhere near.
Yeah, well, it depends. Maybe LeCun. Bengio is a bit more, yeah, I don't know.
This year's Nobel Prize winner is not on this list. Yeah, I know, and that's very much on purpose, because he's really into the hype, right? I would not consider him reasonable on this.
Okay, I'm going to insist: can you just tell us what AGI is?
Well, it stands for Artificial General Intelligence, and we'll go to it just in the next slide, I think.
So basically, while they each have their own view as to what is missing, they don't really agree on the nitty-gritty details. But all three of them say that these are not, in principle, unsolvable problems. And the reason why they differ in their expectations is maybe that there is no widely agreed-upon definition or characterization of AGI.
To do that, I'll go with a very naive taxonomy of three different types of AI: ANI, AGI, and ASI. ANI is artificial narrow intelligence, characterized as those systems and models that cover a narrow set of related cognitive tasks at a high level of proficiency — think of automated driving systems or medical assistants, and so on. AGI is in turn characterized as those systems that cover a wide range of cognitive tasks at a high level of proficiency. And ASI, artificial superintelligence, would operate at a higher level of proficiency than all humans' combined abilities, at any task.
Within AGI, artificial general intelligence, you can further distinguish between two types, namely human-level AGI and so-called superhuman AGI. A human-level AGI would be one that is as proficient as an average individual human at a wide range of cognitive tasks, and a superhuman AGI would be one that is more proficient than any individual human at a wide range of cognitive tasks.
One critical note to keep in mind as background is that, so far, AGI remains a pre-political notion that has not been operationalized. To get some grasp on how it could be fleshed out further, you might compare it to the definitions provided by the OECD for an AI system and an AI model, which are much more operationalized and detailed notions. This characterization of AGI remains at a very abstract, theoretical and, yeah, liminal level.
So going back to the three-step taxonomy, I would put ChatGPT in what the EU AI Act calls general-purpose AI, which is just a step before human-level AGI. With ChatGPT, or with one of these chatbots? ChatGPT arguably already is as proficient as an average individual at a wide range of cognitive tasks.
And so today... Yeah, and along those lines, one of the articles I put on the first slide about the hype, published last year, argued exactly that: that ChatGPT and this kind of model, 3.5 and above, have already reached that characterization of AGI. But what is critical in that regard is just to be clear about the different types.
If we are speaking about the human level, it might be — but it depends, and that will come in the next phase of the talk. How many tasks do you need to cover? What does "general" cover in that regard?
Is the absence of the word "generative" because we go on the principle that they are all generative? No, they need not all be generative. They can be different types of systems.
So these need not be restricted to generative systems. And especially, given the way in which ASI was characterized — the last one, about being above all humans' capabilities gathered together — it might be that things will have to evolve quite a lot before any system reaches that kind of capability, right? And as was just mentioned, a chatbot such as ChatGPT would maybe be included in the lower level, the human level of AGI, as characterized by this definition.
So now, turning to the second step of the talk: so what? And what is wrong with AGI, if anything?
That's where we'll bring those very broad, vague, maybe theoretical considerations back to us as humans, right?
In order to do so, the question becomes: what does AGI mean, not in terms of future technological capabilities, but what does the notion — the very idea, the concept of AGI — mean for who we are as humans? The baseline I'll take to tackle that question is to reflect on how the very notion of AGI might relate to, impact, infringe upon, threaten, or enhance our understanding of human dignity — in the HR sense, the human-rights understanding. And the reason why I would like to do that, or propose that we do that, is because human dignity is typically construed, in the human-rights context, as the basis for all other fundamental rights, including freedom, social peace, social justice, and so on.
So it's a real cornerstone notion in what we take to be a fundamental aspect of us as humans. The problem is that human dignity is as messy a notion as AGI, if not more so, even in the context of human-rights scholarship.
So I came up with this very sketchy characterization of human dignity, based on the bit of research I've done for that book and on other resources.
Human dignity is the inherent moral worth and status possessed by all human beings, based on certain specific capacities — consciousness, intelligence, moral agency, and so on — which entitles them to fundamental rights and serves as the foundation for social justice.
So keep in mind that it plays a pretty critical role there.
The question then becomes: does AGI — that is, the theoretical notion, the idea of AGI, not the sci-fi scenario — threaten our human dignity? To address this, I'll take up the very aspects of AGI that we touched upon a minute ago. First, the "general" in AGI.
The issue is that "general" is a gradable adjective, and the challenge is to set the threshold of generality for AGI: how many tasks need to be covered to count as a "wide range of tasks", as previously defined? And who is the "average human" in the characterization of human-level AGI that I gave you?
The more problematic notion, especially when it comes to human dignity, is the very notion of intelligence. It's a very high-stakes notion that's deeply entrenched, especially in Western cultures: since Aristotle, it has been taken to be distinctive of who we are as humans — especially the rational aspect of intelligence.
Another problematic aspect is that intelligence is a strongly value-laden notion that has always served, throughout the history of Western civilization at least, either to discriminate against those deemed not to possess intelligence in the sufficient amount we consider necessary to reach humanity, or to be fetishized — and the latter is critically at stake in the context of ASI, or even AGI in its superhuman version.
And if you are interested in that take on intelligence, I would highly recommend the paper by the director of the Leverhulme Centre for the Future of Intelligence at Cambridge, "The Problem with Intelligence", which tackles exactly these two aspects of it in the context of AI.
And the last worry I would like to single out is that intelligence itself is a contested and fragmented concept. By fragmented, I mean the following: the belief in the independent existence of multiple types of intelligence — social intelligence, emotional intelligence, rational intelligence, and so on — is considered in cognitive science a so-called neuromyth (you have the reference here), for empirical reasons: those types have never been operationalized in that context.
Still, AGI — as characterized in AI more generally, if you wish — singles out a very specific and very limited aspect of our human intelligence: what we can consider the rational, problem-solving, task-completion-focused, output-focused aspect of intelligence. And that's not all of what we are; it's a very limited portion of what human intelligence is made up of.
The reason why you have this very narrow approach to intelligence in the context of artificial intelligence is typically explained by Luciano Floridi, the leading digital and AI ethicist at Yale, and his wife, Kia Nobre, a leading cognitive scientist at Yale as well, in terms of the twin problems of anthropomorphizing machines and computerizing minds. It's a very symmetric process that leads to this misconstrual of intelligence in the context of artificial intelligence.
So, last bit of the talk: now what? What can we do, and what should we do, on the basis of the way we've fleshed out AGI and the problems we've been able to highlight in its tension with human dignity?
So it seems we are now facing a critical dilemma here. And that is critical in the sense that it leads to a no-win scenario.
So, on the one hand, if AGI at some point encompasses, or becomes able to encompass, all types of intelligence, then it means that humans are just machines — they can be computationally reduced to mechanical processes.
On the other hand, if AGI focuses only on the rational, output-focused intelligence, the way I characterized it, then human intelligence itself is dehumanized. And that kind of dehumanization of intelligence is enshrined in OpenAI's charter — the very latest version of it — where they characterize AGI as highly autonomous systems that outperform humans at most economically valuable work.
So human intelligence becomes just what is economically valuable in terms of work — all "for the benefit of all humanity", sure, but it remains a problematic reduction. The first horn of the dilemma would go against the perceived uniqueness built into our characterization of human dignity, and the second would actually diminish our inherent moral worth and status.
To conclude — and this is the very last slide — I'll give you an ethical rule of thumb for how we can act, or react, amid all this big AGI talk.
The first piece of advice, I believe, is to resist the computational reduction, the dehumanization, of human intelligence. And you can do that by being critically aware of any anthropomorphic framing — trying not to relate to these systems as if they were fellow humans.
You can also venture to propose alternative branding for the very notion of AGI, such as what Dario Amodei did with his "advanced science and engineering capabilities" — or I do not remember the exact phrase for it. And you might also wish to consider new narratives and imaginaries that would help us reinvigorate our humanity in this context of human-machine interaction.
Second, we should, or could, reclaim what is still distinctively human in human intelligence — with all its limitations and failures — by responsibly sharing the cognitive workload with machines. That's especially the case in the context of creative processes.
I'm not at all a tech-pessimist; I'm really excited about all the new opportunities. But I think it's critical to negotiate the relationship we'll have with these systems and to appropriately share the cognitive burden between the different stakeholders, if you want to adopt that parlance, involved in these processes.
And lastly — and this goes slightly against the first recommendation — I think it will also be critical to respect the non-human artificial agents that we might wish to engage and interact with, as some form of extended kinship. The reason why is that if we start treating these artificial agents as slaves — think of Tesla's Optimus, the new version of which very much looks like a perfect slave — I fear it will eventually backfire on our inter-human relationships. Not because they should be granted some moral status or moral rights or what have you, but because if we start treating them as some sort of subcategory that we may simply enslave, it might end up with us treating some people among our own population as promptable with queries. So I think it's critical to preserve that kind of respect, without anthropomorphizing them: I think it's important to say "please" and "thank you" when you address a chatbot.
And on that note, thank you for your attention.