Democratized Autonomous Systems and AI

Yeah, thanks everyone for coming out. Another great MindStone event.

Natan, thanks for kicking it off. Shafiq, excellent talk.

Introduction

Yeah, and today I'm going to be diving into democratized autonomous systems and artificial intelligence. And really quickly, what I mean by autonomous systems and AI: an autonomous system is anything that has the agency to act on its own.

So we can think about things like Siri, where you can interact with it: it takes audio input and gives you audio output.

Things like Tesla's self-driving cars, which you can imagine using computer vision to detect and classify the objects around them.

And then, of course, the very kind of hot topic, or the main topic in autonomous systems right now that's driven by AI is robots. So really excited to be diving into this today.

Keynote Agenda and Personal Introduction

A quick agenda from my end here. So I'm going to give a quick introduction to myself. Then we're going to talk about the current state of autonomous systems, where we are right now, and what consumer-facing products we have. And then I want to get into where the ascent of AI plays into this.

And then the final point, which I'll treat as more of an opinion piece:

That's really cool. When am I going to get my own?

I think the golden question is, as we trend towards AGI and build out more autonomous systems, how do we make it useful for ourselves? How do we get our own personal autonomous system? And yeah, I'll give my take and my prediction on point four there.

So quick intro from my end. I'm Rehan. I'm a lead at Accenture, where I lead AI work mainly around the consumer goods and retail verticals. I'm also the co-founder of a newsletter called Digestible AI, where I seek to break down AI news for everyone who wants to understand AI. And I'm also an instructor with a boot camp called TripleTen, which specializes in business intelligence. And more personally, I'm a builder, I'm an educator, and of course, I'm a lifelong learner.

History and Evolution of Autonomous Systems

So far, where are we when it comes to autonomous systems and robotics?

So in the 1990s, I don't know if anyone knows what Kismet is. Kismet is one of the first autonomous systems you could interact with using natural language, and it was developed in MIT's AI lab in the 1990s as part of a research project.

And it was very, very clunky. It looked like that. It was kind of scary, I'm not going to lie. But even though it was clunky, we were able to interact with it using natural language, which was pretty groundbreaking at the time.

Then fast forward a little bit to 2011, where we have IBM Watson winning Jeopardy, right? Watson took the $1 million first prize, beating champions Ken Jennings and Brad Rutter. How crazy is that? IBM Watson won Jeopardy, and that was a pretty groundbreaking point for artificial intelligence when it came to democratizing it.

And now we're in 2024, where we have Figure 01 humanoid robots that can take on a remarkable range of tasks. There's more to come on this; I have some more on Figure. But really quickly, Figure is one of the most prominent startups in autonomous systems and robotics right now. They're partnering with BMW, they're partnering with OpenAI, and they have funding from Microsoft and Amazon. So they're a pretty big player now when it comes to robotics.

Consumer Adoption and Personal Use

All right, and I don't have much to say about this tweet, but Joanna here says: you know what the problem with AI is? It's going in the wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not AI that does my art and writing so that I'm stuck doing my laundry and dishes.

And this is what I'm going to get at later: how do we adopt this, and when is it going to happen for us, adopting these kinds of robotics and autonomous systems for our personal benefit? Yeah, and currently it's kind of happening, maybe not at the scale that we want yet, but it's happening.

So a classic example, Roomba is a household item. How many have heard of Roomba? I'm assuming. Oh, a bunch of people. OK.

So Roomba is a pretty prominent household robotics tool that vacuums for you. Then we have Sony AIBO for companionship. Sony AIBO is basically a robotic dog; there are some pretty crazy videos of it out there.

Samsung Ballie was reintroduced at CES 2024. It's a kind of security device and also just something that you can play with. So that's Samsung Ballie.

And then I saw this this week, which I thought was very relevant to include: Starbucks and Naver Labs. There's a Starbucks outlet in South Korea that has tried using Naver Labs' robots to deliver orders and take them to customers. There was essentially one barista working, and the rest of the employees were robots. And I was like, wow, this is actually pretty cool.

So I mean, we've seen a couple of these, right? I think even CES had a point where there was a barista robot you could talk with and it would take your order. But now it's coming at scale, it's going to be bigger, and I'm really looking forward to something like this and seeing how it works out.

Industry Innovators and Partnerships

Yeah, and as I mentioned before, I wanted to highlight these key players. So Figure, right? Figure now has huge partnerships with OpenAI and BMW, and you can imagine why. Figure is leveraging a lot of OpenAI's multimodal models to create humanoid robots. So what does that mean? Things that you can interact with in natural language, that are able to use computer vision to identify what kinds of objects are around them, what's a dangerous object and what's not.

Also, they have a partnership with BMW, and this is a particularly interesting case for me because they've deployed these robots in the BMW factory in South Carolina. What year is it? It's 2024, right? And this was earlier in the year, which is pretty awesome. And what they're doing at BMW is taking on a lot of the brute-force and more dangerous tasks, so that BMW employees don't need to deal with those.

Sanctuary AI is another one. I think their main robot is called Phoenix, and they have a pretty big partnership, again, with Microsoft. Microsoft is essentially providing the infrastructure, the AI models, and the compute behind it to power something like a Sanctuary AI robot. Cool.

How Autonomous Systems Work

Very quickly, I just want to touch on how these autonomous systems work, very, very high level. So three points here, machine learning, sensors, and edge computing. I want to dive in first to machine learning.

Shafiq gave a good overview of computer vision today, which is really good. And computer vision is a huge component of what these autonomous systems use. I mentioned Tesla's self-driving cars before; those rely on things like object detection and image segmentation.
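
Just to make the object detection piece concrete, here is a minimal sketch using a pretrained detection model from torchvision. The input image name and the confidence threshold are illustrative assumptions on my part, not anything a real self-driving stack uses.

```python
# Minimal object detection sketch with a pretrained torchvision model.
# The image file and the 0.8 confidence cutoff are illustrative assumptions.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn_v2,
    FasterRCNN_ResNet50_FPN_V2_Weights,
)

weights = FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn_v2(weights=weights).eval()

img = read_image("street_scene.jpg")        # hypothetical camera frame
batch = [weights.transforms()(img)]         # preprocess to the model's expected format

with torch.no_grad():
    detections = model(batch)[0]            # boxes, labels, scores for one image

labels = weights.meta["categories"]
for label_id, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:                         # keep only confident detections
        print(labels[label_id], f"{score:.2f}")   # e.g. "car 0.97", "person 0.91"
```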

And then we specifically also have NLP cases. So NLP, natural language processing: how do we take text as input and give text as output? How do we deal with audio input and audio output? That's what's enabling these autonomous systems to work very well right now, especially with the rise of these newer multimodal models that can take in images, audio, and text, and give you a good output back.
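
To make the multimodal point concrete, here is a minimal sketch of sending an image plus a text question to a multimodal model through the OpenAI Python SDK. The model name and image URL are placeholder assumptions; this is just one way such a call can look, not how Figure or Tesla actually wire things up.

```python
# Minimal multimodal request sketch: image + text in, text out.
# Model name and image URL are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What objects do you see, and which are safe to hand to a person?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/table_scene.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)
```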

The next two points, I want to quickly touch on sensors and edge computing. Sensors are essentially how the robot or autonomous system understands and interacts with the environment around it. And edge computing is actually fairly important now. Edge computing, right? How do we deploy machine learning models at the edge, in something like a smartphone or a robot, so we don't take on the latency, the amount of time, of making a remote call to run inference on the model? If you imagine a machine learning model, like a computer vision model, deployed on a cloud service, it might take a little bit longer to take in an input and give you back an output. What edge computing does for us is, if we can deploy that model right then and there, locally on the machine, we get much faster inference times, better latency, and just better overall interactions with our autonomous systems.
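
As a rough sketch of what "deploying at the edge" can look like in practice, here is local inference with ONNX Runtime. The model file and the input shape are assumptions for illustration; the point is simply that the forward pass runs on the device, with no network round trip on the critical path.

```python
# Local (edge) inference sketch with ONNX Runtime: the model runs on the
# device itself rather than behind a cloud API.
# "vision_model.onnx" and the 224x224 input shape are illustrative assumptions.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("vision_model.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a camera frame

start = time.perf_counter()
outputs = session.run(None, {input_name: frame})           # on-device forward pass
latency_ms = (time.perf_counter() - start) * 1000

print(f"on-device inference took {latency_ms:.1f} ms")      # no network latency included
```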

A really key driver of this now is actually open source models. Among the open source models there are big ones and there are small ones, but if there's a catalyst to deploy them at the edge, which I think a lot of companies are now experimenting with, that's going to be a huge driver in using those types of models for edge computing and then also autonomous systems. All right.
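
And just to show how low the barrier has become, here is a sketch of running a small open-source chat model locally with the Hugging Face transformers pipeline. The model ID is only an example of a small model; swap in whatever fits on the target device.

```python
# Running a small open-source LLM locally with Hugging Face transformers.
# The model ID is an illustrative choice; any small instruction-tuned model works.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small model, ~1.1B parameters
)

prompt = "In one sentence, what should a home robot do if it sees a spill on the floor?"
result = generator(prompt, max_new_tokens=60, do_sample=False)

print(result[0]["generated_text"])
```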

Industry Perspectives on AI and Robotics

I just want to glance at some industry perspectives here.

So the first one is from Jensen Huang, CEO of NVIDIA. And the second one is from Brett Adcock, who is the CEO of Figure.

So Jensen Huang has actually stated, I think this week or last week, whenever his last keynote was, that multimodal LLMs are a breakthrough, and that what they will enable us to achieve is, essentially, that everything in this world will be an autonomous system.

And when you think of that, you're like, huh, there's a lot of different things in this world, just unique things. And if you think that everything will eventually have its own kind of brain, I think that's kind of scary. But I don't know. I think it's also pretty cool at the same time.

He also describes this concept called generative physical AI, where he talks about how we help robots and autonomous systems understand what's going on in an environment. And he uses NVIDIA's Omniverse, which is a platform for digital twins, if you know what digital twins are. They're essentially simulations of the real world that also abide by the laws of physics. So you can think of cars being tested in digital twins, robots being tested in digital twins. And he's basically saying that now that we have multimodal LLMs, and they're getting better and better, we have a lot more catalyst to train within these digital twin environments. So autonomous systems and robotics should take off in the same vein.

Flipping over to Brett Adcock. Brett Adcock says every human will own a robot in the future, similar to owning a car or a phone today. I mean, we all have iPhones, right? We all have phones. So he's saying it's not going to be out of the blue to own a robot. And could you imagine that? Imagine it being just as accessible as buying a computer or buying a smartphone. I think this is going to be pretty cool. So just a perspective from Brett Adcock.

Demonstration of Autonomous Capabilities

Really quick, I just want to show off this video of the art of the possible when it comes to Figure and OpenAI. As I mentioned before, OpenAI and Figure have a pretty big partnership. And I'm going to show a quick demo here of Figure 01, the humanoid robot, and OpenAI's GPT-4, I think, at this point, doing speech-to-speech reasoning.

Let's see. Hey, Figure 01, what do you see right now? I see a red apple on a plate in the center of the table, a drying rack with cups and a plate, and you standing nearby with your hand on the table. Great.

Can I have something to eat? Sure thing. Great.

Can you explain why you did what you just did while you pick up this trash? On it. So I gave you the apple because it's the only edible item I could provide you with from the table.

Great. So based on the scene right now, where do you think the dishes in front of you go next? The dishes on the table, like that plate and cup, are likely to go into the drying rack next. Great.

Can you put them there? All right, so how do you think you did? I think I did pretty well. The apple found its new owner, the trash is gone, and the tableware is right where it belongs. I agree.

Thanks, Figure 01. You're welcome. If there's anything else you need help with, just let me know.

Yeah, so this is kind of where we're heading in 2024. And some quick points on this: it's the combination of using these multimodal models. Figure 01 here was able to identify what was edible, so it had to identify, OK, that's an apple, that's a plate, that's a glass.

So it had to go through that reasoning to understand what to hand the gentleman. Also, it's not perfect, as we can see. There are some pauses in the conversation; it's not a natural conversation. But we're making very significant strides in getting there. And I think, again, these multimodal models, and more LLMs coming out, more open source LLMs, will drive all this growth in robotics and autonomous systems.
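
To summarize that pipeline in code terms, here is a very simplified perceive-reason-act loop. The camera, speech, and arm objects are hypothetical stand-ins for real robot interfaces, and ask_multimodal_model stands in for a call like the one sketched earlier; this is my sketch of the general pattern, not Figure's actual implementation.

```python
# Simplified perceive -> reason -> act step, roughly the pattern behind the demo.
# camera, speech, arm, and ask_multimodal_model are hypothetical stand-ins.
def run_step(camera, speech, arm, ask_multimodal_model):
    frame = camera.capture()               # perceive: grab the current scene
    request = speech.listen()              # perceive: hear the spoken request

    plan = ask_multimodal_model(
        image=frame,
        prompt=(f"The user said: '{request}'. "
                "List the objects you see and pick one safe action to take."),
    )                                      # reason: vision and language in one call

    if "apple" in plan.lower():            # act: naive mapping from the plan to motion
        arm.pick_and_hand_over("apple")
    speech.say(plan)                       # and explain, out loud, what it did
```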

The Future of Autonomous Systems and AI

All right, and these are kind of my final points here. All this is great, but what's next?

And a quick quote I have here: technology advances to ease human pain and work. For example, we have the phone so I don't have to go knock on my neighbor's door to talk to him. We have email so I don't have to go to the post office and send a bunch of mail to different people.

And in the same vein, I think autonomous systems are next for this. Just like we can instruct LLMs to act how we want, is there a future state where we can have an all-purpose autonomous system or robot that would just be for ourselves?

We're already trending towards more personal AI voice assistants. So naturally, their physical manifestation is next. And I'm really excited to see in the next decade or two how that comes about.

And I've been harping on this too: as the quality of models rapidly improves, even if it just keeps improving at the rate it already has been, so will the quality of these autonomous systems. And as I mentioned before, open source LLMs can be a big catalyst for this too.

And my personal prediction: we are two decades away from having our own personal autonomous assistant. That's roughly in line with the leaders in the industry who predict AGI, or artificial general intelligence, coming about around 2030 to 2040. I think we're going to get there by 2050, I don't know, though. But if we hit the 2030 to 2040 mark, I could see us having personalized autonomous assistants not too far along from that.

Discussion and Audience Q&A

So yeah. I mean, what do you guys think? Yeah.

Yeah. No, I know. And I'm kind of in line with what Brett Adcock says, too. If we think about it like a computer or a phone, it's going to become so commonplace in society that no one's going to think twice about it.

If we think about the first smartphone or the first telephone that was out there, people probably thought it was dangerous. It's just speaking to the trends. And I think, you know, there are still question marks about AI, right? I think we all know that.

AI is not perfect. We know that as well. But we're not very far into the democratization of AI; AI only became mainstream and accessible to everyone in 2022.

Whereas a bunch of researchers, engineers, and data scientists were always in the AI frame, now everyone outside of those functions knows about it. So even just with how AI is moving, I see the sentiment. But I think it's going to become as commonplace as a handheld device.

So in terms of lawsuits, I don't know. There could be a few. And I'm not going to deny that. Yeah?

So I guess, how do you quantify or define AI, machine learning, deep learning? Yeah. So AI is the broader umbrella. Within that, you can think of it as a kind of big Venn diagram. AI is the outer circle. Inside that, we have machine learning. More specifically within that, we have deep learning. And then large language models fall within the field of deep learning.

What was the other one? I'm sorry? Yeah. So there are distinctions, but loosely everyone will now call all of it AI, which it is; they're all subsets of AI. Yeah.

Yeah. Are there internal arms of these companies that are regulating responsible development so that it doesn't get out of hand? I feel like maybe this is a misconception about AI, but I feel like even that's hackable to a degree. That might be wrong. But what have you seen trending as far as AI ethics within these companies that are collaborating?

Yeah, absolutely. I mean, it's top of mind for big tech. It's top of mind for consultancies like where I work. It's top of mind for the likes of NVIDIA and Figure. It's absolutely a huge point. And at Accenture, we really harp on responsible AI; that's the very first and foremost thing we do with our clients, for example. So it's top of mind, and we deliver to prove out that this is going to be a responsible solution. That's a huge driver, especially as we're getting into the age of democratized AI. So, in short, there's a very, very big emphasis on it. Yeah.

So what do you mean by democratizing AI? Because if you look at the internet, the way the internet expanded, although everybody owns a smartphone that's connected to the internet, we saw the consolidation of Big Tech and everything. And you're seeing the same trend with Big Tech too. So do you think everybody owning a robot, does that translate into democracy?

No, yeah, so when I think of democratized AI, I think of everyone in the world can go and interact with an AI product, right? Which we can do now, right? With ChatGPT, that's the first example.

When it comes to big tech and the kind of monopoly they have around it, I think you're right, because they've been at the forefront of innovation for decades now: the Googles, the Metas, the Microsofts. They've been at that forefront, and they're positioned so well to capture AI, or use it and harness it, where it's like, we are the key leaders in AI, come try and stop us.

OpenAI was a research lab. And the thing that caught people's eyes with OpenAI was, OK, we're going to get funding from Microsoft. They got $10 billion from Microsoft. So now there was some legitimacy behind what OpenAI was doing; they were backed by Microsoft.

And then they had the product that ChatGPT is. And when you think about it, every single iteration of ChatGPT has actually been a significant improvement over the previous one. But yeah, in short, big tech is positioned to hold control of the models. By democratization, though, I mean everyone can access it, right?

Similarly, maybe it will be a Figure or a Sanctuary that, kind of like an NVIDIA, houses the robotics, and we would just be, I guess, the consumers of them, right? Yeah.

So, um, what do you think about, Yeah. Yeah. Right. Yeah, absolutely.

The Sony AIBO that I showed before, they're pricing that at $3,000 or something ridiculous like that. So think about a $3,000 robotic dog. Would you want that? I don't know. I'm not sure. But some people do.

And at first, it's going to be very much a luxury item. I think of the iPhone curve, too. But that's all computer-run.

Right, right. We're going to get to a point where, aside from big tech, there will be other players within AI that have open source models, which will drive some costs down. But of course, the cost of the hardware, the motors, I'm not sure where that goes either. I think if they figure out a way to get economies of scale, they can bring the cost down and democratize it further. But that's still a big question mark.

I'm very much a novice in tech and AI, but you had mentioned latency earlier on. How do we make these autonomous figures process and act on situations quickly? Yeah, it all has to do with edge computing. If we can deploy models at the edge, within the robots themselves rather than in a cloud environment, then we're going to get lower, better latency.

Hey, guys. Thanks so much. We'll just have one more question, and then we'll move on to the next talk. Yeah.

Yeah. Yeah. Yeah. I wanted to make sure everyone asked their questions. Yeah, that's all good.

What's your favorite movie that has robots in it? You know what's funny? I don't watch any movies. But you know what's crazy? Black Mirror, right? Black Mirror is a Netflix series, and a lot of that stuff is coming true, and I'm like, shoot, Black Mirror is actually coming true. I saw the first parts of Black Mirror a couple of years ago and thought, this is crazy. And now, in 2024, I'm thinking, this isn't that crazy; it's actually happening to us. So Black Mirror is kind of my favorite futuristic show. I don't really have a favorite movie for robots.

Closing Remarks

Yeah. And if you guys want to connect with me, feel free. I'm on LinkedIn. And you can also find my newsletter, Digestible AI. Check it out. It's good content. And yeah, really appreciate the talk. Thank you, guys. Yeah.

You say it fascinates you, but is it not scary? Yeah. I mean, both things can be true. All right. Thanks, guys.
