Halo: An Unbabel spin-off 🚀

Introduction

So, a little bit about what Halo is. Halo is a project being developed under an agenda called the Center for Responsible AI.

It's basically a consortium of 21 products and 10 startups, and the whole idea is to develop products from the idea stage in a collaborative way. When I say collaborative, it's because a startup develops the product, research centers challenge them with research projects, so more in-depth questions, and then we have what we call industry leaders, or takers, where the products will go. They bring the challenge, the startup tries to develop it, and we have this collaboration with the research centers.

There are several products being developed there, some of them actually co-located here, like NeuroShift and Vizor AI, so names you might recognize, but I encourage you to go to the website.

Why responsible AI

Why responsible AI? Because the whole idea is to do this collaboratively while being aware of the pillars of responsible AI. We advocate that products should be developed taking into account explainability, fairness, and sustainability.

These are the partners of the consortium: plenty of research centres, some startups you might recognise, some more well-known, for example Sword Health, already a unicorn. We have a legal firm, Vieira de Almeida, and then for example Bial, Grupo Pestana, and Sonae. This is just to share, and hopefully you can take a look.

Targeted problem

What is the problem that we are addressing? We are starting with a hard problem.

People and patients like Stephen Hawking are diagnosed with a disorder called ALS, amyotrophic lateral sclerosis. It's a neuromuscular disorder, which means that over the course of the disease people start having difficulty walking and speaking, because it affects muscles such as the vocal cords.

Stephen Hawking is actually an outlier, because fortunately he lived a long time. When people are diagnosed, they have a life expectancy of three to five years.

Communication challenges in ALS

How do they communicate once speech becomes a problem? With augmentative and alternative communication (AAC) devices like the one I'm showing you here. This is an eye-tracker device: the patient faces a computer, a camera tracks the eyes, and with eye movements they select letter by letter.

Nowadays it's still very difficult and frustrating for patients to use, because even though you can predefine some sentences and there is some kind of auto-completion, it's very poor. Our phone keyboards have more technology in that sense than what these patients have.

Here the video is sped up about three times, and the question was just "Are you married?" Just for you to understand the pain.

An even bigger pain: if you have your eye tracker calibrated here and then you go outside, the light changes and you need to calibrate again. And it's cumbersome to move.

I'm here representing Unbabel in the product pod that is developing this. We have five research centres, each working in a different area where they are specialised: Instituto de Telecomunicações, Instituto de Sistemas e Robótica, Fundação Champalimaud, Fraunhofer Portugal, and INESC-ID. And as a taker, we have Hospital São João.

We are not yet at the stage of putting this out there with a massive group of patients using it. We are talking with Hospital São João, but they don't have the product yet.

But we talk daily with APELA, the Portuguese ALS association, and we have families using it.

It's great to have you on stage. What we'll do is, Paulo will be typing in Portuguese, because Luis speaks Portuguese, and you'll be able to see the monitor so you can follow the interaction. Luis is not able to speak, so he will be communicating exclusively through Halo.

And maybe we can put the computer on the screen just so that we can... there we go. Perfect. So Paulo is asking, how do you feel at the AI for Good Global Summit?

Live demonstration

There we go. So, okay. And Luis answered.

I feel honored and very excited to be here. It is an incredible experience, right?

You can see the device. I'm wearing my device on my arm. Luis is wearing his device on his head.

Collaboration with OpenAI

Now, an interesting thing, and we're announcing today a partnership with OpenAI. We're using their new voice LLM, which enables us to do something really incredible that completes the circle.

Most ALS patients tend to have their voice recorded in one way or another, either through voice messages or other means. So what we're able to do is recreate Luis's voice using the voice LLM, so that when he communicates, he's communicating with his own voice. The rest of the video is when we opened the questions to the audience: I went around with the microphone, people could ask questions, and Luis was answering.

So what we were seeing was Luis. As you saw, Luis couldn't talk or move; the only voluntary action he still has is in the muscles of his face.

That's why he was wearing this headband with electromyography (EMG) sensors. We tried EEG, electroencephalography, before, but it's too noisy, not controllable. So we ended up with EMG.

So when he wants to confirm a selection, he moves his eyebrow. What was happening is that he was being prompted with questions, and then he has a generative AI model.

In this case we are using OpenAI models, but we hope to have our own model soon, what we call an AI persona. That model is personalized for the context, for the person, for their relationship with whoever they are talking to; it has a lot of context about him.

So when prompted with, for example, "How do you feel?", it generates several options on the fly that he hears and selects from. He can then vocalize them using the voice that was cloned.
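The loop just described, a prompt goes to a generative model, the patient cycles through spoken options and confirms one with an eyebrow-driven EMG trigger, can be sketched roughly as below. All function names are illustrative, and the canned replies stand in for the real model call (in practice an OpenAI model conditioned on the AI persona):

```python
def generate_options(prompt: str, persona: dict) -> list[str]:
    """Stand-in for the AI-persona model: return candidate replies.

    A real implementation would call a generative model conditioned on
    `persona` (relationships, history, context) and the incoming prompt;
    canned replies keep this sketch self-contained.
    """
    return [
        "I feel honored and very excited to be here.",
        "It is an incredible experience.",
        "I am doing well, thank you.",
    ]


def select_with_emg(options: list[str], emg_triggers: list[bool]) -> str:
    """Cycle through options as they are read aloud; an EMG trigger
    (the eyebrow movement) confirms the option currently being heard."""
    for option, triggered in zip(options, emg_triggers):
        if triggered:
            return option
    return options[-1]  # no trigger fired: fall back to the last option


# Example: the second option is confirmed by an eyebrow movement.
persona = {"name": "Luis"}
options = generate_options("How do you feel?", persona)
chosen = select_with_emg(options, [False, True, False])
# `chosen` would then be handed to the cloned-voice TTS for vocalization.
```

This is only the selection skeleton; the real system also streams audio for each option and synthesizes the confirmed one with the cloned voice.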

The cloning is possible because, like most of us, he had some audio, WhatsApp voice messages to his wife. The models are now so good that you only need a very short amount of recording to clone your voice, and the result feels like you. As you saw, he started crying, which was completely unexpected; just the power of hearing your own voice in that venue was very strong.

The mission of Halo

Like I was saying, the whole mission of Halo is to join these two forces: on one side, hardware based on electromyography that needs no calibration, so you just put it on and it immediately works for you, and it's connected to this AI persona. At the time it was only text-based, sending messages through WhatsApp; that's where we started.

Right now I can have my phone here and Halo is listening to the conversation. We have speech to text, and with that context the person in the conversation is offered suggestions and can reply. And it's incredibly powerful, because in many places there is no other option: people would go letter by letter, literally with a paper board, and now they can vocalize what they need.
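The listening step can be pictured as a rolling context window: speech-to-text transcripts accumulate, and the recent turns condition each new batch of suggestions. A minimal sketch, assuming a hypothetical `ConversationContext` helper (not the actual Halo code):

```python
from collections import deque


class ConversationContext:
    """Keep a bounded window of recent conversation turns from ASR."""

    def __init__(self, max_turns: int = 10):
        # Older turns are dropped automatically once the window is full.
        self.turns = deque(maxlen=max_turns)

    def add_transcript(self, speaker: str, text: str) -> None:
        """Append one speech-to-text result as a labelled turn."""
        self.turns.append(f"{speaker}: {text}")

    def as_prompt(self) -> str:
        """Flatten the recent turns into a prompt for the suggestion model."""
        return "\n".join(self.turns)


# Example: two turns transcribed, then used as context for suggestions.
ctx = ConversationContext(max_turns=10)
ctx.add_transcript("visitor", "How was your day?")
ctx.add_transcript("Luis", "Quite good, thank you.")
prompt = ctx.as_prompt()  # fed to the generative model with the persona
```

A bounded deque is one simple way to keep the context fresh without growing unboundedly; the real system may weight or summarize history differently.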

However, you can also see it as an ecosystem: it can be multimodal.

Basically it's similar to what we saw in the demo, but here, instead of the headband, you have the eye tracker, because they use the eye tracker to interact with WhatsApp.

Here are the possible sentences that Halo will suggest. But contrary to the band, where you basically accept or not, here you can post-edit: you can change the text, you can write from scratch.

So we capture both what the suggestion was and what the user's preference was. If the user goes to the trouble of changing it, that's precious information you can use, so the next interaction already has data.
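Capturing suggestion/post-edit pairs is essentially preference logging. A sketch of the idea, with an illustrative schema (the field names are assumptions, not Halo's actual data model):

```python
from dataclasses import dataclass, field


@dataclass
class PreferenceLog:
    """Record each suggestion alongside what the user actually sent."""

    records: list = field(default_factory=list)

    def record(self, suggestion: str, final_text: str) -> None:
        self.records.append({
            "suggestion": suggestion,
            "final": final_text,
            # Any difference means the user post-edited the suggestion.
            "edited": suggestion != final_text,
        })

    def edited_pairs(self) -> list[tuple[str, str]]:
        """Pairs where the user changed the text: the precious signal
        that can personalize the next round of suggestions."""
        return [(r["suggestion"], r["final"])
                for r in self.records if r["edited"]]


# Example: one accepted suggestion, one post-edited one.
log = PreferenceLog()
log.record("Good morning!", "Good morning!")
log.record("How are you?", "How are you, my friend?")
pairs = log.edited_pairs()  # only the edited pair survives
```

Such pairs are exactly the shape of data used for preference-based fine-tuning, which is presumably why the edits are worth the storage.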

Additional features and next steps

We also have Halo Home. We started very basic in the beginning, with questions that the therapist, the family, or the patients themselves could answer with the eye tracker. But now we have what we call Halo Home, or persona builder, with quizzes that can be automatically generated according to the topics the person struggles with most. It's just a way of getting more input.

For example: "Tell me what you are reading now." "You haven't shared your day lately; how was your kid's school?"

And thinking about next steps, this could become something like Halo Home Family. Imagine that I, as a girlfriend, can input information about my boyfriend, and record it as audio because that's easy for me, and in that way I'm helping build the persona of the ALS patient.

Publicity and outreach

In the news, just a little bit of shameless publicity: we started at the Web Summit in 2023 on the main stage, so more than 80 people were watching us. Then we were at AI for Good, and last December we were honoured to have a journalist from The Economist fly to Lisbon just to go with me to see one of the patients.

He did a print version but also a podcast. It was the first time a non-verbal guest participated in a podcast from The Economist, so it was big.

Important contributors

The team: I always like to highlight the team working with me at Unbabel, our partners, and the patients, because they are truly part of the journey. The product goes from the inception of the idea to the hands of these families, and they go through all the challenges with us.

We still have bugs, so we created a Discord channel where they can share whether they're having problems and suggest features.

So it's really iterative, because we want to reach a stage where we feel comfortable, and then we go to São João and all the patients can have it.

Future expansions

Just one last thing, because I'm probably out of time: ALS is only the first disorder, because there are many disorders where this makes sense. I'm talking, for example, about Parkinson's, where 10 to 15 percent of patients start having problems with speech, and it's super frustrating. Multiple sclerosis as well, aphasia after stroke, people who go into surgery sedated and intubated. You cannot even imagine the frustration these patients have. Just having suggestions and something easier you can connect with is already life-changing.

Conclusion

I hope you enjoyed, even though it was not what you expected. Thank you so much.
