Hello everyone.
Today we will present the solution we built at our latest hackathon, which took place in Thessaloniki in September 2025: using AI agents for accessibility.
My name is Panos, and here are George and Chris.
First of all, for those of you who don't know us, we are a team of software engineers, and we also won the June 2025 OpenConf Hackathon. The app we developed there used a traditional AI algorithm capable of detecting and preventing epileptic seizures in real time.
Based on that experience, we entered the new hackathon with a different mindset and a different approach. We wanted to build a more scalable and impactful solution, and that's why we chose to focus on accessibility.
The overall theme of the hackathon was AI and blockchain technology; you could choose either one or both. But it was also thematically restricted: the tracks included things like accessibility, smart cities, gaming, and a lot more. We decided to go down the accessibility route.
Okay, let's first talk about how accessibility works on the web.
For someone who is visually impaired, for example, the browser runs a screen reader. For the screen reader to work, the developer must have taken accessibility into account during development: they have to use specific semantic HTML so that the screen reader can recognize the content and read it aloud to the user.
Images, for example, have alt text: the developer has to provide a description for the image, because the user is not able to see it, and the screen reader then reads that description aloud for them.
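To make that concrete, here is a tiny hypothetical sketch in TSX (the stack we used, as you'll see later); the images and the component are made up for illustration:

```tsx
// Hypothetical example component; the images and names are made up.
export function CatCard() {
  return (
    <figure>
      {/* Bad: alt="image" tells a screen-reader user nothing. */}
      {/* <img src="/cat.jpg" alt="image" /> */}

      {/* Good: a description the screen reader can read aloud in place of the image. */}
      <img src="/cat.jpg" alt="A grey tabby cat sleeping on a windowsill" />

      {/* Decorative images get an empty alt so screen readers skip them entirely. */}
      <img src="/divider.svg" alt="" />

      <figcaption>Our office cat</figcaption>
    </figure>
  );
}
```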
Right now, the European Union has started pressing companies to conform to the WCAG guidelines, which means that companies with a European market need to make their websites accessible. If they don't, they will have to deal with fines and other penalties.
Right now, traditional rule-based tools like axe often miss critical issues or emphasize meaningless rules. And because your entire website has to be accessible, meaning every single sub-page, the scanning process has to be performed manually, page by page, which is a lot of work.
With our solution, your website is scanned automatically: every single sub-page is discovered and scanned. And then, using the power of AI, the system doesn't just analyze the site, it also experiences it like a person who has a disability. This is the game changer, because we all know how powerful LLMs are at taking on personas; we leveraged that and turned it to our advantage. At the end, we generate a meaningful, business-oriented report that the owner of a website can understand, surfacing the critical issues that affect their site.
So let's see how it works.
First, using Playwright, we crawl the entire website that the user provides to our app. Then, for each discovered sub-page, we deploy a team of specialized AI agents. Each one specializes in a different class of accessibility issue, and they all run in parallel, collecting data. At the end, we bring all the findings together and generate an actionable accessibility report.
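In TypeScript terms, the flow looks roughly like the sketch below. This is a simplified reconstruction, not our actual code: the Finding and Agent types are placeholders, and the real crawler also handles errors, depth limits, and so on.

```typescript
import { chromium } from "playwright";

// Placeholder types: each specialized agent takes a page's HTML and URL
// and returns its findings.
type Finding = { severity: "critical" | "high" | "medium"; message: string };
type Agent = (html: string, url: string) => Promise<Finding[]>;

// Crawl the site breadth-first, collecting same-origin sub-pages.
async function discoverPages(startUrl: string, maxPages = 50): Promise<string[]> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const seen = new Set<string>([startUrl]);
  const queue = [startUrl];
  const origin = new URL(startUrl).origin;

  while (queue.length > 0 && seen.size < maxPages) {
    await page.goto(queue.shift()!);
    const links = await page.$$eval("a[href]", (anchors) =>
      anchors.map((a) => (a as HTMLAnchorElement).href)
    );
    for (const raw of links) {
      const url = raw.split("#")[0];
      if (url.startsWith(origin) && !seen.has(url)) {
        seen.add(url);
        queue.push(url);
      }
    }
  }
  await browser.close();
  return [...seen];
}

// For one discovered page, run the whole team of agents in parallel
// and flatten their findings into a single list for the report.
async function scanPage(url: string, agents: Agent[]): Promise<Finding[]> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const html = await page.content();
  await browser.close();
  const perAgent = await Promise.all(agents.map((agent) => agent(html, url)));
  return perAgent.flat();
}
```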
Our first idea, which we tried to implement, was a single super agent that would take every single accessibility issue into account and generate a perfect report. We completely failed. A single agent kept losing its context window, and the results were simply a disaster: we could find maybe three out of ten mistakes, and the rule-based tools scored roughly the same, around 20 percent. So we went further and split the work across different agents.
As you can see, we currently have an image agent, a forms agent, a navigation agent, an ARIA agent, and a contrast agent.
For example, the image agent focuses solely on image semantics, the img element: does an alt text exist, should an alt text exist there at all, and is the alt text meaningful or not? Say you've got an image of a cat and the alt text says "a dog": a rule-based check passes, but you haven't actually fixed anything, and the page is still not accessible for a visually impaired user. Or take the navigation agent: we thought of it in terms of dead ends and traps. For example, a visually impaired user navigates with Tab and Shift+Tab; they open a modal, try to navigate, and cannot actually exit it. That's a dead end, and the user can't do anything about it.
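As a rough illustration of what one of these agents can look like, here is a minimal sketch of the image agent using the OpenAI SDK; the persona prompt, model choice, and output schema are illustrative, not our actual prompts:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

interface ImageFinding {
  selector: string;                         // which <img> the finding refers to
  severity: "critical" | "high" | "medium";
  issue: string;                            // e.g. "alt text does not match the image"
  suggestedFix: string;                     // e.g. a meaningful alt text
}

// The agent hands the page's HTML to the model with a persona prompt:
// judge each <img> the way a screen-reader user would experience it.
async function imageAgent(html: string, url: string): Promise<ImageFinding[]> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "You are a visually impaired user browsing with a screen reader. " +
          "For every <img> element in the HTML, decide: does alt text exist, " +
          'should it exist at all (decorative images should use alt=""), and ' +
          "does it meaningfully describe the image in context? Reply as JSON: " +
          '{"findings": [{"selector": string, "severity": string, ' +
          '"issue": string, "suggestedFix": string}]}',
      },
      { role: "user", content: `Page: ${url}\n\n${html}` },
    ],
  });
  const parsed = JSON.parse(response.choices[0].message.content ?? "{}");
  return parsed.findings ?? [];
}
```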
OK, so that was the plan with the AI agents.
Now the developer experience.
OK, it's our second hackathon.
First of all, the stack.
Next.js with TypeScript.
We go strictly typed everywhere, with Tailwind for the CSS customization.
OpenAI for everything related to agents.
And on the backend, Express.js on Node.js, also in TypeScript. For the database, we went with Prisma as a database wrapper and SQLite, because we wanted to be fast and iterate fast.
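Wired together, the backend looks roughly like this; the /api/scans route, the Scan model, and runScan are hypothetical names for illustration:

```typescript
import express from "express";
import { PrismaClient } from "@prisma/client";

// Assumes a Prisma schema with something like:
//   model Scan { id Int @id @default(autoincrement())
//                url String
//                status String }
const prisma = new PrismaClient(); // backed by SQLite in our setup
const app = express();
app.use(express.json());

// Kick off a scan: persist it, then crawl and run the agents in the background.
app.post("/api/scans", async (req, res) => {
  const { url } = req.body as { url: string };
  const scan = await prisma.scan.create({ data: { url, status: "running" } });
  runScan(scan.id, url).catch(console.error);
  res.status(202).json({ scanId: scan.id });
});

// Placeholder for the orchestration shown earlier: discover pages, fan out
// the agents per page, store the findings, and mark the scan as done.
async function runScan(scanId: number, url: string): Promise<void> {
  // ...
}

app.listen(3001);
```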
All three of us use Cursor IDE with agentic AI.
We want to iterate rapidly.
We know there is also the CLI, but we usually prefer the agentic mode, because you accept and review each change step by step, and that review is a big part of the process.
This moves you from building the logic and low-level semantics of the code to a higher level, to an overview of the system. Instead of thinking in syntax, you now have to think about the whole logic. You have to plan ahead, and you have to prompt the AI to start building something.
The third point is, for me at least, the most important of the whole presentation: discuss, plan, execute, in order to avoid AI drift and hallucinations.
In the first hackathon, we didn't do that; we didn't plan ahead or plan correctly. It wasn't exactly a waste of time, but it made everything so much more difficult.
And the last bullet point: short cycles, continuous feedback, and constant refactoring. That's what we were doing back at the hackathon, and these are actually the core principles of extreme programming, something we only realized later on.
We also used Cursor to create both inaccessible and accessible sites, so that we would have plenty of test data.
Let's move on to an actual demo right now.
Okay.
So as you can see here, we start by logging in.
Once the user logs in, the site sends them to the dashboard.
The dashboard is basically the main menu of our web app.
Here you can start a new scan.
You can view the history of your scans and also you can change some settings.
Moving on, you paste the URL, as you can see here, and that starts processing the site.
This actually takes quite a while, since each agent goes through every page line by line, looking for the issues in its own specialization. For that reason, in the demo you will see a scan we had already performed on the same site, just to be quicker.
Here, inside the report, you can see at the top each route of the site, and for each route there is a summary tab containing a simplified view: the number of actual issues you have. If you want to examine them further, you can go into the issues tab, where all of the issues for that specific web page are listed, categorized by priority: critical, high, medium, et cetera.
And you can also change the web page.
Now, for example, let's take a closer look here. This is a medium issue.
As you can see, it says that there is a generic, unhelpful alt text on one of the featured items. If you go further, you can see that the current alt text is just "image", which is not helpful; it's far too generic. It also suggests how you can fix it, by adding a description that actually describes the image.
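The report the UI renders boils down to a data shape roughly like this; the field names are inferred from the demo, not the actual schema:

```typescript
type Severity = "critical" | "high" | "medium" | "low";

interface AccessibilityIssue {
  severity: Severity;
  agent: "image" | "forms" | "navigation" | "aria" | "contrast";
  title: string;        // e.g. "Generic, unhelpful alt text"
  current: string;      // what the page has today, e.g. alt="image"
  suggestedFix: string; // e.g. a description that actually describes the image
}

interface RouteReport {
  route: string;                     // one entry per discovered sub-page
  summary: Record<Severity, number>; // the counts shown in the summary tab
  issues: AccessibilityIssue[];      // the detailed, prioritized list
}

type ScanReport = RouteReport[];
```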
Moving on to the conclusion: the obvious part, which I think all of you already know, is that AI has redefined how we build and develop apps, in both time and effort. But at the same time there is a big risk. Because you feel like a 10x developer, and in some cases with AI you really are one in terms of productivity and time, you can also end up going in circles, which actually makes you a lot, a lot slower.
So, in general, as Chris said, you always need to plan first, plan before you build. But at the same time you need to stay flexible, because in this industry everything changes in an instant.
Thanks for listening to us for the last 15 minutes or so.
Special thanks, obviously, to Epignosis for making this possible, since we went to this second hackathon through them. And also special thanks to our former colleague, Zoe Mouzmoula, who sadly isn't here right now but helped us a lot.
Thank you.
We can take questions now.