Mindstone Toronto September AI Meetup

Welcome to the biggest Practical AI Meetup in Toronto!

Join us once a month as we explore the world of artificial intelligence, its cutting-edge practical applications, and the astonishing projects that are shaping our future.

Why should you attend?

  • Get up close and personal with the AI projects that are redefining the boundaries of technology and human potential.
  • Learn from the brightest minds in the field and gain valuable insights into the ever-evolving AI landscape.
  • Unleash your curiosity, fuel your creativity, and expand your network as you connect with fellow AI aficionados and pioneers.

What can you expect?

Mindstone events consist of three talks covering different aspects:

  • What I Learned Building With LLMs: A technical talk breaking down the process for building a product using AI with real-life learnings and insights.
  • How To Be More Productive Using AI: A practical demo and step-by-step guide on how to use AI to speed up and improve tasks.
  • What Does The Future Look Like With AI?: A theoretical talk on the impact of AI on work, life and society.

After the talks we'll have pizza, drinks, and time to connect with everyone around you.

Don't miss the opportunity to see what's really happening in AI - you'll be surprised at what's already possible today.

We have limited spots available, so get your ticket ahead of time to avoid disappointment!

Agenda
Doors Open
Welcome to the event
Introduction
Welcome and event introduction
From Prototype to Production: Lessons Building with AI by Aashni Shah
AI coding tools now let anyone - from non-coders to senior engineers - spin up an app in hours. But how do you go from “Tinder for pets” in a weekend to something real, stable, and safe? In this talk, Aashni Shah walks through the spectrum of AI development tools, live-builds a quick prototype, and shares lessons on when these tools are powerful, when they’re risky, and what it takes to turn an AI-generated prototype into a production-ready product. Key takeaways:

  • How to choose the right AI coding tools for your skill level and project.
  • Where rapid AI prototypes shine - and where they fail.
  • What steps are needed to make an AI-built app secure, scalable, and maintainable.
View recording
Joint Evaluation (Jo.E): A Collaborative Framework for Rigorous Safety and Alignment Evaluation of AI Systems Integrating Human Expertise, LLMs, and AI by Himanshu Joshi
Abstract - The increasing sophistication of Artificial Intelligence (AI) systems necessitates a rigorous, multi-dimensional evaluation paradigm that surpasses conventional automated metrics and subjective human assessments. This paper introduces Jo.E (Joint Evaluation), a structured evaluation framework that integrates human expertise, AI agents, and Large Language Models (LLMs) to systematically assess AI systems across critical dimensions: accuracy, robustness, fairness, and ethical compliance. Building on methodologies such as “Agent-as-a-Judge” (Zhuge et al., 2024) and “LLM-as-a-Judge” (Zheng et al., 2023), Jo.E provides a principled approach to identifying and mitigating AI risks through a tiered evaluation process. We validate this framework through controlled experiments on commercial models (GPT-4o, Llama 3.2, and Phi 3), demonstrating its capacity to detect model vulnerabilities that single-method evaluations miss. The framework’s key innovation lies in its structured information flow between evaluation tiers, enabling targeted human expert involvement where automated methods are insufficient. This creates a scalable, reproducible evaluation methodology with comprehensive coverage of critical AI safety dimensions. Our experimental results show that Jo.E successfully identified 22% more adversarial vulnerabilities and 18% more ethical concerns than standalone evaluation approaches while reducing human expert time requirements by 54%. Weblink - [https://himjoe.github.io/safealign/](https://himjoe.github.io/safealign/)
View recording
Speeding production model deployment by >7x by Brad Micklea
We will look at the many different ways a model can be deployed to a Kubernetes environment. We'll compare NVIDIA NIM, Tensorizer streaming, in-cluster caching, and a standard container deployment from the perspective of effort, first-deploy performance, and scaling responsiveness. We'll finish with recommendations and callouts of other factors to consider before deploying to production.
View recording
Wrap Up
Closing remarks and next steps
Pizza, Drinks & Networking
Casual networking with food and drinks
Event Partners
Human X