Agile Austin | Explaining Explainability in AI

  • Fri, October 25, 2024
  • 3:00 PM - 4:00 PM
  • ONLINE

Explaining Explainability: Learning to Whistle

REGISTER HERE

Details

Join us for a great program at the Agile Austin Product and AI SIG! Once you sign up, you'll receive an email with a calendar event and Zoom link.

Session Title:

Explaining Explainability in AI

Abstract:

In this talk, John will explain why many AI systems are not explainable! Did you learn to whistle, or ride a bike, when you were a child? Did someone “teach” you, or did you really just have to figure it out for yourself? Whistling is not explainable.

Explainability (or Interpretability) in AI is the ability to interpret how an AI system came up with an answer, decision, or recommendation. This is very important if we want safe and trustworthy AI systems; however, many modern AI systems (Deep Learning, Big Data, ChatGPT, ...) are by their nature not explainable.

John will share real examples from AI systems to explain explainability and give the audience a sense of where we can expect explainable AI systems and where it’s unlikely to ever happen!

Blue is closer to red than green!! (Um... why? Because the embeddings and vector database said so.)
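To make that concrete, here is a minimal sketch of the kind of comparison a vector database performs. The four-dimensional vectors are invented stand-ins for learned embeddings (real embeddings come from a trained model and have hundreds of dimensions), so the specific numbers are assumptions for illustration only:

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: the standard measure used by vector databases.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings -- invented for illustration, not from any model.
embeddings = {
    "blue":  np.array([0.82, 0.11, 0.40, 0.33]),
    "red":   np.array([0.75, 0.20, 0.35, 0.41]),
    "green": np.array([0.10, 0.90, 0.05, 0.22]),
}

print(cosine_similarity(embeddings["blue"], embeddings["red"]))    # ~0.99
print(cosine_similarity(embeddings["blue"], embeddings["green"]))  # ~0.30

# The arithmetic is fully transparent, but *why* training placed "blue"
# nearer to "red" than to "green" is not recoverable from the numbers
# themselves -- that gap is the explainability problem.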

Expert models are explainable (Bayesian and Causal Networks are repeatable and explainable)
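By contrast, here is a toy Bayesian calculation that shows what "repeatable and explainable" means in practice: every intermediate value can be named, audited, and reproduced. The probabilities below are illustrative, not taken from the talk:

# Toy Bayesian inference: P(rain | wet grass), with every step auditable.
# All probabilities are illustrative assumptions.

p_rain = 0.2                 # prior: P(rain)
p_wet_given_rain = 0.9       # P(wet grass | rain)
p_wet_given_no_rain = 0.1    # P(wet grass | no rain)

# Total probability of the evidence: P(wet grass)
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Bayes' rule: P(rain | wet grass)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet

print(f"P(wet grass)        = {p_wet:.3f}")             # 0.260
print(f"P(rain | wet grass) = {p_rain_given_wet:.3f}")  # 0.692

# Each line traces the answer back to a named probability, and rerunning
# the computation always gives the same result -- repeatable and explainable.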

Speaker: John Heintz

John is an expert in small-data Bayesian solutions.

He is the founder of an AI services firm in Austin. Before this, he co-founded a small-data AI/ML product company which was acquired by Planview. He also served as Chief Architect and CTO for several software product companies. An avid practitioner of Agile and Lean, he still writes code.

https://www.linkedin.com/in/johndheintz/

Slack Community for the Agile Austin Product and AI SIG:

Please contact Matt Roberts (matt.roberts@agileaustin.org) or Vishal Sheth (vishal.sheth@agileaustin.org) if you'd like to join our Slack community!

