Why Even the Creators of AI No Longer Understand Their Models

To learn more about Hellodarwin: https://go.hellodarwin.com/hypercroissance?utm_source=helloDarwin&utm_medium=podcast&utm_campaign=grants-hypercroissance

📉 AI, opaque models, incomprehensible decisions: have we gone too far with artificial intelligence?

In this episode of Hypercroissance, Jonathan LĂ©veillĂ© (CEO of Openmind Technologies) sits down with Antoine GagnĂ© to discuss a topic that’s as crucial as it is unsettling:
âžĄïž Why even the creators of AI no longer understand their own models.

Based on an essay by Anthropic CEO Dario Amodei, they dive into an underestimated but critical issue: interpretability. Behind the promises of productivity and automation lies an uncomfortable truth: we're using tools we no longer fully understand or control.

We cover:
✅ What AI interpretability is and why it’s so urgent
✅ The real-world risks of LLMs (large language models) for businesses
✅ Why AI is advancing faster than our ability to regulate it
✅ The role of leadership in this technological revolution
✅ How to integrate AI without losing control of your company

Are you a business leader, marketing director, or operator in a growing company? This episode is a strategic wake-up call you won’t want to miss.

🔔 Subscribe for more essential discussions on growth, innovation, and leadership in Canada.

To learn more about Openmind Technologies: https://www.openmindt.com/
To learn more about Jonathan Léveillé: https://www.linkedin.com/in/jonathanleveille/

To connect with me on LinkedIn: https://www.linkedin.com/in/antoine-gagn%C3%A9-69a94366/

Our podcast Social Scaling: https://www.youtube.com/@podcastsocialscaling
Our podcast No Pay No Play: https://www.j7media.com/fr/podcast-no-pay-no-play

Follow us on social media:
LinkedIn: https://www.linkedin.com/company/podcast-d-hypercroissance/