
From Platform to Policy: Moderna's AI Evolution

Discover how Moderna built its foundation as a 'technology company that happens to do biology,' leveraging AI to revolutionize drug development and to scale during unprecedented times. We explore the company's journey from developing internal secure AI tools to democratizing custom GPTs, and the robust governance framework required to innovate responsibly in a regulated industry.


Episode Script

A: So, Moderna really set itself apart from day one, largely thanks to CEO Stéphane Bancel's vision. He famously called it a 'technology company that happens to do biology,' right? That framing alone tells you a lot about their foundational approach.

B: Absolutely. It wasn't just a catchy phrase; it informed everything. Their core mRNA technology wasn't viewed as a single drug pathway, but a versatile 'platform company' concept. This allowed them to develop multiple drugs in parallel, leveraging cross-learning across projects.

A: Which is fascinating, because it means their entire operational model was intrinsically tied to being digital-first. We're talking cloud computing, seamless integration of systems, automation, even robotics, and then, of course, baking AI into everything from early research to preclinical manufacturing.

B: And then came COVID-19, which acted like a massive stress test and accelerator. It forced them to scale exponentially, practically overnight. They grew from around 800 employees to 2,700 in a year, which brought in a lot of new talent, but also presented a huge cultural challenge in maintaining that digital-first, AI-driven mindset.

A: It really pushed their system to its limits, simultaneously validating the platform approach while demanding a rapid recalibration of their internal culture and processes. So, with that digital foundation established, how did they actually roll out generative AI strategically across the enterprise? We know about their core principles, but the implementation itself is a phased process.

B: It started with a real security challenge. Public ChatGPT posed a data leak risk, especially for a highly regulated pharma company. Brice Challamel, their VP of AI Products, and his team quickly developed 'mChat' in just two weeks as an internal, secure alternative.

A: Ah, mChat. So, it was built on OpenAI's models but isolated proprietary data, right? A critical mechanism to allow exploration without the risk.

B: Precisely. And to drive adoption, they launched a prompt competition with prizes—even a trip to meet OpenAI founders. It showed employees how they could use it as an assistant, coach, or creative partner, moving beyond just a search tool.

A: That's a smart way to democratize access initially. And the AI Academy played a role in upskilling employees to really leverage these tools, I'd imagine, especially if initial use cases were rudimentary.

B: Absolutely. That ongoing training was essential. A major proof-of-concept for critical operations was the Benefits Assistant GPT. It helped employees navigate complex benefit enrollments, showing a 96% accuracy rate in tests. That built trust for AI in sensitive areas.

A: And eventually, they transitioned from mChat to ChatGPT Enterprise, didn't they? What was the rationale behind that shift, given they'd built mChat internally?

B: Correct. ChatGPT Enterprise offered similar security assurances to mChat, but with a significantly larger development team behind it at OpenAI, it promised greater stability and faster feature development for broad, horizontal use cases. mChat then evolved into an experimentation platform for more specialized features.

A: And from that, Moderna really embraced this idea of 'letting 1,400 flowers bloom' with custom GPTs, democratizing AI creation across the company. But in a heavily regulated industry like biotech, that approach certainly brings some pretty significant governance challenges, doesn't it?

B: Absolutely, that's where the tension lies. You have this incredible innovation, employees building tailored tools, but then certain use cases immediately flag risks. Like the Self-Review GPT that became super popular, helping with performance reviews.

A: Right, an HR process concern. But then you have something like DoseID GPT, which was designed to help determine drug dosing recommendations for clinical trials. That immediately triggers regulatory alarm bells for me.

B: Exactly. That's a direct clinical trial and regulatory risk. So, to navigate this, Moderna developed an AI governance framework, essentially an assessment matrix. It's a pragmatic way to classify these GPTs based on their criticality.

A: And that criticality matrix had two main dimensions: 'Impact of Failure' and 'Audience.' So, who is affected if it goes wrong, and how bad could it be? That makes a lot of sense for a pharmaceutical company.
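The two dimensions described above can be sketched as a simple classification function. A hedged illustration: the axes ('Impact of Failure' and 'Audience') come from the episode, but the level names, scores, and review tiers below are hypothetical, not Moderna's actual scheme.

```python
# Illustrative sketch of a two-axis criticality matrix for custom GPTs.
# The axes ("Impact of Failure", "Audience") are from the episode; the
# levels, scoring, and tier names are invented for illustration only.

IMPACT = {"low": 1, "moderate": 2, "severe": 3}        # how bad if it goes wrong
AUDIENCE = {"individual": 1, "team": 2, "company": 3}  # who is affected

def risk_tier(impact: str, audience: str) -> str:
    """Map a GPT's two ratings to a hypothetical review tier."""
    score = IMPACT[impact] * AUDIENCE[audience]
    if score >= 6:
        return "formal review required"  # e.g. a clinical-dosing assistant
    if score >= 3:
        return "lightweight review"      # e.g. an HR self-review helper
    return "self-serve"                  # low-stakes personal tools

# A company-wide tool with severe failure impact lands in the top tier:
print(risk_tier("severe", "company"))   # formal review required
```

The point of a matrix like this is that neither axis alone is enough: a high-impact tool used by one person, or a harmless tool used company-wide, may both warrant less scrutiny than a high-impact, broad-audience one.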
