The Lawyer Building AI’s Moral Compass
Our Guest: Tom Lue
Tom Lue is VP of Frontier AI Global Affairs at Google DeepMind, overseeing legal, public policy, and frontier AI safety & governance teams.
Previously, he was General Counsel & Head of Governance at DeepMind, Deputy General Counsel at Waymo, and Senior Counsel at Google, advising on AI law and policy. Prior to Alphabet, Tom served as Acting & Deputy General Counsel at the Office of Management and Budget (OMB), leading the legal team and advising the White House and executive agencies.
He also served as Attorney-Advisor at the DOJ Office of Legal Counsel (OLC), focusing on national security and emerging technology matters. Tom has taught at Stanford on law and technology and served as Chair of the Nominating and Governance Committee for the Board of iCivics, a leading civics education non-profit.
He clerked for Justice Sonia Sotomayor on the U.S. Supreme Court, and is a member of the American Law Institute.
“The best outcomes happen when humans and AI work together”
-Tom Lue
Before he worked with the world’s most advanced AI systems, Tom Lue was a lawyer helping shape U.S. national policy.
From the Supreme Court and the White House to Google DeepMind, his career reads less like a straight path and more like a map of how law, technology, and ethics collide.
Now, as VP of Frontier AI Global Affairs at Google DeepMind in London, Tom leads one of the most cross-disciplinary teams in tech: lawyers, ethicists, sociologists, and computer scientists, all trying to answer a deceptively simple question. How do we build intelligence that helps humanity, not harms it?
From Law and Policy to Machine Learning
Born to Taiwanese parents who were both doctors, Tom began his career expecting to follow in their footsteps. “I thought I’d be a doctor too,” he laughs. But while studying social sciences at Harvard, he became fascinated by the bigger questions — the kind that shape societies, not surgeries.
Everything changed after 9/11, when he joined Senator Dianne Feinstein’s office on Capitol Hill. “That was my first exposure to public service, to the intersection of law, technology, and security.” From there, he clerked for Justice Sonia Sotomayor, worked in the Obama Administration, and helped craft legal frameworks for everything from Guantanamo detentions to advanced surveillance tech.
But it wasn’t until he became a father that his mission crystallized. “I wanted to work somewhere making a transformative impact,” he says. A friend suggested Google — not for the gadgets, but for the reach. “They were building things that touched billions of lives.” That idea hooked him, and he’s been with the company ever since.
The Power of a Non-Linear Life
Tom is the first to admit his path was anything but predictable. “Most people don’t know what drives them at eighteen,” he says. His advice to young professionals? Forget the straight line; follow the intersections.
He describes his career through a three-circle Venn diagram:
What you love
What you’re good at
What makes a positive impact
“Find the overlap,” he says. “That’s where purpose lives.” It’s a message that resonates across generations, especially in a world where AI is changing what “career paths” even mean.
Making AI Safe - and Human
At DeepMind, Tom’s mission is clear: build AI that’s responsible, secure, and beneficial to society. He helped lead the creation of the Frontier Safety Framework, a policy that governs how DeepMind handles risks from powerful AI systems.
The framework focuses on the most severe threats, such as misuse in biological research or cybersecurity, and commits to pausing model deployment until safeguards are proven. “If we can’t mitigate a risk, we stop,” he says simply.
But safety at DeepMind isn’t just a technical issue; it’s philosophical. “You can’t solve AI safety with engineers alone,” Tom explains. “You need sociologists, economists, lawyers, ethicists, everyone.” It’s an approach that reflects his belief that AI governance is not just computer science; it’s civilization science.
Humans and Machines as Partners
One of Tom’s favorite ideas comes from a recent Atlantic article co-authored by economist David Autor and Google’s James Manyika: AI’s future isn’t about replacing people, it’s about collaboration.
“The best outcomes happen when humans and AI work together,” Tom says. A doctor using AI to read scans still brings empathy, intuition, and context that no algorithm can replicate. “Automation isn’t the goal. Collaboration is.”
It’s a mindset he hopes young people will adopt: not fear of machines, but excitement about what’s possible when both sides combine their strengths.
Balancing Boldness with Responsibility
For all the optimism, Tom’s job often comes down to tough calls: How safe is safe enough? When do we pause? Who decides?
He leads governance boards that weigh every model’s potential risks before launch. “We talk about enabling maximum velocity of responsible innovation,” he says. “You can move fast, but you have to move safely.”
It’s why DeepMind helped found the Frontier Model Forum with OpenAI, Anthropic, and Microsoft, sharing best practices for AI safety and pushing for international standards that keep innovation global, not siloed.
The Human Element
For all his expertise, Tom remains humble — and grounded. “Every morning on my drive, I talk to Gemini,” he says, half-joking. “It’s my thinking partner.” But he insists AI will always be a tool, not a replacement. “Humans are messy, emotional, irrational, and that’s good,” he smiles. “Those are the things that make us human. AI should help us think better, not become us.”
Key Takeaway
Tom Lue’s journey, from law to AI, isn’t about career shifts. It’s about a deeper conviction: that technology needs values as much as it needs code.
And in an age where machines learn faster than ever, people like him remind us that the real intelligence worth building is the one that still has a conscience.