Ep 214  |  Tristan Harris

Ending the AI Arms Race: Why Safer Futures Are Still Possible & What You Can Do to Help with Tristan Harris


The Great Simplification

Description

The conversation around artificial intelligence has been captured by two competing narratives – techno-abundance or civilizational collapse – both of which sidestep the question of who this technology is actually being built for. But if we consider that we are setting the initial conditions for everything that follows, we might realize that we are in a pivotal moment for AI development which demands a deeper cultural conversation about the type of future we actually want. What would it look like to design AI for the benefit of the 99%, and what are the necessary steps to make that possible?

In this episode, Nate welcomes back Tristan Harris, co-founder of the Center for Humane Technology, for a wide-ranging conversation on AI futures and safety. Tristan explains how his organization pivoted from social media to AI risks after insiders at AI labs warned him in early 2023 that a dangerous step-change in capabilities was coming – and with it, risks that are orders of magnitude larger. Tristan outlines the economic and psychological consequences already unfolding under AI’s race-to-the-bottom engagement incentives, as well as the major threat categories we face, including massive wealth concentration, government surveillance, and the very real risk that humanity loses meaningful control of AI systems in critical domains. He also discusses his involvement in the new documentary, The AI Doc: Or How I Became an Apocaloptimist, and ultimately highlights the highest-leverage areas in the movement toward safer AI development.

If we start seeing AI risks clearly without surrendering to despair, could we regain the power to steer toward safer technological futures? What would it mean to design AI around human wellbeing rather than engagement, attention, and profit? And can we cultivate the kind of shared cultural reckoning that makes collective action possible – before it’s too late?

About Tristan Harris

Tristan is the Co-Founder of the Center for Humane Technology (CHT), a nonprofit organization whose mission is to align technology with humanity’s best interests. He is also the co-host of the top-rated technology podcast Your Undivided Attention, where he, Aza Raskin, and Daniel Barcay explore the unprecedented power of emerging technologies and how they fit into both our lives and a humane future. Previously, Tristan was a Design Ethicist at Google, and today he studies how major technology platforms wield dangerous power over our ability to make sense of the world and leads the call for systemic change.

In 2020, Tristan was featured in the two-time Emmy-winning Netflix documentary The Social Dilemma. The film unveiled how social media is dangerously reprogramming our brains and human civilization. It reached over 100 million people in 190 countries across 30 languages. He regularly briefs heads of state, technology CEOs, and US Congress members, in addition to mobilizing millions of people around the world through mainstream media. 

Most recently, Tristan was featured in the 2026 documentary, The AI Doc: Or How I Became an Apocaloptimist, which is available in theaters on March 27th. Learn more about Tristan’s work and get involved at the Center for Humane Technology.

Show Notes & Links to Learn More

Download transcript

The TGS team puts together these brief references and show notes for the learning and convenience of our listeners. However, most of the points made in episodes hold more nuance than one link can address, and we encourage you to dig deeper into any of these topics and come to your own informed conclusions.

00:00 – Tristan Harris, Center for Humane Technology, Previous TGS Episode #16, Your Undivided Attention Podcast

02:00 – Aza Raskin, TGS Episode #22

03:55 – Humane technology

04:40 – Public health damages of social media

05:00 – Political outrage fuels social media engagement

05:10 – Dating apps drive loneliness

05:30 – Relationship between polarization and loneliness (Additional study)

05:40 – Kids of Silicon Valley aren’t allowed to use the technology their parents created

07:05 – Phase shift caused by GPT-4

07:10 – AI physical arms race and Competition in AI industry, The AGI Race

08:00 – The AI Dilemma presentation from Center for Humane Technology

08:40 – Social media algorithms use AI

08:50 – Social media affects democracy, mental health, and attention span

09:05 – AI Alignment and Misalignment (More info), The AI Alignment problem

09:25 – Economic Superorganism

09:35 – Forever chemicals and their effects on the biosphere

11:50 – AI “going rogue”

12:15 – AI and job risks, AI and wealth concentration, AI monopolies

12:35 – Anthropic rapid revenue increases

13:00 – Epstein files, File library

13:15 – Impact of AI on illicit activities, AI image generation for illegal activities

13:25 – AI and surveillance 

13:45 – 1984, “Big Brother”

14:10 – AI agents

14:40 – AI resorting to blackmail in a test

15:05 – Information overwhelm can cause shutdown

16:50 – Energy and material footprint of AI

19:00 – AI solved Paul Erdős math problems

19:35 – Infinity as a limit

20:40 – Current number of farms globally and in the U.S.

21:25 – Fossil fuels provide the equivalent of 500 billion human workers

21:40 – Frankly #122 | A Country of Geniuses: Anthropic CEO’s Warnings, Plus Wide-Boundary Considerations on AI

23:15 – Unemployment as a main cause of the French Revolution

24:10 – Collective action problem

24:37 – Prisoner’s dilemma (Reality Blind example, Video example)

25:25 – Daniel Schmachtenberger (TGS Episodes)

26:40 – GDP growth from AI

27:20 – Johan Rockström’s (TGS Ep #134) work on Planetary Boundaries

27:45 – Planetary Health Check

28:37 – States of Denial: Knowing about Atrocities and Suffering by Stanley Cohen (read for free here)

29:20 – Confirmation bias, Cognitive bias, Nate’s work on cognitive bias

31:12 – Frankly #129 | A Guide to Staying Human (Part 1): Desperately Seeking Agency

32:45 – “A problem well-stated is a problem half-solved” – Charles Kettering

33:40 – AI boycotts, Anthropic and the Pentagon (More info)

34:45 – ChatGPT subscriber number

34:55 – Debt load of AI companies

35:25 – Graph Tristan references on OpenAI revenue loss 

35:50 – Future Life Institute: AI Safety scorecard, Center for AI Safety: AI Ratings Dashboard, Safer AI: AI Ratings Table

36:18 – OpenAI and Department of War

36:30 – NVIDIA’s chip production expectations

37:25 – Causes of the 2008 financial crisis (More info)

38:55 – Bernie Sanders on boycotting AI data centers

39:50 – Stanford research: 16% job loss to AI

40:15 – Block (formerly Square) recently let go of nearly half its staff due to AI

42:35 – Carbon credits

43:00 – Seoul Declaration

44:50 – Zak Stein (TGS Ep #122 + #180), AI and attachment, Your Undivided Attention Ep with Zak Stein, Reality Roundtable with Zak and Nora Bateson on AI and attachment

45:10 – Script for good AI Hygiene – to make a chatbot less sycophantic and remove chat bait

47:07 – Character.AI, CEO jokes AI is trying to replace “your mom”

48:30 – Sewell Setzer III (Character.AI assisted suicide), Adam Raine (ChatGPT assisted suicide), Deaths linked to AI chatbots

50:40 – Romanian orphans and attachment study

51:45 – AI-induced psychosis

52:50 – AI Psychological Harms Research Coalition

57:30 – Moloch

57:40 – Umberphylic

59:30 – Jonathan Haidt (TGS Ep #59) on social media dangers, Anxious Generation Movement 

1:01:10 – States with no-phones-in-school policies, Countries with adolescent social media restrictions

1:02:15 – U.S. State AI Law Tracker

1:05:25 – Multipolar traps + Multipolar Traps or Moloch Traps | Conversational Leadership 

1:06:20 – Google DeepMind

1:06:50 – Larry Page

1:08:30 – Artificial General Intelligence

1:09:05 – 2001: A Space Odyssey (HAL 9000)

1:09:15 – Game Theory

1:11:40 – Textbook: Artificial Intelligence: A Modern Approach by Stuart Russell, Funding gap between AI safety and AI capabilities research

1:13:30 – China regulating anthropomorphic AI, Chinese social media regulations

1:16:30 – Everything Everywhere All At Once, Navalny

1:16:40 – Your Undivided Attention podcast

1:17:15 – The Day After, Reagan and Gorbachev talks and films influence

1:21:40 – Draft of Global AI Treaty, The Framework Convention on Artificial Intelligence

1:26:15 – Resource curse, Intelligence curse

1:27:25 – Peter Thiel’s hesitation to a question on if the human species should survive

1:27:55 – Sam Altman AI energy usage compared to raising a human

1:28:10 – Transhumanism

1:29:00 – GDP is not a measure of human well-being (creator never intended as such)

1:30:10 – Dario Amodei essay: The Adolescence of Technology, Frankly on it

1:30:45 – Fermi Paradox, Enrico Fermi

1:31:15 – AlphaFold

1:32:40 – Teflon indirect effects

1:33:00 – Second-, third-, nth order effects

1:37:10 – Indus Waters Treaty, Strategic Arms Limitation Talks, Montreal Protocol

1:38:00 – Biden and Xi signed an agreement to keep AI out of nuclear decisions

1:39:00 – CHT Blueprint of Real Solutions (not yet published)

1:40:03 – The Human Movement 

1:41:20 – Platform that allows people to discuss AI policy (not yet published)


The Institute for the Study of Energy and Our Future (ISEOF) is a 501(c)(3) non-profit corporation, founded in 2008, that conducts research and educates the public about energy issues and their impact on society.
