
Ep 132  |  Daniel Schmachtenberger

Daniel Schmachtenberger: “Silicon Dreams and Carbon Nightmares: The Wide Boundary Impacts of AI”


Show Summary

Artificial intelligence has been advancing at a breakneck pace, accompanied by an almost frenzied optimism that AI will fix our most pressing global problems, especially in the hype surrounding climate solutions.

In this episode, Daniel Schmachtenberger joins Nate to take a wide-boundary look at the true environmental risks embedded within the current promises of artificial intelligence. He demonstrates that AI's current trajectory is headed towards ecological destruction rather than restoration, an important perspective currently missing from the broader discourse on AI.

What are the environmental implications of a tool with unbounded computational capabilities aimed towards goals of relentless growth and extraction? How could artificial intelligence play into the themes of power and greed, intensifying inequalities and accelerating the fragmentation of society? What role could AI play under a different set of values and expectations for the future, ones in service to the betterment of life?

We encourage you to explore the resources and research on artificial intelligence from The Civilization Research Institute, compiled in this document.

About Daniel Schmachtenberger

Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue. 

The throughline of his interests is improving the health and development of individuals and society, with a virtuous relationship between the two as the goal.

Towards these ends, Daniel has a particular interest in catastrophic and existential risk, with focuses on civilization collapse and institutional decay. His work also includes an analysis of progress narratives, collective action problems, and social organization theories. These themes are all connected through close study of the relevant domains in philosophy and science.


Show Notes & Links to Learn More

Civilization Research Institute Resources to Learn More on AI (PDF + Live Document)

00:00 – Daniel Schmachtenberger info + Bend Not Break series, Prior episode on Artificial Intelligence, Episode on Naive Progress + Paper on Naive Progress

01:30 – Tristan Harris, TGS Episode

04:47 – Tackling Climate Change with Machine Learning

07:22 – AI’s ability to make oil extraction more efficient, 92% of oil companies in contract with AI companies

08:09 – NVIDIA providing chips to big energy, Microsoft contracts with oil companies

09:23 – AI has been used in fission for a long time, but no evidence of huge breakthroughs

11:32 – AI and defense

11:42 – Precision targeting, comprehensive battle planning, intelligence systems and AI

12:08 – AI in law enforcement

12:24 – AI in education, tutors

12:34 – AI and elderly care

12:10 – NVIDIA market cap; the top ten companies by market cap are in AI

14:10 – AI marketing and lobbying power

14:18 – Where Arguments Come From

16:06 – OpenAI History

16:50 – Concerns about AI automation

17:09 – Acceleration of trivial AI development companies

18:57 – Clip of top AI leaders saying AI will kill everyone

21:20 – Motivated Reasoning

22:31 – Brain-computer neural interface, Nick Bostrom

22:57 – Ted Chu, Human Purpose and Transhuman Potential: A Cosmic Vision of Our Future Evolution

25:03 – David Pearce, Abolitionist Project

26:26 – Bootloader

27:22 – Nick Land, Ray Kurzweil

27:50 – Alex Karp, Palantir

29:59 – Zak Stein, TGS Episode

31:44 – Eliezer Yudkowsky

33:14 – ChatGPT on how to wipe out humanity

35:01 – China’s military drill near Taiwan

35:08 – Operation Hellscape

35:48 – Anduril, integrated automated kill webs

38:35 – Ubiquitous Technological Surveillance

38:52 – Wi-Fi routers designed to monitor people’s locations in their homes

40:02 – Internet of Things

40:33 – Planet Labs

40:40 – Increasing spatial resolution from satellites

42:56 – Moore’s Law compared to GPU growth

46:33 – Energy consumption of AI

47:33 – Bloomberg on data center energy consumption

48:50 – Massive data centers that are beyond energy grid capacity

51:17 – Material, water, and land use requirements for data centers

52:16 – Microsoft’s commitment to be 100% renewable by 2030 falls through because of AI

53:02 – Jevons Paradox

53:10 – Since 1996, we’ve increased efficiency 36%, but energy use has increased 63%

53:27 – Rebound effect vs. backfire effect

55:51 – Growth in renewables has not hampered growth of fossil fuels

57:02 – Gini Coefficient

57:43 – First paper on climate change published in 1938

58:12 – Trillions of dollars in climate change funding

58:41 – The only dips in fossil fuel use come from recessions

1:02:35 – Janine Benyus

1:04:18 – Fusion development dates keep getting pushed back

1:05:35 – Humans and microgravity aren’t a good mix

1:08:10 – How AI will simply increase economic activity and its subsequent externalities

1:09:27 – $16 trillion per year to internalize the externalized costs of PFAS

1:12:31 – NVIDIA as the most profitable company in the world, PwC’s Global Artificial Intelligence Study, Elon on Tesla profitability with AI

1:14:25 – “Peak Oil, AI, and the Straw” | Frankly #56

1:17:42 – Treaty preventing Arctic drilling

1:20:44 – As people buy more organic food, more pesticide is used every year

1:20:48 – As more people eat vegan food every year, total factory farming continues to go up

1:22:39 – Moloch

1:24:31 – How AI could centrally plan the economy

1:31:01 – There are lots of types of AI

1:32:55 – Precautionary Principle

1:44:14 – Energy is 5% of the S&P

1:45:22 – Daniel Schmachtenberger: “A Vision for Betterment” | The Great Simplification 126

