
Ep 184 | Connor Leahy
Connor Leahy — Algorithmic Cancer: Why AI Development Is Not What You Think

Description
Recently, the risks of Artificial Intelligence and the need for ‘alignment’ have been flooding our cultural discourse – with Artificial Super Intelligence framed as both the most promising goal and the most pressing threat. But amid the moral debate, surprisingly little attention has been paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?
In this episode, Nate is joined by AI developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon he calls ‘algorithmic cancer’ – AI-generated content that crowds out true human creations, propelled by algorithms that can’t tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.
What kinds of policy and regulatory approaches could help slow down AI’s acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology’s impacts on mental health, meaning, and societal well-being?
About Connor Leahy
Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.
Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.
Show Notes & Links to Learn More
00:00 – Connor Leahy, Conjecture, EleutherAI
- Connor’s newest project – Torchbearer Community Interest Form
- Send a letter to your representative about AI risk
- The Direct Institutional Plan
- A Narrow Path
- The Compendium
The Great Simplification Episodes covering Artificial Intelligence (AI):
How Artificial Intelligence Could Harm Future Generations with Zak Stein | TGS 180
The Wide Boundary Impacts of AI with Daniel Schmachtenberger | TGS 132
Zak Stein: “Values, Education, AI and the Metacrisis” | The Great Simplification 122
Frankly episodes covering AI:
Who Will You Become As AI Accelerates? | Frankly 96
Artificial Intelligence – What is NOT In Service of Life? | Frankly 92
Artificial Intelligence and the Lost Ark | Frankly 83
“Peak Oil, AI, and the Straw” | Frankly 56
Artificial Intelligence vs. Real Ecology | Frankly 49
00:30 – Global Catastrophic Risks, TGS Episode on Existential Risk
02:00 – AI vs. Artificial General Intelligence (AGI) vs. Artificial Super Intelligence (ASI)
04:20 – Similarities and differences between chimpanzee and human cognition
04:55 – The Merge Operation in language and how it distinguishes Homo sapiens
06:39 – Economic growth is highly correlated with energy use
07:10 – AI exponential improvement and Moore’s Law
07:20 – Technological Singularity – AI to AGI to ASI, More information, Sam Altman’s take
08:24 – Geoffrey Hinton – “Godfather of AI”, comments from CEOs of major AI companies on AGI
10:25 – Recursive Self-Improvement
12:25 – Military desire for “Decision Dominance”
13:00 – Meaningful human control of AI
13:20 – Good Old-Fashioned AI (GOFAI)
14:00 – Neural networks in AI, Video explaining AI Neural Networks in 5 minutes
14:40 – Deep Learning, Backpropagation Algorithm
16:50 – Role of Data Centers
17:10 – Significant computational power required by AI, Computation used to train notable artificial intelligence systems, by domain
17:45 – GPUs vs. CPUs, Binary Large Objects (BLOBs)
17:55 – MIT Report on AI’s Climate Footprint, Calculating OpenAI’s electricity consumption
18:25 – AI training vs. inference
19:35 – Search engines vs. AI: energy consumption compared
20:10 – Data centers’ share of U.S. energy usage, Data centers near renewable energy sources in rural areas
20:30 – AI & AGI Energy Needs
21:40 – Cybersecurity challenges with AI, International and National Security Risks with AI
22:40 – Recent policies on U.S. Federal Government AI Procurement, AI use in U.S. Federal and State Governments 2024, U.S. Federal Government AI Use Case Inventory
22:45 – AI could replace CEOs
23:10 – AI hallucinations and progress in reducing them
25:10 – AI emulates our delusion, self-deception, and overconfidence, AI-induced psychosis
26:00 – AI Model, AI Fine-tuning
27:00 – Jailbreaking AI and why it’s so easy
28:05 – Risks of Open-Source AI
33:50 – Who Will You Become As AI Accelerates? | Frankly 96
35:30 – Mark Zuckerberg says AI can take the place of friends, AI company net worths
36:50 – Luddites
37:40 – Social media remains unregulated, even though it is as addictive as regulated things like gambling and hard drugs
38:19 – Recent legalization of sports gambling in many U.S. states
41:02 – Human Behavior: In-group/Out-group and Tribalism
41:15 – The Carbon Pulse
43:15 – Individual Agency and Advertising, Attention Economy
44:10 – Economic Superorganism
44:35 – Algorithmic Cancer
47:30 – Early GPT refuses to speak Croatian
48:30 – Update to ChatGPT caused sycophancy
49:00 – The Oil Drum
49:30 – Populism is on the rise
50:55 – Power of fossil fuels, Environmental externalities ignored, Species migrating poleward (Malin Pinsky + Joe Roman TGS Episodes)
52:50 – Jonathan Haidt TGS Episode, Screen time and child development
54:10 – Cognitive biases and climate change
55:23 – The Great Simplification Movie, Reality Blind, The Bottlenecks of the 21st Century
56:50 – Humanity’s ~19 terawatts of global energy use, More information
58:30 – AI cognition vs. Human cognition, AI Alignment
59:40 – Evolution of Altruism, AI and Sociopathy
1:00:00 – ASI Existential Risk, Advanced AI Extinction Risk & Risk Analysis
1:00:20 – Sam Altman and Dario Amodei openly sharing concerns that AI could lead to human extinction
1:02:10 – ‘Founder as Victim, Founder as God’: Peter Thiel, Elon Musk and the two bodies of the entrepreneur
1:02:40 – Sam Altman’s blog
1:03:00 – The AGI Race
1:09:00 – Citizen responsibilities in a Democracy
1:10:16 – Audrey Tang TGS Episode
1:11:50 – The relationship between LLM use and cognitive decline
1:15:10 – “Realpolitik NatSec (National Security)”
1:17:00 – Today’s humans try to achieve the states of our ancestors
1:17:40 – Precursors to ASI, ASI Predictions
1:18:37 – Connor’s newest project – Torchbearer Community Interest Form
1:23:38 – China’s big AI players
1:25:00 – Social media and Propaganda
1:34:00 – Humanism