Ep 180  |  Zak Stein

Zak Stein — AI’s Unseen Risks: How Artificial Intelligence Could Harm Future Generations


Description

While most industries are embracing artificial intelligence, citing profit and efficiency, the tech industry is pushing AI into education under the guise of ‘inevitability’. But the focus on its potential benefits for academia eclipses the pressing (and often invisible) risks that AI poses to children – including the decline of critical thinking, the inability to connect with other humans, and even addiction. With the use of AI becoming more widespread by the day, we must ask ourselves: can our education systems adequately protect children from the potential harms of AI?

In this episode, Nate is joined once again by philosopher of education Zak Stein to delve into the far-reaching implications of technology – especially artificial intelligence – on the future of education. Together, they examine the risks of over-reliance on AI for the development of young minds, as well as the broader impact on society and some of the biggest existential risks. Zak explores the ethical challenges of adopting AI into educational systems, emphasizing the enduring value of traditional skills and the need for a balanced approach to integrating technology with human values (not just the values of tech companies).

What steps are available to us today – from interface design to regulation of access – to limit the negative effects of Artificial Intelligence on children? How can parents and educators keep alive the pillars of independent thinking and foundational learning as AI threatens them? Ultimately, is there a world where Artificial Intelligence could become a tool to amplify human connection and socialization – or might it replace them entirely?

About Zak Stein

Dr. Zak Stein is a philosopher of education and a co-founder of the Center for World Philosophy and Religion, the Civilization Research Institute, the Consilience Project, and Lectica, Inc. He is the author of dozens of published papers and two books, including Education in a Time Between Worlds. Zak received his EdD from Harvard University.


Show Notes & Links to Learn More

Download transcript

00:00 – Zak Stein: Works, Previous TGS Episode, Civilization Research Institute, Consilience Project, Center for World Philosophy and Religion, Education in a Time Between Worlds

00:30 – Artificial Intelligence (AI), AI Risk

02:45 – Standardization vs. Personalization in Education

04:05 – Intergenerational Knowledge Transmission, Recent examples, Study on the topic

04:29 – Technology: Printing Press, Electricity, Radio, Television, Digital Technologies

05:10 – Algorithmic bias: Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed; The Impact of Curation Algorithms on Social Network Content Quality and Structure

05:25 – Technology can isolate students from teachers

05:45 – Covid-19 Pandemic

06:13 – Technology should scaffold in-person, human-to-human interaction

06:46 – Education in the Metacrisis

07:51 – Ivan Illich

08:57 – Forbes Study: 80% of Gen Zers Would Marry An AI

09:55 – Updated list of AI lawsuits

10:09 – AI Anthropomorphism

10:25 – Daniel Schmachtenberger (TGS Episodes)

11:19 – Using GPS can negatively impact spatial memory

11:40 – Cognitive atrophy and AI usage

12:00 – Kurt Fischer, Marc Gafni, Ken Wilber, Daniel Schmachtenberger (TGS Episodes)

13:07 – History of AI

13:20 – Frankly on Attention

14:19 – David J. Temple, Exit the Silicon Maze

14:43 – Conversation as Cosmos, Logos Mysticism

15:35 – Whiteheadian Frame, Holonic Theory, Evolution, Emergence

16:20 – Conversational Pressure: Normativity in Speech Exchanges

16:50 – Speech Act theory

17:19 – Carbon Pulse, Social Status 

18:05 – Sycophantic AI 

18:14 – Psychometric AI

18:50 – Domestic Robots, Augmented Reality

19:15 – Elon Musk predicts domestic robots will be in every household

19:40 – Extremely wealthy people already have access to domestic robots

20:00 – Wozniak’s “Coffee Test”, Turing test 

20:30 – Attachment Theory Applied to Machines, Language Acquisition Process

22:15 – Supernormal Stimuli

22:21 – Parents preferentially feed larger offspring in asynchronously hatched broods irrespective of scramble competition

22:45 – Ideal Parent Figure Protocol by Dr. Daniel P. Brown

24:13 – Jonathan Haidt, TGS Episode, Cognitive abilities of students have declined

24:40 – Red Pill vs. Blue Pill

25:05 – Tristan Harris, TGS Episode, The Social Dilemma, Center for Humane Technology

26:29 – AI: Cheating vs. Tool Use 

27:50 – Who’s Accountable When AI Fails?

28:15 – AI Personhood, Freedom of Speech, 14th Amendment and corporations

29:05 – Education, Democracy, and Propaganda: An Epistemological Crisis

31:00 – References Zak mentions in the episode from a Harvard AI Conference

31:10 – Cognitive Decline

32:50 – Digital Attachment, Emotional Addiction to AI, Internet and Technology Addicts Anonymous

33:27 – Pornography Addiction TEDx Talk

35:25 – Moloch, Superorganism: paper, video, Eye of Sauron 

36:00 – Tech-Driven Evolution of Schools: Family-Farm, One-room Schoolhouse, Factory Model School, Charter School, Techno-Optimist Classrooms

37:40 – Feudalism, Technofeudalism 

38:18 – Existential Risk, Nuclear weapon

38:50 – Technocracy, Walter Lippmann, B.F. Skinner 

39:43 – Facebook as Behavior Modification Empire, Facebook sold personal data 

40:45 – Overton Window, Self-Driving Cars

41:41 – 220 million college students globally

42:55 – White-Collar worker

44:15 – Blackouts, Brownouts, and Their Frequency Globally

44:28 – Chinese Finger Trap, One-Way Car Spikes, Bifurcation

45:20 – Epistemic Supply Chain, Book Burning

47:10 – Multipolar Trap

48:45 – Synthetic Intelligence

49:55 – AI Optimism

51:48 – Parenting and Technology

52:27 – Commodity Fetishism 

53:00 – AI lawsuit over teen’s suicide, Adolescents can detect deception

53:45 – Lack of Parental Controls for LLMs, Global Pornography Restrictions

54:51 – Psychotic Breaks via LLM Interaction (Rolling Stone article), Narcissism and AI

58:17 – OpenAI, OpenAI Lawsuit List

1:01:14 – Uniqueness of Human Childhood Duration

1:01:49 – Brain Development, Cortex Development Timeline

1:01:54 – Legitimate Teacherly Authority

1:02:45 – Youth learning starts physiologically and emotionally, Supporting students socially and emotionally 

1:05:20 – What Is Coding? How Do Computers Work?

1:06:11 – Collective Action Problem

1:06:31 – Supermajority 

1:07:05 – Central Nervous System

1:07:43 – OpenAI/MIT Study on AI Use and Emotional Harm

1:09:54 – Mistake Theory vs. Conflict Theory

1:10:41 – Transhumanism 

1:12:35 – AI Hallucination

1:14:04 – Matthew 5:5

1:14:51 – Replacing educational relief work with AI

1:15:15 – AI Colonialism

1:16:20 – Information Epistemology 

1:22:11 – Rationalization

1:23:09 – The Eye of Value

1:23:31 – Thomas Aquinas

1:24:45 – Narrow- vs. Wide-Boundary Wisdom

1:25:20 – Ecosystem Functions

1:28:41 – The Meaning Crisis, The Metacrisis

1:29:11 – Postmodernism

1:29:44 – Merlin bird app

1:30:19 – Malleable human nervous system, Becoming unaddicted

1:33:30 – Stockholm Syndrome

1:37:27 – Age limits on social media and porn in Australia

1:39:03 – Designated Drivers

1:39:30 – Waldorf Schools

1:40:26 – Sisyphean Task

1:41:15 – Speciation Event

1:43:33 – Exponential growth

1:44:30 – Kissinger’s question in Genesis

1:46:56 – What it Means to Be Human – “The Last Educators” chapter by Zak Stein

Ep 203 | Nate Soares – If Anyone Builds It, Everyone Dies

Technological development has always been a double-edged sword for humanity: the printing press increased the spread of misinformation, cars disrupted the fabric of our cities, and social media has made us increasingly polarized and lonely. But not since the invention of the nuclear bomb has technology presented such a severe existential risk to humanity – until now, with the possibility of Artificial Super Intelligence (ASI) on the horizon. Were ASI to come to fruition, it would be so powerful that it would outcompete human beings in everything – from scientific discovery to strategic warfare. What might happen to our species if we reach this point of singularity, and how can we steer away from the worst outcomes?

Watch now – Dec 3, 2025
Ep 202 | Samantha Sweetwater – Reimagining Ourselves at the End of the World

Over the past decade, the world has become increasingly chaotic and uncertain – and so, too, has our cultural vision for the future. While the events we face now may feel unprecedented, they are rooted in much deeper patterns, which humanity has been playing out for millennia. If we take the time to understand past trends, we can also employ practices and philosophies that might counteract them –  such as focusing on kinship, intimacy, and resilience – to help pave the way for a better future. How might we nurture the foundations of a different kind of society, even while the end of our current civilization plays out around us?

Watch now – Nov 24, 2025
Ep 201 | Rosa Vásquez Espinoza – Two Ways of Knowing

For centuries, modern science has relied on the scientific method to better understand the world around us. While helpful in many contexts, the scientific method is also objective, controlled, and reductionist – often breaking down complex systems into smaller parts for analysis and isolating subjects to test hypotheses. In contrast, indigenous wisdom is deeply contextual, rooted in lived experience, and emphasizes a reciprocal, integrated relationship with the rest of the natural world, viewing all parts of the system as interconnected. What becomes possible when we combine the strengths of each of these knowledge systems as we navigate humanity’s biggest challenges? 

Watch now – Nov 19, 2025


The Institute for the Study of Energy and Our Future (ISEOF) is a 501(c)(3) non-profit corporation, founded in 2008, that conducts research and educates the public about energy issues and their impact on society.
