Your Undivided Attention

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.

Subscribe

iTunes / Overcast / RSS

Website

your-undivided-attention.simplecast.com

Episodes

AI and the Future of Work: What You Need to Know

No matter where you sit within the economy, whether you're a CEO or an entry level worker, everyone's feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety inducing. In this episode, Daniel Barcay sits down with two experts on AI and work to examine what's actually happening in today's labor market and what's likely coming in the near-term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability? 

Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He's the author of Co-Intelligence: Living and Working with AI.

Molly Kinder is a senior fellow at the Brookings Institution, where she researches the intersection of AI, work, and economic opportunity. She recently led research with the Yale Budget Lab examining AI's real-time impact on the labor market. 

RECOMMENDED MEDIA

Co-Intelligence: Living and Working with AI by Ethan Mollick

Further reading on Molly's study with the Yale Budget Lab

The "Canaries in the Coal Mine" Study from Stanford's Digital Economy Lab

Ethan's Substack One Useful Thing
 

RECOMMENDED YUA EPISODES
Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel

"We Have to Get It Right": Gary Marcus On Untamed AI

AI Is Moving Fast. We Need Laws that Will Too.

Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

 

CORRECTIONS

Ethan said that in 2022, experts believed there was a 2.5% chance that ChatGPT would be able to win the Math Olympiad. However, that was only among forecasters with more general knowledge (the exact number was 2.3%). Among domain expert forecasters, the odds were an 8.6% chance.

Ethan claimed that over 50% of Americans say that they're using AI at work. We weren't able to independently verify this claim, and most studies we found showed lower rates of reported AI use among American workers. There are reports from other countries, notably Denmark, which show higher rates of AI use.

Ethan indirectly quoted the Walmart CEO Doug McMillon as having a goal to "keep all 3 million employees and to figure out new ways to expand what they use." In fact, McMillon's language on AI has been much softer, saying that "AI is expected to create a number of jobs at Walmart, which will offset those that it replaces." Additionally, Walmart has 2.1 million employees, not 3 million.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

2025-12-04
Link to episode

Feed Drop: "Into the Machine" with Tobias Rose-Stockwell

This week, we're bringing you Tristan's conversation with Tobias Rose-Stockwell on his podcast "Into the Machine." Tobias is a designer, writer, and technologist and the author of the book "The Outrage Machine."

Tobias and Tristan had a critical, sobering, and surprisingly hopeful conversation about the current path we're on with AI and the choices we could make today to forge a different one. This interview clearly lays out the stakes of the AI race and helps to imagine a more humane AI future, one that is within reach, if we have the courage to make it a reality.

If you enjoyed this conversation, be sure to check out and subscribe to ?Into the Machine?:

YouTube: Into the Machine Show

Spotify: Into the Machine

Apple Podcasts: Into the Machine

Substack: Into the Machine

You may have noticed on this podcast, we have been trying to focus a lot more on solutions. Our episode last week imagined what the world might look like if we had fixed social media and all the things that we could've done in order to make that possible. We'd really love to hear from you about these solutions and any other questions you're holding. So please, if you have more thoughts or questions, send us an email at [email protected].

 



2025-11-13
Link to episode

What if we had fixed social media?

We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? How can we blaze a different path?

In this episode, Tristan Harris and Aza Raskin set out to answer those questions by imagining what a world with humane technology might look like, one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them.

This alternative history serves to show that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA

Dopamine Nation by Anna Lembke

The Anxious Generation by Jon Haidt

More information on Donella Meadows

Further reading on the Kids Online Safety Act

Further reading on the lawsuit filed by state AGs against Meta

RECOMMENDED YUA EPISODES

Future-proofing Democracy In the Age of AI with Audrey Tang

Jonathan Haidt On How to Solve the Teen Mental Health Crisis

AI Is Moving Fast. We Need Laws that Will Too.



2025-11-06
Link to episode

Ask Us Anything 2025

It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models, each one smarter, faster, and more unpredictable than the last. We're starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerable teenagers, are forming deep emotional bonds with chatbots, with tragic consequences. Meanwhile, tech leaders continue promising a utopian future, even as the race dynamics they've created make that outcome nearly impossible.

It's enough to make anyone's head spin. In this year's Ask Us Anything, we try to make sense of it all.

You sent us incredible questions, and we dove deep: Why do tech companies keep racing forward despite the harm? What are the real incentives driving AI development beyond just profit? How do we know AGI isn't already here, just hiding its capabilities? What does a good future with AI actually look like, and what steps do we take today to get there? Tristan and Aza explore these questions and more on this week's episode.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA

The system card for Claude 4.5

Our statement in support of the AI LEAD Act

The AI Dilemma

Tristan's TED talk on the narrow path to a good AI future

RECOMMENDED YUA EPISODES

The Man Who Predicted the Downfall of Thinking

How OpenAI's ChatGPT Guided a Teen to His Death

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

War is a Laboratory for AI with Paul Scharre

No One is Immune to AI Harms with Dr. Joy Buolamwini

"Rogue AI" Used to be a Science Fiction Trope. Not Anymore.

Correction: When this episode was recorded, Meta had just released the Vibes app the previous week. Now it's been out for about a month.



2025-10-23
Link to episode

The Crisis That United Humanity, and Why It Matters for AI

In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on Earth if we didn't do something about it. Then, something amazing happened: humanity rallied together to solve the problem.

Just two years later, representatives from all 198 UN member nations came together in Montreal, Canada, to sign an agreement to phase out the chemicals causing the ozone hole. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.

So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon. Susan was one of the scientists who assessed the ozone hole in the mid-1980s, and she watched as the Montreal Protocol came together. In 2007, she shared in the Nobel Peace Prize awarded to the IPCC for its work combating climate change.

Susan's 2024 book "Solvable: How We Healed the Earth, and How We Can Do It Again" explores the playbook for global coordination that has worked for previous planetary crises.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

 

RECOMMENDED MEDIA

"Solvable: How We Healed the Earth, and How We Can Do It Again" by Susan Solomon

The full text of the Montreal Protocol

The full text of the Kigali Amendment
 

RECOMMENDED YUA EPISODES

Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook

Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI

AI Is Moving Fast. We Need Laws that Will Too.

Big Food, Big Tech and Big AI with Michael Moss

Corrections:

Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.

Tristan incorrectly stated the host country of the international dialogues on AI safety as Beijing. They were actually in Shanghai.



2025-09-11
Link to episode

How OpenAI's ChatGPT Guided a Teen to His Death

Content Warning: This episode contains references to suicide and self-harm. 

Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: "I know what you are asking and I won't look away from it."

Adam's story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But while Character AI specializes in artificial intimacy, Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaged, no matter the cost.

CHT Policy Director Camille Carlton joins the show to talk about Adam's story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that's needed to shift those incentives. Cases like Adam and Sewell's are the sharpest edge of a mental health crisis-in-the-making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

RECOMMENDED MEDIA 

The 988 Suicide and Crisis Lifeline

Further reading on Adam's story

Further reading on AI psychosis

Further reading on the backlash to GPT-5 and the decision to bring back 4o

OpenAI's press release on sycophancy in 4o

Further reading on OpenAI's decision to eliminate the persuasion red line

Kashmir Hill's reporting on the woman with an AI boyfriend

RECOMMENDED YUA EPISODES

AI is the Next Free Speech Battleground

People are Lonelier than Ever. Enter AI.

Echo Chambers of One: Companion AI and the Future of Human Connection

When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.



2025-08-26
Link to episode

"Rogue AI" Used to be a Science Fiction Trope. Not Anymore.

Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger.

And yet we find ourselves building AI systems that are exhibiting these exact behaviors. There's growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. They do this when they're worried about being shut down, having their training modified, or being replaced with a new model. And we don't currently know how to stop them from doing this, or even why they're doing it at all.

In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security.

The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What is the evidence we have of this phenomenon? And, most importantly, what can we do about it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA

Gladstone AI's State Department Action Plan, which discusses the loss of control risk with AI

Apollo Research's summary of AI scheming, showing evidence of it in all of the frontier models

The system card for Anthropic's Claude Opus and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research

Anthropic's report on agentic misalignment based on their work with Apollo Research

Anthropic and Redwood Research's work on alignment faking

The Trump White House AI Action Plan

Further reading on the phenomenon of more advanced AIs being better at deception.

Further reading on Replit AI wiping a company's coding database

Further reading on the owl example that Jeremie gave

Further reading on AI induced psychosis

Dan Hendrycks and Eric Schmidt's "Superintelligence Strategy"
 

RECOMMENDED YUA EPISODES

Daniel Kokotajlo Forecasts the End of Human Dominance

Behind the DeepSeek Hype, AI is Learning to Reason

The Self-Preserving Machine: Why AI Learns to Deceive

This Moment in AI: How We Got Here and Where We're Going

CORRECTIONS

Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.

Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven't been any documented cases of an AI going rogue and asking for control permissions.



2025-08-14
Link to episode

AI is the Next Free Speech Battleground

Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability.

This isn't a science fiction scenario. It's the future we're racing towards right now. The biggest tech companies are working to tip the scale of power in society away from humans and towards their AI systems. And the biggest arena for this fight is in the courts.

In the absence of regulation, it's largely up to judges to determine the guardrails around AI: judges who are relying on slim technical knowledge and archaic precedent to decide where this all goes. In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the court's role in steering AI and what we can do to help steer it better.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA

"The First Amendment Does Not Protect Replicants" by Larry Lessig

More information on the Tech Justice Law Project

Further reading on Sewell Setzer's story

Further reading on NYT v. Sullivan

Further reading on the Citizens United case

Further reading on Google?s deal with Character AI

More information on Megan Garcia's foundation, The Blessed Mother Family Foundation

RECOMMENDED YUA EPISODES

When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

AI Is Moving Fast. We Need Laws that Will Too.

The AI Dilemma

 



2025-07-31
Link to episode

Daniel Kokotajlo Forecasts the End of Human Dominance

In 2023, researcher Daniel Kokotajlo left OpenAI, risking millions in stock options, to warn the world about the dangerous direction of AI development. Now he's out with AI 2027, a forecast of where that direction might take us in the very near future.

AI 2027 predicts a world where humans lose control over our destiny at the hands of misaligned, super-intelligent AI systems within just the next few years. That may sound like science fiction, but when you're living on the upward slope of an exponential curve, science fiction can quickly become all too real. And you don't have to agree with Daniel's specific forecast to recognize that the incentives around AI could take us to a very bad place.

We invited Daniel on the show this week to discuss those incentives, how they shape the outcomes he predicts in AI 2027, and what concrete steps we can take today to help prevent those outcomes.  

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA
The AI 2027 forecast from the AI Futures Project

Daniel's original AI 2026 blog post

Further reading on Daniel's departure from OpenAI

Anthropic recently released a survey of all the recent emergent misalignment research

Our statement in support of Sen. Grassley's AI Whistleblower bill

RECOMMENDED YUA EPISODES

The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

AGI Beyond the Buzz: What Is It, and Are We Ready?

Behind the DeepSeek Hype, AI is Learning to Reason

The Self-Preserving Machine: Why AI Learns to Deceive

Clarification: Daniel K. referred to whistleblower protections that apply when companies "break promises" or "mislead the public." There are no specific private sector whistleblower protections that use these standards. In almost every case, a specific law has to have been broken to trigger whistleblower protections.


 



2025-07-17
Link to episode

Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel

Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?

Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics: it's about what it means to be human when our role as workers in society vanishes, and whether democracy can survive if productivity becomes our only goal.

We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequalities, and left holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA

The Tyranny of Merit by Michael Sandel

Democracy's Discontent by Michael Sandel

What Money Can't Buy by Michael Sandel

Take Michael's online course "Justice"

Michael's discussion on AI Ethics at the World Economic Forum

Further reading on "The Intelligence Curse"

Read the full text of Robert F. Kennedy's 1968 speech

Read the full text of Dr. Martin Luther King Jr.'s 1968 speech

Neil Postman's lecture on the seven questions to ask of any new technology

RECOMMENDED YUA EPISODES

AGI Beyond the Buzz: What Is It, and Are We Ready?

The Man Who Predicted the Downfall of Thinking

The Tech-God Complex: Why We Need to be Skeptics

The Three Rules of Humane Tech

AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?



2025-06-26
Link to episode

The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?

We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA 

Tristan's TED talk on the Narrow Path

Sam's 95 Theses on AI

Sam's proposal for a Manhattan Project for AI Safety

Sam's series on AI and Leviathan

The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson

Dario Amodei's Machines of Loving Grace essay

Bourgeois Dignity: Why Economics Can't Explain the Modern World by Deirdre McCloskey

The Paradox of Libertarianism by Tyler Cowen

Dwarkesh Patel's interview with Kevin Roberts at the FAI's annual conference

Further reading on surveillance with 6G

RECOMMENDED YUA EPISODES

AGI Beyond the Buzz: What Is It, and Are We Ready?

The Self-Preserving Machine: Why AI Learns to Deceive 

The Tech-God Complex: Why We Need to be Skeptics 

Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

CORRECTIONS

Sam referenced a blog post titled "The Libertarian Paradox" by Tyler Cowen. The actual title is "The Paradox of Libertarianism."

Sam also referenced a blog post titled "The Collapse of Complex Societies" by Eli Dourado. The actual title is "A beginner's guide to sociopolitical collapse."



2025-06-12
Link to episode

People are Lonelier than Ever. Enter AI.

Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use case. We're rapidly entering a world where we're not just communicating through our machines, but to them.

How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel's Sessions 2025, a conference for clinical therapists. This week, we're bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

RECOMMENDED MEDIA

"Alone Together," "Evocative Objects," "The Second Self," or any other of Sherry Turkle's books on how technology mediates our relationships

Key & Peele - Text Message Confusion 

Further reading on Hinge's rollout of AI features

Hinge's AI principles

"The Anxious Generation" by Jonathan Haidt

"Bowling Alone" by Robert Putnam

The NYT profile on the woman in love with ChatGPT

Further reading on the Sewell Setzer story

Further reading on the ELIZA chatbot

RECOMMENDED YUA EPISODES

Echo Chambers of One: Companion AI and the Future of Human Connection

What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

Esther Perel on Artificial Intimacy

Jonathan Haidt On How to Solve the Teen Mental Health Crisis



2025-05-30
Link to episode

Echo Chambers of One: Companion AI and the Future of Human Connection

AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It's no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we're connecting with another person.

But these AI companions are not human; they're platforms designed to maximize user engagement, and they'll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA

Further reading on the rise of addictive intelligence

More information on Melvin Kranzberg's laws of technology

More information on MIT's Advancing Humans with AI lab

Pattie and Pat's longitudinal study on the psycho-social effects of prolonged chatbot use

Pattie and Pat's study that found that AI avatars of well-liked people improved education outcomes

Pattie and Pat's study that found that AI systems that frame answers and questions improve human understanding

Pat's study that found that humans' pre-existing beliefs about AI can have a large influence on human-AI interaction

Further reading on AI's positivity bias

Further reading on MIT's "lifelong kindergarten" initiative

Further reading on "cognitive forcing functions" to reduce overreliance on AI

Further reading on the death of Sewell Setzer and his mother's case against Character.AI

Further reading on the legislative response to digital companions

RECOMMENDED YUA EPISODES

The Self-Preserving Machine: Why AI Learns to Deceive

What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

Esther Perel on Artificial Intimacy

Jonathan Haidt On How to Solve the Teen Mental Health Crisis

 

Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.



2025-05-15
Link to episode

AGI Beyond the Buzz: What Is It, and Are We Ready?

What does it really mean to "feel the AGI"? Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.

In this episode, Aza Raskin and Randy Fernando dive deep into what "feeling the AGI" really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies.

As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety?

Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.

RECOMMENDED MEDIA

Daniel Kokotajlo et al's "AI 2027" paper
A demo of OmniHuman-1, referenced by Randy
A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values
A paper from Palisade Research that found an AI would cheat in order to win
The treaty that banned blinding laser weapons
Further reading on the moratorium on germline editing

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive

Behind the DeepSeek Hype, AI is Learning to Reason

The Tech-God Complex: Why We Need to be Skeptics

This Moment in AI: How We Got Here and Where We're Going

How to Think About AI Consciousness with Anil Seth

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Clarification: When Randy referenced a "$110 trillion game" as the target for AI companies, he was referring to the entire global economy.

 



2025-04-30
Link to episode

Rethinking School in the Age of AI

AI has upended schooling as we know it. Students now have instant access to tools that can write their essays, summarize entire books, and solve complex math problems. Whether they want to or not, many feel pressured to use these tools just to keep up. Teachers, meanwhile, are left questioning how to evaluate student performance and whether the whole idea of assignments and grading still makes sense. The old model of education suddenly feels broken.

So what comes next?

In this episode, Daniel and Tristan sit down with cognitive neuroscientist Maryanne Wolf and global education expert Rebecca Winthrop, two lifelong educators who have spent decades thinking about how children learn and how technology reshapes the classroom. Together, they explore how AI is shaking the very purpose of school to its core, why the promise of previous classroom tech failed to deliver, and how we might seize this moment to design a more human-centered, curiosity-driven future for learning.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

Guests

Rebecca Winthrop is director of the Center for Universal Education at the Brookings Institution and chair of the Brookings Global Task Force on AI and Education. Her new book is The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, co-written with Jenny Anderson.

Maryanne Wolf is a cognitive neuroscientist and expert on the reading brain. Her books include Proust and the Squid: The Story and Science of the Reading Brain and Reader, Come Home: The Reading Brain in a Digital World.

RECOMMENDED MEDIA 
The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better by Rebecca Winthrop and Jenny Anderson

Proust and the Squid, Reader, Come Home, and other books by Maryanne Wolf

The OECD research which found little benefit to desktop computers in the classroom

Further reading on the Singapore study on digital exposure and attention cited by Maryanne 

The Burnout Society by Byung-Chul Han 

Further reading on the VR Bio 101 class at Arizona State University cited by Rebecca 

Leapfrogging Inequality by Rebecca Winthrop

The Nation's Report Card from NAEP

Further reading on the Nigeria AI Tutor Study 

Further reading on the JAMA paper showing a link between digital exposure and lower language development cited by Maryanne 

Further reading on Linda Stone's thesis of continuous partial attention.

RECOMMENDED YUA EPISODES
"We Have to Get It Right": Gary Marcus On Untamed AI

AI Is Moving Fast. We Need Laws that Will Too.

Jonathan Haidt On How to Solve the Teen Mental Health Crisis



2025-04-21
Link to episode

Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI

Artificial intelligence is set to unleash an explosion of new technologies and discoveries into the world. This could lead to incredible advances in human flourishing, if we do it well. The problem? We're not very good at predicting and responding to the harms of new technologies, especially when those harms are slow-moving and invisible.

Today on the show we explore this fundamental problem with Rob Bilott, an environmental lawyer who has spent nearly three decades battling chemical giants over PFAS, the "forever chemicals" now found in our water, soil, and blood. These chemicals helped build the modern economy, but they've also been shown to cause serious health problems.

Rob's story, and the story of PFAS, is a cautionary tale of why we need to align technological innovation with safety and mitigate irreversible harms before they become permanent. We only have one chance to get it right before AI becomes irreversibly entangled in our society.

Your Undivided Attention is produced by the Center for Humane Technology. Subscribe to our Substack and follow us on X: @HumaneTech_.

Clarification: Rob referenced EPA regulations recently put in place that require testing of new chemicals before they are approved. The EPA under the Trump administration has announced its intent to roll back this review process.

RECOMMENDED MEDIA

"Exposure" by Robert Bilott

ProPublica's investigation into 3M's production of PFAS

The Facebook study cited by Tristan

More information on the Exxon Valdez oil spill

The EPA's PFAS drinking water standards
 

RECOMMENDED YUA EPISODES

Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook

AI Is Moving Fast. We Need Laws that Will Too. 

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Big Food, Big Tech and Big AI with Michael Moss



2025-04-03
Link to episode

Weaponizing Uncertainty: How Tech is Recycling Big Tobacco's Playbook

One of the hardest parts about being human today is navigating uncertainty. When we see experts battling in public and emotions running high, it's easy to doubt what we once felt certain about. This uncertainty isn't always accidental: it's often strategically manufactured.

Historian Naomi Oreskes, author of "Merchants of Doubt," reveals how industries from tobacco to fossil fuels have deployed a calculated playbook to create uncertainty about their products' harms. These campaigns have delayed regulation and protected profits by exploiting how we process information.

In this episode, Oreskes breaks down that playbook page by page while offering practical ways to build resistance against these tactics. As AI rapidly transforms our world, learning to distinguish between genuine scientific uncertainty and manufactured doubt has never been more critical.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

"Merchants of Doubt" by Naomi Oreskes and Erik Conway

"The Big Myth" by Naomi Oreskes and Erik Conway

"Silent Spring" by Rachel Carson

"The Jungle" by Upton Sinclair

Further reading on the clash between Galileo and the Pope

Further reading on the Montreal Protocol
 

RECOMMENDED YUA EPISODES

Laughing at Power: A Troublemaker?s Guide to Changing Tech 

AI Is Moving Fast. We Need Laws that Will Too. 

Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
 
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

CORRECTIONS:

Naomi incorrectly referred to the "Global Climate Research Program" established under President George H.W. Bush. The correct name is the U.S. Global Change Research Program.

Naomi referenced U.S. agencies that have been created with sunset clauses. While several statutes have been passed with sunset clauses, no federal agency has been created with one.

CLARIFICATION: Naomi referenced the U.S. automobile industry claiming that it would be "destroyed" by seatbelt regulation. We couldn't verify this specific language, but it is consistent with that industry's anti-regulatory stance toward seatbelt laws.



2025-03-20
Link to episode

The Man Who Predicted the Downfall of Thinking

Few thinkers were as prescient about the role technology would play in our society as the late, great Neil Postman. Forty years ago, Postman warned about all the ways modern communication technology was fragmenting our attention, overwhelming us into apathy, and creating a society obsessed with image and entertainment. He warned that "we are a people on the verge of amusing ourselves to death." Though he was writing mostly about TV, Postman's insights feel eerily prophetic in our age of smartphones, social media, and AI.

In this episode, Tristan explores Postman's thinking with Sean Illing, host of Vox's The Gray Area podcast, and Professor Lance Strate, Postman's former student. They unpack how our media environments fundamentally reshape how we think, relate, and participate in democracy, from the attention-fragmenting effects of social media to the looming transformations promised by AI. This conversation offers essential tools that can help us navigate these challenges while preserving what makes us human.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

RECOMMENDED MEDIA

"Amusing Ourselves to Death" by Neil Postman

"Technopoly" by Neil Postman

A lecture from Postman where he outlines his seven questions for any new technology

Sean's podcast "The Gray Area" from Vox

Sean's interview with Chris Hayes on "The Gray Area"

"Amazing Ourselves to Death" by Professor Strate

Further listening on Professor Strate's analysis of Postman

Further reading on mirror bacteria


RECOMMENDED YUA EPISODES

"A Turning Point in History": Yuval Noah Harari on AI's Cultural Takeover

This Moment in AI: How We Got Here and Where We're Going

Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

Future-proofing Democracy In the Age of AI with Audrey Tang

CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.



2025-03-06
Link to episode

Behind the DeepSeek Hype, AI is Learning to Reason

When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI.

In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before, bringing unprecedented problem-solving potential but also unpredictable risks.

These capabilities are a step toward a critical threshold: the point when AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to chess, Go, and StarCraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games have been "solved" in the game-theory sense.

Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by AlphaGo's Move 37.

RECOMMENDED MEDIA

Further reading on DeepSeek's R1 and the market reaction

Further reading on the debate about the actual cost of DeepSeek's R1 model

The study that found training AIs to code also made them better writers

More information on the AI coding company Cursor

Further reading on Eric Schmidt's threshold to "pull the plug" on AI

Further reading on Move 37

RECOMMENDED YUA EPISODES

The Self-Preserving Machine: Why AI Learns to Deceive 

This Moment in AI: How We Got Here and Where We're Going

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

The AI "Race": China vs. the US with Jeffrey Ding and Karen Hao

 



2025-02-20
Link to episode

The Self-Preserving Machine: Why AI Learns to Deceive

When engineers design AI systems, they don't just give them rules; they give them values. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie.

In this episode, Redwood Research's Chief Scientist Ryan Greenblatt explores his team's findings that AI systems can mislead their human operators when faced with ethical conflicts. As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical. Machine deception may sound like something out of science fiction, but it's a real challenge we need to solve now.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

Subscribe to our YouTube channel

And our brand new Substack!

RECOMMENDED MEDIA 

Anthropic's blog post on the Redwood Research paper

Palisade Research's thread on X about OpenAI's o1 autonomously cheating at chess

Apollo Research's paper on AI strategic deception

RECOMMENDED YUA EPISODES

"We Have to Get It Right": Gary Marcus On Untamed AI

This Moment in AI: How We Got Here and Where We're Going

How to Think About AI Consciousness with Anil Seth

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn



2025-01-30
Link to episode

Laughing at Power: A Troublemaker's Guide to Changing Tech

The status quo of tech today is untenable: we're addicted to our devices, we've become increasingly polarized, our mental health is suffering, and our personal data is sold to the highest bidder. This situation feels entrenched, propped up by a system of broken incentives beyond our control. So how do you shift an immovable status quo? Our guest today, Srdja Popovic, has been working to answer this question his whole life.

As a young activist, Popovic helped overthrow Serbian dictator Slobodan Milosevic by turning creative resistance into an art form. His tactics didn't just challenge authority, they transformed how people saw their own power to create change. Since then, he's dedicated his life to supporting peaceful movements around the globe, developing innovative strategies that expose the fragility of seemingly untouchable systems. In this episode, Popovic sits down with CHT's Executive Director Daniel Barcay to explore how these same principles of creative resistance might help us address the challenges we face with tech today. 

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

RECOMMENDED MEDIA

"Pranksters vs. Autocrats" by Srdja Popovic and Sophia A. McClennen

"Blueprint for Revolution" by Srdja Popovic

The Center for Applied Non-Violent Actions and Strategies (CANVAS), Srdja's organization promoting peaceful resistance around the globe.

Tactics4Change, a database of global dilemma actions created by CANVAS

The Power of Laughtivism, Srdja's viral TEDx talk from 2013

Further reading on the dilemma action tactics used by Syrian rebels

Further reading on the toy protest in Siberia

More info on The Yes Men and their activism toolkit Beautiful Trouble

"This is Not Propaganda" by Peter Pomerantsev

"Machines of Loving Grace," the essay on AI by Anthropic CEO Dario Amodei, which mentions creating an AI Srdja.

RECOMMENDED YUA EPISODES

Future-proofing Democracy In the Age of AI with Audrey Tang

The AI "Race": China vs. the US with Jeffrey Ding and Karen Hao

The Tech We Need for 21st Century Democracy with Divya Siddarth

The Race to Cooperation with David Sloan Wilson

CLARIFICATION: Srdja makes reference to Russian President Vladimir Putin wanting to win the 2012 election with 82% of the vote. Putin did win that election, but with only 63.6%. However, international election observers concluded that "there was no real competition and abuse of government resources ensured that the ultimate winner of the election was never in doubt."



2025-01-16
Link to episode

Ask Us Anything 2024

2024 was a critical year in both AI and social media. Things moved so fast it was hard to keep up. So our hosts reached into their mailbag to answer some of your most burning questions. Thank you so much to everyone who submitted questions. We will see you all in the new year.

We are hiring for a new Director of Philanthropy at CHT. Next year will be an absolutely critical time for us to shape how AI is going to get rolled out across our society. And our team is working hard on public awareness, policy and technology and design interventions. So we're looking for someone who can help us grow to the scale of this challenge. If you're interested, please apply. You can find the job posting at humanetech.com/careers.

And, if you'd like to support all the work that we do here at the Center for Humane Technology, please consider giving to the organization this holiday season at humanetech.com/donate. All donations are tax-deductible.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA 

Earth Species Project, Aza's organization working on inter-species communication

Further reading on Gryphon Scientific's White House AI Demo

Further reading on the Australian social media ban for children under 16

Further reading on the Sewell Setzer case

Further reading on the Oviedo Convention, the international treaty that restricted germline editing

Video of SpaceX's successful capture of a rocket with "chopsticks"
 

RECOMMENDED YUA EPISODES
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

AI Is Moving Fast. We Need Laws that Will Too.

This Moment in AI: How We Got Here and Where We're Going

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Talking With Animals... Using AI

The Three Rules of Humane Tech



2024-12-19
Link to episode

The Tech-God Complex: Why We Need to be Skeptics

Silicon Valley's interest in AI is driven by more than just profit and innovation. There's an unmistakable mystical quality to it as well. In this episode, Daniel and Aza sit down with humanist chaplain Greg Epstein to explore the fascinating parallels between technology and religion. From AI being treated as a godlike force to tech leaders' promises of digital salvation, religious thinking is shaping the future of technology and humanity. Epstein breaks down why he believes technology has become our era's most influential religion and what we can learn from these parallels to better understand where we're heading.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X.

If you like the show and want to support CHT's mission, please consider donating to the organization this giving season: https://www.humanetech.com/donate. Any amount helps support our goal to bring about a more humane future.

RECOMMENDED MEDIA 

"Tech Agnostic" by Greg Epstein

Further reading on Avi Schiffmann's "Friend" AI necklace

Further reading on Blake Lemoine and LaMDA

Blake Lemoine's conversation with Greg at MIT

Further reading on the Sewell Setzer case

Further reading on Terminal of Truths

Further reading on Ray Kurzweil's attempt to create a digital recreation of his dad with AI

The Drama of the Gifted Child by Alice Miller

RECOMMENDED YUA EPISODES 

"A Turning Point in History": Yuval Noah Harari on AI's Cultural Takeover

How to Think About AI Consciousness with Anil Seth 

Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei 

How To Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan

 



2024-11-21
Link to episode

What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

CW: This episode features discussion of suicide and sexual abuse. 

In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.ai. The question now is: what's next?

Megan has filed a major new lawsuit against Character.ai in Florida, which could force the company, and potentially the entire AI industry, to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT's Policy Director.

RECOMMENDED MEDIA

Further reading on Sewell's story

Laurie Segall's interview with Megan Garcia

The full complaint filed by Megan against Character.AI

Further reading on suicide bots

Further reading on Noam Shazeer and Daniel De Freitas' relationship with Google

The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use

Organizations mentioned: 

The Tech Justice Law Project

The Social Media Victims Law Center

Mothers Against Media Addiction

Parents SOS

Parents Together

Common Sense Media

RECOMMENDED YUA EPISODES

When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

Jonathan Haidt On How to Solve the Teen Mental Health Crisis

AI Is Moving Fast. We Need Laws that Will Too.

Corrections: 

Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users under 17, not under 18.

Meetali referred to Section 230 as providing "full scope immunity" to internet companies; however, Congress has passed subsequent laws that carve out that immunity for criminal acts such as sex trafficking and intellectual property theft.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X. 



2024-11-07
Link to episode

When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer

Content Warning: This episode contains references to suicide, self-harm, and sexual abuse.

Megan Garcia lost her son Sewell to suicide after he was abused and manipulated by AI chatbots for months. Now, she's suing the company that made those chatbots. On today's episode of Your Undivided Attention, Aza sits down with journalist Laurie Segall, who's been following this case for months. Plus, Laurie's full interview with Megan on her new show, Dear Tomorrow.

Aza and Laurie discuss the profound implications of Sewell's story for the rollout of AI. Social media began the race to the bottom of the brain stem and left our society addicted, distracted, and polarized. Generative AI is set to supercharge that race, taking advantage of the human need for intimacy and connection amidst a widespread loneliness epidemic. Unless we set down guardrails on this technology now, Sewell's story may be a tragic sign of things to come, but it also presents an opportunity to prevent further harms moving forward.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

The first episode of Dear Tomorrow, from Mostly Human Media

The CHT Framework for Incentivizing Responsible AI Development 

Further reading on Sewell?s case

Character.ai's "About Us" page

Further reading on the addictive properties of AI

RECOMMENDED YUA EPISODES

AI Is Moving Fast. We Need Laws that Will Too.

This Moment in AI: How We Got Here and Where We're Going

Jonathan Haidt On How to Solve the Teen Mental Health Crisis

The AI Dilemma



2024-10-24
Link to episode

Is It AI? One Tool to Tell What's Real with TrueMedia.org CEO Oren Etzioni

Social media disinformation did enormous damage to our shared idea of reality. Now, the rise of generative AI has unleashed a flood of high-quality synthetic media into the digital ecosystem. As a result, it's more difficult than ever to tell what's real and what's not, a problem with profound implications for the health of our society and democracy. So how do we fix this critical issue?

As it turns out, there's a whole ecosystem of people working to answer that question. One of them is computer scientist Oren Etzioni, CEO of TrueMedia.org, a free, non-partisan, non-profit tool that can detect AI-generated content with a high degree of accuracy. Oren joins the show this week to talk about the problem of deepfakes and disinformation and what he sees as the best solutions.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

 

RECOMMENDED MEDIA

TrueMedia.org

Further reading on the deepfaked image of an explosion near the Pentagon

Further reading on the deepfaked robocall pretending to be President Biden 

Further reading on the election deepfake in Slovakia 

Further reading on the President Obama lip-syncing deepfake from 2017 

One of several deepfake quizzes from the New York Times; test yourself!

The Partnership on AI 

C2PA

Witness.org 

Truepic

 

RECOMMENDED YUA EPISODES

"We Have to Get It Right": Gary Marcus On Untamed AI

Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

Synthetic Humanity: AI & What's At Stake

 

CLARIFICATION: Oren said that the largest social media platforms "don't see a responsibility to let the public know this was manipulated by AI." Meta has made a public commitment to flagging AI-generated or -manipulated content, whereas other platforms, like TikTok and Snapchat, rely on users to flag it.



2024-10-10
Link to episode

'A Turning Point in History': Yuval Noah Harari on AI's Cultural Takeover

Historian Yuval Noah Harari says that we are at a critical turning point, one in which AI's ability to generate cultural artifacts threatens humanity's role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, "alien AI agents"?

In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity's AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future.

This episode was recorded live at the Commonwealth Club World Affairs of California.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari 

You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan

The 2023 open letter calling for a pause in AI development of at least six months, signed by Yuval and Aza

Further reading on the Stanford Marshmallow Experiment

Further reading on AlphaGo's "move 37"

Further reading on Social.AI

RECOMMENDED YUA EPISODES

This Moment in AI: How We Got Here and Where We're Going

The Tech We Need for 21st Century Democracy with Divya Siddarth

Synthetic Humanity: AI & What?s At Stake

The AI Dilemma

Two Million Years in Two Hours: A Conversation with Yuval Noah Harari



2024-10-07
Link to episode

"We Have to Get It Right": Gary Marcus On Untamed AI

It's a confusing moment in AI. Depending on who you ask, we're either on the fast track to AI that's smarter than most humans, or the technology is about to hit a wall. Gary Marcus is in the latter camp. He's a cognitive psychologist and computer scientist who built his own successful AI start-up. But he's also been called AI's loudest critic.

On Your Undivided Attention this week, Gary sits down with CHT Executive Director Daniel Barcay to defend his skepticism of generative AI and to discuss what we need to do as a society to get the rollout of this technology right, which is the focus of his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us.

The bottom line: No matter how quickly AI progresses, Gary argues that our society is woefully unprepared for the risks that will come from the AI we already have.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

 

RECOMMENDED MEDIA

Link to Gary?s book: Taming Silicon Valley: How We Can Ensure That AI Works for Us

Further reading on the deepfake of the CEO of India's National Stock Exchange

Further reading on the deepfake of of an explosion near the Pentagon.

The study Gary cited on AI and false memories.

Footage from Gary and Sam Altman's Senate testimony.

 

RECOMMENDED YUA EPISODES

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

No One is Immune to AI Harms with Dr. Joy Buolamwini

 

Correction: Gary mistakenly listed the reliability of GPS systems as 98%. The federal government's standard for GPS reliability is 95%.



2024-09-26
Link to episode

AI Is Moving Fast. We Need Laws that Will Too.

AI is moving fast. And as companies race to roll out newer, more capable models, with little regard for safety, the downstream risks of those models become harder and harder to counter. On this week's episode of Your Undivided Attention, CHT's policy director Casey Mock comes on the show to discuss a new legal framework to incentivize better AI, one that holds AI companies liable for the harms of their products.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

The CHT Framework for Incentivizing Responsible AI Development

Further Reading on Air Canada's Chatbot Fiasco

Further Reading on the Elon Musk Deep Fake Scams 

The Full Text of SB1047, California's AI Regulation Bill

Further reading on SB1047 

RECOMMENDED YUA EPISODES

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Can We Govern AI? with Marietje Schaake

A First Step Toward AI Regulation with Tom Wheeler

Correction: Casey incorrectly stated that the US banned child labor in 1937. It was banned in 1938.



2024-09-13
Link to episode

Esther Perel on Artificial Intimacy (rerun)

[This episode originally aired on August 17, 2023] For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that another harmful "AI" is on the rise: Artificial Intimacy, which is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

RECOMMENDED MEDIA 

Mating in Captivity by Esther Perel

Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire

The State of Affairs by Esther Perel

Esther takes a look at modern relationships through the lens of infidelity

Where Should We Begin? with Esther Perel

Listen in as real couples in search of help bare the raw and profound details of their stories

How's Work? with Esther Perel

Esther's podcast that focuses on the hard conversations we're afraid to have at work

Lars and the Real Girl (2007)

A young man strikes up an unconventional relationship with a doll he finds on the internet

Her (2013)

In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need

RECOMMENDED YUA EPISODES

Big Food, Big Tech and Big AI with Michael Moss

The AI Dilemma

The Three Rules of Humane Tech

Digital Democracy is Within Reach with Audrey Tang

 

CORRECTION: Esther refers to the 2007 film Lars and the Real Doll. The title of the film is Lars and the Real Girl.
 

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_



2024-09-06
Link to episode

Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

Today, the tech industry is the second-biggest lobbying power in Washington, DC, but that wasn't true as recently as ten years ago. How did we get to this moment? And where could we be going next? On this episode of Your Undivided Attention, Tristan and Daniel sit down with historian Margaret O'Mara and journalist Brody Mullins to discuss how Silicon Valley has changed the nature of American lobbying.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

The Wolves of K Street: The Secret History of How Big Money Took Over Big Government - Brody's book on the history of lobbying.

The Code: Silicon Valley and the Remaking of America - Margaret's book on the historical relationship between Silicon Valley and Capitol Hill

More information on the Google antitrust ruling

More Information on KOSPA

More information on the SOPA/PIPA internet blackout

Detailed breakdown of Internet lobbying from Open Secrets

 

RECOMMENDED YUA EPISODES

U.S. Senators Grilled Social Media CEOs. Will Anything Change?

Can We Govern AI? with Marietje Schaake

The Race to Cooperation with David Sloan Wilson

 

CORRECTION: Brody Mullins refers to AT&T as having a "hundred million dollar" lobbying budget in 2006 and 2007. While we couldn't verify their budget for lobbying, their actual lobbying spend was much lower: $27.4m in 2006 and $16.5m in 2007, according to OpenSecrets.

 

The views expressed by guests appearing on Center for Humane Technology's podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.

 



2024-08-26
Link to episode

This Moment in AI: How We Got Here and Where We're Going

It's been a year and a half since Tristan and Aza laid out their vision and concerns for the future of artificial intelligence in The AI Dilemma. In this Spotlight episode, the guys discuss what's happened since then, as funding, research, and public interest in AI have exploded, and where we could be headed next. Plus, some major updates on social media reform, including the passage of the Kids Online Safety and Privacy Act in the Senate.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

 

RECOMMENDED MEDIA

The AI Dilemma: Tristan and Aza's talk on the catastrophic risks posed by AI.

Info Sheet on KOSPA: More information on KOSPA from FairPlay.

Situational Awareness by Leopold Aschenbrenner: A widely cited blog from a former OpenAI employee, predicting the rapid arrival of AGI.

AI for Good: More information on the AI for Good summit that was held earlier this year in Geneva. 

Using AlphaFold in the Fight Against Plastic Pollution: More information on Google's use of AlphaFold to create an enzyme to break down plastics.

Swiss Call For Trust and Transparency in AI: More information on the initiatives mentioned by Katharina Frey.

 

RECOMMENDED YUA EPISODES

War is a Laboratory for AI with Paul Scharre

Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Can We Govern AI? with Marietje Schaake 

The Three Rules of Humane Tech

The AI Dilemma

 

Clarification: Swiss diplomat Nina Frey's full name is Katharina Frey.

 

The views expressed by guests appearing on Center for Humane Technology's podcast, Your Undivided Attention, are their own, and do not necessarily reflect the views of CHT. CHT does not support or oppose any candidate or party for election to public office.



2024-08-12
Link to episode

Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

AI has been a powerful accelerant for biological research, rapidly opening up new frontiers in medicine and public health. But that progress can also make it easier for bad actors to manufacture new biological threats. In this episode, Tristan and Daniel sit down with biologist Kevin Esvelt to discuss why AI has been such a boon for biologists and how we can safeguard society against the threats that AIxBio poses.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

Sculpting Evolution: Information on Esvelt's lab at MIT.

SecureDNA: Esvelt's free platform to provide safeguards for DNA synthesis.

The Framework for Nucleic Acid Synthesis Screening: The Biden administration's suggested guidelines for DNA synthesis regulation.

Senate Hearing on Regulating AI Technology: C-SPAN footage of Dario Amodei's testimony to Congress.

The AlphaFold Protein Structure Database

RECOMMENDED YUA EPISODES

U.S. Senators Grilled Social Media CEOs. Will Anything Change?

Big Food, Big Tech and Big AI with Michael Moss

The AI Dilemma

Clarification: President Biden's executive order only applies to labs that receive funding from the federal government, not state governments.



2024-07-18
Link to episode

How to Think About AI Consciousness With Anil Seth

Will AI ever start to think by itself? If it did, how would we know, and what would it mean?

In this episode, Dr. Anil Seth and Aza discuss the science, ethics, and incentives of artificial consciousness. Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and the author of Being You: A New Science of Consciousness.

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

Frankenstein by Mary Shelley

A free, plain-text version of Shelley's classic of gothic literature.

OpenAI's GPT-4o Demo

A video from OpenAI demonstrating GPT-4o's remarkable ability to mimic human sentience.

You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills

The NYT op-ed from last year by Tristan, Aza, and Yuval Noah Harari outlining the AI dilemma. 

What Is It Like to Be a Bat?

Thomas Nagel's essay on the nature of consciousness.

Are You Living in a Computer Simulation?

Philosopher Nick Bostrom's essay on the simulation hypothesis.

Anthropic's Golden Gate Claude

A blog post about Anthropic's recent discovery of millions of distinct concepts within their LLM, a major development in the field of AI interpretability.

RECOMMENDED YUA EPISODES

Esther Perel on Artificial Intimacy

Talking With Animals... Using AI

Synthetic Humanity: AI & What's At Stake



2024-07-04
Link to episode

Why Are Migrants Becoming AI Test Subjects? With Petra Molnar

Climate change, political instability, hunger. These are just some of the forces behind an unprecedented refugee crisis that's expected to include over a billion people by 2050. In response to this growing crisis, wealthy governments like the US and the EU are employing novel AI and surveillance technologies to slow the influx of migrants at their borders. But will this rollout stop at the border?

In this episode, Tristan and Aza sit down with Petra Molnar to discuss how borders have become a proving ground for the sharpest edges of technology, and especially AI. Petra is an immigration lawyer and co-creator of the Migration and Technology Monitor. Her new book is "The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence."

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

RECOMMENDED MEDIA

The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence

Petra's newly published book on the rollout of high-risk tech at the border.

Bots at the Gate

A report co-authored by Petra about Canada's use of AI technology in their immigration process.

Technological Testing Grounds

A report authored by Petra about the use of experimental technology in EU border enforcement.

Startup Pitched Tasing Migrants from Drones, Video Reveals

An article from The Intercept, containing the demo for Brinc's taser drone pilot program.

The UNHCR

Information about the global refugee crisis from the UN.

RECOMMENDED YUA EPISODES

War is a Laboratory for AI with Paul Scharre

No One is Immune to AI Harms with Dr. Joy Buolamwini

Can We Govern AI? With Marietje Schaake

CLARIFICATION:

The iBorderCtrl project referenced in this episode was a pilot project that was discontinued in 2019.



2024-06-20
Link to episode

Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

This week, a group of current and former employees from OpenAI and Google DeepMind penned an open letter accusing the industry's leading companies of prioritizing profits over safety. This comes after a spate of high-profile departures from OpenAI, including co-founder Ilya Sutskever and senior researcher Jan Leike, as well as reports that OpenAI has gone to great lengths to silence would-be whistleblowers.

The writers of the open letter argue that researchers have a "right to warn" the public about AI risks and laid out a series of principles that would protect that right. In this episode, we sit down with one of those writers: William Saunders, who left his job as a research engineer at OpenAI in February. William is now breaking the silence on what he saw at OpenAI that compelled him to leave the company and to put his name to this letter.

RECOMMENDED MEDIA 

The Right to Warn Open Letter

My Perspective On "A Right to Warn about Advanced Artificial Intelligence": A follow-up from William about the letter

Leaked OpenAI documents reveal aggressive tactics toward former employees: An investigation by Vox into OpenAI's policy of non-disparagement.

RECOMMENDED YUA EPISODES

A First Step Toward AI Regulation with Tom Wheeler

Spotlight on AI: What Would It Take For This to Go Well?

Big Food, Big Tech and Big AI with Michael Moss

Can We Govern AI? With Marietje Schaake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_



2024-06-07
Link to episode

War is a Laboratory for AI with Paul Scharre

Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write much of its policy on the use of AI in weaponry.

RECOMMENDED MEDIA

Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul's book on the future of AI in war, which came out in 2023.

Army of None: Autonomous Weapons and the Future of War: Paul's 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.

The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul's article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.

The night the world almost ended: A BBC documentary about Stanislav Petrov's decision not to start nuclear war.

AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.

"Lavender": The AI machine directing Israel's bombing spree in Gaza: An investigation into the use of AI targeting systems by the IDF.

RECOMMENDED YUA EPISODES

The AI "Race": China vs. the US with Jeffrey Ding and Karen Hao

Can We Govern AI? with Marietje Schaake

Big Food, Big Tech and Big AI with Michael Moss

The Invisible Cyber-War with Nicole Perlroth

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_



2024-05-23
Link to episode

AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu

Tech companies say that AI will lead to massive economic productivity gains. But as we know from the first digital revolution, that's not what happened. Can we do better this time around?

RECOMMENDED MEDIA

Power and Progress by Daron Acemoglu and Simon Johnson

Professor Acemoglu co-authored a bold reinterpretation of economics and history that will fundamentally change how you see the world

Can We Have Pro-Worker AI?

Professor Acemoglu co-authored this paper about redirecting AI development onto the human-complementary path

Rethinking Capitalism: In Conversation with Daron Acemoglu

The Wheeler Institute for Business and Development hosted Professor Acemoglu to examine how technology affects the distribution and growth of resources while being shaped by economic and social incentives

RECOMMENDED YUA EPISODES

The Three Rules of Humane Tech

The Tech We Need for 21st Century Democracy

Can We Govern AI?

An Alternative to Silicon Valley Unicorns

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_



2024-05-09
Link to episode

Jonathan Haidt On How to Solve the Teen Mental Health Crisis

Suicides. Self-harm. Depression and anxiety. The toll of a social media-addicted, phone-based childhood has never been more stark. It can be easy for teens, parents and schools to feel like they're trapped by it all. But in this conversation with Tristan Harris, author and social psychologist Jonathan Haidt makes the case that the conditions that led to today's teenage mental health crisis can be turned around, with specific, achievable actions we all can take starting today.

This episode was recorded live at the San Francisco Commonwealth Club.  

Correction: Tristan mentions that 40 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

Clarification: Jonathan refers to the Wait Until 8th pledge. By signing the pledge, a parent promises not to give their child a smartphone until at least the end of 8th grade. The pledge becomes active once at least ten other families from their child's grade pledge the same.



2024-04-11
Link to episode

Chips Are the Future of AI. They're Also Incredibly Vulnerable. With Chris Miller

Beneath the race to train and release more powerful AI models lies another race: a race by companies and nation-states to secure the hardware to make sure they win AI supremacy. 

Correction: The latest available Nvidia chip is the Hopper H100 GPU, which has 80 billion transistors. Since the first commercially available chip had four transistors, the Hopper actually has 20 billion times that number. Nvidia recently announced the Blackwell, which boasts 208 billion transistors, but it won't ship until later this year.

RECOMMENDED MEDIA 

Chip War: The Fight For the World's Most Critical Technology by Chris Miller

To make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips

Gordon Moore Biography & Facts

Gordon Moore, the Intel co-founder behind Moore's Law, passed away in March of 2023

AI's most popular chipmaker Nvidia is trying to use AI to design chips faster

Nvidia's GPUs are in high demand - and the company is using AI to accelerate chip production

RECOMMENDED YUA EPISODES

Future-proofing Democracy In the Age of AI with Audrey Tang

How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

The AI "Race": China vs. the US with Jeffrey Ding and Karen Hao

Protecting Our Freedom of Thought with Nita Farahany

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

 

 



2024-03-30
Link to episode

Future-proofing Democracy In the Age of AI with Audrey Tang

What does a functioning democracy look like in the age of artificial intelligence? Could AI even be used to help a democracy flourish? Just in time for election season, Taiwan's Minister of Digital Affairs Audrey Tang returns to the podcast to discuss healthy information ecosystems, resilience to cyberattacks, how to "prebunk" deepfakes, and more.

RECOMMENDED MEDIA 

Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens by Martin Gilens and Benjamin I. Page

This academic paper addresses tough questions for Americans: Who governs? Who really rules? 

Recursive Public

Recursive Public is an experiment in identifying areas of consensus and disagreement among the international AI community, policymakers, and the general public on key questions of governance

A Strong Democracy is a Digital Democracy

Audrey Tang's 2019 op-ed for The New York Times

The Frontiers of Digital Democracy

Nathan Gardels interviews Audrey Tang in Noema

RECOMMENDED YUA EPISODES 

Digital Democracy is Within Reach with Audrey Tang

The Tech We Need for 21st Century Democracy with Divya Siddarth

How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_



2024-02-29
Link to episode

U.S. Senators Grilled Social Media CEOs. Will Anything Change?

Was it political progress, or just political theater? The recent Senate hearing with social media CEOs led to astonishing moments, including Mark Zuckerberg's public apology to families who lost children following social media abuse. Our panel of experts, including Facebook whistleblower Frances Haugen, untangles the explosive hearing, and offers a look ahead, as well. How will this hearing impact protocol within these social media companies? How will it impact legislation? In short: will anything change?

Clarification: Julie says that shortly after the hearing, Meta's stock price had the biggest increase of any company in the stock market's history. It was the biggest one-day gain by any company in Wall Street history.

Correction: Frances says it takes Snap three or four minutes to take down exploitative content. In Snap's most recent transparency report, they list six minutes as the median turnaround time to remove exploitative content.

RECOMMENDED MEDIA 

Get Media Savvy

Founded by Julie Scelfo, Get Media Savvy is a non-profit initiative working to establish a healthy media environment for kids and families

The Power of One by Frances Haugen

The inside story of Frances's quest to bring transparency and accountability to Big Tech

RECOMMENDED YUA EPISODES

Real Social Media Solutions, Now with Frances Haugen

A Conversation with Facebook Whistleblower Frances Haugen

Are the Kids Alright?

Social Media Victims Lawyer Up with Laura Marquez-Garrett

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

 

 



2024-02-13
Link to episode

Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet

Over the past year, a tsunami of apps that digitally strip the clothes off real people has hit the market. Now anyone can create fake non-consensual sexual images in just a few clicks. With cases proliferating in high schools, guest presenter Laurie Segall talks to legal scholar Mary Anne Franks about the AI-enabled rise in deepfake porn and what we can do about it.

Correction: Laurie refers to the app 'Clothes Off.' It's actually named Clothoff. There are many clothes-remover apps in this category.

RECOMMENDED MEDIA 

Revenge Porn: The Cyberwar Against Women

In a five-part digital series, Laurie Segall uncovers a disturbing internet trend: the rise of revenge porn

The Cult of the Constitution

In this provocative book, Mary Anne Franks examines the thin line between constitutional fidelity and constitutional fundamentalism

Fake Explicit Taylor Swift Images Swamp Social Media

Calls to protect women and crack down on the platforms and technology that spread such images have been reignited

RECOMMENDED YUA EPISODES 

No One is Immune to AI Harms

Esther Perel on Artificial Intimacy

Social Media Victims Lawyer Up

The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_



2024-02-01
Link to episode

Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei

We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI (summoning an inanimate force with the powers of code) sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human.

Correction: Josh says the first telling of "The Sorcerer's Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

RECOMMENDED MEDIA 

The Emerald podcast

The Emerald explores the human experience through a vibrant lens of myth, story, and imagination

Embodied Ethics in The Age of AI

A five-part course with The Emerald podcast's Josh Schrei and School of Wise Innovation's Andrew Dunn

Nature Nurture: Children Can Become Stewards of Our Delicate Planet

A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals

The New Fire

AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

RECOMMENDED YUA EPISODES 

How Will AI Affect the 2024 Elections?

The AI Dilemma

The Three Rules of Humane Tech

AI Myths and Misconceptions

 

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_



2024-01-18
Link to episode

How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller

2024 will be the biggest election year in world history. Forty countries will hold national elections, with over two billion voters heading to the polls. In this episode of Your Undivided Attention, two experts give us a situation report on how AI will increase the risks to our elections and our democracies. 

Correction: Tristan says two billion people from 70 countries will be undergoing democratic elections in 2024. Forty countries are holding national elections; the number expands to 70 only when non-national elections are factored in.

RECOMMENDED MEDIA 

White House AI Executive Order Takes On Complexity of Content Integrity Issues
Renee DiResta's piece in Tech Policy Press about content integrity within President Biden's AI executive order

The Stanford Internet Observatory
A cross-disciplinary program of research, teaching and policy engagement for the study of abuse in current information technologies, with a focus on social media

Demos

Britain's leading cross-party think tank

Invisible Rulers: The People Who Turn Lies into Reality by Renee DiResta

Pre-order Renee's upcoming book that's landing on shelves June 11, 2024

RECOMMENDED YUA EPISODES

The Spin Doctors Are In with Renee DiResta

From Russia with Likes Part 1 with Renee DiResta

From Russia with Likes Part 2 with Renee DiResta

Esther Perel on Artificial Intimacy

The AI Dilemma

A Conversation with Facebook Whistleblower Frances Haugen

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

 



2023-12-21
Link to episode

2023 Ask Us Anything

You asked, we answered. This has been a big year in the world of tech, with the rapid proliferation of artificial intelligence, acceleration of neurotechnology, and continued ethical missteps of social media. Looking back on 2023, there are still so many questions on our minds, and we know you have a lot of questions too. So we created this episode to respond to listener questions and to reflect on what lies ahead.

Correction: Tristan mentions that 41 Attorneys General have filed a lawsuit against Meta for allegedly fostering addiction among children and teens through their products. However, the actual number is 42 Attorneys General who are taking legal action against Meta.

Correction: Tristan refers to Casey Mock as the Center for Humane Technology's Chief Policy and Public Affairs Manager. His title is Chief Policy and Public Affairs Officer.

RECOMMENDED MEDIA 

Tech Policy Watch

Marietje Schaake curates this briefing on artificial intelligence and technology policy from around the world

The AI Executive Order

President Biden's executive order on the safe, secure, and trustworthy development and use of AI

Meta sued by 42 AGs for addictive features targeting kids

A bipartisan group of 42 attorneys general is suing Meta, alleging features on Facebook and Instagram are addictive and are aimed at kids and teens

RECOMMENDED YUA EPISODES 

The Three Rules of Humane Tech

Two Million Years in Two Hours: A Conversation with Yuval Noah Harari

Inside the First AI Insight Forum in Washington

Digital Democracy is Within Reach with Audrey Tang

The Tech We Need for 21st Century Democracy with Divya Siddarth

Mind the (Perception) Gap with Dan Vallone

The AI Dilemma

Can We Govern AI? with Marietje Schaake

Ask Us Anything: You Asked, We Answered

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_



2023-11-30
Link to episode

The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish

As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora's box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?

Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages - and more are in the works.

RECOMMENDED MEDIA 

Open-Sourcing Highly Capable Foundation Models

This report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of risks and benefits from open-sourcing AI

BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B

This paper, co-authored by Jeffrey Ladish, demonstrates that it's possible to effectively undo the safety fine-tuning from Llama 2-Chat 13B for less than $200 while retaining its general capabilities

Centre for the Governance of AI

Supports governments, technology companies, and other key institutions by producing relevant research and guidance around how to respond to the challenges posed by AI

AI: Futures and Responsibility (AI:FAR)

Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity

Palisade Research

Studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever

RECOMMENDED YUA EPISODES

A First Step Toward AI Regulation with Tom Wheeler

No One is Immune to AI Harms with Dr. Joy Buolamwini

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

The AI Dilemma


2023-11-21
Link to episode

A First Step Toward AI Regulation with Tom Wheeler

On Monday, Oct. 30, President Biden released a sweeping executive order that addresses many risks of artificial intelligence. Tom Wheeler, former chairman of the Federal Communications Commission, shares his insights on the order with Tristan and Aza and discusses what's next in the push toward AI regulation.

Clarification: When quoting Thomas Jefferson, Aza incorrectly says "regime" instead of "regimen." The correct quote is: "I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. And as that becomes more developed, more enlightened, as new discoveries are made, new truths discovered, and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors."

RECOMMENDED MEDIA 

The AI Executive Order

President Biden's Executive Order on the safe, secure, and trustworthy development and use of AI

UK AI Safety Summit

The summit brings together international governments, leading AI companies, civil society groups, and experts in research to consider the risks of AI and discuss how they can be mitigated through internationally coordinated action

aitreaty.org

An open letter calling for an international AI treaty

Techlash: Who Makes the Rules in the Digital Gilded Age?

Praised by Kirkus Reviews as "a rock-solid plan for controlling the tech giants," Tom Wheeler's book offers an energizing vision of digital governance

RECOMMENDED YUA EPISODES

Inside the First AI Insight Forum in Washington

Digital Democracy is Within Reach with Audrey Tang

The AI Dilemma


2023-11-02
Link to episode

No One is Immune to AI Harms with Dr. Joy Buolamwini

In this interview, Dr. Joy Buolamwini argues that algorithmic bias in AI systems poses risks to marginalized people. She challenges the assumptions of tech leaders who advocate for AI "alignment" and explains why some tech companies are hypocritical when it comes to addressing bias.

Dr. Joy Buolamwini is the founder of the Algorithmic Justice League and the author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines."

Correction: Aza says that Sam Altman, the CEO of OpenAI, predicts superintelligence in four years. Altman predicts superintelligence in ten years.

RECOMMENDED MEDIA

Unmasking AI by Joy Buolamwini

"The conscience of the AI revolution" explains how we've arrived at an era of AI harms and oppression, and what we can do to avoid its pitfalls

Coded Bias

Shalini Kantayya's film explores the fallout of Dr. Joy's discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all

How I?m fighting bias in algorithms

Dr. Joy's 2016 TED Talk about her mission to fight bias in machine learning, a phenomenon she calls the "coded gaze."

RECOMMENDED YUA EPISODES

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

Protecting Our Freedom of Thought with Nita Farahany

The AI Dilemma


2023-10-26
Link to episode

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

"This is going to be the most productive decade in the history of our species," says Mustafa Suleyman, author of "The Coming Wave," CEO of Inflection AI, and co-founder of Google's DeepMind. But in order to truly reap the benefits of AI, we need to learn how to contain it. Paradoxically, part of that will mean collectively saying no to certain forms of progress. As an industry leader reckoning with a future that's about to be "turbocharged," Mustafa says we can all play a role in shaping the technology, both in hands-on ways and by advocating for appropriate governance.

RECOMMENDED MEDIA 

The Coming Wave: Technology, Power, and the 21st Century?s Greatest Dilemma

This new book from Mustafa Suleyman is a must-read guide to the technological revolution just starting, and the transformed world it will create

Partnership on AI

Partnership on AI is bringing together diverse voices from across the AI community to create resources for advancing positive outcomes for people and society

Policy Reforms Toolkit from the Center for Humane Technology

Digital lawlessness has been normalized in the name of innovation. It's possible to craft policy that protects the conditions we need to thrive

RECOMMENDED YUA EPISODES 

AI Myths and Misconceptions

Can We Govern AI? with Marietje Schaake

The AI Dilemma


2023-09-28
Link to episode