
Dwarkesh Podcast

Deeply researched interviews. www.dwarkesh.com

Website

dwarkesh.com/podcast

Episodes

2027 Intelligence Explosion: Month-by-Month Model - Scott Alexander & Daniel Kokotajlo

Scott and Daniel break down every month from now until the 2027 intelligence explosion.

Scott Alexander is author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel Kokotajlo resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking millions in equity to speak out about AI safety.

We discuss misaligned hive minds, Xi and Trump waking up, and automated Ilyas researching AI progress.

I came in skeptical, but I learned a tremendous amount by bouncing my objections off of them. I highly recommend checking out their new scenario planning document, AI 2027.

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

Sponsors

* WorkOS helps today's top AI companies get enterprise-ready. OpenAI, Cursor, Perplexity, Anthropic and hundreds more use WorkOS to quickly integrate features required by enterprise buyers. To learn more about how you can make the leap to enterprise, visit workos.com

* Jane Street likes to know what's going on inside the neural nets they use. They just released a black-box challenge for Dwarkesh listeners, and I had a blast trying it out. See if you have the skills to crack it at janestreet.com/dwarkesh

* Scale's Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn about how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh

To sponsor a future episode, visit dwarkesh.com/advertise.

----------

Timestamps

(00:00:00) - AI 2027

(00:06:56) - Forecasting 2025 and 2026

(00:14:41) - Why LLMs aren't making discoveries

(00:24:33) - Debating intelligence explosion

(00:49:45) - Can superintelligence actually transform science?

(01:16:54) - Cultural evolution vs superintelligence

(01:24:05) - Mid-2027 branch point

(01:32:30) - Race with China

(01:44:47) - Nationalization vs private anarchy

(02:03:22) - Misalignment

(02:14:52) - UBI, AI advisors, & human future

(02:23:00) - Factory farming for digital minds

(02:26:52) - Daniel leaving OpenAI

(02:35:15) - Scott's blogging advice



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2025-04-03

AMA ft. Sholto & Trenton: New Book, Career Advice Given AGI, How I'd Start From Scratch

I recorded an AMA! I had a blast chatting with my friends Trenton Bricken and Sholto Douglas. We discussed my new book, career advice given AGI, how I pick guests, how I research for the show, and some other nonsense.

My book, "The Scaling Era: An Oral History of AI, 2019-2025" is available in digital format now. Preorders for the print version are also open!

Watch on YouTube; listen on Apple Podcasts or Spotify.

Timestamps

(0:00:00) - Book launch announcement

(0:04:57) - AI models not making connections across fields

(0:10:52) - Career advice given AGI

(0:15:20) - Guest selection criteria

(0:17:19) - Choosing to pursue the podcast long-term

(0:25:12) - Reading habits

(0:31:10) - Beard deep dive

(0:33:02) - Who is best suited for running an AI lab?

(0:35:16) - Preparing for fast AGI timelines

(0:40:50) - Growing the podcast



2025-03-25

Joseph Henrich - Why Humans Survived and Smarter Species Didn't

Humans have not succeeded because of our raw intelligence.

Marooned European explorers regularly starved to death in areas where foragers had thrived for thousands of years.

I've always found this cultural evolution deeply mysterious.

How do you discover the 10 steps for processing cassava so it won't give you cyanide poisoning simply by trial and error?

Has the human brain declined in size over the last 10,000 years because we outsourced cultural evolution to a larger collective brain?

The most interesting part of the podcast is Henrich's explanation of how the Catholic Church unintentionally instigated the Industrial Revolution through the dismantling of intensive kinship systems in medieval Europe.

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

Sponsors

Scale partners with major AI labs like Meta, Google DeepMind, and OpenAI. Through Scale's Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn about how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh.

To sponsor a future episode, visit dwarkesh.com/p/advertise.

----------

Joseph?s books

The WEIRDest People in the World

The Secret of Our Success

----------

Timestamps

(0:00:00) - Humans didn't succeed because of raw IQ

(0:09:27) - How cultural evolution works

(0:20:48) - Why is human brain size declining?

(0:32:00) - Will AGI have superhuman cultural learning?

(0:42:34) - Why Industrial Revolution happened in Europe

(0:55:30) - Why China, Rome, India got left behind

(1:21:09) - Loss of cultural variance in modern world

(1:31:20) - Is individual genius real?

(1:43:49) - IQ and collective brains



2025-03-12

Notes on China

I'm so excited about how this visualization of Notes on China turned out. Petr, thank you for such beautiful watercolor artwork. More to come!

Watch on YouTube.

----------

Timestamps

(0:00:00) - Intro

(0:00:32) - Scale

(0:05:50) - Vibes

(0:11:14) - Youngsters

(0:14:27) - Tech & AI

(0:15:47) - Hearts & Minds

(0:17:07) - On Travel



2025-03-05

Satya Nadella - Microsoft's AGI Plan & Quantum Breakthrough

Satya Nadella on:

Why he doesn't believe in AGI but does believe in 10% economic growth;

Microsoft's new topological qubit breakthrough and gaming world models;

Whether Office commoditizes LLMs or the other way around.

Watch on YouTube; listen on Apple Podcasts or Spotify.

----------

Sponsors

Scale partners with major AI labs like Meta, Google DeepMind, and OpenAI. Through Scale's Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn about how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh

Linear's project management tools have become the default choice for product teams at companies like Ramp, CashApp, OpenAI, and Scale. These teams use Linear so they can stay close to their products and move fast. If you're curious why so many companies are making the switch, visit linear.app/dwarkesh

To sponsor a future episode, visit dwarkeshpatel.com/p/advertise.

----------

Timestamps

(0:00:00) - Intro

(0:05:04) - AI won't be winner-take-all

(0:15:18) - World economy growing by 10%

(0:21:39) - Decreasing price of intelligence

(0:30:19) - Quantum breakthrough

(0:42:51) - How Muse will change gaming

(0:49:51) - Legal barriers to AI

(0:55:46) - Getting AGI safety right

(1:04:59) - 34 years at Microsoft

(1:10:46) - Does Satya Nadella believe in AGI?



2025-02-19

Jeff Dean & Noam Shazeer - 25 years at Google: from PageRank to AGI

This week I welcome on the show two of the most important technologists ever, in any field.

Jeff Dean is Google's Chief Scientist, and through 25 years at the company, has worked on basically the most transformative systems in modern computing: from MapReduce, BigTable, and TensorFlow to AlphaChip and Gemini.

Noam Shazeer invented or co-invented all the main architectures and techniques that are used for modern LLMs: from the Transformer itself, to Mixture of Experts, to Mesh TensorFlow, to Gemini and many other things.

We talk about their 25 years at Google, going from PageRank to MapReduce to the Transformer to MoEs to AlphaChip - and maybe soon to ASI.

My favorite part was Jeff's vision for Pathways, Google's grand plan for a mutually-reinforcing loop of hardware and algorithmic design and for going past autoregression. That culminates in us imagining *all* of Google-the-company going through one huge MoE model.

And Noam just bites every bullet: 100x world GDP soon; let's get a million automated researchers running in the Google datacenter; living to see the year 3000.

Watch on YouTube; listen on Apple Podcasts or Spotify.

Sponsors

Scale partners with major AI labs like Meta, Google DeepMind, and OpenAI. Through Scale's Data Foundry, labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you're an AI researcher or engineer, learn about how Scale's Data Foundry and research lab, SEAL, can help you go beyond the current frontier at scale.com/dwarkesh

Curious how Jane Street teaches their new traders? They use Figgie, a rapid-fire card game that simulates the most exciting parts of markets and trading. It's become so popular that Jane Street hosts an inter-office Figgie championship every year. Download from the app store or play on your desktop at figgie.com

Meter wants to radically improve the digital world we take for granted. They're developing a foundation model that automates network management end-to-end. To do this, they just announced a long-term partnership with Microsoft for tens of thousands of GPUs, and they're recruiting a world class AI research team. To learn more, go to meter.com/dwarkesh

To sponsor a future episode, visit dwarkeshpatel.com/p/advertise

Timestamps

00:00:00 - Intro

00:02:44 - Joining Google in 1999

00:05:36 - Future of Moore's Law

00:10:21 - Future TPUs

00:13:13 - Jeff's undergrad thesis: parallel backprop

00:15:10 - LLMs in 2007

00:23:07 - "Holy s**t" moments

00:29:46 - AI fulfills Google's original mission

00:34:19 - Doing Search in-context

00:38:32 - The internal coding model

00:39:49 - What will 2027 models do?

00:46:00 - A new architecture every day?

00:49:21 - Automated chip design and intelligence explosion

00:57:31 - Future of inference scaling

01:03:56 - Already doing multi-datacenter runs

01:22:33 - Debugging at scale

01:26:05 - Fast takeoff and superalignment

01:34:40 - A million evil Jeff Deans

01:38:16 - Fun times at Google

01:41:50 - World compute demand in 2030

01:48:21 - Getting back to modularity

01:59:13 - Keeping a giga-MoE in-memory

02:04:09 - All of Google in one model

02:12:43 - What's missing from distillation

02:18:03 - Open research, pros and cons

02:24:54 - Going the distance



2025-02-12

Sarah Paine Episode 3: How Mao Conquered China

Third and final episode in the Paine trilogy!

Chinese history is full of warlords constantly challenging the capital. How could Mao not only stay in power for decades, but not even face any insurgency?

And how did Mao go from military genius to peacetime disaster - the patriotic hero who inflicted history's worst human catastrophe on China? How can someone shrewd enough to win a civil war outnumbered 5 to 1 decide "let's have peasants make iron in their backyards" and "let's kill all the birds"?

In her lecture and our Q&A, we cover the first nationwide famine in Chinese history; Mao's lasting influence on other insurgents; broken promises to minorities and peasantry; and what Taiwan means.

Thanks so much to @Substack for running this in-person event!

Note that Sarah is doing an AMA over the next couple of days on YouTube; see the pinned comment.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Sponsor

Today's episode is brought to you by Scale AI. Scale partners with the U.S. government to fuel America's AI advantage through their data foundry. Scale recently introduced Defense Llama, Scale's latest solution available for military personnel. With Defense Llama, military personnel can harness the power of AI to plan military or intelligence operations and understand adversary vulnerabilities.

If you're interested in learning more about how Scale powers frontier AI capabilities, go to https://scale.com/dwarkesh.



2025-01-30

Sarah Paine Episode 2: Why Japan Lost (Lecture & Interview)

This is the second episode in a trilogy of lectures by Professor Sarah Paine of the Naval War College.

In this second episode, Prof Paine dissects the ideas and economics behind Japanese imperialism before and during WWII. We get into the oil shortage which caused the war; the unique culture of honor and death; the surprisingly chaotic chain of command. This is followed by a Q&A with me.

Huge thanks to Substack for hosting this event!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Sponsor

Today's episode is brought to you by Scale AI. Scale partners with the U.S. government to fuel America's AI advantage through their data foundry. Scale recently introduced Defense Llama, Scale's latest solution available for military personnel. With Defense Llama, military personnel can harness the power of AI to plan military or intelligence operations and understand adversary vulnerabilities.

If you're interested in learning more about how Scale powers frontier AI capabilities, go to scale.com/dwarkesh.

Buy Sarah's Books!

I highly, highly recommend both "The Wars for Asia, 1911-1949" and "The Japanese Empire: Grand Strategy from the Meiji Restoration to the Pacific War".

Timestamps

(0:00:00) - Lecture begins

(0:06:58) - The code of the samurai

(0:10:45) - Buddhism, Shinto, Confucianism

(0:16:52) - Bushido as bad strategy

(0:23:34) - Military theorists

(0:33:42) - Strategic sins of omission

(0:38:10) - Crippled logistics

(0:40:58) - The Kwantung Army

(0:43:31) - Inter-service communication

(0:51:15) - Shattering Japanese morale

(0:57:35) - Q&A begins

(01:05:02) - Unusual brutality of WWII

(01:11:30) - Embargo caused the war

(01:16:48) - The liberation of China

(01:22:02) - Could US have prevented war?

(01:25:30) - Counterfactuals in history

(01:27:46) - Japanese optimism

(01:30:46) - Tech change and social change

(01:38:22) - Hamming questions

(01:44:31) - Do sanctions work?

(01:50:07) - Backloaded mass death

(01:54:09) - Demilitarizing Japan

(01:57:30) - Post-war alliances

(02:03:46) - Inter-service rivalry



2025-01-23

Sarah Paine Episode 1: The War For India (Lecture & Interview)

I'm thrilled to launch a new trilogy of double episodes: a lecture series by Professor Sarah Paine of the Naval War College, each followed by a deep Q&A.

In this first episode, Prof Paine talks about key decisions by Khrushchev, Mao, Nehru, Bhutto, & Lyndon Johnson that shaped the whole dynamic of South Asia today. This is followed by a Q&A.

Come for the spy bases, shoestring nukes, and insight about how great power politics impacts every region.

Huge thanks to Substack for hosting this!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Sponsors

Today's episode is brought to you by Scale AI. Scale partners with the U.S. government to fuel America's AI advantage through their data foundry. The Air Force, Army, Defense Innovation Unit, and Chief Digital and Artificial Intelligence Office all trust Scale to equip their teams with AI-ready data and the technology to build powerful applications.

Scale recently introduced Defense Llama, Scale's latest solution available for military personnel. With Defense Llama, military personnel can harness the power of AI to plan military or intelligence operations and understand adversary vulnerabilities.

If you're interested in learning more about how Scale powers frontier AI capabilities, go to scale.com/dwarkesh.

Timestamps

(00:00) - Intro

(02:11) - Mao at war, 1949-51

(05:40) - Pactomania and Sino-Soviet conflicts

(14:42) - The Sino-Indian War

(20:00) - Soviet peace in India-Pakistan

(22:00) - US Aid and Alliances

(26:14) - The difference with WWII

(30:09) - The geopolitical map in 1904

(35:10) - The US alienates Indira Gandhi

(42:58) - Instruments of US power

(53:41) - Carrier battle groups

(1:02:41) - Q&A begins

(1:04:31) - The appeal of the USSR

(1:09:36) - The last communist premier

(1:15:42) - India and China's lost opportunity

(1:58:04) - Bismarck's cunning

(2:03:05) - Training US officers

(2:07:03) - Cruelty in Russian history



2025-01-16

Tyler Cowen - the #1 bottleneck to AI progress is humans

I interviewed Tyler Cowen at the Progress Conference 2024. As always, I had a blast. This is my fourth interview with him - and yet I'm always hearing new stuff.

We talked about why he thinks AI won't drive explosive economic growth, the real bottlenecks on world progress, him now writing for AIs instead of humans, and the difficult relationship between being cultured and fostering growth - among many other things in the full episode.

Thanks to the Roots of Progress Institute (with special thanks to Jason Crawford and Heike Larson) for such a wonderful conference, and to FreeThink for the videography.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Sponsors

I'm grateful to Tyler for volunteering to say a few words about Jane Street. It's the first time that a guest has participated in the sponsorship. I hope you can see why Tyler and I think so highly of Jane Street. To learn more about their open roles, go to janestreet.com/dwarkesh.

Timestamps

(00:00:00) Economic Growth and AI

(00:14:57) Founder Mode and increasing variance

(00:29:31) Effective Altruism and Progress Studies

(00:33:05) What AI changes for Tyler

(00:44:57) The slow diffusion of innovation

(00:49:53) Stalin's library

(00:52:19) DC vs SF vs EU



2025-01-09

Adam Brown - How Future Civilizations Could Change The Laws of Physics

Adam Brown is a founder and lead of BlueShift, which is cracking maths and reasoning at Google DeepMind, and a theoretical physicist at Stanford.

We discuss: destroying the light cone with vacuum decay, holographic principle, mining black holes, & what it would take to train LLMs that can make Einstein level conceptual breakthroughs.

Stupefying, entertaining, & terrifying.

Enjoy!

Watch on YouTube, read the transcript, listen on Apple Podcasts, Spotify, or your favorite platform.

Sponsors

- DeepMind, Meta, Anthropic, and OpenAI partner with Scale for high-quality data to fuel post-training. Publicly available data is running out; to keep developing smarter and smarter models, labs will need to rely on Scale's data foundry, which combines subject matter experts with AI models to generate fresh data and break through the data wall. Learn more at scale.ai/dwarkesh.

- Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open for just a few more weeks. If you want to stand out, take a crack at their new Kaggle competition. To learn more, go to janestreet.com/dwarkesh.

- This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

Timestamps

(00:00:00) - Changing the laws of physics

(00:26:05) - Why is our universe the way it is

(00:37:30) - Making Einstein level AGI

(01:00:31) - Physics stagnation and particle colliders

(01:11:10) - Hitchhiking

(01:29:00) - Nagasaki

(01:36:19) - Adam's career

(01:43:25) - Mining black holes

(01:59:42) - The holographic principle

(02:23:25) - Philosophy of infinities

(02:31:42) - Engineering constraints for future civilizations



2024-12-26

Gwern Branwen - How an Anonymous Researcher Predicted AI's Trajectory

Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you've read his blog, you know he's one of the most interesting polymathic thinkers alive.

In order to protect Gwern's anonymity, I proposed interviewing him in person, and having my friend Chris Painter voice over his words after. This amused him enough that he agreed.

After the episode, I convinced Gwern to create a donation page where people can help sustain what he's up to. Please go here to contribute.

Read the full transcript here.

Sponsors:

* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for ML researchers, FPGA programmers, and CUDA programmers. Summer internships are open - if you want to stand out, take a crack at their new Kaggle competition. To learn more, go to janestreet.com/dwarkesh.

* Turing provides complete post-training services for leading AI labs like OpenAI, Anthropic, Meta, and Gemini. They specialize in model evaluation, SFT, RLHF, and DPO to enhance models' reasoning, coding, and multimodal capabilities. Learn more at turing.com/dwarkesh.

* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

If you're interested in advertising on the podcast, check out this page.

Timestamps

00:00:00 - Anonymity

00:01:09 - Automating Steve Jobs

00:04:38 - Isaac Newton's theory of progress

00:06:36 - Grand theory of intelligence

00:10:39 - Seeing scaling early

00:21:04 - AGI Timelines

00:22:54 - What to do in remaining 3 years until AGI

00:26:29 - Influencing the shoggoth with writing

00:30:50 - Human vs artificial intelligence

00:33:52 - Rabbit holes

00:38:48 - Hearing impairment

00:43:00 - Wikipedia editing

00:47:43 - Gwern.net

00:50:20 - Counterfactual careers

00:54:30 - Borges & literature

01:01:32 - Gwern's intelligence and process

01:11:03 - A day in the life of Gwern

01:19:16 - Gwern's finances

01:25:05 - The diversity of AI minds

01:27:24 - GLP drugs and obesity

01:31:08 - Drug experimentation

01:33:40 - Parasocial relationships

01:35:23 - Open rabbit holes



2024-11-13

Dylan Patel & Jon (Asianometry) - How the Semiconductor Industry Actually Works

A bonanza on the semiconductor industry and hardware scaling to AGI by the end of the decade.

Dylan Patel runs Semianalysis, the leading publication and research firm on AI hardware. Jon Y runs Asianometry, the world's best YouTube channel on semiconductors and business history.

* What Xi would do if he became scaling pilled

* $1T+ in datacenter buildout by end of decade

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Sponsors:

* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for FPGA programmers, CUDA programmers, and ML researchers. To learn more about their full time roles, internship, tech podcast, and upcoming Kaggle competition, go here.

* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

If you're interested in advertising on the podcast, check out this page.

Timestamps

00:00:00 - Xi's path to AGI

00:04:20 - Liang Mong Song

00:08:25 - How semiconductors get better

00:11:16 - China can centralize compute

00:18:50 - Export controls & sanctions

00:32:51 - Huawei's intense culture

00:38:51 - Why the semiconductor industry is so stratified

00:40:58 - N2 should not exist

00:45:53 - Taiwan invasion hypothetical

00:49:21 - Mind-boggling complexity of semiconductors

00:59:13 - Chip architecture design

01:04:36 - Architectures lead to different AI models? China vs. US

01:10:12 - Being head of compute at an AI lab

01:16:24 - Scaling costs and power demand

01:37:05 - Are we financing an AI bubble?

01:50:20 - Starting Asianometry and SemiAnalysis

02:06:10 - Opportunities in the semiconductor stack



2024-10-02

Daniel Yergin - Oil Explains the Entire 20th Century

Unless you understand the history of oil, you cannot understand the rise of America, WW1, WW2, secular stagnation, the Middle East, Ukraine, how Xi and Putin think, and basically anything else that's happened since 1860.

It was a great honor to interview Daniel Yergin, the Pulitzer Prize winning author of The Prize - the best history of oil ever written (which makes it the best history of the 20th century ever written).

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Sponsors:

This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

This episode is brought to you by Suno, pioneers in AI-generated music. Suno's technology allows artists to experiment with melodic forms and structures in unprecedented ways. From chart-toppers to avant-garde compositions, Suno is redefining musical creativity. If you're an ML researcher passionate about shaping the future of music, email your resume to dwarkesh@suno.com.

If you're interested in advertising on the podcast, check out this page.

Timestamps

(00:00:00) - Beginning of the oil industry

(00:13:37) - World War I & II

(00:25:06) - The Middle East

(00:47:04) - Yergin's conversations with Putin & Modi

(01:04:36) - Writing through stories

(01:10:26) - The renewable energy transition



2024-09-18

David Reich - How One Small Tribe Conquered the World 70,000 Years Ago

I had no idea how wild human history was before chatting with the geneticist of ancient DNA David Reich.

Human history has been again and again a story of one group figuring "something" out, and then basically wiping everyone else out.

From the tribe of 1k-10k modern humans who killed off all the other human species 70,000 years ago; to the Yamnaya horse nomads 5,000 years ago who killed off 90+% of (then) Europeans and also destroyed the Indus Valley.

So much of what we thought we knew about human history is turning out to be wrong, from the "Out of Africa" theory to the evolution of language, and this is all thanks to the research from David Reich's lab.

Buy David Reich's fascinating book, Who We Are and How We Got Here.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Follow me on Twitter for updates on future episodes.

Sponsor

This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

If you're interested in advertising on the podcast, check out this page.

Timestamps

(00:00:00) - Archaic and modern humans gene flow

(00:20:24) - How early modern humans dominated the world

(00:39:59) - How bubonic plague rewrote history

(00:50:03) - Was agriculture terrible for humans?

(00:59:28) - Yamnaya expansion and how populations collide

(01:15:39) - "Lost civilizations" and our Neanderthal ancestry

(01:31:32) - The DNA Challenge

(01:41:38) - David's career: the genetic vocation



2024-08-29

Joe Carlsmith - Otherness and control in the age of AGI

Chatted with Joe Carlsmith about whether we can trust power/techno-capital, how to not end up like Stalin in our urge to control the future, gentleness towards the artificial Other, and much more.

Check out Joe's sequence on Otherness and Control in the Age of AGI here.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Sponsors:

- Bland.ai is an AI agent that automates phone calls in any language, 24/7. Their technology uses "conversational pathways" for accurate, versatile communication across sales, operations, and customer support. You can try Bland yourself by calling 415-549-9654. Enterprises can get exclusive access to their advanced model at bland.ai/dwarkesh.

- Stripe is financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

If you're interested in advertising on the podcast, check out this page.

Timestamps:

(00:00:00) - Understanding the Basic Alignment Story

(00:44:04) - Monkeys Inventing Humans

(00:46:43) - Nietzsche, C.S. Lewis, and AI

(1:22:51) - How should we treat AIs

(1:52:33) - Balancing Being a Humanist and a Scholar

(2:05:02) - Explore exploit tradeoffs and AI



2024-08-22

Patrick McKenzie - How a Discord Server Saved Thousands of Lives

I talked with Patrick McKenzie (known online as patio11) about how a small team he ran over a Discord server got vaccines into Americans' arms: a story of broken incentives, outrageous incompetence, and how a few individuals with high agency saved thousands of lives.

Enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Follow me on Twitter for updates on future episodes.

Sponsor

This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

Timestamps

(00:00:00) - Why hackers on Discord had to save thousands of lives

(00:17:26) - How politics crippled vaccine distribution

(00:38:19) - Fundraising for VaccinateCA

(00:51:09) - Why tech needs to understand how government works

(00:58:58) - What is crypto good for?

(01:13:07) - How the US government leverages big tech to violate rights

(01:24:36) - Can the US have nice things like Japan?

(01:26:41) - Financial plumbing & money laundering: a how-not-to guide

(01:37:42) - Maximizing your value: why some people negotiate better

(01:42:14) - Are young people too busy playing Factorio to found startups?

(01:57:30) - The need for a post-mortem



2024-07-24

Tony Blair - Life of a PM, The Deep State, Lee Kuan Yew, & AI's 1914 Moment

I chatted with Tony Blair about:

- What he learned from Lee Kuan Yew

- Intelligence agencies' track record on Iraq & Ukraine

- What he tells the dozens of world leaders who come seek advice from him

- How much of a PM's time is actually spent governing

- What will AI's July 1914 moment look like from inside the Cabinet?

Enjoy!

Watch the video on YouTube. Read the full transcript here.

Follow me on Twitter for updates on future episodes.

Sponsors

- Prelude Security is the world's leading cyber threat management automation platform. Prelude Detect quickly transforms threat intelligence into validated protections so organizations can know with certainty that their defenses will protect them against the latest threats. Prelude is backed by Sequoia Capital, Insight Partners, The MITRE Corporation, CrowdStrike, and other leading investors. Learn more here.

- This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

If you're interested in advertising on the podcast, check out this page.

Timestamps

(00:00:00) - A prime minister's constraints

(00:04:12) - CEOs vs. politicians

(00:10:31) - COVID, AI, & how government deals with crisis

(00:21:24) - Learning from Lee Kuan Yew

(00:27:37) - Foreign policy & intelligence

(00:31:12) - How much leadership actually matters

(00:35:34) - Private vs. public tech

(00:39:14) - Advising global leaders

(00:46:45) - The unipolar moment in the 90s



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-06-26
Link to episode

Francois Chollet, Mike Knoop - LLMs won't lead to AGI - $1,000,000 Prize to find true solution

Here is my conversation with Francois Chollet and Mike Knoop on the $1 million ARC-AGI Prize they're launching today.

I did a bunch of Socratic grilling throughout, but Francois's arguments about why LLMs won't lead to AGI are very interesting and worth thinking through.

It was really fun discussing/debating the cruxes. Enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Timestamps

(00:00:00) - The ARC benchmark

(00:11:10) - Why LLMs struggle with ARC

(00:19:00) - Skill vs intelligence

(00:27:55) - Do we need "AGI" to automate most jobs?

(00:48:28) - Future of AI progress: deep learning + program synthesis

(01:00:40) - How Mike Knoop got nerd-sniped by ARC

(01:08:37) - The $1 million ARC Prize

(01:10:33) - Resisting benchmark saturation

(01:18:08) - ARC scores on frontier vs open source models

(01:26:19) - Possible solutions to the ARC Prize



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-06-11
Link to episode

Leopold Aschenbrenner - China/US Super Intelligence Race, 2027 AGI, & The Return of History

Chatted with my friend Leopold Aschenbrenner on the trillion-dollar nationalized cluster, CCP espionage at AI labs, how unhobblings and scaling can lead to 2027 AGI, the dangers of outsourcing clusters to the Middle East, leaving OpenAI, and situational awareness.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Follow me on Twitter for updates on future episodes. Follow Leopold on Twitter.

Timestamps

(00:00:00) - The trillion-dollar cluster and unhobbling

(00:20:31) - AI 2028: The return of history

(00:40:26) - Espionage & American AI superiority

(01:08:20) - Geopolitical implications of AI

(01:31:23) - State-led vs. private-led AI

(02:12:23) - Becoming Valedictorian of Columbia at 19

(02:30:35) - What happened at OpenAI

(02:45:11) - Accelerating AI research progress

(03:25:58) - Alignment

(03:41:26) - On Germany, and understanding foreign perspectives

(03:57:04) - Dwarkesh's immigration story and path to the podcast

(04:07:58) - Launching an AGI hedge fund

(04:19:14) - Lessons from WWII

(04:29:08) - Coda: Frederick the Great



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-06-04
Link to episode

John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Chatted with John Schulman (who cofounded OpenAI and led the creation of ChatGPT) on how post-training tames the shoggoth, and the nature of the progress to come...

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Pre-training, post-training, and future capabilities

(00:16:57) - Plan for AGI 2025

(00:29:19) - Teaching models to reason

(00:40:50) - The Road to ChatGPT

(00:52:13) - What makes for a good RL researcher?

(01:00:58) - Keeping humans in the loop

(01:15:15) - State of research, plateaus, and moats

Sponsors

If you're interested in advertising on the podcast, fill out this form.

* Your DNA shapes everything about you. Want to know how? Take 10% off our Premium DNA kit with code DWARKESH at mynucleus.com.

* CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-05-15
Link to episode

Mark Zuckerberg - Llama 3, Open Sourcing $10b Models, & Caesar Augustus

Mark Zuckerberg on:

- Llama 3

- open sourcing towards AGI

- custom silicon, synthetic data, & energy constraints on scaling

- Caesar Augustus, intelligence explosion, bioweapons, $10b models, & much more

Enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Human-edited transcript with helpful links here.

Timestamps

(00:00:00) - Llama 3

(00:08:32) - Coding on path to AGI

(00:25:24) - Energy bottlenecks

(00:33:20) - Is AI the most important technology ever?

(00:37:21) - Dangers of open source

(00:53:57) - Caesar Augustus and metaverse

(01:04:53) - Open sourcing the $10b model & custom silicon

(01:15:19) - Zuck as CEO of Google+

Sponsors

If you're interested in advertising on the podcast, fill out this form.

* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue. Learn more at stripe.com.

* V7 Go is a tool to automate multimodal tasks using GenAI, reliably and at scale. Use code DWARKESH20 for 20% off on the pro plan. Learn more here.

* CommandBar is an AI user assistant that any software product can embed to non-annoyingly assist, support, and unleash their users. Used by forward-thinking CX, product, growth, and marketing teams. Learn more at commandbar.com.



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-04-18
Link to episode

Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind

Had so much fun chatting with my good friends Trenton Bricken and Sholto Douglas on the podcast.

No way to summarize it, except: 

This is the best context dump out there on how LLMs are trained, what capabilities they're likely to soon have, and what exactly is going on inside them.

You would be shocked by how much of what I know about this field I've learned just from talking with them.

To the extent that you've enjoyed my other AI interviews, now you know why.

So excited to put this out. Enjoy! I certainly did :)

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

There's a transcript with links to all the papers the boys were throwing down, which may help you follow along.

Follow Trenton and Sholto on Twitter.

Timestamps

(00:00:00) - Long contexts

(00:16:12) - Intelligence is just associations

(00:32:35) - Intelligence explosion & great researchers

(01:06:52) - Superposition & secret communication

(01:22:34) - Agents & true reasoning

(01:34:40) - How Sholto & Trenton got into AI research

(02:07:16) - Are feature spaces the wrong way to think about intelligence?

(02:21:12) - Will interp actually work on superhuman models

(02:45:05) - Sholto's technical challenge for the audience

(03:03:57) - Rapid fire



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-03-28
Link to episode

Demis Hassabis - Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat

Here is my episode with Demis Hassabis, CEO of Google DeepMind.

We discuss:

* Why scaling is an artform

* Adding search, planning, & AlphaZero type training atop LLMs

* Making sure rogue nations can't steal weights

* The right way to align superhuman AIs and do an intelligence explosion

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Timestamps

(0:00:00) - Nature of intelligence

(0:05:56) - RL atop LLMs

(0:16:31) - Scaling and alignment

(0:24:13) - Timelines and intelligence explosion

(0:28:42) - Gemini training

(0:35:30) - Governance of superhuman AIs

(0:40:42) - Safety, open source, and security of weights

(0:47:00) - Multimodal and further progress

(0:54:18) - Inside Google DeepMind



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-02-28
Link to episode

Patrick Collison (Stripe CEO) - Craft, Beauty, & The Future of Payments

We discuss:

* what it takes to process $1 trillion/year

* how to build multi-decade APIs, companies, and relationships

* what's next for Stripe (increasing the GDP of the internet is quite an open-ended prompt, and the Collison brothers are just getting started).

Plus the amazing stuff they're doing at Arc Institute, the financial infrastructure for AI agents, playing devil's advocate against progress studies, and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Advice for 20-30 year olds

(00:12:12) - Progress studies

(00:22:21) - Arc Institute

(00:34:27) - AI & Fast Grants

(00:43:46) - Stripe history

(00:55:44) - Stripe Climate

(01:01:39) - Beauty & APIs

(01:11:51) - Financial innards

(01:28:16) - Stripe culture & future

(01:41:56) - Virtues of big businesses

(01:51:41) - John



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-02-21
Link to episode

Tyler Cowen - Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth

It was a great pleasure speaking with Tyler Cowen for the 3rd time.

We discussed GOAT: Who is the Greatest Economist of all Time and Why Does it Matter?, especially in the context of how the insights of Hayek, Keynes, Smith, and other great economists help us make sense of AI, growth, animal spirits, prediction markets, alignment, central planning, and much more.

The topics covered in this episode are too many to summarize. Hope you enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - John Maynard Keynes

(00:17:16) - Controversy

(00:25:02) - Friedrich von Hayek

(00:47:41) - John Stuart Mill

(00:52:41) - Adam Smith

(00:58:31) - Coase, Schelling, & George

(01:08:07) - Anarchy

(01:13:16) - Cheap WMDs

(01:23:18) - Technocracy & political philosophy

(01:34:16) - AI & Scaling



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-01-31
Link to episode

Lessons from The Years of Lyndon Johnson by Robert Caro [Narration]

This is a narration of my blog post, Lessons from The Years of Lyndon Johnson by Robert Caro.

You can read the full post here: https://www.dwarkeshpatel.com/p/lyndon-johnson

Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes.



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-01-23
Link to episode

Will scaling work? [Narration]

This is a narration of my blog post, Will scaling work?.

You can read the full post here: https://www.dwarkeshpatel.com/p/will-scaling-work

Listen on Apple Podcasts, Spotify, or any other podcast platform. Follow me on Twitter for updates on future posts and episodes.



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2024-01-19
Link to episode

Jung Chang - Living through Cultural Revolution and the Crimes of Mao

A true honor to speak with Jung Chang.

She is the author of Wild Swans: Three Daughters of China (sold 15+ million copies worldwide) and Mao: The Unknown Story.

We discuss:

- what it was like growing up during the Cultural Revolution as the daughter of a denounced official

- why the CCP continues to worship the biggest mass murderer in human history.

- how exactly Communist totalitarianism was able to subjugate a billion people

- why Chinese leaders like Xi and Deng who suffered from the Cultural Revolution don't condemn Mao

- how Mao starved and killed 40 million people during The Great Leap Forward in order to exchange food for Soviet weapons

Wild Swans is the most moving book I've ever read. It was a real privilege to speak with its author.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Growing up during Cultural Revolution

(00:15:58) - Could officials have overthrown Mao?

(00:34:09) - Great Leap Forward

(00:48:12) - Modern support of Mao

(01:03:24) - Life as peasant

(01:21:30) - Psychology of communist society



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-11-29
Link to episode

Andrew Roberts - SV's Napoleon Cult, Why Hitler Lost WW2, Churchill as Applied Historian

Andrew Roberts is the world's best biographer and one of the leading historians of our time.

We discussed

* Churchill the applied historian,

* Napoleon the startup founder,

* why Nazi ideology cost Hitler WW2,

* drones, reconnaissance, and other aspects of the future of war,

* Iraq, Afghanistan, Korea, Ukraine, & Taiwan.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Post WW2 conflicts

(00:10:57) - Ukraine

(00:16:33) - How Truman Prevented Nuclear War

(00:22:49) - Taiwan

(00:27:15) - Churchill

(00:35:11) - Gaza & future wars

(00:39:05) - Could Hitler have won WW2?

(00:48:00) - Surprise attacks

(00:59:33) - Napoleon and startup founders

(01:14:06) - Roberts's insane productivity



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-11-22
Link to episode

Dominic Cummings - COVID, Brexit, & Fixing Western Governance

Here is my interview with Dominic Cummings on why Western governments are so dangerously broken, and how to fix them before an even more catastrophic crisis.

Dominic was Chief Advisor to the Prime Minister during COVID, and before that, director of Vote Leave (which masterminded the 2016 Brexit referendum).

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - One day in COVID?

(00:08:26) - Why is government broken?

(00:29:10) - Civil service

(00:38:27) - Opportunity wasted?

(00:49:35) - Rishi Sunak and Number 10 vs 11

(00:55:13) - Cyber, nuclear, bio risks

(01:02:04) - Intelligence & defense agencies

(01:23:32) - Bismarck & Lee Kuan Yew

(01:37:46) - How to fix the government?

(01:56:43) - Taiwan

(02:00:10) - Russia

(02:07:12) - Bismarck's career as an example of AI (mis)alignment

(02:17:37) - Odyssean education



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-11-15
Link to episode

Paul Christiano - Preventing an AI Takeover

Paul Christiano is the world's leading AI safety researcher. My full episode with him is out!

We discuss:

- Does he regret inventing RLHF, and is alignment necessarily dual-use?

- Why he has relatively modest timelines (40% by 2040, 15% by 2030),

- What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?

- Why he's leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon,

- His current research into a new proof system, and how this could solve alignment by explaining a model's behavior

- and much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Open Philanthropy

Open Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations.

For more information and to apply, please see the application: https://www.openphilanthropy.org/research/new-roles-on-our-gcr-team/

The deadline to apply is November 9th; make sure to check out those roles before they close.

Timestamps

(00:00:00) - What do we want the post-AGI world to look like?

(00:24:25) - Timelines

(00:45:28) - Evolution vs gradient descent

(00:54:53) - Misalignment and takeover

(01:17:23) - Is alignment dual-use?

(01:31:38) - Responsible scaling policies

(01:58:25) - Paul's alignment research

(02:35:01) - Will this revolutionize theoretical CS and math?

(02:46:11) - How Paul invented RLHF

(02:55:10) - Disagreements with Carl Shulman

(03:01:53) - Long TSMC but not NVIDIA



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-10-31
Link to episode

Shane Legg (DeepMind Founder) - 2028 AGI, New Architectures, Aligning Superhuman Models

I had a lot of fun chatting with Shane Legg - Founder and Chief AGI Scientist, Google DeepMind!

We discuss:

* Why he expects AGI around 2028

* How to align superhuman models

* What new architectures are needed for AGI

* Has DeepMind sped up capabilities or safety more?

* Why multimodality will be the next big landmark

* and much more

Watch the full episode on YouTube, or listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Timestamps

(0:00:00) - Measuring AGI

(0:11:41) - Do we need new architectures?

(0:16:26) - Is search needed for creativity?

(0:19:19) - Superhuman alignment

(0:29:58) - Impact of DeepMind on safety vs capabilities

(0:34:03) - Timelines

(0:41:24) - Multimodality



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-10-26
Link to episode

Grant Sanderson (3Blue1Brown) - Past, Present, & Future of Mathematics

I had a lot of fun chatting with Grant Sanderson (who runs the excellent 3Blue1Brown YouTube channel) about:

- Whether advanced math requires AGI

- What careers should mathematically talented students pursue

- Why Grant plans on doing a stint as a high school teacher

- Tips for self-teaching

- Does Gödel's incompleteness theorem actually matter

- Why are good explanations so hard to find?

- And much more

Watch on YouTube. Listen on Spotify, Apple Podcasts, or any other podcast platform. Full transcript here.

Timestamps

(0:00:00) - Does winning math competitions require AGI?

(0:08:24) - Where to allocate mathematical talent?

(0:17:34) - Grant's miracle year

(0:26:44) - Prehistoric humans and math

(0:33:33) - Why is a lot of math so new?

(0:44:44) - Future of education

(0:56:28) - Math helped me realize I wasn't that smart

(0:59:25) - Does Gödel's incompleteness theorem matter?

(1:05:12) - How Grant makes videos

(1:10:13) - Grant's math exposition competition

(1:20:44) - Self-teaching



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-10-12
Link to episode

Sarah C. M. Paine - WW2, Taiwan, Ukraine, & Maritime vs Continental Powers

I learned so much from Sarah Paine, Professor of History and Strategy at the Naval War College.

We discuss:

- how continental vs maritime powers think and how this explains Xi & Putin's decisions

- how a war with China over Taiwan would shake out and whether it could go nuclear

- why the British Empire fell apart, why China went communist, how Hitler and Japan could have coordinated to win WW2, and whether Japanese occupation was good for Korea, Taiwan and Manchuria

- plus other lessons from WW2, Cold War, and Sino-Japanese War

- how to study history properly, and why leaders keep making the same mistakes

If you want to learn more, check out her books - they're some of the best military history I've ever read.

Watch on YouTube, listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript.

Timestamps

(0:00:00) - Grand strategy

(0:11:59) - Death ground

(0:23:19) - WW1

(0:39:23) - Writing history

(0:50:25) - Japan in WW2

(0:59:58) - Ukraine

(1:10:50) - Japan/Germany vs Iraq/Afghanistan occupation

(1:21:25) - Chinese invasion of Taiwan

(1:51:26) - Communists & Axis

(2:08:34) - Continental vs maritime powers



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-10-04
Link to episode

Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress

Here is my conversation with Dario Amodei, CEO of Anthropic.

Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Introduction

(00:01:00) - Scaling

(00:15:46) - Language

(00:22:58) - Economic Usefulness

(00:38:05) - Bioterrorism

(00:43:35) - Cybersecurity

(00:47:19) - Alignment & mechanistic interpretability

(00:57:43) - Does alignment research require scale?

(01:05:30) - Misuse vs misalignment

(01:09:06) - What if AI goes well?

(01:11:05) - China

(01:15:11) - How to think about alignment

(01:31:31) - Is modern security good enough?

(01:36:09) - Inefficiencies in training

(01:45:53) - Anthropic?s Long Term Benefit Trust

(01:51:18) - Is Claude conscious?

(01:56:14) - Keeping a low profile



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-08-08
Link to episode

Andy Matuschak - Self-Teaching, Spaced Repetition, & Why Books Don?t Work

A few weeks ago, I sat beside Andy Matuschak to record how he reads a textbook.

Even though my own job is to learn things, I was shocked by how much more intense, painstaking, and effective his learning process was.

So I asked if we could record a conversation about how he learns and a bunch of other topics:

* How he identifies and interrogates his confusion (much harder than it seems, and requires an extremely effortful and slow pace)

* Why memorization is essential to understanding and decision-making

* How come some people (like Tyler Cowen) can integrate so much information without an explicit note-taking or spaced-repetition system

* How LLMs and video games will change education

* How independent researchers and writers can make money

* The balance of freedom and discipline in education

* Why we produce fewer von Neumann-like prodigies nowadays

* How multi-trillion dollar companies like Apple (where he was previously responsible for bedrock iOS features) manage to coordinate millions of different considerations (from the cost of different components to the needs of users, etc.) into new products designed by tens of thousands of people.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

To see Andy's process in action, check out the video where we record him studying a quantum physics textbook, talking aloud about his thought process, and using his memory system prototype to internalize the material.

You can check out his website and personal notes, and follow him on Twitter.

Cometeer

Visit cometeer.com/lunar for $20 off your first order on the best coffee of your life!

If you want to sponsor an episode, contact me at dwarkesh.sanjay.patel@gmail.com.

Timestamps

(00:00:52) - Skillful reading

(00:02:30) - Do people care about understanding?

(00:06:52) - Structuring effective self-teaching

(00:16:37) - Memory and forgetting

(00:33:10) - Andy's memory practice

(00:40:07) - Intellectual stamina

(00:44:27) - New media for learning (video, games, streaming)

(00:58:51) - Schools are designed for the median student

(01:05:12) - Is learning inherently miserable?

(01:11:57) - How Andy would structure his kids' education

(01:30:00) - The usefulness of hypertext

(01:41:22) - How computer tools enable iteration

(01:50:44) - Monetizing public work

(02:08:36) - Spaced repetition

(02:10:16) - Andy?s personal website and notes

(02:12:44) - Working at Apple

(02:19:25) - Spaced repetition 2



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-07-12
Link to episode

Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future

The second half of my 7 hour conversation with Carl Shulman is out!

My favorite part! And the one that had the biggest impact on my worldview.

Here, Carl lays out how an AI takeover might happen:

* AI can threaten mutually assured destruction from bioweapons,

* use cyber attacks to take over physical infrastructure,

* build mechanical armies,

* spread seed AIs we can never exterminate,

* offer tech and other advantages to collaborating countries, etc

Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:

* what is the far future best case scenario for humanity

* what it would look like to have AI make thousands of years of intellectual progress in a month

* how do we detect deception in superhuman models

* does space warfare favor defense or offense

* is a Malthusian state inevitable in the long run

* why markets haven't priced in explosive economic growth

* & much more

Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Catch part 1 here

Timestamps

(0:00:00) - Intro

(0:00:47) - AI takeover via cyber or bio

(0:32:27) - Can we coordinate against AI?

(0:53:49) - Human vs AI colonizers

(1:04:55) - Probability of AI takeover

(1:21:56) - Can we detect deception?

(1:47:25) - Using AI to solve coordination problems

(1:56:01) - Partial alignment

(2:11:41) - AI far future

(2:23:04) - Markets & other evidence

(2:33:26) - Day in the life of Carl Shulman

(2:47:05) - Space warfare, Malthusian long run, & other rapid fire



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-06-26
Link to episode

Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment

In terms of the depth and range of topics, this episode is the best I've done.

No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.

This part is about Carl's model of an intelligence explosion, which integrates everything from:

* how fast algorithmic progress & hardware improvements in AI are happening,

* what primate evolution suggests about the scaling hypothesis,

* how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,

* how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he's more optimistic than Eliezer.

The next part, which I'll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.

Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Intro

(00:01:32) - Intelligence Explosion

(00:18:03) - Can AIs do AI research?

(00:39:00) - Primate evolution

(01:03:30) - Forecasting AI progress

(01:34:20) - After human-level AGI

(02:08:39) - AI takeover scenarios



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-06-14
Link to episode

Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes

It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb.

We discuss

- similarities between AI progress & Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation)

- visiting starving former Soviet scientists during fall of Soviet Union

- whether Oppenheimer was a spy, & consulting on the Nolan movie

- living through WW2 as a child

- odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea

- how the US pulled off such a massive secret wartime scientific & industrial project

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - Oppenheimer movie

(0:06:22) - Was the bomb inevitable?

(0:29:10) - Firebombing vs nuclear vs hydrogen bombs

(0:49:44) - Stalin & the Soviet program

(1:08:24) - Deterrence, disarmament, North Korea, Taiwan

(1:33:12) - Oppenheimer as lab director

(1:53:40) - AI progress vs Manhattan Project

(1:59:50) - Living through WW2

(2:16:45) - Secrecy

(2:26:34) - Wisdom & war



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-05-23
Link to episode

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - TIME article

(0:09:06) - Are humans aligned?

(0:37:35) - Large language models

(1:07:15) - Can AIs help with alignment?

(1:30:17) - Society's response to AI

(1:44:42) - Predictions (or lack thereof)

(1:56:55) - Being Eliezer

(2:13:06) - Orthogonality

(2:35:00) - Could alignment be easier than we think?

(3:02:15) - What will AIs want?

(3:43:54) - Writing fiction & whether rationality helps you win



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-04-06
Link to episode

Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:

* time to AGI

* leaks and spies

* what's after generative models

* post AGI futures

* working with Microsoft and competing with Google

* difficulty of aligning superhuman AI

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00) - Time to AGI

(05:57) - What's after generative models?

(10:57) - Data, models, and research

(15:27) - Alignment

(20:53) - Post AGI Future

(26:56) - New ideas are overrated

(36:22) - Is progress inevitable?

(41:27) - Future Breakthroughs



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-03-27
Link to episode

Nat Friedman - Reading Ancient Scrolls, Open Source, & AI

It is said that the two greatest problems of history are how to account for the rise of Rome, and how to account for her fall. If so, then the volcanic ashes spewed by Mount Vesuvius in 79 AD - which entombed the cities of Pompeii and Herculaneum in southern Italy - hold history's greatest prize. For beneath those ashes lies the only salvageable library from the classical world.

Nat Friedman was the CEO of Github from 2018 to 2021. Before that, he started and sold two companies - Ximian and Xamarin. He is also the founder of AI Grant and California YIMBY.

And most recently, he has created and funded the Vesuvius Challenge - a million dollar prize for reading an unopened Herculaneum scroll for the very first time. If we can decipher these scrolls, we may be able to recover lost gospels, forgotten epics, and even missing works of Aristotle.

We also discuss the future of open source and AI, running Github and building Copilot, and why EMH is a lie.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - Vesuvius Challenge

(0:30:00) - Finding points of leverage

(0:37:39) - Open Source in AI

(0:40:32) - Github Acquisition

(0:50:18) - Copilot Origin Story

(1:11:47) - Nat.org

(1:32:56) - Questions from Twitter



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-03-22
Link to episode

Brett Harrison - FTX US Former President & HFT Veteran Speaks Out

I flew out to Chicago to interview Brett Harrison, the former President of FTX US and founder of Architect.

In his first longform interview since the fall of FTX, he speaks in great detail about his entire tenure there and about SBF's dysfunctional leadership. He talks about how the inner circle of Gary Wang, Nishad Singh, and SBF mismanaged the company, controlled the codebase, got distracted by media, and even threatened him over his letter of resignation.

In what was my favorite part of the interview, we also discuss his insights about the financial system from his decades of experience in the world's largest HFT firms.

And we talk about Brett's new startup, Architect, as well as the general state of crypto post-FTX.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:00) - Passive investing & HFT hacks

(0:08:30) - Is Finance Zero-Sum?

(0:18:38) - Interstellar Markets & Periodic Auctions

(0:23:10) - Hiring & Programming at Jane Street

(0:32:09) - Quant Culture

(0:42:10) - FTX - Meeting Sam, Joining FTX US

(0:58:20) - FTX - Accomplishments, Beginnings of Trouble

(1:08:11) - FTX - SBF's Dysfunctional Leadership

(1:26:53) - FTX - Alameda

(1:33:50) - FTX - Leaving FTX, SBF's Threats

(1:45:45) - FTX - Collapse

(1:53:10) - FTX - Lessons

(2:04:34) - FTX - Regulators, & FTX Mafia

(2:15:42) - Architect.xyz

(2:30:10) - Institutional Interest & Uses of Crypto



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-03-13
Link to episode

Marc Andreessen - AI, Crypto, 1000 Elon Musks, Regrets, Vulnerabilities, & Managerial Revolution

My podcast with the brilliant Marc Andreessen is out!

We discuss:

* how AI will revolutionize software

* whether NFTs are useless, & whether he should be funding flying cars instead

* a16z's biggest vulnerabilities

* the future of fusion, education, Twitter, venture, managerialism, & big tech

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(0:00:17) - Chewing glass

(0:04:21) - AI

(0:06:42) - Regrets

(0:08:51) - Managerial capitalism

(0:18:43) - 100 year fund

(0:22:15) - Basic research

(0:27:07) - $100b fund?

(0:30:32) - Crypto debate

(0:43:29) - Future of VC

(0:50:20) - Founders

(0:56:42) - a16z vulnerabilities

(1:01:28) - Monetizing Twitter

(1:07:09) - Future of big tech

(1:14:07) - Is VC Overstaffed?



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-02-01
Link to episode

Garett Jones - Immigration, National IQ, & Less Democracy

Garett Jones is an economist at George Mason University and the author of The Cultural Transplant, Hive Mind, and 10% Less Democracy.

This episode was fun and interesting throughout!

He explains:

* Why national IQ matters

* How migrants bring their values to their new countries

* Why we should have less democracy

* How the Chinese are an unstoppable global force for free markets

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Timestamps

(00:00:00) - Intro

(00:01:08) - Migrants Change Countries with Culture or Votes?

(00:09:15) - Impact of Immigrants on Markets & Corruption

(00:12:02) - 50% Open Borders?

(00:16:54) - Chinese are Unstoppable Capitalists 

(00:21:39) - Innovation & Immigrants 

(00:24:53) - Open Borders for Migrants Equivalent to Americans?

(00:28:54) - Let's Ignore Side Effects?

(00:30:25) - Are Poor Countries Stuck?

(00:32:26) - How Can Effective Altruists Increase National IQ

(00:39:13) - Clone a million John von Neumann?

(00:44:39) - Genetic Selection for IQ

(00:47:02) - Democracy, Fed, FDA, & Presidential Power

(00:49:42) - EU is a force for good?

(00:55:12) - Why is America More Libertarian Than Median Voter?

(00:56:19) - Is Ethnic Conflict a Short Run Problem?

(00:59:38) - Bond Holder Democracy

(01:04:57) - Mormonism

(01:08:52) - Garett Jones's Immigration System

(01:10:12) - Interviewing SBF



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-01-24
Link to episode

Lars Doucet - Progress, Poverty, Georgism, & Why Rent is Too Damn High

One of my best episodes ever. Lars Doucet is the author of Land is a Big Deal, a book about Georgism which has been praised by Vitalik Buterin, Scott Alexander, and Noah Smith. Sam Altman is the lead investor in his new startup, ValueBase.

Talking with Lars completely changed how I think about who creates value in the world and who leeches off it.

We go deep into the weeds on Georgism:

* Why do even the wealthiest places in the world have poverty and homelessness, and why do rents increase as fast as wages?

* Why are land-owners able to extract the profits that rightly belong to labor and capital?

* How would taxing the value of land alleviate speculation, NIMBYism, and income and sales taxes?

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Follow Lars on Twitter. Follow me on Twitter.

Timestamps

(00:00:00) - Intro

(00:01:11) - Georgism

(00:03:16) - Metaverse Housing Crises

(00:07:10) - Tax Leisure?

(00:13:53) - Speculation & Frontiers

(00:24:33) - Social Value of Search 

(00:33:13) - Will Georgism Destroy The Economy?

(00:38:51) - The Economics of San Francisco

(00:43:31) - Transfer from Landowners to Google?

(00:46:47) - Asian Tigers and Land Reform

(00:51:19) - Libertarian Georgism

(00:55:42) - Crypto

(00:57:16) - Transitioning to Georgism

(01:02:56) - Lars's Startup & Land Assessment 

(01:15:12) - Big Tech

(01:20:50) - Space

(01:23:05) - Copyright

(01:25:02) - Politics of Georgism

(01:33:10) - Someone Is Always Collecting Rents



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-01-09
Link to episode

Holden Karnofsky - Transformative AI & Most Important Century

Holden Karnofsky is the co-CEO of Open Philanthropy and co-founder of GiveWell. He is also the author of one of the most interesting blogs on the internet, Cold Takes.

We discuss:

* Are we living in the most important century?

* Does he regret OpenPhil's 30 million dollar grant to OpenAI in 2016?

* How does he think about AI, progress, digital people, & ethics?

Highly recommend!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Timestamps

(0:00:00) - Intro

(0:00:58) - The Most Important Century

(0:06:44) - The Weirdness of Our Time

(0:21:20) - The Industrial Revolution 

(0:35:40) - AI Success Scenario

(0:52:36) - Competition, Innovation, & AGI Bottlenecks

(1:00:14) - Lock-in & Weak Points

(1:06:04) - Predicting the Future

(1:20:40) - Choosing Which Problem To Solve

(1:26:56) - $30M OpenAI Investment

(1:30:22) - Future Proof Ethics

(1:37:28) - Integrity vs Utilitarianism

(1:40:46) - Bayesian Mindset & Governance

(1:46:56) - Career Advice



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2023-01-03
Link to episode

Bethany McLean - Enron, FTX, 2008, Musk, Frauds, & Visionaries

This was one of my favorite episodes ever.

Bethany McLean was the first reporter to question Enron's earnings, and she has written some of the best finance books out there.

We discuss:

* The astounding similarities between Enron & FTX,

* How visionaries are just frauds who succeed (and which category describes Elon Musk),

* What caused 2008, and whether we are headed for a new crisis,

* Why there are too many venture capitalists and not enough short sellers,

* And why history keeps repeating itself.

McLean is a contributing editor at Vanity Fair (see her articles here) and the author of The Smartest Guys in the Room, All the Devils Are Here, Saudi America, and Shaky Ground.

Watch on YouTube. Listen on Spotify, Apple Podcasts, or your favorite podcast platform.

Follow McLean on Twitter. Follow me on Twitter for updates on future episodes.

Timestamps

(0:04:37) - Is Fraud Over?

(0:11:22) - Shortage of Shortsellers

(0:19:03) - Elon Musk - Fraud or Visionary?

(0:23:00) - Intelligence, Fake Deals, & Culture

(0:33:40) - Rewarding Leaders for Long Term Thinking

(0:37:00) - FTX Mafia?

(0:40:17) - Is Finance Too Big?

(0:44:09) - 2008 Collapse, Fannie & Freddie

(0:49:25) - The Big Picture

(1:00:12) - Frackers Vindicated?

(1:03:40) - Rating Agencies

(1:07:05) - Lawyers Getting Rich Off Fraud

(1:15:09) - Are Some People Fundamentally Deceptive?

(1:19:25) - Advice for Big Picture Thinkers



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2022-12-21
Link to episode

Nadia Asparouhova - Tech Elites, Democracy, Open Source, & Philanthropy

Nadia Asparouhova is currently researching what the new tech elite will look like at nadia.xyz. She is also the author of Working in Public: The Making and Maintenance of Open Source Software.

We talk about how:

* American philanthropy has changed from Rockefeller to Effective Altruism

* SBF represented the Davos elite rather than the Silicon Valley elite,

* Open source software reveals the limitations of democratic participation,

* & much more.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Timestamps

(0:00:00) - Intro

(0:00:26) - SBF was Davos elite

(0:09:38) - Gender sociology of philanthropy

(0:16:30) - Was Shakespeare an open source project?

(0:22:00) - Need for charismatic leaders

(0:33:55) - Political reform

(0:40:30) - Why didn't previous wealth booms lead to new philanthropic movements?

(0:53:35) - Creating a 10,000 year endowment

(0:57:27) - Why do institutions become left wing?

(1:02:27) - Impact of billionaire intellectual funding

(1:04:12) - Value of intellectuals

(1:08:53) - Climate, AI, & Doomerism

(1:18:04) - Religious philanthropy



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
2022-12-15
Link to episode