Theo Jaffee Podcast
Theo Jaffee
21 episodes
5 days ago
Deep conversations with brilliant people.
Technology
Episodes (20/21)
Luke Drago - AGI, The Intelligence Curse, and Hip-Hop

Luke Drago is the co-author of The Intelligence Curse and previously researched AI governance and economics at BlueDot Impact, served on the leadership team at Encode, and studied history and politics at Oxford.


0:00 - Intro

0:54 - Overview of the intelligence curse

2:10 - Why are the doomers wrong?

4:37 - Why are the optimists wrong?

7:00 - Do people really have power now?

13:33 - Why would powerful people’s values change?

18:31 - Why do we take care of dependents?

21:43 - Why should we want democracy in an AI future?

24:23 - Why fear rentier states?

32:45 - What powerful people should do right now

39:33 - Diffusion time and bottlenecks

44:20 - Why should we care if China achieves AGI first?

46:25 - The jagged frontier

49:16 - Why AGI society could be static

51:10 - Restricting AI rights

56:34 - What should we be excited for?

59:28 - Music

1:30:41 - Building God

1:32:46 - More music


The Intelligence Curse: https://intelligence-curse.ai/

Luke’s Twitter: https://x.com/luke_drago_

Luke’s Substack: https://lukedrago.substack.com/


Luke’s top 10 albums:

- A Fever You Can't Sweat Out by Panic! at the Disco (2005)

- Channel Orange by Frank Ocean (2012)

- Random Access Memories by Daft Punk (2013)

- Yeezus by Kanye West (2013)

- DAMN. by Kendrick Lamar (2017)

- DAYTONA by Pusha T (2018)

- IGOR by Tyler, the Creator (2019)

- I Didn't Mean to Haunt You by Quadeca (2022)

- College Park by Logic (2023)

- Atavista by Childish Gambino (2024)


More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf


My Twitter: https://x.com/theojaffee

My Substack: https://www.theojaffee.com

6 months ago
1 hour 47 minutes 18 seconds

Alok Singh - AI, Math, Philosophy, and Erewhon

Alok Singh researches provable AI safety via formal verification using Lean at Max Tegmark’s Beneficial AI Foundation, and writes about mathematics at alok.github.io.

0:00 - Intro

1:12 - Typing

8:45 - Elon’s demo day

22:42 - Animation, discrete vs continuous

29:04 - Number systems

35:26 - Nonstandard analysis

43:04 - Reasoning models and o3

50:45 - Fiction

55:48 - o1 and Linguistics

58:50 - Hyperfinite sets

1:11:58 - AI for math

1:16:01 - The field with one element

1:23:17 - Lean

1:31:53 - Lean for formally verifying superintelligence

1:36:03 - Ayn Rand

1:47:46 - Erewhon

1:57:56 - Proto-Indo-European

2:03:18 - More Erewhon

2:14:41 - Butler and Kaczynski

2:50:19 - Outro

Alok’s Website: https://alok.github.io/

Alok’s Twitter: https://x.com/TheRevAlokSingh

Beneficial AI Foundation: http://beneficialaifoundation.org/

Lean: https://lean-lang.org/

Transcript: https://www.theojaffee.com/p/podcast-alok-singh

More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf

My Twitter: https://x.com/theojaffee

My Substack: https://www.theojaffee.com

6 months ago
2 hours 50 minutes 53 seconds

#19: Samo Burja - Superintelligence and History, Ideology, and 21st Century Philosophy

Samo Burja is a writer, historian, and political scientist, the founder of civilizational consulting firm Bismarck Analysis, and the editor-in-chief of governance futurism magazine Palladium.


Chapters

0:00 - Intro

1:06 - Implications of OpenAI o1

10:21 - Implications of superintelligence on history

35:06 - Palladium, Chinese technocracy, ideology, and media

1:00:44 - Best ideas, philosophers, and works of the past 20-30 years


Links

Samo’s Website: https://samoburja.com/

Bismarck Analysis: https://www.bismarckanalysis.com/

Palladium: https://www.palladiummag.com/

Bismarck’s Twitter: https://x.com/bismarckanlys

Palladium’s Twitter: https://x.com/palladiummag

Samo’s Twitter: https://x.com/samoburja

Transcript: https://www.theojaffee.com/p/19-samo-burja


More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf

My Twitter: https://x.com/theojaffee

My Substack: https://www.theojaffee.com

1 year ago
1 hour 6 minutes 51 seconds

#18: Alec Stapp - The Institute for Progress, American Dynamism, and Fixing Governance

Alec Stapp is the co-founder and co-CEO of the Institute for Progress, a non-profit think tank dedicated to accelerating scientific, technological, and industrial progress.


Chapters

0:00 - Intro

1:13 - Why can’t smart people fix the Bay Area?

3:38 - How to get normal people on board with IFP

10:23 - How to get smart people into governance

15:55 - How IFP chose its priorities

21:56 - How will IFP avoid mission creep?

24:17 - How important is academia today?

26:03 - Would Alec press a button to fully open borders?

29:45 - How prepared are we for another pandemic?

33:16 - Why don’t easy wins happen?

36:17 - Is Biden’s spending good?

40:51 - How important is the repeal of Chevron deference?

43:23 - Are land value taxes good?

45:01 - “The Project” for AGI and AI Alignment

48:19 - Is globalism dying?

50:32 - Overrated or Underrated?

59:28 - The most overrated issue

1:00:26 - The most underrated issue


Links

Institute for Progress: ifp.org

  • “Progress Is A Policy Choice” founding essay by Alec Stapp and Caleb Watney: https://ifp.org/progress-is-a-policy-choice/

  • “How to Reuse the Operation Warp Speed Model” by Arielle D’Souza: https://ifp.org/how-to-reuse-the-operation-warp-speed-model/

  • “How to Be a Policy Entrepreneur in the American Vetocracy” by Alec Stapp: https://ifp.org/how-to-be-a-policy-entrepreneur-in-the-american-vetocracy/

  • “To Speed Up Scientific Progress, We Need to Understand Science Policy”: https://ifp.org/to-speed-up-scientific-progress-we-need-to-understand-science-policy/

  • “But Seriously, How Do We Make an Entrepreneurial State?” by Caleb Watney: https://ifp.org/how-do-we-make-an-entrepreneurial-state/

  • Construction Physics newsletter by Brian Potter: https://constructionphysics.substack.com/

  • Macroscience newsletter by Tim Hwang: https://www.macroscience.org/

  • Statecraft newsletter by Santi Ruiz: https://www.statecraft.pub/

IFP’s Twitter: x.com/IFP

Alec’s Twitter: x.com/AlecStapp

Transcript: https://www.theojaffee.com/p/18-alec-stapp


More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf

My Twitter: https://x.com/theojaffee

My Substack: https://www.theojaffee.com

1 year ago
1 hour 3 minutes 6 seconds

#17: Casey Handmer - Terraform, solar, space, Hyperloop, and how to think

Casey Handmer is the founder and CEO of Terraform Industries, and a physicist, immigrant, pilot, dad, and solar enthusiast with a Caltech physics PhD; he previously worked as a levitation engineer at Hyperloop One and a software system architect at NASA JPL.


0:00 - Intro

1:40 - Why don’t other people do what Terraform does?

2:51 - Why is solar better than nuclear fusion?

5:27 - Could carbon emissions actually be good?

8:38 - Why isn’t anyone stopping global warming with sulfur?

13:20 - Can America build something like Terraform?

20:53 - Solar and nuclear

23:10 - Why not terraform Venus instead of Mars?

30:47 - Why did Casey work at NASA instead of SpaceX?

37:18 - Why is Elon the only person with multiple huge companies?

39:59 - Why didn’t the Hyperloop work?

42:26 - Tile the desert with solar

46:03 - How does solar change geopolitics?

48:30 - How does Casey manage his time?

53:24 - How do you develop first principles thinking?

56:28 - Favorite place Casey has traveled to

59:21 - Outro


Casey’s Blog: https://caseyhandmer.wordpress.com/

- You Should Be Working On Hardware: https://caseyhandmer.wordpress.com/2023/08/25/you-should-be-working-on-hardware/

- The solar industrial revolution is the biggest investment opportunity in history: https://caseyhandmer.wordpress.com/2024/05/22/the-solar-industrial-revolution-is-the-biggest-investment-opportunity-in-history/

- Future of Energy Reading List: https://caseyhandmer.wordpress.com/2023/10/19/future-of-energy-reading-list/

- Elon Musk Is Not Understood: https://caseyhandmer.wordpress.com/2024/01/02/elon-musk-is-not-understood/

- Why High Speed Rail Hasn’t Caught On: https://caseyhandmer.wordpress.com/2022/10/11/why-high-speed-rail-hasnt-caught-on/

Casey’s Website: http://caseyhandmer.com/

Casey’s Twitter: https://x.com/cjhandmer

Terraform Industries: https://terraformindustries.com/

Terraform Blog: https://terraformindustries.wordpress.com/

- Scaling Carbon Capture: https://terraformindustries.wordpress.com/2022/07/24/scaling-carbon-capture/

- Terraform Industries Whitepaper: https://terraformindustries.wordpress.com/2022/07/24/terraform-industries-whitepaper/

- Terraform Industries Whitepaper 2.0: https://terraformindustries.wordpress.com/2023/01/09/terraform-industries-whitepaper-2-0/

- Permitting Reform or Death: https://terraformindustries.wordpress.com/2023/11/10/permitting-reform-or-death/


Transcript: https://www.theojaffee.com/p/17-casey-handmer


More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf


My Twitter: https://x.com/theojaffee

My Substack: https://www.theojaffee.com

1 year ago
1 hour 15 seconds

#16: Stephen Grugett and Austin Chen - Manifold, Manifund, Manifest, prediction markets, and EA

Stephen Grugett and Austin Chen are co-founders of Manifold Markets, an online play-money prediction market and competitive forecasting platform. Stephen currently serves on the company’s management team, while Austin recently stepped down to start Manifund, an open-source grant program. This video is not sponsored in any way by Manifold, Manifund, or Manifest - I just think they’re cool.


Chapters

0:00 - Intro

Part 1: Stephen Grugett

1:20 - Are prediction markets actually bad?

4:11 - Would Manifold use real money if allowed?

5:24 - How Manifold would use real money if allowed

6:08 - Would Manifold use crypto if allowed?

7:17 - Can you ever get long-term returns from prediction markets?

10:01 - Would subsidies ruin markets?

11:23 - Why Manifold beat real money on predicting the 2022 elections

16:00 - Would Stephen implement futarchy?

19:54 - Manifold Love

23:22 - Bet on Love

26:21 - Why Manifold is miscalibrated

29:06 - Insider trading and market manipulation

31:42 - Is it easier to make money on prediction markets or normal markets?

32:37 - Good prediction market UI

34:35 - Why should people trust market creators?

35:34 - Derivatives on prediction markets

37:20 - Stephen’s ginseng adventures

40:55 - Audience Q: why don’t Americans consume American ginseng?

41:35 - Audience Q: cancel culture and Richard Hanania

45:50 - Audience Q: why aren’t there more institutional investors in prediction markets?

47:33 - Audience Q: can journalists help resolve markets?

49:45 - Audience Q: is there any role for sweepstakes other than regulatory arbitrage?

Part 2: Austin Chen

51:14 - Are prediction markets insufficiently powerful?

54:22 - What prediction markets can do if not futarchy

55:36 - How Manifund was designed

59:35 - How Manifund chooses regrantors

1:00:49 - Why donate to Manifund?

1:03:09 - Does Dustin Moskovitz have too much power over EA?

1:04:29 - What Manifund would do differently with more money

1:05:52 - How Manifest gets so many interesting people

1:09:10 - How much did SBF’s fall damage EA?

1:10:04 - OpenAI

1:11:54 - Is this decade more important than other decades?

1:13:01 - Why aren’t more philanthropic organizations open?

1:15:35 - Manifund’s best projects

1:17:25 - How short AGI timelines would affect Manifund

1:19:21 - Audience Q: how Manifold ships fast

1:22:11 - Outro

Links

Manifold: https://manifold.markets

Manifund: https://manifund.com

Manifest: https://www.manifest.is

Manifold’s Twitter: https://x.com/manifoldmarkets

Manifund’s Twitter: https://x.com/manifund

Austin’s Twitter: https://x.com/akrolsmir

Transcript: https://www.theojaffee.com/p/16-stephen-grugett-and-austin-chen

More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf

My Twitter: https://x.com/theojaffee

My Substack: https://www.theojaffee.com

1 year ago
1 hour 23 minutes 12 seconds

#15: Perry Metzger - Extropians, Nanotech, AI Optimism, and the Alliance for the Future

Perry Metzger is an entrepreneur, technology manager, consultant, computer scientist, early proponent of extropianism and futurism, and co-founder and chairman of the Alliance for the Future.


0:00 - Intro

0:47 - How Perry got into extropianism

7:04 - Is extropianism the same as e/acc?

9:38 - Why extropianism died out

12:59 - Eliezer Yudkowsky

17:19 - Perry and Eliezer’s Twitter beef

19:46 - TESCREAL, Baptists and bootleggers

22:34 - Why Eliezer became a doomer

28:39 - Is singularitarianism eschatology?

37:51 - Will nanotech kill us?

45:51 - What if the offense-defense balance favors offense?

53:03 - Instrumental convergence and agency

1:05:35 - How Alliance for the Future was founded

1:12:08 - Decels

1:15:52 - China

1:25:52 - Why a nonprofit lobbying firm?

1:28:36 - How to convince legislators

1:32:20 - Can the government do anything good on AI?

1:39:09 - The future of Alliance for the Future

1:44:22 - Outro


Perry’s Twitter: https://x.com/perrymetzger

AFTF’s Twitter: https://x.com/aftfuture

AFTF’s Manifesto: https://www.affuture.org/manifesto/

An Archaeological Dig Through The Extropian Archives: https://mtabarrok.com/extropia-archaeology

Alliance for the Future: https://www.affuture.org/

Donate to AFTF: affuture.org/donate

Sci-Fi Short Film “Slaughterbots”: https://www.youtube.com/watch?v=O-2tpwW0kmU


Transcript: https://www.theojaffee.com/p/15-perry-metzger


More Episodes

YouTube: https://tinyurl.com/57jr42wk

Spotify: https://tinyurl.com/mrxkkhb4

Apple Podcasts: https://tinyurl.com/yck8pnmf

My Twitter: https://x.com/theojaffee

My Substack: https://www.theojaffee.com

1 year ago
1 hour 44 minutes 53 seconds

#14: Robin Hanson - Cultural Drift, Ems, Elephants, Institutions, and The Future

Robin Hanson is a professor of economics at George Mason University, the author of The Age of Em and The Elephant in the Brain, the writer of the blog Overcoming Bias, and one of the most interesting polymaths alive today.


0:00 - Intro

1:24 - Mathematical models and grabby aliens

9:11 - Will we run out of value in the future?

12:23 - Kurzweil’s Law of Accelerating Returns

14:29 - Posadism

17:53 - Moral progress and Whig history

20:29 - Will there be a trad resurgence?

23:00 - Will Israel’s ultra-Orthodox problem globalize?

25:39 - Why will fertility rate keep dropping?

30:14 - Is declining fertility solvable technologically?

32:20 - What is wokeness? Has it peaked?

35:02 - Will virtualization make society more multicultural?

39:30 - How do institutions coordinate so well?

42:50 - Will ems care about death?

46:16 - Personal identity and death

49:30 - How much of Age of Em is applicable to LLMs?

51:09 - Why we shouldn’t worry about AI risk

55:40 - What if people don’t see AIs as their descendants?

1:00:41 - Other future tech deep dives

1:02:43 - Our very long-run descendants

1:06:08 - Time and risk preferences

1:08:34 - Wouldn’t ems be selected for docility?

1:11:24 - How Robin got involved in rationalism

1:13:22 - Girls getting the “ick”

1:16:56 - Have humans evolved since forager times?

1:18:28 - Cultural evolution

1:20:30 - Culture and prestige

1:22:49 - Why medicine in the US is bad

1:25:54 - Is academia the best truth-seeking institution in society?

1:28:52 - Peer review

1:31:13 - Which institutions are actually good?

1:32:33 - Why universities are all the same

1:37:40 - Bitcoin and speculation

1:46:44 - Demarchy

1:50:03 - Futarchy

1:53:56 - Applying prediction markets to dating apps

1:57:38 - The broadest thinkers and books in the world

2:00:59 - How Robin balances his many interests

2:01:58 - Teaching

2:03:12 - Outro


Robin’s Homepage: https://mason.gmu.edu/~rhanson/home.html

Overcoming Bias: https://www.overcomingbias.com/

Robin’s Twitter: https://twitter.com/robinhanson

Grabby Aliens: https://grabbyaliens.com/

Age of Em: https://archive.is/LMrr9

The Elephant in the Brain: https://www.elephantinthebrain.com/

Beware Cultural Drift: https://quillette.com/2024/04/11/beware-cultural-drift/

More Episodes

Playlist: https://www.youtube.com/watch?v=sdJRQ6924HY&list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=b4001cf2e8a5453f

Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

My Twitter: https://x.com/theojaffee

My Substack: https://www.theojaffee.com

1 year ago
2 hours 3 minutes 52 seconds

#13: Nick Simmons - All About Urbit

Nick Simmons is a founding partner at Octu Ventures, a member-driven venture DAO investing in teams building on Urbit. Urbit is a new computing paradigm that provides complete ownership of your digital world.

0:00 - Intro

1:19 - What actually is Urbit?

5:43 - Urbit ID and Schelling points

9:05 - Why Urbit?

10:23 - Roko Mijic on Urbit vs. TikTok and Crypto

17:32 - Urbit vs. Worldcoin

22:26 - Niche or growth model?

28:50 - Why haven’t Urbit star prices recovered since 2021?

33:13 - Intrinsic value of Urbit address space

36:37 - Urbit as digital land

42:51 - Urbit and DeFi

45:42 - Personal AI on Urbit

51:35 - Urbit-native hardware

55:58 - Urbit design and aesthetics

1:02:15 - Outro

Urbit: https://urbit.org/

Octu: https://octu.ventures/

Urbit Blog: https://urbit.org/blog

“Creating Sigils”: https://urbit.org/blog/creating-sigils

“On Christopher Alexander”: https://urbit.org/blog/on-christopher-alexander

Nick’s Twitter: https://x.com/Halikaarn1an

More Episodes

Playlist: https://www.youtube.com/watch?v=sdJRQ6924HY&list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=b4001cf2e8a5453f

Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

My Twitter: https://x.com/theojaffee
My Substack: https://www.theojaffee.com/

1 year ago
1 hour 4 minutes 3 seconds

#12: Paul Buchheit - Creating Gmail, Fixing Google, Narrative Understanding

Paul Buchheit is a programmer and entrepreneur who joined Google as its 23rd employee. He created Gmail, developed the first prototype of Google AdSense, and suggested the company’s motto, “don’t be evil”. He later co-founded FriendFeed and served as a managing partner at Y Combinator.

0:00 - Intro

1:15 - Issues with Google

3:47 - AI risk

5:21 - AI centralization and decentralization

8:01 - Open-sourcing frontier AI

9:59 - Paul’s Predictions

14:28 - Centralization, free speech, and censorship

24:16 - Trends in ideology

32:00 - Freeing people of narratives

35:49 - Alignment

39:06 - Startups and YC in 2024

50:30 - Email and communication interfaces

Paul’s Twitter: https://twitter.com/paultoo

Paul’s Blog: https://paulbuchheit.blogspot.com/

More Episodes

Playlist: https://www.youtube.com/watch?v=sdJRQ6924HY&list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=b4001cf2e8a5453f

Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

My Twitter: https://twitter.com/theojaffee

My Substack: https://www.theojaffee.com/

1 year ago
59 minutes 9 seconds

#11: Bryan Caplan - You Will Not Stampede Me: Essays on Non-Conformism

Bryan Caplan is a professor of economics at George Mason University, research fellow at the Mercatus Center, adjunct scholar at the Cato Institute, writer at EconLib and Bet On It, and best-selling author of eight books, including You Will Not Stampede Me: Essays on Non-Conformism, the subject of this episode.


0:00 - Intro

2:04 - The Next Crusade

3:44 - Moderating X

6:11 - Inventing Slippery Slopes

8:04 - Right-Wing Antiwokes

10:20 - Nonconformism and Asperger’s

12:02 - Making society less conformist

16:44 - The rationality community

20:30 - Polyamory

23:28 - Caplan vs. Yudkowsky on methods of rationality

26:40 - Updating on AI risk

29:35 - Checking your nonconformity

31:10 - Making LinkedIn not suck

33:53 - The George Mason economics department

38:35 - Does tenure still matter?

40:03 - Improving education

46:50 - Should people living under totalitarianism conform?

49:30 - Natalism and birth rates in Israel

51:19 - Hedonic adaptation in the age of AI

53:52 - Should we abolish the FDA?

57:15 - Being a prolific writer

1:00:30 - Bryan’s writing advice

1:02:35 - Outro


Bryan’s Twitter: https://x.com/bryan_caplan

Bryan’s Blog, Bet On It: https://www.betonit.ai/

Buy You Will Not Stampede Me: https://www.amazon.com/You-Will-Not-Stampede-Non-Conformism/dp/B0CQPJ6DCT

More Episodes

Playlist: https://www.youtube.com/watch?v=sdJRQ6924HY&list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=b4001cf2e8a5453f

Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

My Twitter: https://twitter.com/theojaffee

My Substack: https://www.theojaffee.com/

1 year ago
1 hour 3 minutes 5 seconds

#10: Liron Shapira - AI doom, FOOM, rationalism, and crypto

Liron Shapira is an entrepreneur, angel investor, and CEO of counseling startup Relationship Hero. He’s also a rationalist, advisor for the Machine Intelligence Research Institute and Center for Applied Rationality, and a consistently candid AI doom pointer-outer.

  • Liron’s Twitter: https://twitter.com/liron

  • Liron’s Substack: https://lironshapira.substack.com/

  • Liron’s old blog, Bloated MVP: https://www.bloatedmvp.com

TJP LINKS:

  • TRANSCRIPT: https://www.theojaffee.com/p/10-liron-shapira
  • YouTube: https://youtu.be/YfEcAtHExFM

  • Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

  • RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

  • Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

  • My Twitter: https://x.com/theojaffee

  • My Substack: https://www.theojaffee.com

CHAPTERS:

  • Intro (0:00)

  • Non-AI x-risks (0:53)

  • AI non-x-risks (3:00)

  • p(doom) (5:21)

  • Liron vs. Eliezer (12:18)

  • Why might doom not happen? (15:42)

  • Elon Musk and AGI (17:12)

  • Alignment vs. Governance (20:24)

  • Scott Alexander lowering p(doom) (22:32)

  • Human minds vs ASI minds (28:01)

  • Vitalik Buterin and d/acc (33:30)

  • Carefully bootstrapped alignment (35:22)

  • GPT vs AlphaZero (41:55)

  • Belrose & Pope AI Optimism (43:17)

  • AI doom meets daily life (57:57)

  • Israel vs. Hamas (1:02:17)

  • Rationalism (1:06:15)

  • Crypto (1:14:50)

  • Charlie Munger and Richard Feynman (1:22:12)

1 year ago
1 hour 25 minutes 18 seconds

#9: Dwarkesh Patel - Podcasting, AI, Talent, and Fixing Government

Dwarkesh Patel is the host of the Dwarkesh Podcast, where he interviews intellectuals, scientists, historians, economists, and founders about their big ideas. He does deep research and asks great questions. Past podcast guests include billionaire entrepreneur and investor Marc Andreessen, economist and polymath Tyler Cowen, and OpenAI Chief Scientist Ilya Sutskever. Dwarkesh has been recommended by Jeff Bezos, Paul Graham, and me.

  • Dwarkesh Podcast (and transcripts): https://www.dwarkeshpatel.com/podcast

  • Dwarkesh Podcast on YouTube: https://www.youtube.com/@DwarkeshPatel

  • Dwarkesh Podcast on Spotify: https://open.spotify.com/show/4JH4tybY1zX6e5hjCwU6gF

  • Dwarkesh Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/dwarkesh-podcast/id1516093381

  • Dwarkesh’s Twitter: https://twitter.com/dwarkesh_sp

  • Dwarkesh’s Blog: https://www.dwarkeshpatel.com/s/writing

PODCAST LINKS:

  • Video Transcript: https://www.theojaffee.com/p/9-dwarkesh-patel

  • Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb

  • Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

  • RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

  • Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

  • My Twitter: https://x.com/theojaffee

  • My Substack: https://www.theojaffee.com

CHAPTERS:

Intro (0:00)

OpenAI drama (0:50)

Learning methods (4:10)

Growing the podcast (7:38)

Improving the podcast (17:03)

Contra Marc Andreessen on AI risk (24:18)

How will AI affect podcasts? (26:31)

AI alignment (32:08)

Dwarkesh’s guests (38:04)

Is Eliezer Yudkowsky right? (41:58)

More on the Dwarkesh Podcast (46:01)

Other great podcasts (50:06)

Nanobots, foom, and doom (56:01)

Great Twitter poasters (1:01:59)

Rationalism and other factions (1:05:44)

Why hasn’t Marxism died? (1:15:27)

Where to allocate talent (1:18:51)

Sam Bankman-Fried (1:22:22)

Why is Elon Musk so successful? (1:29:07)

How relevant is human talent with AGI soon? (1:35:07)

Is government actually broken? (1:36:35)

How should we fix Congress? (1:40:50)

Dwarkesh’s favorite part of podcasting (1:46:46)

1 year ago
1 hour 48 minutes 27 seconds

#8: Scott Aaronson - Quantum computing, AI watermarking, Superalignment, complexity, and rationalism

Scott Aaronson is the Schlumberger Chair of Computer Science and Director of the Quantum Information Center at the University of Texas at Austin. Previously, he got his bachelor’s in CS from Cornell, his PhD in complexity theory at UC Berkeley, held postdocs at Princeton and Waterloo, and taught at MIT. Currently, he’s on leave to work on OpenAI’s Superalignment team.

  • Scott’s blog, Shtetl-Optimized: https://www.scottaaronson.blog

  • Scott’s website: https://www.scottaaronson.com

PODCAST LINKS:

  • Video Transcript: https://www.theojaffee.com/p/8-scott-aaronson

  • Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb

  • Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

  • RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

  • Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

  • My Twitter: https://x.com/theojaffee

  • My Substack: https://www.theojaffee.com

CHAPTERS:

Intro (0:00)

Background (0:59)

What Quantum Computers Can Do (16:07)

P=NP (21:57)

Complexity Theory (28:07)

David Deutsch (33:49)

AI Watermarking and CAPTCHAs (44:15)

Alignment By Default (56:41)

Cryptography in AI (1:02:12)

OpenAI Superalignment (1:10:29)

Twitter (1:20:27)

Rationalism (1:24:50)

1 year ago
1 hour 29 minutes 27 seconds

#7: Nora Belrose - EleutherAI, Interpretability, Linguistics, and ELK

Nora Belrose is the Head of Interpretability at EleutherAI, and a noted AI optimist.

  • Nora’s Twitter: https://x.com/norabelrose

  • EleutherAI Website: https://www.eleuther.ai/

  • EleutherAI Discord: https://discord.gg/zBGx3azzUn

PODCAST LINKS:

  • Video Transcript: https://www.theojaffee.com/p/7-nora-belrose

  • Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb

  • Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

  • RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

  • Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

  • My Twitter: https://x.com/theojaffee

  • My Substack: https://www.theojaffee.com

CHAPTERS:

Intro (0:00)

EleutherAI (0:32)

Optimism (8:02)

Linguistics (22:27)

What Should AIs Do? (32:01)

Regulation (43:44)

Future Vibes (53:56)

Anthropic Polysemanticity (1:05:05)

More Interpretability (1:19:52)

Eliciting Latent Knowledge (1:44:44)

2 years ago
2 hours 24 minutes

#6: Razib Khan - Genetics, ancient history, rationalism, IQ

Razib Khan is a geneticist, the CXO and CSO of a biotech startup, and a writer and podcaster with interests in genetics, genomics, evolution, history, and politics.

  • Razib’s Twitter: https://x.com/razibkhan

  • Razib’s Website: https://www.razib.com

  • Razib’s Substack (Unsupervised Learning): https://www.razibkhan.com

PODCAST LINKS:

  • Video Transcript: https://www.theojaffee.com/p/5-quintin-pope

  • Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb

  • Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

  • RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

  • Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

  • My Twitter: https://x.com/theojaffee

  • My Substack: https://www.theojaffee.com

CHAPTERS:

Intro (0:00)

Genrait (0:37)

Genetics and Memetics (4:31)

Domestication (13:48)

Ancient History (22:48)

TESCREALism (30:02)

Transhumanism (53:05)

IQ (1:02:26)

2 years ago
1 hour 16 minutes 35 seconds

#5: Quintin Pope - AI alignment, machine learning, failure modes, and reasons for optimism

Quintin Pope is a machine learning researcher focusing on natural language modeling and AI alignment. Among alignment researchers, Quintin stands out for his optimism. He believes that AI alignment is far more tractable than it seems, and that we appear to be on a good path to making the future great. On LessWrong, he's written one of the most popular posts of the last year, “My Objections To ‘We're All Gonna Die with Eliezer Yudkowsky’”, as well as many other highly upvoted posts on various alignment papers, and on his own theory of alignment, shard theory.

  • Quintin’s Twitter: https://twitter.com/QuintinPope5

  • Quintin’s LessWrong profile: https://www.lesswrong.com/users/quintin-pope

  • My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”: https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky

  • The Shard Theory Sequence: https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX

  • Quintin’s Alignment Papers Roundup: https://www.lesswrong.com/s/5omSW4wNKbEvYsyje

  • Evolution provides no evidence for the sharp left turn: https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn

  • Deep Differentiable Logic Gate Networks: https://arxiv.org/abs/2210.08277

  • The Hydra Effect: Emergent Self-repair in Language Model Computations: https://arxiv.org/abs/2307.15771

  • Deep learning generalizes because the parameter-function map is biased towards simple functions: https://arxiv.org/abs/1805.08522

  • Bridging RL Theory and Practice with the Effective Horizon: https://arxiv.org/abs/2304.09853

PODCAST LINKS:

  • Video Transcript: https://www.theojaffee.com/p/5-quintin-pope

  • Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb

  • Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

  • RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

  • Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

  • My Twitter: https://x.com/theojaffee

  • My Substack: https://www.theojaffee.com

CHAPTERS:

Introduction (0:00)

What Is AGI? (1:03)

What Can AGI Do? (12:49)

Orthogonality (23:14)

Mind Space (42:50)

Quintin’s Background and Optimism (55:06)

Mesa-Optimization and Reward Hacking (1:02:48)

Deceptive Alignment (1:11:52)

Shard Theory (1:24:10)

What Is Alignment? (1:30:05)

Misalignment and Evolution (1:37:21)

Mesa-Optimization and Reward Hacking, Part 2 (1:46:56)

RL Agents (1:55:02)

Monitoring AIs (2:09:29)

Mechanistic Interpretability (2:14:00)

AI Disempowering Humanity (2:28:13)

2 years ago
2 hours 36 minutes 28 seconds

Theo Jaffee Podcast
#4: Rohit Krishnan - Developing Genius, Investing, AI Optimism, and the Future

Rohit Krishnan is a venture capitalist, economist, engineer, former hedge fund manager, and essayist. On Twitter (@krishnanrohit) and on his Substack, Strange Loop Canon (strangeloopcanon.com), he writes about AI, business, investing, complex systems, and more.

CHAPTERS:

Intro (0:00)

Comparing Countries (0:33)

Reading (6:50)

Developing Genius (12:36)

Investing (24:08)

Contra AI Doom (34:27)

The Future of AI (46:26)

PODCAST LINKS:

Video Transcript: https://www.theojaffee.com/p/4-rohit-krishnan

Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb

Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj


SOCIALS:

My Twitter: https://twitter.com/theojaffee

My Substack: https://www.theojaffee.com

Rohit’s Twitter: https://twitter.com/krishnanrohit

Strange Loop Canon: https://www.strangeloopcanon.com

2 years ago
1 hour 5 minutes 14 seconds

Theo Jaffee Podcast
#3: Zvi Mowshowitz - Rationality, Writing, Public Policy, and AI

Zvi Mowshowitz is a former professional Magic: The Gathering player, a Pro Tour and Grand Prix winner, and a member of the MTG Hall of Fame. He has also been a professional trader, a market maker, and a startup founder. He has been involved in the rationalist movement for many years, and today he writes about AI, rationality, game design and theory, philosophy, and more on his blog, Don’t Worry About the Vase.

PODCAST LINKS:

Video Transcript: https://www.theojaffee.com/p/3-zvi-mowshowitz

Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb

Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

SOCIALS:

My Twitter: https://twitter.com/theojaffee

My Substack: https://www.theojaffee.com

Zvi’s Twitter: https://twitter.com/thezvi

Zvi’s Blog: https://thezvi.wordpress.com/

Zvi’s Substack: https://thezvi.substack.com/

Balsa Research: https://balsaresearch.com/

CHAPTERS:

Intro (0:00)

Zvi’s Background (0:42)

Rationalism (6:28)

Critiques of Rationalism (20:08)

The Pill Poll (39:26)

Balsa Research (47:58)

p(doom | AGI) (1:05:47)

Alignment (1:17:18)

Decentralization and the Cold War (1:39:42)

More on AI (1:53:53)

Dealing with AI Risks (2:07:40)

Writing (2:18:57)

2 years ago
2 hours 21 minutes 13 seconds

Theo Jaffee Podcast
#2: Carlos de la Guardia - AGI, Deutsch, Popper, knowledge, and progress

Carlos de la Guardia is an independent AGI researcher inspired by the work of Karl Popper, David Deutsch, and Richard Dawkins. In his research, he seeks answers to some of humanity’s biggest questions: how humans create knowledge, how AIs can one day do the same, and how knowledge can help us speed up our minds and even end death. Carlos is currently working on a book about AGI, which will be published on his Substack, Making Minds and Making Progress.

PODCAST LINKS:

Video Transcript: https://www.theojaffee.com/p/2-carlos-de-la-guardia

Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb

Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677

RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss

Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj


SOCIALS:

My Twitter: https://twitter.com/theojaffee

My Substack: https://www.theojaffee.com

Carlos’s Twitter: https://twitter.com/dela3499

Carlos's Substack: https://carlosd.substack.com/

Carlos's Website: https://carlosd.org/


CHAPTERS:

Intro (0:00)

Carlos’ Research (0:55)

Economics and Computation (19:46)

Bayesianism (27:54)

AI Doom and Optimism (34:21)

Will More Compute Produce AGI? (46:04)

AI Alignment and Interpretability (54:11)

Mind Uploading (1:05:44)

Carlos’ 6 Questions on AGI (1:12:47)

1. What are the limits of biological evolution? (1:13:06)

2. What makes explanatory knowledge special? (1:18:07)

3. How can Popperian epistemology improve AI? (1:19:54)

4. What are the different kinds of idea conflicts? (1:23:58)

5. Why is the brain a network of neurons? (1:25:47)

6. How do neurons make consciousness? (1:27:26)

The Optimistic AI Future (1:34:50)

2 years ago
1 hour 38 minutes 45 seconds
