Generally Intelligent
Kanjun Qiu
38 episodes
1 day ago
Conversations with builders and thinkers on AI's technical and societal futures. Made by Imbue.
Technology
Episodes (showing 20 of 38)
Generally Intelligent
From lawless spaces to true liberty: rethinking AI's role in society

Welcome back to Generally Intelligent! We’re excited to relaunch our podcast—still featuring thoughtful conversations on building AI, but now with an expanded lens on its economic, societal, political, and human impacts.


Matt Boulos leads policy and safety at Imbue, where he shapes the responsible development of AI coding tools that make software creation broadly accessible. His work centers on understanding what technological power means for individual liberty, and he advocates for the legal and institutional frameworks we need to protect our freedom. Matt is a lawyer, computer scientist, and founder.

A full transcript is available on our Substack: https://imbueai.substack.com/matt-boulos/


Highlights:

  • AI’s four core challenges
  • Governing lawless digital spaces
  • Why abundance is not enough without liberty
  • Freedom as deep enablement and deep protection
  • The role of technologists in shaping society


Generally Intelligent is a podcast by Imbue, an independent research company developing a better way to build personal software. Our mission is to empower humans in the age of AI by creating powerful computing tools controlled by individuals.

Website: https://imbue.com/

Substack: https://imbueai.substack.com/

LinkedIn: https://www.linkedin.com/company/imbue_ai/

X: @imbue_ai

Bluesky: https://bsky.app/profile/imbue-ai.bsky.social

YouTube: https://www.youtube.com/@imbue_ai/

2 months ago
1 hour 38 minutes 37 seconds

Generally Intelligent
Rylan Schaeffer, Stanford: Investigating emergent abilities and challenging dominant research ideas

Rylan Schaeffer is a PhD student at Stanford studying the engineering, science, and mathematics of intelligence. He authored the paper “Are Emergent Abilities of Large Language Models a Mirage?”, as well as other interesting refutations in the field that we’ll talk about today. He previously interned at Meta on the Llama team, and at Google DeepMind.

Generally Intelligent is a podcast by Imbue where we interview researchers about their behind-the-scenes ideas, opinions, and intuitions that are hard to share in papers and talks.

About Imbue

Imbue is an independent research company developing AI agents that mirror the fundamentals of human-like intelligence and that can learn to safely solve problems in the real world. We started Imbue because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.

Our research is focused on agents for digital environments (e.g., browser, desktop, documents), using RL, large language models, and self-supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.

Website: https://imbue.com

LinkedIn: https://www.linkedin.com/company/imbue_ai/

Twitter/X: @imbue_ai

1 year ago
1 hour 2 minutes 51 seconds

Generally Intelligent
Ari Morcos, DatologyAI: Leveraging data to democratize model training

Ari Morcos is the CEO of DatologyAI, which makes training deep learning models more performant and efficient by intervening on training data. He was at FAIR and DeepMind before that, where he worked on a variety of topics, including how training data leads to useful representations, lottery ticket hypothesis, and self-supervised learning. His work has been honored with Outstanding Paper awards at both NeurIPS and ICLR.

1 year ago
1 hour 34 minutes 19 seconds

Generally Intelligent
Percy Liang, Stanford: The paradigm shift and societal effects of foundation models

Percy Liang is an associate professor of computer science and statistics at Stanford. These days, he’s interested in understanding how foundation models work, how to make them more efficient, modular, and robust, and how they shift the way people interact with AI, though he’s been working on language models since long before foundation models appeared. Percy is also a big proponent of reproducible research; toward that end, he’s shipped most of his recent papers as executable papers using the CodaLab Worksheets platform his lab developed, and published a wide variety of benchmarks.

1 year ago
1 hour 1 minute 55 seconds

Generally Intelligent
Seth Lazar, Australian National University: Legitimate power, moral nuance, and the political philosophy of AI

Seth Lazar is a professor of philosophy at the Australian National University, where he leads the Machine Intelligence and Normative Theory (MINT) Lab. His unique perspective bridges moral and political philosophy with AI, introducing much-needed rigor to the question of what will make for a good and just AI future.

1 year ago
1 hour 55 minutes 45 seconds

Generally Intelligent
Tri Dao, Stanford: FlashAttention and sparsity, quantization, and efficient inference

Tri Dao is a PhD student at Stanford, co-advised by Stefano Ermon and Chris Re. He’ll be joining Princeton as an assistant professor next year. He works at the intersection of machine learning and systems, currently focused on efficient training and long-range context.


About Generally Intelligent 

We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.  

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.  

Our research is focused on agents for digital environments (e.g., browser, desktop, documents), using RL, large language models, and self-supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.


Learn more about us

Website: https://generallyintelligent.com/

LinkedIn: linkedin.com/company/generallyintelligent/ 

Twitter: @genintelligent

2 years ago
1 hour 20 minutes 29 seconds

Generally Intelligent
Jamie Simon, UC Berkeley: Theoretical principles for how neural networks learn and generalize

Jamie Simon is a 4th-year Ph.D. student at UC Berkeley advised by Mike DeWeese, and also a Research Fellow with us at Generally Intelligent. He uses tools from theoretical physics to build fundamental understanding of deep neural networks so they can be designed from first principles. In this episode, we discuss reverse engineering kernels, the conservation of learnability during training, infinite-width neural networks, and much more.

2 years ago
1 hour 1 minute 54 seconds

Generally Intelligent
Bill Thompson, UC Berkeley: How cultural evolution shapes knowledge acquisition

Bill Thompson is a cognitive scientist and an assistant professor at UC Berkeley. He runs an experimental cognition laboratory where he and his students conduct research on human language and cognition using large-scale behavioral experiments, computational modeling, and machine learning. In this episode, we explore the impact of cultural evolution on human knowledge acquisition, how pure biological evolution can lead to slow adaptation and overfitting, and much more.


2 years ago
1 hour 15 minutes 24 seconds

Generally Intelligent
Ben Eysenbach, CMU: Designing simpler and more principled RL algorithms

Ben Eysenbach is a PhD student from CMU and a student researcher at Google Brain. He is co-advised by Sergey Levine and Ruslan Salakhutdinov and his research focuses on developing RL algorithms that get state-of-the-art performance while being more simple, scalable, and robust. Recent problems he’s tackled include long horizon reasoning, exploration, and representation learning. In this episode, we discuss designing simpler and more principled RL algorithms, and much more.

2 years ago
1 hour 45 minutes 56 seconds

Generally Intelligent
Jim Fan, NVIDIA: Foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant

Jim Fan is a research scientist at NVIDIA who earned his PhD at Stanford under Fei-Fei Li. Jim is interested in building generally capable autonomous agents, and he recently published MineDojo, a massively multiscale benchmarking suite built on Minecraft, which won an Outstanding Paper award at NeurIPS. In this episode, we discuss foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant.


2 years ago
1 hour 26 minutes 45 seconds

Generally Intelligent
Sergey Levine, UC Berkeley: The bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems

Sergey Levine, an assistant professor of EECS at UC Berkeley, is one of the pioneers of modern deep reinforcement learning. His research focuses on developing general-purpose algorithms for autonomous agents to learn how to solve any task. In this episode, we talk about the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems.

2 years ago
1 hour 34 minutes 49 seconds

Generally Intelligent
Noam Brown, FAIR: Achieving human-level performance in poker and Diplomacy, and the power of spending compute at inference time

Noam Brown is a research scientist at FAIR. During his Ph.D. at CMU, he made the first AI to defeat top humans in No Limit Texas Hold 'Em poker. More recently, he was part of the team that built CICERO which achieved human-level performance in Diplomacy. In this episode, we extensively discuss ideas underlying both projects, the power of spending compute at inference time, and much more.

2 years ago
1 hour 44 minutes 54 seconds

Generally Intelligent
Sugandha Sharma, MIT: Biologically inspired neural architectures, how memories can be implemented, and control theory

Sugandha Sharma is a Ph.D. candidate at MIT advised by Prof. Ila Fiete and Prof. Josh Tenenbaum. She explores the computational and theoretical principles underlying higher cognition in the brain by constructing neuro-inspired models and mathematical tools to discover how the brain navigates the world, or how to construct memory mechanisms that don’t exhibit catastrophic forgetting. In this episode, we chat about biologically inspired neural architectures, how memory could be implemented, why control theory is underrated and much more.

2 years ago
1 hour 44 minutes

Generally Intelligent
Nicklas Hansen, UCSD: Long-horizon planning and why algorithms don't drive research progress

Nicklas Hansen is a Ph.D. student at UC San Diego advised by Prof Xiaolong Wang and Prof Hao Su. He is also a student researcher at Meta AI. Nicklas' research interests involve developing machine learning systems, specifically neural agents, that have the ability to learn, generalize, and adapt over their lifetime. In this episode, we talk about long-horizon planning, adapting reinforcement learning policies during deployment, why algorithms don't drive research progress, and much more!

2 years ago
1 hour 49 minutes 18 seconds

Generally Intelligent
Jack Parker-Holder, DeepMind: Open-endedness, evolving agents and environments, online adaptation, and offline learning

Jack Parker-Holder recently joined DeepMind after his Ph.D. with Stephen Roberts at Oxford. Jack is interested in using reinforcement learning to train generally capable agents, especially via an open-ended learning process where environments can adapt to constantly challenge the agent's capabilities. Before doing his Ph.D., Jack worked for 7 years in finance at JP Morgan. In this episode, we chat about open-endedness, evolving agents and environments, online adaptation, offline learning with world models, and much more.

2 years ago
1 hour 56 minutes 42 seconds

Generally Intelligent
Celeste Kidd, UC Berkeley: Attention and curiosity, how we form beliefs, and where certainty comes from

Celeste Kidd is a professor of psychology at UC Berkeley. Her lab studies the processes involved in knowledge acquisition; essentially, how we form our beliefs over time and what allows us to select a subset of all the information we encounter in the world to form those beliefs. In this episode, we chat about attention and curiosity, beliefs and expectations, where certainty comes from, and much more.

2 years ago
1 hour 52 minutes 35 seconds

Generally Intelligent
Archit Sharma, Stanford: Unsupervised and autonomous reinforcement learning

Archit Sharma is a Ph.D. student at Stanford advised by Chelsea Finn. His recent work is focused on autonomous deep reinforcement learning—that is, getting real world robots to learn to deal with unseen situations without human interventions. Prior to this, he was an AI resident at Google Brain and he interned with Yoshua Bengio at Mila. In this episode, we chat about unsupervised, non-episodic, autonomous reinforcement learning and much more.

2 years ago
1 hour 38 minutes 13 seconds

Generally Intelligent
Chelsea Finn, Stanford: The biggest bottlenecks in robotics and reinforcement learning

Chelsea Finn is an Assistant Professor at Stanford and part of the Google Brain team. She's interested in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction at scale. In this episode, we chat about some of the biggest bottlenecks in RL and robotics—including distribution shifts, Sim2Real, and sample efficiency—as well as what makes a great researcher, why she aspires to build a robot that can make cereal, and much more.

3 years ago
40 minutes 7 seconds

Generally Intelligent
Hattie Zhou, Mila: Supermasks, iterative learning, and fortuitous forgetting

Hattie Zhou is a Ph.D. student at Mila working with Hugo Larochelle and Aaron Courville. Her research focuses on understanding how and why neural networks work, starting with deconstructing why lottery tickets work and most recently exploring how forgetting may be fundamental to learning. Prior to Mila, she was a data scientist at Uber and did research with Uber AI Labs. In this episode, we chat about supermasks and sparsity, coherent gradients, iterative learning, fortuitous forgetting, and much more.

3 years ago
1 hour 47 minutes 28 seconds

Generally Intelligent
Minqi Jiang, UCL: Environment and curriculum design for general RL agents

Minqi Jiang is a Ph.D. student at UCL and FAIR, advised by Tim Rocktäschel and Edward Grefenstette. Minqi is interested in how simulators can enable AI agents to learn useful behaviors that generalize to new settings. He is especially focused on problems at the intersection of generalization, human-AI coordination, and open-ended systems. In this episode, we chat about environment and curriculum design for reinforcement learning, model-based RL, emergent communication, open-endedness, and artificial life.

3 years ago
1 hour 53 minutes 59 seconds
