The advent of large language models (LLMs) has fundamentally changed the behaviour of the computer systems we learned to trust over the last several decades of computing.
In this special Wtf? episode on World Models, I give you a breakdown of what the hype is about, where it is misplaced, and why it is a good kind of hype to have for anyone interested in artificial general intelligence (AGI).
In this special Wtf? episode, we unpack the concept of a latent space. You will see why it is so important for understanding both LLMs and the future systems that will exhibit the capabilities of artificial general intelligence (AGI).
Will artificial super intelligence force humans into submission? Throughout human history, a nation that commanded superior learning capability ultimately prevailed over its opponents. The ability to learn, the key component in our definition of artificial general intelligence (AGI), will by that very definition carry AGI through super-intelligence to super-power. Will we be able to coexist peacefully?
Can we cure cancer with artificial intelligence? In this episode, we start unpacking the capabilities that an agent with artificial general intelligence (AGI) must possess in order to find the cure for cancer and transform medicine as we know it.
You have to give it to Sam Altman. He can make even the great and powerful Wizard of Oz blush.
Altman can say something like: “You can choose to deploy five gigawatts of compute to cure cancer or you can choose to offer free education to everybody on Earth.” He then uses the fact that he himself cannot make that moral choice as a justification for getting his hands on ten gigawatts of compute, while remaining under no obligation either to cure cancer or to provide free education to anybody. But can we actually cure cancer with AI?
In our next episode on Monday, we will unpack the capabilities that an agent with artificial general intelligence, or AGI, must possess in order to find the cure for cancer and transform medicine as we know it.
Thank you for listening and subscribing. I am Alex Chadyuk and This is AGI.
Listen every Monday morning on your favourite podcast platform.
Hallucinating LLMs are a critical step towards artificial general intelligence (AGI). We should not try to fix them but instead build more complex agents that will channel the LLMs’ runaway creativity into self-perpetuating cycles of knowledge discovery. 'This Is AGI', a podcast about the path to artificial general intelligence. Listen every Monday morning on your favourite podcast platform.
In this episode of 'This is AGI', we unpack Adrian de Wynter’s large-scale study on how LLMs learn from examples, their limits in generalization, and what this means for the path toward artificial general intelligence.
What is AGI, really? In this episode of This is AGI, we cut through the hype to unpack the elusive definition of artificial general intelligence. We explore how transferability of skills across contexts and goals, a focus on capability over process, and rule exploitation versus rule following come together to define what makes intelligence truly general.
In this episode of This is AGI, we grade modern AI on five markers of rationality—precision, consistency, scientific method, empirical evidence, and bias. The mixed scorecard shows impressive strengths but troubling gaps, raising the question: can today’s AI really be called rational, and what does that mean for the road to AGI?