Bridging the Gaps: A Portal for Curious Minds
Dr Waseem Akhtar
89 episodes
1 week ago
As artificial intelligence takes on a growing role in decisions about education, jobs, housing, loans, healthcare, and criminal justice, concerns about fairness have become urgent. Because AI systems are trained on data that reflect historical inequalities, they often reproduce or even amplify those disparities. In his book “AI Fairness: Designing Equal Opportunity Algorithms”, Professor Derek Leben draws on classic philosophical theories of justice, especially John Rawls’s work, to propose a framework for evaluating the fairness of AI systems. This framework offers a way to think systematically about algorithmic justice: how automated decisions can align with ethical principles of equality and fairness. The book examines the trade-offs among competing fairness metrics and shows that it is often impossible to satisfy them all at once. As a result, organizations must decide which definitions of fairness to prioritize, and regulators must determine how existing laws should apply to AI.

In this episode of Bridging the Gaps, I speak with Derek Leben, Professor of Business Ethics at the Tepper School of Business at Carnegie Mellon University. As founder of the consulting group Ethical Algorithms, he has worked with governments and companies to develop policies on fairness and benefit for AI and autonomous systems.

I begin our discussion by asking Derek what “AI” means in the context of his work and how fairness fits into that picture. From there, we explore why fairness matters as AI systems increasingly influence critical decisions about employment, education, housing, loans, healthcare, and criminal justice. We discuss how historical inequalities in training data lead to biased outcomes, giving listeners a deeper understanding of the problem. While some view AI fairness as a purely technical issue that engineers can fix, the book argues that it is also a moral and political challenge, one that requires insights from philosophy and ethics. We then examine the difficulty of balancing multiple fairness metrics, which often cannot all be satisfied simultaneously, and discuss how organizations might prioritize among them. Derek explains his theory of algorithmic justice, inspired by John Rawls’s philosophy, and we unpack its key ideas. Later, we touch on questions of urgency versus long-term reform, exploring the idea of longtermism, and discuss the tension between fairness and accuracy. Finally, we consider how businesses can balance commercial goals with their broader social responsibilities. Overall, it is an informative and thought-provoking conversation about how we can make AI systems more just.

Complement this discussion with “The Line: AI and the Future of Personhood” with Professor James Boyle, available at: https://www.bridgingthegaps.ie/2025/04/the-line-ai-and-the-future-of-personhood-with-professor-james-boyle/

Then listen to “Reclaiming Human Intelligence and ‘How to Stay Smart in a Smart World’ with Prof. Gerd Gigerenzer”, available at: https://www.bridgingthegaps.ie/2023/04/reclaiming-human-intelligence-and-how-to-stay-smart-in-a-smart-world-with-prof-gerd-gigerenzer/
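As a concrete illustration of why fairness metrics can conflict, here is a minimal sketch, not taken from the book: the numbers, group labels, and the selection_rate helper are hypothetical, and the pair of criteria shown (equalized odds and demographic parity) is only one of several possible combinations. It demonstrates that a classifier with identical error rates across two groups cannot also select the two groups at equal rates when their underlying base rates differ.

```python
# Illustrative sketch only -- not code from the book. With made-up numbers,
# it shows why two widely used fairness criteria cannot both hold when the
# groups' underlying base rates differ.

def selection_rate(tpr: float, fpr: float, base_rate: float) -> float:
    """P(predicted positive) = TPR * P(y = 1) + FPR * P(y = 0)."""
    return tpr * base_rate + fpr * (1.0 - base_rate)

# Hypothetical classifier with identical error rates for both groups,
# so "equalized odds" holds by construction.
tpr, fpr = 0.8, 0.1

# Hypothetical groups whose actual positive (qualification) rates differ.
base_rate_a, base_rate_b = 0.5, 0.2

rate_a = selection_rate(tpr, fpr, base_rate_a)  # 0.8*0.5 + 0.1*0.5 = 0.45
rate_b = selection_rate(tpr, fpr, base_rate_b)  # 0.8*0.2 + 0.1*0.8 = 0.24

print(f"Selection rate, group A: {rate_a:.2f}")
print(f"Selection rate, group B: {rate_b:.2f}")
# Demographic parity would require the two selection rates to be equal.
# They differ whenever TPR != FPR and the base rates differ, so one of
# the two criteria has to be relaxed or given up.
```

Which criterion to relax is exactly the kind of prioritization decision discussed in the episode.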
Science
“The Blind Spot: Why Science Cannot Ignore Human Experience” with Professor Adam Frank
43 minutes 13 seconds
1 year ago
Since the Enlightenment, humanity has turned to science to answer profound questions about who we are, where we come from, and where we’re headed. However, we’ve become stuck in the belief that we can fully understand the universe by viewing it from a detached, external perspective. In focusing solely on external physical realities, imagined from this objective standpoint, we overlook the vital role of our own lived experience. This is the “Blind Spot” that astrophysicist Adam Frank, theoretical physicist Marcelo Gleiser, and philosopher Evan Thompson discuss in their book “The Blind Spot: Why Science Cannot Ignore Human Experience”. They identify this “Blind Spot” as the root of many modern scientific challenges, whether in understanding time and the origin of the universe, quantum physics, the nature of life, artificial intelligence, consciousness, or Earth’s function as a planetary system.

In this episode of Bridging the Gaps, I speak with astrophysicist Adam Frank. Adam Frank is a renowned astrophysicist and professor in the Department of Physics and Astronomy at the University of Rochester. He is a leading expert on the final stages of stellar evolution, particularly for stars like the Sun. At the University of Rochester, his computational research group has developed cutting-edge supercomputer tools to study the formation and death of stars. A passionate advocate for science, Frank describes himself as an “evangelist of science”, dedicated not only to uncovering the mysteries of the cosmos but also to sharing the beauty and power of science with the public. He is equally committed to exploring science’s broader role within culture, emphasising its relevance and context in our understanding of the world. His contributions to the field have earned him prestigious recognition, including the Carl Sagan Medal.

In this discussion we delve into why it is crucial to recognize this “Blind Spot” and the profound implications it has for how we approach science and knowledge. By focusing solely on external, objective facts, we miss a deeper understanding of reality, one that includes our subjective experience as an integral part of the equation. This Blind Spot has led to significant challenges in fields like quantum physics, cosmology, and the study of consciousness, where the limitations of purely objective observation become evident. We also explore an alternative vision for science: that scientific knowledge should not be viewed as a fixed, immutable set of facts, but rather as a dynamic, evolving narrative. This narrative emerges from the constant interplay between the external world and our lived experience of it. In this view, science becomes a process of continuous self-correction, where both the observer and the observed are part of an evolving relationship. Frank stresses that recognizing this interplay allows us to break free from the illusion of absolute knowledge and opens up a more holistic, adaptive, and integrated way of understanding the universe. This shift in perspective has the potential to reshape how we approach not only scientific inquiry but also our relationship with reality itself. This has been an incredibly enlightening and deeply informative discussion, offering valuable insights and fresh perspectives.

Complement this discussion with “The Joy of Science” with Professor Jim Al-Khalili, available at: https://www.bridgingthegaps.ie/2022/05/the-joy-of-science-with-professor-jim-al-khalili/

Then listen to “Sharing Our Science: How to Write and Speak STEM” with Professor Brandon Brown, available at: https://www.bridgingthegaps.ie/2024/02/sharing-our-science-how-to-write-and-speak-stem-with-professor-brandon-brown/