Pondering AI
Kimberly Nevala, Strategic Advisor - SAS
82 episodes
1 day ago
How is the use of artificial intelligence (AI) shaping our human experience? Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse. All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
Technology
Business
Episodes (20/82)
What AI Values with Jordan Loewen-Colón

Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.

Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone’s radar; failures of policy and regulation; positive signals; getting educated and fostering the best AI has to offer.

Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab which is dedicated to ensuring AI serves the public good and not just private gain.

Related Resources

  • HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values  
  • AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication 

A transcript of this episode is here.

2 weeks ago
51 minutes

Agentic Insecurities with Keren Katz

Keren Katz exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures. 

Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI and Governance report; ransomware 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; proactive AI/agentic security policies and incident response plans.

Keren Katz is Senior Group Manager of Threat Research, Product Management and AI at Tenable, a contributor at both the Open Worldwide Application Security Project (OWASP) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.

Related Resources

  • Article: The Silent Breach: Why Agentic AI Demands New Oversight
  • State of Agentic AI Security and Governance (whitepaper): https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/ 
  • The LLM Top 10: https://genai.owasp.org/llm-top-10/

A transcript of this episode is here.   

4 weeks ago
49 minutes

To Be or Not to Be Agentic with Maximilian Vogel

Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.   

Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule not a unicorn; establishing confidence and accuracy; codesigning with business/domain experts; why 100% of anything is not the goal; focusing on KPIs not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; AI agents as a support team and implications for human work. 

Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.

Related Resources

  • Medium: https://medium.com/@maximilian.vogel

A transcript of this episode is here.   

1 month ago
51 minutes

The Problem of Democracy with Henrik Skaug Sætra

Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society. 

Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.  

Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at Oslo University. He is also the CEO of Pathwais.eu connecting strategy, uncertainty, and action through scenario-based risk management.

Related Resources

  • Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
  • How to Save Democracy from AI (Book – Norwegian): https://www.norli.no/9788202853686
  • AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
  • Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL

A transcript of this episode is here.   

1 month ago
54 minutes

Generating Safety Not Abuse with Dr. Rebecca Portnoff

Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.  

Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn’s Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.  

Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn’s seminal Safety by Design paper, bookmark the Research Center to stay updated and support Thorn’s critical work by donating here. 

Related Resources 

  • Thorn’s Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/  
  • Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/  
  • Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report  

A transcript of this episode is here.

2 months ago
46 minutes

Inclusive Innovation with Hiwot Tesfaye

Hiwot Tesfaye disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course. 

Hiwot and Kimberly discuss the two camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; skeptical delight and making the case for multi-lingual, multi-cultural AI. 

Hiwot Tesfaye is a Technical Advisor in Microsoft’s Office of Responsible AI and a Loomis Council Member at the Stimson Center where she helped launch the Global Perspectives: Responsible AI Fellowship. 

Related Resources

  • #35 Navigating AI: Ethical Challenges and Opportunities, a conversation with Hiwot Tesfaye

A transcript of this episode is here.   

3 months ago
50 minutes

The Shape of Synthetic Data with Dietmar Offenhuber

Dietmar Offenhuber reflects on synthetic data’s break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact.  

Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis.  

Dietmar Offenhuber is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction.  

Related Resources 

  • Shapes and Frictions of Synthetic Data (paper): https://journals.sagepub.com/doi/10.1177/20539517241249390  
  • Autographic Design: The Matter of Data in a Self-Inscribing World (book): https://autographic.design/  
  • Reservoirs of Venice (project): https://res-venice.github.io/ 
  • Website: https://offenhuber.net/ 

A transcript of this episode is here.    

3 months ago
52 minutes

A Question of Humanity with Pia Lauritzen, PhD

Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, if humans can be optimized and why thinking is required. 

Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger’s eerily precise predictions, the skill of critical thinking, and why it’s not really about the questions at all. 

Pia Lauritzen, PhD is a philosopher, author and tech inventor asking big questions about tech and transformation. As the CEO and Founder of Qvest and a Thinkers50 Radar Member Pia is on a mission to democratize the power of questions. 

Related Resources

  • Questions (Book): https://www.press.jhu.edu/books/title/23069/questions 
  • TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions 
  • Question Jam: www.questionjam.com
  • Forbes Column: forbes.com/sites/pialauritzen 
  • LinkedIn Learning: www.Linkedin.com/learning/pialauritzen 
  • Personal Website: pialauritzen.dk 

A transcript of this episode is here.   

4 months ago
55 minutes

A Healthier AI Narrative with Michael Strange

Michael Strange has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation and prescribes an interdisciplinary approach to making AI well.  

Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political-economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system. 

Michael Strange is an Associate Professor in the Dept of Global Political Affairs at Malmö University focusing on core questions of political agency and democratic engagement. In this context he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health & Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures). 

Related Resources 

  • If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/ 
  • Beyond ‘Our product is trusted!’ – A processual approach to trust in AI healthcare (paper) https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539 
  • Michael Strange (website): https://mau.se/en/persons/michael.strange/ 

A transcript of this episode is here.    

4 months ago
59 minutes

LLMs Are Useful Liars with Andriy Burkov

Andriy Burkov talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful. Although maybe not as AI agents.  

Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word). 

Andriy Burkov holds a PhD in Artificial Intelligence and is the author of The Hundred Page Machine Learning and Language Models books. His Artificial Intelligence Newsletter reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer. 

Related Resources 

  • The Hundred Page Language Models Book: https://thelmbook.com/ 
  • The Hundred Page Machine Learning Book: https://themlbook.com/  
  • True Positive Weekly (newsletter): https://aiweekly.substack.com/ 

A transcript of this episode is here.

5 months ago
47 minutes

Reframing Responsible AI with Ravit Dotan

Ravit Dotan, PhD asserts that beneficial AI adoption requires clarity of purpose, good judgment, ethical leadership, and making responsibility integral to innovation. 

Ravit and Kimberly discuss the philosophy of science; why all algorithms incorporate values; how technical judgements centralize power; not exempting AI from established norms; when lists of risks lead us astray; wasting water, eating meat, and using AI responsibly; corporate ethics washing; patterns of ethical decoupling; reframing the relationship between responsibility and innovation; measuring what matters; and the next phase of ethical innovation in practice.  

Ravit Dotan, PhD is an AI ethics researcher and governance advisor on a mission to enable everyone to adopt AI the right way. The Founder and CEO of TechBetter, Ravit holds a PhD in Philosophy from UC Berkeley and is a sought-after advisor on the topic of responsible innovation.

Related Resources

  • The AI Treasure Chest (Substack): https://techbetter.substack.com/
  • The Values Embedded in Machine Learning Research (Paper): https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533083

A transcript of this episode is here.   

5 months ago
59 minutes

Stories We Tech with Dr. Ash Watson

Dr. Ash Watson studies how stories ranging from classic Sci-Fi to modern tales invoking moral imperatives, dystopian futures and economic logic shape our views of AI. 

Ash and Kimberly discuss the influence of old Sci-Fi on modern tech; why we can’t escape the stories we’re told; how technology shapes society; acting in ways a machine will understand; why the language we use matters; value transference from humans to AI systems; the promise of AI’s promise; grounding AI discourse in material realities; moral imperatives and capitalizing on crises; economic investment as social logic; AI’s claims to innovation; who innovation is really for; and positive developments in co-design and participatory research.  

Dr. Ash Watson is a Scientia Fellow and Senior Lecturer at the Centre for Social Research in Health at UNSW Sydney. She is also an Affiliate of the Australian Research Council (ARC) Centre of Excellence for Automated Decision-Making and Society (CADMS). 

Related Resources:

  • Ash Watson (Website): https://awtsn.com/
  • The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies (Article): https://doi.org/10.1111/1467-9566.13840
  • An imperative to innovate? Crisis in the sociotechnical imaginary (Article): https://doi.org/10.1016/j.tele.2024.102229

A transcript of this episode is here.   

6 months ago
47 minutes

Regulating Addictive AI with Robert Mahari

Robert Mahari examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. 

Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research.  

Robert Mahari is a JD-PhD researcher at MIT Media Lab and the Harvard Law School where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs. 

A transcript of this episode is here.   

Additional Resources:

  • The Allure of Addictive Intelligence (article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
  • Robert Mahari (website): https://robertmahari.com/
7 months ago
54 minutes

AI Literacy for All with Phaedra Boinodiris

Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational siloes, practicing human-centric design, and cultivating critical consumers. 

Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI.  

Phaedra Boinodiris is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. 

A transcript of this episode is here.    

Additional Resources: 

  • Phaedra’s Website: https://phaedra.ai/
  • The Future World Alliance: https://futureworldalliance.org/

7 months ago
43 minutes

Auditing AI with Ryan Carrier

Ryan Carrier trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective.  

Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets.  

A transcript of this episode is here.    

Ryan Carrier is the Executive Director of ForHumanity, a non-profit organization improving AI outcomes through increased accountability and oversight. 

7 months ago
52 minutes

Ethical by Design with Olivia Gambelin

Olivia Gambelin values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. 

Olivia and Kimberly discuss philogagging; us vs. “them” (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters.

A transcript of this episode is here.

Olivia Gambelin is a renowned AI Ethicist and the Founder of Ethical Intelligence, the world’s largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI.  

Additional Resources: 

  • Responsible AI: Implement an Ethical Approach in Your Organization (Book)
  • Plato & a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes (Book)
  • The Values Canvas (RAI Design Tool)
  • Women Shaping the Future of Responsible AI (Organization)
  • In Pursuit of Good Tech (Newsletter)

8 months ago
51 minutes

The Nature of Learning with Helen Beetham

Helen Beetham isn’t waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course.     

Helen and Kimberly discuss the purpose of higher education; the current two tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI’s teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities.

Helen Beetham is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include “Rethinking Pedagogy for a Digital Age”. Her Substack, Imperfect Offerings, is recommended by the Guardian/Observer for its wise and thoughtful critique of generative AI.   

Additional Resources:

  • Imperfect Offerings: https://helenbeetham.substack.com/
  • Audrey Watters: https://audreywatters.com/
  • Kathryn (Katie) Conrad: https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/
  • Anna Mills: https://www.linkedin.com/in/anna-mills-oer/
  • Dr. Maya Indira Ganesh: https://www.linkedin.com/in/dr-des-maya-indira-ganesh/
  • Tech(nically) Politics: https://www.technicallypolitics.org/
  • LOG OFF: logoffmovement.org/
  • Rest of World: www.restofworld.org/
  • Derechos Digitales: www.derechosdigitales.org

A transcript of this episode is here.

8 months ago
45 minutes

Ethics for Engineers with Steven Kelts

Steven Kelts engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care. 

Steven and Kimberly discuss Ashley Casovan’s inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices.

Steven Kelts is a lecturer in Princeton’s University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human’s Responsible University Network.

Additional Resources:

  • Princeton Agile Ethics Program: https://agile-ethics.princeton.edu
  • CITP Talk 11/19/24: Agile Ethics Theory and Evidence
  • Oktar, Lombrozo et al: Changing Moral Judgements
  • 4-Stage Theory of Ethical Decision Making: An Introduction
  • Enabling Engineers through “Moral Imagination” (Google)

A transcript of this episode is here.

9 months ago
46 minutes

Righting AI with Susie Alegre

Susie Alegre makes the case for prioritizing human rights and connection, taking AI systems to account, minding the right gaps, and resisting unwitting AI dependency.  

Susie and Kimberly discuss the Universal Declaration of Human Rights (UDHR); legal protections and access to justice; human rights laws; how court cases impact legislative will; the wicked problem of companion AI; abdicating accountability for AI systems; Stepford Wives and gynoid robots; human connection and agency; minding the wrong gaps with AI systems; AI dogs vs. AI pooper scoopers; the reality of care and legal work; writing to think; cultural heritage and creativity; pausing for thought; unwittingly becoming dependent on AI; and prioritizing people over technology. 

Susie Alegre is an acclaimed international human rights lawyer and the author of Freedom to Think: The Long Struggle to Liberate Our Minds and Human Rights, Robot Wrongs: Being Human in the Age of AI. She is also a Senior Fellow at the Centre for International Governance and Innovation (CIGI) and Founder of the Island Rights Initiative. Learn more at her website: Susie Alegre  

A transcript of this episode is here. 

9 months ago
46 minutes

AI Myths and Mythos with Eryk Salvaggio

Eryk Salvaggio articulates myths animating AI design, illustrates the nature of creativity and generated media, and artfully reframes the discourse on GenAI and art.   

Eryk joined Kimberly to discuss myths and metaphors in GenAI design; the illusion of control; if AI saves time and what for; not relying on futuristic AI to solve problems; the fallacy of scale; the dehumanizing narrative of human equivalence; positive biases toward AI; why asking ‘is the machine creative’ misses the mark; creative expression and meaning making; what AI generated art represents; distinguishing archives from datasets; curation as an act of care; representation and context in generated media; the Orwellian view of mass surveillance as anonymity; complicity and critique of GenAI tools; abstraction and noise; and what we aren’t doing when we use GenAI. 

 
Eryk Salvaggio is a new media artist, Visiting Professor in Humanities, Computing and Design at the Rochester Institute of Technology, and an Emerging Technology Research Advisor at the Siegel Family Endowment. Eryk is also a researcher on the AI Pedagogies Project at Harvard University’s metaLab and lecturer on Responsible AI at Elisava Barcelona School of Design and Engineering.  

Additional Resources:

  • Cybernetic Forests: mail.cyberneticforests.com
  • The Age of Noise: https://mail.cyberneticforests.com/the-age-of-noise/
  • Challenging the Myths of Generative AI: https://www.techpolicy.press/challenging-the-myths-of-generative-ai/

A transcript of this episode is here. 

10 months ago
58 minutes
