Deep Dive: AI
7 episodes
9 months ago
Deep Dive: AI is an online event from the Open Source Initiative. We'll be exploring how Artificial Intelligence impacts Open Source software, from developers to businesses to the rest of us.
Technology
Business
Building creative restrictions to curb AI abuse
Deep Dive: AI
3 years ago
Along with all the positive, revolutionary aspects of AI comes a more sinister side. Joining us today to discuss ethics in AI from the developer's point of view is David Gray Widder. David is currently doing his Ph.D. at the School of Computer Science at Carnegie Mellon University, investigating AI from an ethical perspective and homing in specifically on the ethics-related challenges faced by AI software engineers. His research has been conducted at Intel Labs, Microsoft, and NASA's Jet Propulsion Lab. In this episode, we discuss the harmful uses of deep fakes and their ethical ramifications in proprietary versus open source contexts. Widder breaks down the notions of technological inevitability and technological neutrality and explains the importance of challenging both ideas. He has identified a continuum between implementation-based harms and use-based harms, and fills us in on how each plays out in the open source development space. Tune in to find out more about the importance of curbing AI abuse and the creativity required to do so, as well as the strengths and weaknesses of open source when it comes to AI ethics. Full transcript.

Key points from this episode:
- Introducing David Gray Widder, a Ph.D. student researching AI ethics.
- Why he chose to focus his research on ethics in AI, and how he drives his research.
- Widder explains deep fakes and gives examples of their uses.
- Sinister uses of deep fakes and the danger thereof.
- The ethical ramifications of deep fake tech in proprietary versus open source contexts.
- The kinds of harms that can be prevented in open source versus proprietary contexts.
- The licensing issues that result in developers relinquishing control (and responsibility) over the uses of their tech.
- Why Widder is critical of the notions of both technological inevitability and neutrality.
- Why it's important to challenge the idea of technological neutrality.
- The potential to build restrictions, even within the dictates of open source.
- The continuum between implementation-based harms and use-based harms.
- How open source allows for increased scrutiny of implementation harms, but decreased accountability for use-based harms.
- The insight Widder gleaned from observing NASA's use of AI, pertaining to the deep fake case.
- Widder voices his legal concerns around Copilot.
- The difference between laws and norms.
- How we've been unsuspectingly providing data by uploading photos online.
- Why it's important to include open source and public sector organizations in the ethical AI conversation.
- Open source strengths and weaknesses in terms of the ethical use of AI.

Links mentioned in today's episode:
- David Gray Widder
- David Gray Widder on Twitter
- Limits and Possibilities of "Ethical AI" in Open Source: A Study of Deep Fakes
- What is Deepfake
- Copilot

Credits
Special thanks to volunteer producer Nicole Martinelli. Music by Jason Shaw, Audionautix. This podcast is sponsored by GitHub, DataStax and Google. No sponsor had any right or opportunity to approve or disapprove the content of this podcast.