Towards Data Science
The TDS team
130 episodes
5 days ago
Note: The TDS podcast's current run has ended. Researchers and business leaders at the forefront of the field unpack the most pressing questions around data science and AI.
Technology
116. Katya Sedova - AI-powered disinformation, present and future
54 minutes 24 seconds
3 years ago

Until recently, very few people were paying attention to the potential malicious applications of AI. And that made some sense: in an era where AIs were narrow and had to be purpose-built for every application, you’d need an entire research team to develop AI tools for malicious applications. Since it’s more profitable (and safer) for that kind of talent to work in the legal economy, AI didn’t offer much low-hanging fruit for malicious actors.

But today, that’s all changing. As AI becomes more flexible and general, the link between the purpose for which an AI was built and its potential downstream applications has all but disappeared. Large language models can be trained to perform valuable tasks, like supporting writers, translating between languages, or writing better code. But a system that can write an essay can also write a fake news article, or power an army of humanlike text-generating bots.

More than any other moment in the history of AI, the move to scaled, general-purpose foundation models has shown how AI can be a double-edged sword. And now that these models exist, we have to come to terms with them, and figure out how to build societies that remain stable in the face of compelling AI-generated content, and increasingly accessible AI-powered tools with malicious use potential.

That’s why I wanted to speak with Katya Sedova, a former Congressional Fellow and Microsoft alumna who now works at Georgetown University’s Center for Security and Emerging Technology, where she recently co-authored some fascinating work exploring current and likely future malicious uses of AI. If you like this conversation, I’d really recommend checking out her team’s latest report — it’s called “AI and the future of disinformation campaigns”.

Katya joined me to talk about malicious AI-powered chatbots, fake news generation and the future of AI-augmented influence campaigns on this episode of the TDS podcast.

***

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

*** 

Chapters:

  • 2:40 Malicious uses of AI
  • 4:30 Last 10 years in the field
  • 7:50 Low-hanging fruit of automation
  • 14:30 Other analytics functions
  • 25:30 Authentic bots
  • 30:00 Influence-as-a-service businesses
  • 36:00 Race to the bottom
  • 42:30 Automation of systems
  • 50:00 Manufacturing norms
  • 52:30 Interdisciplinary conversations
  • 54:00 Wrap-up