The Georgian Impact Podcast | AI, ML & More
Georgian
100 episodes
3 months ago
On Georgian's Impact Podcast, we get into the latest tech trends and how they impact growth-stage software companies. Jon talks with folks from around the tech ecosystem at the intersection of business and technology.
Arts, Technology, Business
All content for The Georgian Impact Podcast | AI, ML & More is the property of Georgian and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Testing LLMs for trust and safety
The Georgian Impact Podcast | AI, ML & More
21 minutes 7 seconds
1 year ago
We all get a few chuckles when autocorrect gets something wrong, but autocorrect offers a lot of time-saving and face-saving value. Do we trust it? Yes, even with its errors. Maybe you use ChatGPT to improve your productivity: ask it a question and perhaps get a decent answer. That's fine; after all, it's just between you and ChatGPT. But what if you're a software company leveraging these technologies? You could be putting generative AI output in front of your users. On this episode of the Georgian Impact Podcast, it's time to talk about GenAI and trust. Angeline Yasodhara, an Applied Research Scientist at Georgian, is here to discuss the new world of GenAI.

You'll Hear About:

- Differences between closed and open-source large language models (LLMs), and the advantages and disadvantages of each.
- Limitations and biases inherent in LLMs due to their training on Internet data.
- Treating LLMs as untrusted users and restricting their data access to minimize potential risks.
- The continuous learning process of LLMs through reinforcement learning from human feedback.
- Ethical issues and biases associated with LLMs, and the challenge of fostering creativity while avoiding misinformation.
- Collaboration between AI and security teams to identify and mitigate potential risks in LLM applications.

Who is Angeline Yasodhara?

Angeline Yasodhara is an Applied Research Scientist at Georgian, where she collaborates with companies to help accelerate their AI products. With expertise in the ethical and security implications of LLMs, she provides valuable insights into the advantages and challenges of closed vs. open-source LLMs.
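The idea of treating an LLM as an untrusted user can be sketched in a few lines: validate the model's output the same way you would validate form input from a stranger, and gate any action behind an allow-list. This is a minimal illustrative sketch, not from the episode; the names (`ALLOWED_ACTIONS`, `handle_llm_reply`) are hypothetical.

```python
import json

# Only actions we explicitly permit, regardless of what the model asks for.
ALLOWED_ACTIONS = {"summarize", "translate", "classify"}

def handle_llm_reply(raw_reply: str) -> dict:
    """Validate a model reply before acting on it, as with any untrusted input."""
    try:
        parsed = json.loads(raw_reply)
    except json.JSONDecodeError:
        return {"ok": False, "error": "reply is not valid JSON"}
    action = parsed.get("action")
    if action not in ALLOWED_ACTIONS:
        # Never trust the model's own claim about what it may do.
        return {"ok": False, "error": f"action {action!r} not permitted"}
    return {"ok": True, "action": action}
```

The same pattern extends to restricting data access: the application, not the model, decides which records a given request may touch.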