With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a model spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey po...
AI Guidance for Educators | Alfred Guy, Assistant Dean of Academic Affairs at Yale College | Episode 19
Super Prompt: The Generative AI Podcast
1 hour 34 minutes
2 years ago
Alfred Guy, Assistant Dean of Academic Affairs at Yale College and Director of Undergraduate Writing & Tutoring at the Poorvu Center, and I discuss Yale's AI guidance and generative AI's impact on teaching, learning, and evaluation. Do you have school-age kids? Are you a product of a college or university education? If so, this podcast may be of interest to you. Yale's AI guidance is published online here: https://poorvucenter.yale.edu/AIguidance For more information, check out https...