With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a model spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey po...
Does chatGPT Possess Human-Like Intelligence? | ChatGPT’s Top Five Ethical Concerns | Asimov’s Robot Laws | Episode 17
Super Prompt: The Generative AI Podcast
19 minutes
2 years ago
“Does ChatGPT possess human-like intelligence?” It turns out there's a right answer, and that answer is “NO”! Does this definite answer seem out of character for ChatGPT, which usually goes overboard with fair and balanced views? It did to me. That's the rabbit hole I explore in this episode. By probing around this accidentally encountered guardrail, we discover the kinds of ethical issues ChatGPT's creators are concerned about. And I wonder out loud why we can't just...