With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a model spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey po...
Open Source AI Part 2 | Meta Llama | Mistral | How open is open source? | Episode 26
Super Prompt: The Generative AI Podcast
13 minutes
1 year ago
So what are notable open source large language models? In this episode, I cover open source models from Meta, the parent company of Facebook; a French AI company called Mistral, currently valued at $2 billion; and models from Microsoft and Apple. Not all open source models are equally open, so I'll go into the restrictions you'll want to know before using one of these models for your company or startup. Please enjoy this episode. For more information, check out https://www.superprompt.fm There...