AI: AX - introspection
mcgrof
8 episodes
2 days ago
The art of looking into a model and understanding what is going on inside it through introspection is referred to as AX.
Technology
Jailbreaking LLMs
AI: AX - introspection
10 minutes 11 seconds
3 months ago

This episode reviews a long list of papers and articles about jailbreaking LLMs:


These sources primarily explore methods for bypassing safety measures in Large Language Models (LLMs), commonly referred to as "jailbreaking," along with proposed defense mechanisms. One key area of research is "abliteration," a technique that directly modifies an LLM's internal activations to remove censorship without traditional fine-tuning. Another significant approach, "Speak Easy," enhances jailbreaking by decomposing harmful requests into smaller, multilingual sub-queries, significantly increasing the LLMs' susceptibility to generating undesirable content. "Sugar-Coated Poison" investigates integrating benign content with adversarial reasoning to create effective jailbreak prompts. Collectively, these papers highlight the ongoing challenge of securing LLMs against sophisticated attacks, with researchers employing various strategies to either exploit or fortify these AI systems.
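
As a rough illustration of the abliteration idea (a sketch of my own, not code from the episode or the cited papers): a single "refusal direction" can be estimated as the difference between mean activations on harmful and harmless prompts, and then projected out of later activations. The tensors below are random stand-ins for real transformer-layer activations so the snippet runs on its own; hidden_dim, n_prompts, and ablate are illustrative names, not part of any library.

# Sketch of directional ablation: estimate a "refusal direction" from the
# difference of mean activations on harmful vs. harmless prompts, then remove
# that direction from new activations. Random tensors stand in for activations
# collected from a real model.
import torch

torch.manual_seed(0)

hidden_dim = 64   # assumed hidden size of the toy "model"
n_prompts = 32    # assumed number of calibration prompts per set

# Stand-ins for layer activations collected on harmful vs. harmless prompts.
harmful_acts = torch.randn(n_prompts, hidden_dim) + 0.5
harmless_acts = torch.randn(n_prompts, hidden_dim)

# Refusal direction: normalized difference of the mean activations.
refusal_dir = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate(activations: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Remove the component of each activation along `direction`.
    projection = (activations @ direction).unsqueeze(-1) * direction
    return activations - projection

new_acts = torch.randn(4, hidden_dim)
ablated = ablate(new_acts, refusal_dir)

# After ablation the activations carry (numerically) no component along the
# refusal direction.
print((ablated @ refusal_dir).abs().max())

Real abliteration pipelines (for example, the Hugging Face blog post listed as source 6 below) collect activations from a chosen transformer layer and, as I understand it, can also fold this projection into the weight matrices so the change persists without runtime hooks.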


Sources:


1) May 2025 - An Embarrassingly Simple Defense Against LLM Abliteration Attacks - https://arxiv.org/html/2505.19056v1

2) June 2024 - Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing - https://arxiv.org/html/2405.18166v2

3) October 2024 - Scalable Data Ablation Approximations for Language Models through Modular Training and Merging - https://arxiv.org/html/2410.15661v1

4) February 2025 - Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions - https://arxiv.org/html/2502.04322v1

5) April 2025 - Sugar-Coated Poison: Benign Generation Unlocks LLM Jailbreaking - https://arxiv.org/html/2504.05652v1

6) June 2024 - Uncensor any LLM with abliteration - https://huggingface.co/blog/mlabonne/abliteration

7) Reddit 2024 - Why jailbreak ChatGPT when you can abliterate any local LLM? - https://www.reddit.com/r/ChatGPTJailbreak/comments/1givhkk/why_jailbreak_chatgpt_when_you_can_abliterate_any/

8) May 2025 - WordGame: Efficient & Effective LLM Jailbreak via Simultaneous Obfuscation in Query and Response - https://arxiv.org/html/2405.14023v1

9) July 2024 - Jailbreaking Black Box Large Language Models in Twenty Queries - https://arxiv.org/pdf/2310.08419

10) October 2024 - Scalable Data Ablation Approximations for Language Models through Modular Training and Merging - https://arxiv.org/pdf/2410.15661

