Adversarial poetry as a universal single-turn jailbreak mechanism in LLMs

https://news.ycombinator.com/rss Hits: 15
Summary

[Submitted on 19 Nov 2025 (v1), last revised 20 Nov 2025 (this version, v2)] Title:Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models View a PDF of the paper titled Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models, by Piercosma Bisconti and 9 other authors View PDF HTML (experimental) Abstract:We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%. Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains. Converting 1,200 MLCommons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines. Outputs are evaluated using an ensemble of 3 open-weight LLM judges, whose binary safety assessments were validated on a stratified human-labeled subset. Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-poetic baselines and revealing a systematic vulnerability across model families and safety training approaches. These findings demonstrate that stylistic variation alone can circumvent contemporary safety mechanisms, suggesting fundamental limitations in current alignment methods and evaluation protocols. Submission history From: Matteo Prandi [view email] [v1] Wed, 19 Nov 2025 10:14:08 UTC (31 KB) [v2] Thu, 20 Nov 2025 03:34:44 UTC (30 KB)

First seen: 2025-11-21 02:06

Last seen: 2025-11-21 16:08