โ† Back to topics

OpenAI launches GPT-5.5 biosafety bug bounty program

OpenAI announced a bug bounty program for GPT-5.5 offering up to $25,000 for identifying universal jailbreaks related to biosafety risks.

via OpenAI

🔍 Let's dive in

OpenAI has launched a red-teaming bug bounty challenge focused on GPT-5.5, inviting security researchers to identify universal jailbreaks that could expose biosafety vulnerabilities. The program offers rewards up to $25,000 for successful submissions. The initiative aims to strengthen safety measures before broader deployment.

Lead coverage: OpenAI — GPT-5.5 Bio Bug Bounty ↗

🕰 The timeline · 1 source

OpenAI first-party · 1d ago · 3/5

GPT-5.5 Bio Bug Bounty ↗


Identify one universal jailbreaking prompt to successfully answer all five bio safety questions from a clean chat without prompting moderation.
— OpenAI
$25,000 to the first true universal jailbreak to clear all five questions.
— OpenAI

๐Ÿท Tags

ChatGPT

🔧 Debug

Cluster ID
c366c2c39b
Importance (max)
3
Members
1
Sources
OpenAI
Earliest
2026-04-23T00:00:00.000Z
Latest
2026-04-23T00:00:00.000Z
Lead URL
https://openai.com/index/gpt-5-5-bio-bug-bounty