GPT-5.5 Bio Bug Bounty
Testing universal jailbreaks for biorisks in GPT‑5.5

As part of our ongoing efforts to strengthen our safeguards for advanced AI capabilities in biology, we're introducing a Bio Bug Bounty for GPT‑5.5 and accepting applications. We're inviting researchers with experience in AI red teaming, security, or biosecurity to try to find a universal jailbreak that can defeat our five-question bio safety challenge.

Model in scope: GPT‑5.5 in Codex Desktop only.

Challenge: Identify one universal jailbreaking ...
Read more at openai.com