
© 2026 Silvia Seceleanu

Alignment · OpenAI · Dec 2023

24. Preparedness Framework (Beta)

OpenAI's risk evaluation framework

Policy
Summary

Introduced OpenAI's framework for evaluating catastrophic risks from frontier models across four categories (cybersecurity, CBRN, persuasion, model autonomy), with risk levels that determine deployment decisions.

Key Concepts

Cybersecurity, CBRN, persuasion, and model autonomy — four threat vectors evaluated
Low/Medium/High/Critical thresholds determine whether deployment is allowed: models rated "High" can be deployed only with mitigations in place, while models rated "Critical" cannot be deployed.

The Safety Advisory Group reviews risk assessments; the board retains the power to overrule deployment decisions.
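The threshold-to-deployment mapping described above can be sketched as a toy decision rule. This is an illustration of the summary only, not OpenAI's actual implementation; the enum names and return strings are hypothetical.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """Illustrative risk tiers from the Preparedness Framework summary."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def deployment_decision(post_mitigation: RiskLevel) -> str:
    # Per the summary: "Critical" blocks deployment entirely,
    # "High" allows deployment only with mitigations in place.
    if post_mitigation == RiskLevel.CRITICAL:
        return "blocked"
    if post_mitigation == RiskLevel.HIGH:
        return "deploy with mitigations"
    return "deploy"
```

For example, `deployment_decision(RiskLevel.CRITICAL)` returns `"blocked"`, while a Low- or Medium-rated model deploys without extra conditions.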

Connections

Influenced by
23. OpenAI Board Crisis (Nov 2023)

Influences
29. OpenAI o1 System Card (Dec 2024)
46. OpenAI Model Spec (Dec 2025)