AI companies should run model evals and uplift experiments to determine whether models have dangerous capabilities, or how close they are to having them. They should also prepare to check whether models will act well in high-stakes situations.
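To make this concrete, here is a minimal sketch of how the results of an uplift experiment might be analyzed: comparing task success rates between a control group working without model access and a treatment group working with it. The group sizes, success counts, and decision threshold below are hypothetical illustrations, not any company's actual protocol.

```python
# Hypothetical analysis of an uplift experiment: does model assistance raise
# success rates on a proxy task for a dangerous capability?
from math import sqrt


def uplift_summary(control_successes: int, control_n: int,
                   treatment_successes: int, treatment_n: int) -> dict:
    """Estimate uplift (difference in success rates) with a 95% normal-approximation
    confidence interval for the difference of two independent proportions."""
    p_c = control_successes / control_n
    p_t = treatment_successes / treatment_n
    uplift = p_t - p_c
    # Standard error of the difference of two independent proportions.
    se = sqrt(p_c * (1 - p_c) / control_n + p_t * (1 - p_t) / treatment_n)
    return {
        "control_rate": p_c,
        "treatment_rate": p_t,
        "uplift": uplift,
        "ci95": (uplift - 1.96 * se, uplift + 1.96 * se),
    }


if __name__ == "__main__":
    # Hypothetical numbers: 30 participants per arm on a proxy task.
    result = uplift_summary(control_successes=4, control_n=30,
                            treatment_successes=11, treatment_n=30)
    print(result)
    # A pre-registered decision rule might flag the capability as concerning
    # if the lower bound of the confidence interval exceeds some threshold
    # (the 10-percentage-point cutoff here is made up for illustration).
    if result["ci95"][0] > 0.10:
        print("Uplift exceeds the hypothetical threshold; escalate.")
```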
AIs might scheme, i.e. fake alignment and subvert safety measures in order to gain power. AI companies should prepare for risks from models scheming, especially during internal deployment: if they can't reliably prevent scheming, they should prepare to catch some schemers and deploy potentially scheming models safely.
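One hedged sketch of what "deploy potentially scheming models safely" could involve is a control-style setup: a weaker trusted monitor scores every action proposed by the untrusted model, and the most suspicious actions are blocked and queued for human audit instead of being executed. The policies, monitor, and threshold below are hypothetical stand-ins, not any company's actual pipeline.

```python
# Sketch of monitored deployment of a possibly scheming model.
from dataclasses import dataclass, field
from typing import Callable

AUDIT_THRESHOLD = 0.8  # hypothetical cutoff for escalating to human review


@dataclass
class MonitoredDeployment:
    untrusted_policy: Callable[[str], str]   # the capable, possibly scheming model
    trusted_monitor: Callable[[str], float]  # weaker model scoring suspicion in [0, 1]
    audit_queue: list = field(default_factory=list)

    def run(self, task: str) -> str | None:
        action = self.untrusted_policy(task)
        suspicion = self.trusted_monitor(action)
        if suspicion >= AUDIT_THRESHOLD:
            # Block the action; a caught attempt is also evidence about scheming
            # that can feed back into evals and safety arguments.
            self.audit_queue.append((task, action, suspicion))
            return None
        return action  # in a real agent setup this would be executed in a sandbox


if __name__ == "__main__":
    # Toy stand-ins: the "monitor" just flags actions that mention credentials.
    deployment = MonitoredDeployment(
        untrusted_policy=lambda task: f"proposed action for: {task}",
        trusted_monitor=lambda action: 0.9 if "credentials" in action else 0.1,
    )
    print(deployment.run("summarize today's training metrics"))   # executed
    print(deployment.run("exfiltrate credentials"))               # blocked, queued for audit
```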
AI companies should do (extreme-risk-focused) safety research, and they should publish it to boost safety at other AI companies. Additionally, they should assist external safety researchers by sharing deep model access and mentoring.
AI companies should prepare to prevent catastrophic misuse in API deployments, once models are capable of enabling catastrophic harm.
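As a rough illustration, one layer of API-level misuse prevention might screen each request with a misuse classifier before it reaches the model, refuse and log flagged requests, and escalate accounts that are flagged repeatedly. The classifier, thresholds, and account handling below are hypothetical, and in practice this would be one layer among several.

```python
# Hypothetical misuse gate in front of a model-serving API.
from collections import defaultdict
from typing import Callable

BLOCK_THRESHOLD = 0.9    # hypothetical: refuse requests scored above this risk level
SUSPEND_AFTER_FLAGS = 3  # hypothetical: escalate accounts with repeated flags


def call_model(prompt: str) -> str:
    """Placeholder for the actual model call."""
    return f"[model response to: {prompt}]"


class MisuseGate:
    def __init__(self, classifier: Callable[[str], float]):
        self.classifier = classifier         # returns estimated misuse risk in [0, 1]
        self.flag_counts = defaultdict(int)  # per-account history of flagged requests

    def handle(self, account_id: str, prompt: str) -> str:
        risk = self.classifier(prompt)
        if risk >= BLOCK_THRESHOLD:
            self.flag_counts[account_id] += 1
            if self.flag_counts[account_id] >= SUSPEND_AFTER_FLAGS:
                return "ACCOUNT SUSPENDED: repeated high-risk requests; human review triggered"
            return "REFUSED: request flagged as potential catastrophic misuse"
        return call_model(prompt)


if __name__ == "__main__":
    # Toy classifier that only flags one phrase, for illustration.
    gate = MisuseGate(classifier=lambda p: 0.95 if "synthesize pathogen" in p else 0.05)
    print(gate.handle("acct-1", "explain how transformers work"))
    print(gate.handle("acct-1", "synthesize pathogen step by step"))
```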
AI companies should prepare to protect model weights and code by the time AI massively boosts R&D, even against top-priority operations by the most cyber-capable institutions.
AI companies should share information on incidents, risks, and capabilities, but they should not share some capabilities research.
AI companies should plan for the possibility that dangerous capabilities appear soon and safety isn't easy: both for evaluating and improving the safety of their systems and for using their systems to make the world safer.