AI companies should do model evals and uplift experiments to determine whether models have dangerous capabilities or how close they are to having them. They should also prepare to check whether models will act well in high-stakes situations.
Scores by company: 44% | 29% | 32% | 1% | 1% | 1% | 0%
AIs might scheme, i.e., fake alignment and subvert safety measures in order to gain power. AI companies should prepare for risks from scheming models, especially during internal deployment: if they can't reliably prevent scheming, they should prepare to catch some schemers and to deploy potentially scheming models safely.
Scores by company: 2% | 8% | 2% | 2% | 2% | 2% | 2%
AI companies should do (extreme-risk-focused) safety research, and they should publish it to boost safety at other AI companies. Additionally, they should assist external safety researchers by sharing deep model access and by mentoring them.
Scores by company: 68% | 55% | 37% | 28% | 0% | 15% | 8%
AI companies should prepare to prevent catastrophic misuse in deployments via API, once models are capable of enabling catastrophic harm.
Scores by company: 12% | 4% | 5% | 0% | 0% | 0% | 0%
AI companies should prepare to protect model weights and code by the time AI massively boosts R&D, even from top-priority operations by the top cyber-capable institutions.
Scores by company: 2% | 5% | 0% | 0% | 0% | 0% | 0%
I'm Zach Stein-Perlman. I'm worried about future powerful AIs causing an existential catastrophe. Here at AI Lab Watch, I track what AI companies are doing in terms of safety.
In this scorecard, I collect actions AI companies can take to improve safety, metrics measuring how well they're doing on those actions, and public information on what the companies are doing in terms of safety.
Click on a column or cell to see details about a company; click on a row to see details about a category.
Criteria are grouped into categories; both criteria and categories are weighted by how important they currently are for safety and by how much signal they capture. These criteria are not exhaustive; some important variables are hard to measure. I endorse the words more than the numbers; the scores on particular criteria and the weights are largely judgment calls.
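To make the weighting concrete, here's a minimal sketch of how an overall score could be aggregated from weighted criteria grouped into weighted categories. The category names, criteria, weights, and scores below are placeholders for illustration, not the actual scorecard data.

```python
# Minimal sketch of weighted-score aggregation. All categories, criteria,
# weights, and scores here are made up for illustration; they are not the
# scorecard's real data.

# Each category maps to (category_weight, {criterion: (criterion_weight, score)}).
scorecard = {
    "Category A": (0.25, {
        "Criterion A1": (0.6, 0.44),
        "Criterion A2": (0.4, 0.20),
    }),
    "Category B": (0.25, {
        "Criterion B1": (1.0, 0.05),
    }),
    "Category C": (0.5, {
        "Criterion C1": (0.5, 0.68),
        "Criterion C2": (0.5, 0.30),
    }),
}

def category_score(criteria):
    """Weighted average of criterion scores within one category."""
    total_weight = sum(w for w, _ in criteria.values())
    return sum(w * s for w, s in criteria.values()) / total_weight

def overall_score(scorecard):
    """Weighted average of category scores across the scorecard."""
    total_weight = sum(w for w, _ in scorecard.values())
    return sum(w * category_score(c) for w, c in scorecard.values()) / total_weight

print(f"Overall: {overall_score(scorecard):.0%}")
```

This only shows the arithmetic; as noted above, the real weights and per-criterion scores are judgment calls.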
In addition to the scorecard, I write blogposts on what AI companies should do and what they are doing, and I maintain resources with information on these topics.
For some details on what AI companies should do and what they are doing in terms of safety, click around this scorecard. Or check out the articles below, the rest of my blog, or the resources I maintain.