At AI Lab Watch, I (Zach Stein-Perlman) collect safety recommendations for frontier AI companies, track what those companies are doing, and evaluate them on safety. I also maintain collections of information and blog about these topics. I focus on what companies should do to prevent extreme risks such as AI takeover and human extinction.
In May I finished redoing the scorecard, and AI Lab Watch came out of beta. My next priority is analyzing companies' reports on their model evals for dangerous capabilities. Ultimately this analysis will expand to cover companies' safeguards and safety cases. See my work in progress at AI Safety Claims Analysis.
Thanks to Lightcone for designing this website. AI Lab Watch is funded by SFF and by me. It also depends on the many people who volunteer their expertise.
Contact me (using the button in the bottom-right corner on the desktop site) to: