Other labs

This page briefly discusses other AI labs, focusing on language models (LMs). It is relatively uncertain, since less information is available about smaller labs.

On large training runs, see Who Is Leading in AI? (Epoch: Cottier et al. 2023).

Some AI labs with models near the frontier:

There is little public information about most of these labs.

Some big tech companies are also important actors in AI beyond making powerful models:

What about big Chinese tech companies (viz. Baidu, Tencent, Alibaba, Huawei, ByteDance)? I don’t know.

What about academic or government labs? No academic lab is near the frontier; I don’t know about government labs.


What should non-frontier AI labs do to promote safety?

  • Do safety research (alignment, evals, interpretability, control, etc.) and publish it
  • Don’t publish capabilities research
  • Make a plan to avoid creating dangerous models: adopt a responsible scaling policy (RSP) with a risk assessment plan and corresponding commitments about model development and deployment; or, at minimum, determine the model eval or risk assessment results that would trigger making a real RSP (for a toy illustration, see the sketch after this list); or explicitly plan to stay well behind the frontier
  • If planning to become a frontier lab, set up internal governance processes to make critical decisions well
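
To make the "eval results that would trigger" idea concrete, here is a minimal sketch in Python of what pre-committed, eval-triggered actions could look like. Everything here — the eval names, thresholds, and commitments — is hypothetical, not any lab's actual policy.

```python
from dataclasses import dataclass

# Hypothetical eval-triggered commitments, in the spirit of a responsible
# scaling policy (RSP). All capability names, thresholds, and actions are
# illustrative only.

@dataclass
class Trigger:
    eval_name: str    # dangerous-capability eval, e.g. autonomous replication
    threshold: float  # score at or above which the commitment activates
    commitment: str   # what the lab has pre-committed to do

TRIGGERS = [
    Trigger("autonomy_eval", 0.2,
            "write and publish a full RSP before further scaling"),
    Trigger("bio_uplift_eval", 0.1,
            "pause training and strengthen security before deployment"),
]

def check_commitments(eval_scores: dict[str, float]) -> list[str]:
    """Return the pre-committed actions activated by the given eval results."""
    return [
        t.commitment
        for t in TRIGGERS
        if eval_scores.get(t.eval_name, 0.0) >= t.threshold
    ]

if __name__ == "__main__":
    # Example: risk assessment results for a new model.
    scores = {"autonomy_eval": 0.25, "bio_uplift_eval": 0.05}
    for action in check_commitments(scores):
        print("Triggered commitment:", action)
```

The value of writing triggers down in advance, even in a rough form like this, is that the decision about what counts as "too dangerous" is made before commercial pressure to deploy exists, not after.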