This page briefly discusses other AI labs, focusing on language models (LMs). It is relatively uncertain, since less information is publicly available about smaller labs.
On large training runs, see Who Is Leading in AI? (Epoch: Cottier et al. 2023).
Some AI labs with models near the frontier:
- Mistral AI: French; released Mistral Large via API; released the weights of Mixtral 8x22B under a permissive license; has a partnership with Microsoft
- Databricks: released the weights of the DBRX model
- xAI: released Grok-1.5 via API; released the weights of Grok-1
- NVIDIA: Nemotron-4 15B
- Inflection: released Inflection-2.5 model via API; Pi product
- Cohere: brands itself as ‘LMs for enterprises’; released Command R+ via API; released the weights of Command R
- TII: Emirati; released the weights of Falcon models
- Imbue: AI agents
- 01.AI: Chinese; Yi-Large model; released the weights of Yi models; see also TechCrunch 2023
- AI21 Labs: Israeli; released the weights of Jamba
- Aleph Alpha: German; brands itself as ‘LMs for enterprises’; Luminous models
There is little public information about most of these labs.
Some big tech companies are also important actors in AI beyond making powerful models:
- NVIDIA: largely controls the supply of new hardware; NVIDIA AI platform; various unremarkable LMs such as Nemotron-4 340B
- Microsoft: has a strong partnership with OpenAI; has a new division, "Microsoft AI"; has Azure AI, an AI model platform; made the Phi-3 models; responded to the UK request for information about AI companies' safety policies
- Amazon: has Bedrock, an AI model platform; made Amazon Titan, an unremarkable family of LMs; is developing the Olympus family; invested in Anthropic; responded to the UK request for information about AI companies’ safety policies
- Google: also invested in Anthropic
- Apple: foundation models; the CoreNet library; previously released the OpenELM and MM1 models; reportedly had a 200B-parameter LM
What about big Chinese tech companies (viz. Baidu, Tencent, Alibaba, Huawei, ByteDance)? I don’t know.
What about academic or government labs? No academic lab is near the frontier; I don’t know about government labs.
What should non-frontier AI labs do to promote safety?
- Do safety research (alignment, evals, interpretability, control, etc.) and publish it
- Don’t publish capabilities research
- Make a plan to avoid creating dangerous models: adopt a responsible scaling policy (RSP) with a risk assessment plan and corresponding commitments about model development and deployment; or at least determine model-eval or risk-assessment results that would trigger making a real RSP; or explicitly plan to stay well behind the frontier
- If planning to become a frontier lab, set up internal governance processes to make critical decisions well