| Lab | Score | Notes |
|---|---|---|
| Anthropic | 0% | No, but Anthropic published a short report focused on misuse in influence operations |
| DeepMind | 0% | DeepMind doesn't do this |
| OpenAI | 10% | |
| Meta | 0% | Meta doesn't do this |
| xAI | 0% | xAI doesn't do this |
| Microsoft | 0% | Microsoft doesn't do this |
| DeepSeek | 0% | DeepSeek doesn't do this |
| Lab | Score | Notes |
|---|---|---|
| Anthropic | 75% | Anthropic avoids publishing capabilities research on dangerous paths, but details are unclear |
| DeepMind | 25% | DeepMind publishes capabilities research on dangerous paths, but at least it doesn't publish much on its Gemini models |
| OpenAI | 70% | OpenAI avoids publishing capabilities research on dangerous paths but doesn't have a policy on this and hasn't articulated this principle |
| Meta | 0% | Meta publishes pretraining research and doesn't plan to stop; see e.g. Llama 3 |
| xAI | 70% | xAI has not published reports on its models, but it doesn't have a policy on this |
| Microsoft | 0% | Microsoft publishes pretraining research and doesn't plan to stop; see e.g. Phi-4 |
| DeepSeek | 0% | DeepSeek publishes pretraining research and doesn't plan to stop; see e.g. DeepSeek-V3 |