Open letters

Open letters related to AI safety

  • FLI, Oct 2015: Research Priorities for Robust and Beneficial Artificial Intelligence
  • FLI, Aug 2017: Asilomar AI Principles
  • FLI, Mar 2023: Pause Giant AI Experiments
  • DAIR et al., Apr 2023: Five considerations to guide the regulation of "General Purpose AI" in the EU's AI Act
  • CAIS, May 2023: Statement on AI Risk
  • Meta, Jul 2023: Statement of Support for Meta's Open Approach to Today's AI
  • Academic AI researchers, Oct 2023: Managing AI Risks in an Era of Rapid Progress
  • FLI and Encode Justice, Oct 2023: AI Licensing for a Better Future
  • CHAI et al., Oct 2023: Coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity
  • Oct 2023: Urging an International AI Treaty
  • Mozilla, Oct 2023: Joint Statement on AI Safety and Openness
  • Nov 2023: Post-Summit Civil Society Communiqué
  • Nov 2023: EU AI Act Open Letter
  • IDAIS, Mar 2024: Consensus Statement on Red Lines in Artificial Intelligence
  • Mar 2024: Openness and Transparency in AI Provide Significant Benefits for Society
  • Frontier AI company staff members, Jun 2024: A Right to Warn about Advanced Artificial Intelligence
  • IDAIS, Sep 2024: Consensus Statement on AI Safety as a Global Public Good

Joint declarations between countries related to AI safety

  • Nov 2023: Bletchley Declaration
  • May 2024: Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity