Published 6 June 2024. This page may not be kept up to date.
This page collects statements from some leading AI labs on AI governance, along with their statements on AI to governments and policymakers, focused on extreme risks and the policy relevant to them.
This page is sometimes opinionated without justification and without describing my position on ideal policy.
Many sources appear in only one place — e.g. the OpenAI section contains the most important OpenAI sources, but various OpenAI sources appear only under “Responses to US requests for information” and other sections.
My impression is that OpenAI, Anthropic, and others are much more opposed to real regulation in private than in public. So these sources are incomplete and moreover filtered by the labs to look good (since they differentially publish stuff that looks good).
Within sections, sources are roughly sorted by priority.
Use the table of contents. Don’t try to read this on your phone.
OpenAI
Governance of superintelligence (May 2023)
First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.
And of course, individual companies should be held to an extremely high standard of acting responsibly.
Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.
Planning for AGI and beyond (Feb 2023)
We think it’s important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year. At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it’s important that major world governments have insight about training runs above a certain scale.
Altman Senate testimony (May 2023)
Written testimony (before the hearing):
There are several areas I would like to flag where I believe that AI companies and governments can partner productively.
First, it is vital that AI companies–especially those working on the most powerful models–adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.
Second, AI is a complex and rapidly evolving field. It is essential that the safety requirements that AI companies must meet have a governance regime flexible enough to adapt to new technical developments. The U.S. government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organizations, that can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration.
Third, we are not alone in developing this technology. It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting.
Questions for the Record (after the hearing):
What are the most important factors for Congress to consider when crafting legislation to regulate artificial intelligence? . . . What specific guardrails and/or regulations do you support that would allow society to benefit from advances in artificial intelligence while minimizing potential risks? [Ed: Altman gave identical answers to these two questions]
Any new laws related to AI will become part of a complex legal and policy landscape. A wide range of existing laws already apply to AI, including to our products. And in sectors like medicine, education, and employment, policy stakeholders have already begun to adapt existing laws to take account of the ways that AI impacts those fields. We look forward to contributing to the development of a balanced approach that addresses the risks from AI while also enabling Americans and people around the world to benefit from this technology.
We strongly support efforts to harmonize the emergent accountability expectations for AI, including the efforts of the NIST AI Risk Management Framework, the U.S.-E.U. Trade and Technology Council, and a range of other global initiatives. While these efforts continue to progress, and even before new laws are fully implemented, we see a role for ourselves and other companies to make voluntary commitments on issues such as pre-deployment testing, content provenance, and trust and safety.
We are already doing significant work on responsible and safe approaches to developing and deploying our models, including through red-teaming and quantitative evaluation of potentially dangerous model capabilities and risks. We report on these efforts primarily through a published document that we currently call a System Card. We are refining these approaches in tandem with the broader public policy discussion.
For future generations of the most highly capable foundation models, which are likely to prove more capable than models that have been previously shown to be safe, we support the development of registration, disclosure, and licensing requirements. Such disclosure could help provide policymakers with the necessary visibility to design effective regulatory solutions, and get ahead of trends at the frontier of AI progress. To be beneficial and not create new risks, it is crucial that any such regimes prioritize the security of the information disclosed. Licensure is common in safety-critical and other high-risk contexts, such as air travel, power generation, drug manufacturing, and banking. Licensees could be required to perform pre-deployment risk assessments and adopt state-of-the-art security and deployment safeguards.
. . .
During the hearing, you testified that “a new framework” is necessary for imposing liability for harms caused by artificial intelligence—separate from Section 230 of the Communications Decency Act—and offered to “work together” to develop this framework. What features do you consider most important for a liability framework for artificial Intelligence?
Any new framework should apportion responsibility in such a way that AI services, companies who build on AI services, and users themselves appropriately share responsibility for the choices that they each control and can make, and have appropriate incentives to take steps to avoid harm.
OpenAI disallows the use of our models and tools for certain activities and content, as outlined in our usage policies. These policies are designed to prohibit the use of our models and tools in ways that may cause individual or societal harm. We update these policies in response to new risks and updated information about how our models are being used. Access to and use of our models are also subject to OpenAI’s Terms of Use which, among other things, prohibit the use of our services to harm people’s rights, and prohibit presenting output from our services as being human-generated when it was not.
One important consideration for any liability framework is the level of discretion that should be granted to companies like OpenAI, and people who develop services using these technologies, in determining the level of freedom granted to users. If liability frameworks are overly restrictive, the capabilities that are offered to users could in turn be heavily censored or restricted, leading to potentially stifling outcomes and negative implications for many of the beneficial capabilities of AI, including free speech and education. However, if liability frameworks are too lax, negative externalities may appear where a company benefits from lack of oversight and regulation at the expense of the overall good of society. One of the critical features of any liability framework is to attempt to find and continually refine this balance.
Given these realities, it would be helpful for an assignment of rights and responsibilities related to harms to recognize that the results of AI systems are not solely determined by these systems, but instead respond to human-driven commands. For example, a framework should take into account the degree to which each actor in the chain of events that resulted in the harm took deliberate actions, such as whether a developer clearly stipulated allowed/disallowed usages or developed reasonable safeguards, and whether a user disregarded usage rules or acted to overcome such safeguards.
AI services should also be encouraged to ensure a baseline of safety and risk disclosures for our products to minimize potential harm. This thinking underlies our approach of putting our systems through safety training and testing prior to release, frank disclosures of risk and mitigations, and enforcement against misuse. Care should be taken to ensure that liability frameworks do not inadvertently create unintended incentives for AI providers to reduce the scope or visibility of such disclosures.
Furthermore, many of the highest-impact uses of new AI tools are likely to take place in specific sectors that are already covered by sector-specific laws and regulations, such as health, financial services and education. Any new liability regime should take into consideration the extent to which existing frameworks could be applied to AI technologies as an interpretive matter. To the extent new or additional rules are needed, they would need to be harmonized with these existing laws.
[Blumenthal asked Altman “the effect on jobs . . . is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is, and whether you share that concern.” His reply only mentioned jobs. Marcus noted that “Sam’s worst fear I do not think is employment. And he never told us what his worst fear actually is. And I think it’s germane to find out.” Altman vaguely replied about “significant harm to the world.”]
. . .
I think the US should lead here and do things first, but to be effective we do need something global. . . . There is precedent–I know it sounds naive to call for something like this, and it sounds really hard–there is precedent. We’ve done it before with the IAEA. We’ve talked about doing it for other technologies. Given what it takes to make these models–the chip supply chain, the limited number of competitive GPUs, the power the US has over these companies–I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world.
. . .
Do you agree with me that the simplest way and the most effective way [to implement licensing of AI tools] is to have an agency that is more nimble and smarter than Congress . . . [overseeing] what you do?
We’d be enthusiastic about that.
. . .
I would like you to assume there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying. . . . Please tell me in plain English, two or three reforms, regulations, if any, that you would, you would implement if you were queen or king for a day.
Number one, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards . . . as the dangerous capability evaluations. One example that we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list of the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third I would require independent audits. So not just from the company or the agency, but experts who can say the model is or isn’t in compliance with these stated safety thresholds and these percentages of performance on question X or Y.
. . .
I’m a believer in defense in depth. I think that there should be limits on what a deployed model is capable of, and then what it actually does too.
. . .
Would you pause any further development for six months or longer?
So first of all, after we finished training GPT-4, we waited more than six months to deploy it. We are not currently training what will be GPT-5. We don’t have plans to do it in the next six months. But I think the frame of the letter is wrong. What matters is audits, red teaming, safety standards that a model needs to pass before training. If we pause for six months, then I’m not really sure what we do then– do we pause for another six? Do we kind of come up with some rules then? The standards that we have developed and that we’ve used for GPT-4 deployment, we want to build on those, but we think that’s the right direction, not a calendar clock pause. There may be times–I expect there will be times–when we find something that we don’t understand and we really do need to take a pause, but we don’t see that yet. Nevermind all the benefits.
You don’t see what yet? You’re comfortable with all of the potential ramifications from the current existing technology?
I’m sorry. We don’t see the reasons to not train a new one. For deploying, as I mentioned, I think there’s all sorts of risky behavior and there’s limits we put, we have to pull things back sometimes, add new ones. I meant we don’t see something that would stop us from training the next model, where we’d be so worried that we’d create something dangerous even in that process, let alone the deployment that would happen.
Comment on NTIA: AI Accountability (Jun 2023)
openai.com version: Comment on NTIA AI Accountability Policy.
OpenAI’s Current Approaches
We are refining our practices in tandem with the evolving broader public conversation. Here we provide details on several aspects of our approach.
System Cards
Transparency is an important element of building accountable AI systems. A key part of our approach to accountability is publishing a document that we currently call a System Card, for new AI systems that we deploy. Our approach draws inspiration from previous research work on model cards and system cards. To date, OpenAI has published two system cards: the GPT-4 System Card and DALL-E 2 System Card.
We believe that in most cases, it is important for these documents to analyze and describe the impacts of a system – rather than focusing solely on the model itself – because a system’s impacts depend in part on factors other than the model, including use case, context, and real world interactions. Likewise, an AI system’s impacts depend on risk mitigations such as use policies, access controls, and monitoring for abuse. We believe it is reasonable for external stakeholders to expect information on these topics, and to have the opportunity to understand our approach.
Our System Cards aim to inform readers about key factors impacting the system’s behavior, especially in areas pertinent for responsible usage. We have found that the value of System Cards and similar documents stems not only from the overview of model performance issues they provide, but also from the illustrative examples they offer. Such examples can give users and developers a more grounded understanding of the described system’s performance and risks, and of the steps we take to mitigate those risks. Preparation of these documents also helps shape our internal practices, and illustrates those practices for others seeking ways to operationalize responsible approaches to AI.
Qualitative Model Evaluations via Red Teaming
Red teaming is the process of qualitatively testing our models and systems in a variety of domains to create a more holistic view of the safety profile of our models. We conduct red-teaming internally with our own staff as part of model development, as well as with people who operate independently of the team that builds the system being tested. In addition to probing our organization’s capabilities and resilience to attacks, red teams also use stress testing and boundary testing methods, which focus on surfacing edge cases and other potential failure modes with potential to cause harm.
Red teaming is complementary to automated, quantitative evaluations of model capabilities and risks that we also conduct, which we describe in the next section. It can shed light on risks that are not yet quantifiable, or those for which more standardized evaluations have not yet been developed. Our prior work on red teaming is described in the DALL-E 2 System Card and the GPT-4 System Card.
Our red teaming and testing is generally conducted during the development phase of a new model or system. Separately from our own internal testing, we recruit testers outside of OpenAI and provide them with early access to a system that is under development. Testers are selected by OpenAI based on prior work in the domains of interest (research or practical expertise), and have tended to be a combination of academic researchers and industry professionals (e.g., people with work experience in Trust & Safety settings). We evaluate and validate results of these tests, and take steps to make adjustments and deploy mitigations where appropriate.
OpenAI continues to take steps to improve the quality, diversity, and experience of external testers for ongoing and future assessments.
Quantitative Model Evaluations
In addition to the qualitative red teaming described above, we create automated, quantitative evaluations for various capabilities and safety oriented risks, including risks that we find via methods like red teaming. These evaluations allow us to compare different versions of our models with each other, iterate on research methodologies that improve safety, and ultimately act as an input into decision-making about which model versions we choose to deploy. Existing evaluations span topics such as erotic content, hateful content, and content related to self-harm among others, and measure the propensity of the models to generate such content.
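As a concrete illustration of what such automated propensity evaluations might involve, here is a minimal sketch (mine, not OpenAI’s tooling): `generate`, `violates_policy`, the prompt set, and the deployment rule are all hypothetical stand-ins.

```python
# Illustrative sketch only -- not OpenAI's actual evaluation code.
# Compares model versions on their propensity to produce
# policy-violating content over a fixed prompt set.

from typing import Callable, Iterable


def violation_rate(
    generate: Callable[[str], str],          # stand-in for a model API call
    violates_policy: Callable[[str], bool],  # stand-in for a policy classifier
    prompts: Iterable[str],
) -> float:
    """Fraction of prompts whose completions are flagged as violations."""
    prompts = list(prompts)
    flagged = sum(violates_policy(generate(p)) for p in prompts)
    return flagged / len(prompts)


def compare_models(models: dict, violates_policy, prompts) -> dict:
    """Run the same evaluation against several model versions for comparison."""
    return {name: violation_rate(gen, violates_policy, prompts)
            for name, gen in models.items()}


# Hypothetical decision rule: only deploy a candidate model if it is no worse
# than the current production model on this propensity metric.
# scores = compare_models({"prod": prod_gen, "candidate": cand_gen}, clf, prompts)
# ok_to_ship = scores["candidate"] <= scores["prod"]
```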
Usage Policies
OpenAI disallows the use of our models and tools for certain activities and content, as outlined in our usage policies. These policies are designed to prohibit the use of our models and tools in ways that cause individual or societal harm. We update these policies in response to new risks and updated information about how our models are being used. Access to and use of our models are also subject to OpenAI’s Terms of Use which, among other things, prohibit the use of our services to harm people’s rights, and prohibit presenting output from our services as being human-generated when it was not.
We take steps to limit the use of our models for harmful activities by teaching models to refuse to respond to certain types of requests that may lead to potentially harmful responses. In addition, we use a mix of reviewers and automated systems to identify and take action against misuse of our models. Our automated systems include a suite of machine learning and rule-based classifier detections designed to identify content that might violate our policies. When a user repeatedly prompts our models with policy-violating content, we take actions such as issuing a warning, temporarily suspending the user, or in severe cases, banning the user.
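To make the enforcement flow described above concrete, here is a minimal sketch (mine, not OpenAI’s implementation) of tiered enforcement keyed to a user’s count of flagged requests; the thresholds and the `flag_content` classifier are hypothetical.

```python
# Illustrative sketch only -- not OpenAI's actual moderation system.
# Escalates enforcement (warn -> suspend -> ban) as a user accumulates
# requests flagged as policy-violating by an automated classifier.

from collections import defaultdict

# Hypothetical escalation thresholds.
WARN_AT, SUSPEND_AT, BAN_AT = 1, 3, 10

violation_counts = defaultdict(int)


def flag_content(text: str) -> bool:
    """Stand-in for ML and rule-based classifiers that detect policy violations."""
    banned_phrases = ["example banned phrase"]  # placeholder rule
    return any(p in text.lower() for p in banned_phrases)


def handle_request(user_id: str, prompt: str) -> str:
    """Return the enforcement action (if any) for this request."""
    if not flag_content(prompt):
        return "allow"
    violation_counts[user_id] += 1
    n = violation_counts[user_id]
    if n >= BAN_AT:
        return "ban"
    if n >= SUSPEND_AT:
        return "suspend"
    return "warn"  # n >= WARN_AT
```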
Open Challenges in AI Accountability
As discussed in the RFC, there are many important questions related to AI Accountability that are not yet resolved. In the sections that follow, we provide additional perspective on several of these questions.
Assessing Potentially Dangerous Capabilities
Highly capable foundation models have both beneficial capabilities, as well as the potential to cause harm. As the capabilities of these models get more advanced, so do the scale and severity of the risks they may pose, particularly if under direction from a malicious actor or if the model is not properly aligned with human values.
Rigorously measuring advances in potentially dangerous capabilities is essential for effectively assessing and managing risk. We are addressing this by exploring and building evaluations for potentially dangerous capabilities that range from simple, scalable, and automated tools to bespoke, intensive evaluations performed by human experts. We are collaborating with academic and industry experts, and ultimately aim to contribute to the development of a diverse suite of evaluations that can contribute to the formation of best practices for assessing emerging risks in highly capable foundation models. We believe dangerous capability evaluations are an increasingly important building block for accountability and governance in frontier AI development.
Open Questions About Independent Assessments
Independent assessments of models and systems, including by third parties, may be increasingly valuable as model capabilities continue to increase. Such assessments can strengthen accountability and transparency about the behaviors and risks of AI systems.
Some forms of assessment can occur within a single organization, such as when a team assesses its own work or when a team or part of the organization produces a model and another team or part, acting independently, tests that model. A different approach is to have an external third party conduct an assessment. As described above, we currently rely on a mixture of internal and external evaluations of our models.
Third-party assessments may focus on specific deployments, a model or system at some moment in time, organizational governance and risk management practices, specific applications of a model or system, or some combination thereof. The thinking and potential frameworks to be used in such assessments continue to evolve rapidly, and we are monitoring and considering our own approach to assessments.
For any third-party assessment, the process of selecting auditors/assessors with appropriate expertise and incentive structures would benefit from further clarity. In addition, selecting the appropriate expectations against which to assess organizations or models is an open area of exploration that will require inputs from different stakeholders. Finally, it will be important for assessments to consider how systems might evolve over time and build that into the process of an assessment / audit.
Registration and Licensing for Highly Capable Foundation Models
We support the development of registration and licensing requirements for future generations of the most highly capable foundation models. Such models may have sufficiently dangerous capabilities to pose significant risks to public safety; if they do, we believe they should be subject to commensurate accountability requirements.
It could be appropriate to consider disclosure and registration expectations for training processes that are expected to produce highly capable foundation models. Such disclosure could help provide policymakers with the necessary visibility to design effective regulatory solutions, and get ahead of trends at the frontier of AI progress. It is crucial that any such regimes prioritize the security of the information disclosed.
AI developers could be required to receive a license to create highly capable foundation models which are likely to prove more capable than models previously shown to be safe. Licensure is common in safety-critical and other high-risk contexts, such as air travel, power generation, drug manufacturing, and banking. Licensees could be required to perform pre-deployment risk assessments and adopt state-of-the-art security and deployment safeguards; indeed, many of the accountability practices that the NTIA will be considering could be appropriate licensure requirements. Introducing licensure requirements at the computing provider level could also be a powerful complementary tool for enforcement.
There remain many open questions in the design of registration and licensing mechanisms for achieving accountability at the frontier of AI development. We look forward to collaborating with policymakers in addressing these questions.
Altman interview (Bloomberg, Jun 2023)
At this point, given how much people see the economic benefits and potential, no company could stop it. But global regulation– which I only think should be on these powerful, existential-risk-level systems– global regulation is hard, and you don’t want to overdo it for sure, but I think global regulation can help make it safe, which is a better answer than stopping it, and I also don’t think stopping it would work. . . .
We for example don’t think small startups and open-source models below a certain very high capability threshold should be subject to a lot of regulation. We’ve seen what happens to countries that try to overregulate tech; I don’t think that’s what we want here. But also we think it is super important that as we think about a system that could be at a [high risk level], that we have a global and as coordinated a response as possible. . . .
What do you think about the certification system of AI models that the Biden administration has proposed?
I think there’s some version of that that’s really good. I think that people training models that are way above– any model scale that we have today, but above some certain capability threshold– I think you should need to go through a certification process for that. I think there should be external audits and safety tests.
Altman interview (NYmag, Mar 2023)
I think the thing that I would like to see happen immediately is just much more insight into what companies like ours are doing, companies that are training above a certain level of capability at a minimum. A thing that I think could happen now is the government should just have insight into the capabilities of our latest stuff, released or not, what our internal audit procedures and external audits we use look like, how we collect our data, how we’re red-teaming these systems, what we expect to happen, which we may be totally wrong about. [“What I mean is government auditors sitting in our buildings.”] We could hit a wall anytime, but our internal road-map documents, when we start a big training run, I think there could be government insight into that. And then if that can start now– I do think good regulation takes a long time to develop. It’s a real process. They can figure out how they want to have oversight. . . .
Those efforts probably do need a new regulatory effort, and I think it needs to be a global regulatory body. And then people who are using AI, like we talked about, as a medical adviser, I think the FDA can give probably very great medical regulation, but they’ll have to update it for the inclusion of AI. But I would say creation of the systems and having something like an IAEA that regulates that is one thing, and then having existing industry regulators still do their regulation [Ed: he was cut off] . . . .
Section 230 doesn’t seem to cover generative AI. Is that a problem?
I think we will need a new law for use of this stuff, and I think the liability will need to have a few different frameworks. If someone is tweaking the models themselves, I think it’s going to have to be the last person who touches it has the liability, and that’s —
But it’s not full immunity that the platform’s getting —
I don’t think we should have full immunity. Now, that said, I understand why you want limits on it, why you do want companies to be able to experiment with this, you want users to be able to get the experience they want, but the idea of no one having any limits for generative AI, for AI in general, that feels super-wrong.
Brockman House testimony (Jun 2018)
Policy recommendations
Measurement. Many other established voices in the field have tried to combat panic about AGI by instead saying it is not something to worry about or is unfathomably far off. We recommend neither panic nor a lack of caution. Instead, we recommend investing more resources into understanding where the field is, how quickly progress is accelerating, and what roadblocks might lie ahead. We’re exploring this problem via our own research and support of initiatives like the AI Index. But there’s much work to be done, and we are available to work with governments around the world to support their own measurement and assessment initiatives — for instance, we participated in a GAO-led study on AI last year.
Foundation for international coordination. AGI’s impact, like that of the Internet before it, won’t track national boundaries. Successfully using AGI to make the world better for people, while simultaneously preventing rogue actors from abusing it, will require international coordination of some form. Policymakers today should invest in creating the foundations for successful international coordination in AI, and recognize that the more adversarial the climate in which AGI is created, the less likely we are to achieve a good outcome. We think the most practical place to start is actually with the measurement initiatives: each government working on measurement will create teams of people who have a strong motivation to talk to their international counterparts to harmonize measurement schemes and develop global standards.
Brockman tweet (Apr 2023)
We believe (and have been saying in policy discussions with governments) that powerful training runs should be reported to governments, be accompanied by increasingly-sophisticated predictions of their capability and impact, and require best practices such as dangerous capability testing. We think governance of large-scale compute usage, safety standards, and regulation of/lesson-sharing from deployment are good ideas, but the details really matter and should adapt over time as the technology evolves. It’s also important to address the whole spectrum of risks from present-day issues (e.g. preventing misuse or self-harm, mitigating bias) to longer-term existential ones.
OpenAI endorsed some bills.
The work of Anna Makanju and the OpenAI policy team is important. Much of it is private, and I haven’t collected the public stuff.
Anthropic
The case for targeted regulation (Oct 2024)
Increasingly powerful AI systems have the potential to accelerate scientific progress, unlock new medical treatments, and grow the economy. But along with the remarkable new capabilities of these AIs come significant risks. Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast.
Judicious, narrowly-targeted regulation can allow us to get the best of both worlds: realizing the benefits of AI while mitigating the risks. Dragging our feet might lead to the worst of both worlds: poorly-designed, knee-jerk regulation that hampers progress while also failing to be effective at preventing risks.
In this post, we suggest some principles for how governments can meaningfully reduce catastrophic risks while supporting innovation in AI’s thriving scientific and commercial sectors.
SB 1047 letters (Jul 2024 and Aug 2024)
Anthropic wrote a “support if amended” letter in July. In August SB 1047 was amended and Anthropic wrote a neutral-to-tepidly-positive letter.
July “support if amended” letter:
Anthropic does not support SB 1047 in its current form. However, we believe the bill’s core aims to ensure the safe development of AI technologies are worthy, and that it is possible to achieve these aims while eliminating most of the current bill’s substantial drawbacks, as we will propose here. In this Support If Amended letter, we outline our views on the risks from AI, the feasibility of safety measures, including those outlined in the bill, and our concerns regarding its current text. We list a set of substantial changes that, if made, would address our multiple concerns and result in a streamlined bill we could support in the interest of a safer, more trustworthy AI industry. Specifically, this includes narrowing the bill to focus on frontier AI developer safety by (1) shifting from prescriptive pre-harm enforcement to a deterrence model that incentivizes developers to implement robust safety and security protocols, (2) reducing potentially burdensome and counterproductive requirements in the absence of actual harm, and (3) removing duplicative or extraneous aspects.
August letter:
As you may be aware, several weeks ago Anthropic submitted a Support if Amended letter regarding SB 1047, in which we suggested a series of amendments to the bill. Last week the bill emerged from the Assembly Appropriations Committee and appears to us to be halfway between our suggested version and the original bill: many of our amendments were adopted while many others were not.
In our assessment the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us.
In the hopes of helping to inform your decision, we lay out the pros and cons of SB 1047 as we see them, and more broadly we discuss what we see as some key principles for crafting effective and efficient regulation for frontier AI systems based on our experience developing these systems over the past decade.
Charting a Path to AI Accountability (Jun 2023)
Anthropic’s AI Accountability Policy Comment is a longer version of this blogpost.
Recommends that the government require model evals for dangerous capabilities, with risk thresholds linked to deployment decisions, external red-teaming before release, and pre-registration for large training runs. This is great. Unfortunately, it also recommends “lightweight assessments that catch threats without undermining US competitiveness.”
There is currently no robust and comprehensive process for evaluating today’s advanced artificial intelligence (AI) systems, let alone the more capable systems of the future. Our submission presents our perspective on the processes and infrastructure needed to ensure AI accountability. Our recommendations consider the NTIA’s potential role as a coordinating body that sets standards in collaboration with other government agencies like the National Institute of Standards and Technology (NIST).
In our recommendations, we focus on accountability mechanisms suitable for highly capable and general-purpose AI models. Specifically, we recommend:
- Fund research to build better evaluations
- Increase funding for AI model evaluation research. Developing rigorous, standardized evaluations is difficult and time-consuming work that requires significant resources. Increased funding, especially from government agencies, could help drive progress in this critical area.
- Require companies in the near-term to disclose evaluation methods and results. Companies deploying AI systems should be mandated to satisfy some disclosure requirements with regard to their evaluations, though these requirements need not be made public if doing so would compromise intellectual property (IP) or confidential information. This transparency could help researchers and policymakers better understand where existing evaluations may be lacking.
- Develop in the long term a set of industry evaluation standards and best practices. Government agencies like NIST could work to establish standards and benchmarks for evaluating AI models’ capabilities, limitations, and risks that companies would comply with.
- Create risk-responsive assessments based on model capabilities
- Develop standard capabilities evaluations for AI systems. Governments should fund and participate in the development of rigorous capability and safety evaluations targeted at critical risks from advanced AI, such as deception and autonomy. These evaluations can provide an evidence-based foundation for proportionate, risk-responsive regulation.
- Develop a risk threshold through more research and funding into safety evaluations. Once a risk threshold has been established, we can mandate evaluations for all models against this threshold.
- If a model falls below this risk threshold, existing safety standards are likely sufficient. Verify compliance and deploy.
- If a model exceeds the risk threshold and safety assessments and mitigations are insufficient, halt deployment, significantly strengthen oversight, and notify regulators. Determine appropriate safeguards before allowing deployment.
- Establish pre-registration for large AI training runs
- Establish a process for AI developers to report large training runs ensuring that regulators are aware of potential risks. This involves determining the appropriate recipient, required information, and appropriate cybersecurity, confidentiality, IP, and privacy safeguards.
- Establish a confidential registry for AI developers conducting large training runs to pre-register model details with their home country’s national government (e.g., model specifications, model type, compute infrastructure, intended training completion date, and safety plans) before training commences. Aggregated registry data should be protected to the highest available standards and specifications.
- Empower third party auditors that are…
- Technically literate – at least some auditors will need deep machine learning experience;
- Security-conscious – well-positioned to protect valuable IP, which could pose a national security threat if stolen; and
- Flexible – able to conduct robust but lightweight assessments that catch threats without undermining US competitiveness.
- Mandate external red teaming before model release
- Mandate external red teaming for AI systems, either through a centralized third party (e.g., NIST) or in a decentralized manner (e.g., via researcher API access) to standardize adversarial testing of AI systems. This should be a precondition for developers who are releasing advanced AI systems.
- Establish high-quality external red teaming options before they become a precondition for model release. This is critical as red teaming talent currently resides almost exclusively within private AI labs.
- Advance interpretability research
- Increase funding for interpretability research. Provide government grants and incentives for interpretability work at universities, nonprofits, and companies. This would allow meaningful work to be done on smaller models, enabling progress outside frontier labs.
- Recognize that regulations demanding interpretable models would currently be infeasible to meet, but may be possible in the future pending research advances.
- Enable industry collaboration on AI safety via clarity around antitrust
- Regulators should issue guidance on permissible AI industry safety coordination given current antitrust laws. Clarifying how private companies can work together in the public interest without violating antitrust laws would mitigate legal uncertainty and advance shared goals.
We believe this set of recommendations will bring us meaningfully closer to establishing an effective framework for AI accountability. Doing so will require collaboration between researchers, AI labs, regulators, auditors, and other stakeholders. Anthropic is committed to supporting efforts to enable the safe development and deployment of AI systems. Evaluations, red teaming, standards, interpretability and other safety research, auditing, and strong cybersecurity practices are all promising avenues for mitigating the risks of AI while realizing its benefits.
We believe that AI could have transformative effects in our lifetime and we want to ensure that these effects are positive. The creation of robust AI accountability and auditing mechanisms will be vital to realizing this goal.
Third-party testing as a key ingredient of AI policy (Mar 2024)
Supports third-party testing. Unfortunately, says “we advocate for what we see as the ‘minimal viable policy’ for creating a good AI ecosystem” and uses some phrases like “not so great a burden” (without emphasizing that testing of the few most powerful models should be very rigorous).
Dario Amodei Senate testimony (Jul 2023)
Written testimony (before the hearing):
I will devote most of this prepared testimony to discussing the risks of AI, including what I believe to be extraordinarily grave threats to US national security over the next 2 to 3 years. . . .
The medium-term risks are where I would most like to draw the subcommittee’s attention. Simply put, a straightforward extrapolation of the pace of progress suggests that, in 2-3 years, AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc. In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology. . . .
Policy Recommendations
In our view these concerns merit an urgent policy response. The ideal policy response would address not just the specific risks we’ve identified above, but would at the same time provide a framework for addressing as many other risks as possible – without, of course, hampering innovation more than is necessary. We recommend three broad classes of policies:
- First, the U.S. must secure the AI supply chain, in order to maintain its lead while keeping these technologies out of the hands of bad actors. This supply chain runs all the way from semiconductor manufacturing equipment to AI models stored on the servers of companies like ours. A number of governments have taken steps in this regard. Specifically, the critical supply chain includes:
- Semiconductor manufacturing equipment, such as lithography machines.
- Chips used for training AI systems, such as GPUs.
- Trained AI systems, which are vulnerable to “export” through cybertheft or uncontrolled release.
- Companies such as Anthropic and others developing frontier AI systems should have to comply with stringent cybersecurity standards in how they store their AI systems. We have shared with the U.S. government and other labs our views of appropriate cybersecurity best practices, and are moving to implement these practices ourselves.
- Second, we recommend a “testing and auditing regime” for new and more powerful models. Similar to cars or airplanes, we should consider the AI models of the near future to be powerful machines which possess great utility, but that can be lethal if designed badly or misused. New AI models should have to pass a rigorous battery of safety tests both during development and before being released to the public or to customers.
- National security risks such as misuse of biology, cybersystems, or radiological materials should have top priority in testing due to the mix of imminence and severity of threat.
- However, the tests could also cover other concerns such as bias, potential to create misinformation, privacy, child safety, and respect for copyright.
- Similarly, the tests could measure the capacity for autonomous systems to escape control, beginning to get a handle on the risks of future systems. There are already nonprofit organizations, such as the Alignment Research Center, attempting to develop such tests.
- It is important that testing and auditing happen at regular checkpoints during the process of training powerful models to identify potentially dangerous capabilities or other risks so that they can be mitigated before training progresses too far.
- The recent voluntary commitments announced by the White House commit some companies (including Anthropic) to do this type of testing, but legislation could go further by mandating these tests for all models and requiring that they pass according to certain standards before deployment.
- It is worth stating clearly that given the current difficulty of controlling AI systems even where safety is prioritized, there is a real possibility that these rigorous standards would lead to a substantial slowdown in AI development, and that this may be a necessary outcome. Ideally, however, the standards would catalyze innovation in safety rather than slowing progress, as companies race to become the first company technologically capable of safely deploying tomorrow’s AI systems.
- Third, we should recognize that the science of testing and auditing for AI systems is in its infancy, and much less developed than it is for airplanes and automobiles. In particular, it is not currently easy to entirely understand what bad behaviors an AI system is capable of, without broadly deploying it to users. Thus, it is important to fund both measurement and research on measurement, to ensure a testing and auditing regime is actually effective.
- Our suggestion for the agency to oversee this process is NIST, whose mandate focuses explicitly on measurement and evaluation. However many other agencies could also contribute expertise and structure to this work.
- Anthropic has been a vocal supporter of the proposed National AI Research Resource (NAIRR). The NAIRR could, among other purposes, be used to fund research on measurement, evaluation, and testing, and could do so in the public interest rather than tied to a corporation.
The three directions above are synergistic: responsible supply chain policies help give America enough breathing room to impose rigorous standards on our own companies, without ceding our national lead. Funding measurement in turn makes these rigorous standards meaningful.
In conclusion, it is essential that we mitigate the grave national security risks presented by near-future AI systems, while also maintaining our lead in this critical technology and reaping the benefits of its advancement.
Funding NIST
Memo and blogpost (Apr 2023). This follows up on Comment on NIST: Study To Advance a More Productive Tech Economy (Feb 2022)1 and Jack Clark Senate testimony (Sep 2022).
With this additional resourcing, NIST could continue and expand its work on AI assurance efforts like:
- Cataloging existing AI evaluations and benchmarks used in industry and academia
- Investigating the scientific validity of existing evaluations (e.g., adherence to quality control practices, effects of technical implementation choices on evaluation results, etc.)
- Designing novel evaluations that address limitations of existing evaluations
- Developing technical standards for how to identify vulnerabilities in open-ended systems
- Developing disclosure standards to enhance transparency around complex AI systems
- Partnering with allies on international standards to promote multilateral interoperability
- Further developing and updating the AI Risk Management Framework
More resourcing will allow NIST to build out much-needed testing environments for today’s generative AI systems.
Dario on “In Good Company” podcast (Jun 2024)
Generally speaking, there’s been a lot of talk about this, but how can one regulate AI? Can companies self regulate?
One way I think about it is the RSP, the responsible scaling policy that I was describing, is maybe a beginning of a process, right? That represents voluntary self-regulation. And I mentioned this concept of race to the top. Last September, we put in place our RSP. Since then, other companies—Google and OpenAI—have put in place similar frameworks. They’ve given them different names, but they operate in roughly the same way. And now we’ve heard Amazon, Microsoft, even Meta, reportedly—it’s public reporting—are at least considering similar frameworks. [Ed.: I’m not aware of such reporting, but these companies did join the Frontier AI Safety Commitments.] And so I would like it if that process continues, where we have some time for companies to experiment with different ways of voluntarily self-regulating. Some kind of consensus emerges from some mixture of public pressure, experimentation with what is unnecessary versus what is really needed. And then I would imagine the real way for things to go is: once there’s some consensus, once there are industry best practices, probably the role for legislation is to look in and say, ‘Hey, there’s this thing that 80% of the companies are already doing. That’s a consensus for how to make it safe.’ The job of legislation is just to enforce: force those 20% who aren’t doing it, and make sure companies are telling the truth about what they’re doing. I don’t think regulation is good at coming up with a bunch of new concepts that people should follow.
How do you view the EU AI Act? And the California safety bill as well.
Even though the EU AI Act was passed, many details are still being worked out. So a lot depends on the details. The California bill has some structures in it that are very much like the RSP. And I think something that resembles that structure at some point could be a good thing. If I have a concern, though, I think it’s that we’re very early in the process, right? I described this process that’s like, first, one company has an RSP, then many have RSPs, then industry consensus comes into place. My only question would be: Are we too early in that process?
Too early in regulation?
Yeah. Maybe regulation should be the last step of a series of steps.
And what’s the danger of regulating too early?
I don’t know, one thing I could say is that I’ll look at our own experience with RSPs. So, if I look at what we’ve done with RSP, you know, we wrote an RSP in September. And since then, we’ve deployed one model. We’re soon going to deploy another [viz. Claude Sonnet 3.5]. You see so many things that—not that it was too strict or not strict enough, but you just didn’t anticipate them in the RSP, right? Like, there are various kinds of A/B tests you can run in your models—that are even informative about safety—and our RSP didn’t speak one way or another about when those are okay and when they’re not. And so we’re updating our RSP to say, ‘hey, how should we handle this issue we’ve never even thought of.’ And so I think in the early days that flexibility is easy. If you don’t have that flexibility, if your RSP was written by a third party and you didn’t have the ability to change it and the process for changing it was very complicated, I think it could create a version of the RSP that doesn’t protect against the risks, but also is very onerous. And then people could say, ‘Oh, man, all this regulation stuff, all this catastrophic—it’s all nonsense. It’s all a pain.’ So I’m not against it; you just have to do it delicately and in the right order.
Edited for clarity.
Frontier Model Security (Jul 2023)
Future advanced AI models have the potential to upend economic and national security affairs within and among nation-states. Given the strategic nature of this technology, frontier AI research and models must be secured to levels far exceeding standard practices for other commercial technologies in order to protect them from theft or misuse.
In the near term, governments and frontier AI labs must be ready to protect advanced models and model weights, and the research that feeds into them. This should include measures such as the development of robust best practices widely diffused among industry, as well as treating the advanced AI sector as something akin to “critical infrastructure” in terms of the level of public-private partnership in securing these models and the companies developing them.
Many of these measures can begin as voluntary arrangements, but in time it may be appropriate to use government procurement or regulatory powers to mandate compliance. . . .
We encourage extending SSDF to encompass model development inside of NIST’s standard-setting process.
In the near term, these two best practices [viz. multi-party authorization and secure model development framework] could be established as procurement requirements applying to AI companies and cloud providers contracting with governments – alongside standard cybersecurity practices that also apply to these companies. As U.S. cloud providers provide the infrastructure that many current frontier model companies use, procurement requirements will have an effect similar to broad market regulation and can work in advance of regulatory requirements.
Challenges in red teaming AI systems (Jun 2024)
To support further adoption and standardization of red teaming, we encourage policymakers to consider the following proposals:
- Fund organizations such as the National Institute of Standards and Technology (NIST) to develop technical standards and common practices for how to red team AI systems safely and effectively.
- Fund the development and ongoing operations of independent government bodies and non-profit organizations that can partner with developers to red team systems for potential risks in a variety of domains. For example, for national security-relevant risks, much of the required expertise will reside within government agencies.
- Encourage the development and growth of a market for professional AI red teaming services, and establish a certification process for organizations that conduct AI red teaming according to shared technical standards.
- Encourage AI companies to allow and facilitate third-party red teaming of their AI systems by vetted (and eventually, certified) outside groups. Develop standards for transparency and model access to enable this under safe and secure conditions.
- Encourage AI companies to tie their red teaming practices to clear policies on the conditions they must meet to continue scaling the development and/or release of new models (e.g., the adoption of commitments such as a Responsible Scaling Policy).
Dario Amodei’s prepared remarks from the AI Safety Summit on Anthropic’s Responsible Scaling Policy (Nov 2023)
Mostly just describes Anthropic’s RSP, but also says:
Our hope is that the general idea of RSPs will be refined and improved across companies, and that in parallel with that, governments from around the world—such as those in this room—can take the best elements of each and turn them into well-crafted testing and auditing regimes with accountability and oversight. We’d like to encourage a “race to the top” in RSP-style frameworks, where both companies and countries build off each other’s ideas, ultimately creating a path for the world to wisely manage the risks of AI without unduly disrupting the benefits.
Dario on “Econ 102 with Noah Smith” podcast (Aug 2024)
Anthropic wrote two letters on [SB 1047]. First was to the original bill, where we had some concerns that it was a bit too heavy-handed. The bill’s sponsors actually addressed many (though not all) of those concerns—maybe 60% of them or so—and after the changes, we became substantially more positive. We couched our view in terms of analysis—we have a lot of experience running safety processes and testing models for safety and so we felt we could be a more useful actor in the ecosystem by informing rather than picking a side and trying to beat up the other side or whatever political coalitions do—but we were after the changes more positive than negative on the bill, and I think overall in its current form, to our best abilities to determine, we like it.

Our concern with the original version of the bill was with something called pre-harm enforcement. The way the bill works — it’s very similar to RSPs, the responsible scaling plans which are this voluntary mechanism that we and OpenAI and Google and others have developed which says every time you make a new more powerful model, you have to run these tests — tests for autonomous behavior, tests for misuse, for biological weapons, for cyberattacks, for nuclear information — so if you were to turn that into a law there’s two ways you could do it. One is you have a government department and the government department is like these are the tests you have to run, these are the safeguards you have to do if the models are smart enough to pass the tests, then there’s kind of an administrative state that writes all of this. And our concern there was, hey, a lot of these tests are very new and almost all these catastrophes haven’t happened yet. So somewhat in line with those who were [opposed] we said hey this could really go wrong: could these tests end up being dumb, could they in a more sinister way kind of be repurposed for political—

The other way to do it, which we thought was more elegant and might be a better way to start for this rapidly developing field, where we think there are going to be these risks soon, they’re coming at us fast, but they haven’t happened yet, is what we call deterrence, which says, hey, everyone’s got to write out their plan—their safety and security plan—everyone decides for themselves how they run their tests. But if something bad happens—and that’s not just AI taking over the world but just could be an ordinary cyberattack—then a court goes and they look at your plan and they say well was this a good plan, would a person reasonably believe that you took all the measures that you could have taken to prevent the catastrophe. And then the hope is that there’s this kind of upward race where companies compete not to be the slowest zebra — to prevent catastrophes and not to be the ones held liable for the catastrophes that happen.

So opinions differ; many people obviously are still against the new bill and I can understand where they’re coming from: it’s a new technology, we haven’t seen these catastrophes, it hasn’t been regulated before, but we felt that it struck the right balance. Time will tell if it passes, or even if it doesn’t pass probably it’s not the last we see of regulatory ideas like this, and that was another reason why we thought we could best contribute to the conversation by saying, hey, here are the good things about it, here are the bad things about it. We do think that as amended the good things outweigh the bad things; this is an ongoing conversation.
And this kind of bill wouldn’t make you move operations out of California?
Yeah, that was the thing that was most perplexing to me. There were some companies talking about moving operations out of California. In fact the bill applies to doing business in California or deploying models in California. Moving your corporate headquarters, for better or worse, [wouldn’t] change your status vis-à-vis the bill. So honestly I’m surprised the opponents didn’t say this is scary because it applies anywhere, and that was a reason why we really wanted to make sure that the [benefits] outweigh the costs, which we do feel. But anything about we’re moving our headquarters out of California, this is going to make us— that’s just theater, that’s just negotiating leverage; it bears no relationship to the actual content of the bill.
Thoughts on the US Executive Order, G7 Code of Conduct, and Bletchley Park Summit (Nov 2023)
Not really substantive.
Jack Clark
Jack Clark leads Anthropic’s policy work. In both his personal and professional capacities, he is skeptical of regulation, often claiming it would be premature. (I disagree, and I think we could make progress on e.g. licensing before knowing exactly what model evals will be used, and I think we can pursue policy that’s helpful in high-risk worlds and not very costly in low-risk worlds.)
- Tweet (May 2024):
My main idea in policy is we need to get good at building tools and organizations to measure AI systems and figure out their capabilities - once we have that base of evidence we can contemplate regulation.
- Import AI 375 (Jun 2024):2
I’ve found myself increasingly at odds with some of the ideas being thrown around in AI policy circles, like those relating to needing a license to develop AI systems; ones that seek to make it harder and more expensive for people to deploy large-scale open source AI models; shutting down AI development worldwide for some period of time; the creation of net-new government or state-level bureaucracies to create compliance barriers to deployment (I take as a cautionary lesson, the Nuclear Regulatory Commission and its apparent chilling effect on reactor construction in the USA); the use of the term ‘safety’ as a catch-all term to enable oversight regimes which are not - yet - backed up by quantitative risks and well developed threat [] models, and so on.
- AI Safety and Corporate Power – remarks given at the United Nations Security Council (Jul 2023):
Lots of people at the nexus of AI policy and AI safety seem to prioritize safety above issues of power concentration. In my mind, these are linked. My basic view is that even if you fully ‘solved’ safety but didn’t ‘solve’ the problem of power centralization with AI development, you’ll suffer from such societal instability that the fact you solved safety may not mean much. . . .
we cannot leave the development of artificial intelligence solely to private sector actors. The governments of the world must come together, develop state capacity, and make the development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace.
- POLITICO podcast (May 2024) (at 3:20):
The trick is that you need those tests to be small in number and relatively easy to understand and easy—if you put the right money in as a company—to pass. You don’t want there to be some overly broad testing regime which naturally limits the number of players on the game board.
- Hill & Valley Forum on AI Security (May 2024):
https://www.youtube.com/live/RqxE3ub7wWA?t=13338s:
very powerful systems [] may have national security uses or misuses. And for that I think we need to come up with tests that make sure that we don’t put technologies into the market which could—unwittingly to us—advantage someone or allow some nonstate actor to commit something harmful. Beyond that I think we can mostly rely on existing regulations and law and existing testing procedures . . . and we don’t need to create some entirely new infrastructure.
https://www.youtube.com/live/RqxE3ub7wWA?t=13551:
At Anthropic we discover that the more ways we find to use this technology the more ways we find it could help us. And you also need a testing and measurement regime that closely looks at whether the technology is working—and if it’s not how you fix it from a technological level, and if it continues to not work whether you need some additional regulation—but . . . I think the greatest risk is us [viz. America] not using it [viz. AI]. Private industry is making itself faster and smarter by experimenting with this technology . . . and I think if we fail to do that at the level of the nation, some other entrepreneurial nation will succeed here.
- Tweet (Jun 2023):
if [your] best ideas for AI policy involve depriving people of the ‘means of production’ of AI (e.g. H100s), then you don’t have a hugely viable policy . . . . policy which looks like picking winners is basically bad policy, and compute controls (and related ideas like ‘licensing’) have this problem. [And a “public option for compute” is supposed to help somehow.]
But:
A world where we can push a button and stop larger compute things being built and all focus on safety for a while is good.
Anthropic endorsed some other bills.
Google & Google DeepMind
Comment on NTIA: AI Accountability (Google and Google DeepMind, Jun 2023)
While it is tempting to look for silver-bullet policy solutions, AI raises complex questions that require nuanced answers. It is a 21st century technology that requires a 21st century governance model. We need a multi-layered, multi-stakeholder approach to AI governance. This will include:
- Industry, civil society, and academic experts developing and sharing best practices and technical standards for responsible AI, including around safety and misinformation issues;
- A hub-and-spoke model of national regulation; and
- International coordination among allies and partners, including around geopolitical security and competitiveness and alignment on regulatory approaches.
At the national level, we support a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation—rather than a “Department of AI.” AI will present unique issues in financial services, health care, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors—which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed.
Maximizing the economic opportunity from AI will also require a joint effort across federal, state, and local governments, the private sector, and civil society to equip workers to harness AI-driven tools. AI is likely to generate significant economy-wide benefits. At the same time, to mitigate displacement risks, the private sector will need to develop proof-of-concept efforts on skilling, training, and continuing education, while the public sector can help validate and scale these efforts to ensure workers have wrap-around support. Smart deployment of AI coupled with thoughtful policy choices and an adaptive safety net can ensure that AI ultimately leads to higher wages and better living standards.
With respect to U.S. regulation to promote accountability, we urge policymakers to:
- Promote enabling legislation for AI innovation leadership. Federal policymakers can eliminate legal barriers to AI accountability efforts, including by establishing competition safe harbors for open public-private and cross-industry collaboration on AI safety research, and clarifying the liability for misuse and abuse of AI systems by different users (e.g., researchers, authors, creators of AI systems, implementers, and end users). Policymakers should also consider related legal frameworks that support innovation, such as adopting a uniform national privacy law that protects personal information and an AI model’s incidental use of publicly available information.
- Support proportionate, risk-based accountability measures. Deployers of high-risk AI systems should provide documentation about their systems and undergo independent risk assessments focused on specific applications.
- Regulate under a “hub-and-spoke” model rather than creating a new AI regulator. Under this model, regulators across the government would engage a central, coordinating agency with AI expertise, such as NIST, with Office of Management and Budget (OMB) support, for technical guidance on best practices on AI accountability.
- Use existing authorities to expedite governance and align AI and traditional rules. Where appropriate, sectoral regulators would provide updates clarifying how existing authorities apply to the use of AI systems, as well as how organizations can demonstrate compliance of an AI system with these existing regulations.
- Assign to AI deployers the responsibility of assessing the risk of their unique deployments, auditing, and other accountability mechanisms as a result of their unparalleled awareness of their specific uses and related risks of the AI system.
- Define appropriate accountability metrics and benchmarks, as well as terms that may be ambiguous, to guide compliance. Recognize that many existing systems are imperfect and that even imperfect AI systems may, in some settings, be able to improve service levels, reduce costs, or increase affordability and availability.
- Consider the tradeoffs between different policy objectives, including efficiency and productivity enhancements, transparency, fairness, privacy, security, and resilience.
- Design regulation to promote competitiveness, responsible innovation, and broad access to the economic benefits of AI.
- Require high standards of cybersecurity protections (including access controls) and develop targeted “next-generation” trade control policies.
- Avoid requiring disclosures that include trade secrets or confidential information (potentially advantaging adversaries) or stymie this innovative sector as it continues to evolve.
- Prepare the American workforce for AI-driven job transitions and promote opportunities to broadly share AI’s benefits.
Finally, NTIA asks how policymakers can otherwise advance AI accountability. The U.S. government should:
- Continue building technical and human capacity into the ecosystem to enable effective risk management. The government should deepen investment in fundamental responsible AI research (including bias and human-centered systems design) through federal agency initiatives, research centers, and foundations, as well as by creating and supporting public-private partnerships.
- Drive international policy alignment, working with allies and partners to develop common approaches that reflect democratic values. Policymakers can support common standards and frameworks that enable interoperability and harmonize global AI governance approaches. This can be done by: (1) enabling trusted data flows across national borders, (2) establishing multinational AI research resources, (3) encouraging the adoption of common approaches to AI regulation and governance and a common lexicon, based on the work of the Organisation for Economic Co-operation and Development (OECD), (4) working within standard-setting bodies such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE) to establish rules, benchmarks, and governance mechanisms that can serve as a baseline for domestic regulatory approaches and deter regulatory fragmentation, (5) using trade and economic agreements to support the development of consistent and non-discriminatory AI regulations, (6) promoting copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models, while supporting workable opt-outs for websites, and (7) establishing more effective mechanisms for information and best-practice sharing among allies and between the private and the public sectors.
- Explore updating procurement rules to incentivize AI accountability, and ensure OMB and the Federal Acquisition Regulatory Council are engaged in any such updates. It will be critical for agencies who are further ahead in their development of AI procurement practices to remain coordinated and aligned upon a common baseline to effectively scale responsible governance (e.g., through the NIST AI Risk Management Framework (AI RMF)).
The United States currently leads the world in AI development, and with the right policies that support both trustworthy AI and innovation, the United States can continue to lead and help allies enhance their own competitiveness while aligning around a positive and responsible vision for AI. Centering policies around economic opportunity, promoting responsibility and trust, and furthering our collective security will advance today’s and tomorrow’s AI innovation and unleash benefits across society.
A Policy Agenda for Responsible Progress in Artificial Intelligence (Google, May 2023) (blogpost)
Three pillars:
- “Unlocking opportunity by maximizing AI’s economic promise”
- “Promoting responsibility while reducing risks of misuse”
- “Enhancing global security while preventing malicious actors from exploiting this technology”
Mostly it misses the opportunity to recommend good policies. The “Responsibility” section proposes quite weak regulation: the most important requirements would be “Undergo risk assessments by independent internal or external experts” and “Align documentation, risk assessment, and management practices with relevant standards, frameworks, and industry best practices as those standards develop.” Moreover, this regulation would be limited to “high-risk AI systems,” defined by their “intended . . . use in applications” — apparently not including language models.
Building a responsible regulatory framework for AI (Google, 2022)
Most notable recommendations:
Focusing on applications and outcomes rather than basic research[:] Regulatory reviews should focus on AI-enabled applications and the quality of their specific results. Regulating the underlying computer science at too early a stage risks not realizing the many benefits offered by AI applications. . . .
AI regulation is often better addressed through sectoral approaches that leverage existing regulatory expertise in specific domains, rather than one-size-fits-all horizontal approaches.
Hassabis on Google DeepMind: The Podcast (Aug 2024)
Hassabis says it’s good that people in government are starting to understand AI and that AISIs are being set up before the stakes get really high. International cooperation on safety and deployment norms will be needed since AI is digital and if e.g. China deploys an AI it won’t be contained to China. Also:
Because the technology is changing so fast, we’ve got to be very nimble and light-footed with regulation so that it’s easy to adapt it to where the latest technology’s going. If you’d regulated AI five years ago, you’d have regulated something completely different to what we see today, which is generative AI. And it might be different again in five years; it might be these agent-based systems that [] carry the highest risks. So right now I would [] beef up existing regulations in domains that already have them—health, transport, and so on—I think you can update them for AI just like they were updated for mobile and internet. That’s probably the first thing I’d do, while . . . making sure you understand and test the frontier systems. And then as things become [clearer] start regulating around that, maybe in a couple years time would make sense. One of the things we’re missing is [benchmarks and tests for dangerous capabilities like deception, agency, and self-replication].
Hassabis on Ezra Klein (Jul 2023)
If we’re getting to a point where somebody is getting near something like a general intelligence system, is that too powerful a technology to be in private hands? Should this be something that whichever corporate entity gets there first controls? Or do we need something else to govern it?
My personal view is that this is such a big thing in its fullness of time. I think it’s bigger than any one corporation or even one nation. I think it needs international cooperation. I’ve often talked in the past about a CERN-like effort for A.G.I., and I quite like to see something like that as we get closer, maybe in many years from now, to an A.G.I. system, where really careful research is done on the safety side of things, understanding what these systems can do, and maybe testing them in controlled conditions, like simulations or games first, like sandboxes, very robust sandboxes with lots of cybersecurity protection around them. I think that would be a good way forward as we get closer towards human-level A.I. systems.
Other
Other stuff that’s nonsubstantive, not relevant to safety, or quite out of date:
- Looking ahead to the AI Seoul Summit (Google DeepMind, May 2024)
- Perspectives on Issues in AI Governance (Google, Jan 2019)
- AI companies aren’t afraid of regulation – we want it to be international and inclusive (Dorothy Chou of DeepMind, Aug 2023)
- Responsible Development of AI (Google, 2018)
Other stuff that doesn’t really represent Google or Google DeepMind:
Microsoft
How do we best govern AI? and Governing AI: A Blueprint for the Future (May 2023): Recommendations are poor, e.g. focusing on “AI systems that control critical infrastructure.”
AI for Startups (Nov 2024): recommends focusing on applications, protecting open-source AI, and exempting AI from copyright.
Governing AI: A blueprint for the UK (Feb 2024): Recommendations are identical to those in Governing AI: A Blueprint for the Future and thus poor, e.g. focusing on “AI control of critical infrastructure like the electrical grid, water system, and traffic flows.”
Advancing AI governance in Europe and internationally (Jun 2023): Recommendations are identical to those in Governing AI: A Blueprint for the Future and thus poor.
Brad Smith Senate written testimony (Sep 2023) (senate.gov version): Supports the Blumenthal-Hawley framework, but focuses on “high-risk AI systems controlling critical infrastructure” rather than powerful general-purpose AI broadly.
Global Governance: Goals and Lessons for AI (May 2024). I haven’t read this.
Mustafa Suleyman (CEO of Microsoft AI since Mar 2024):
- Mustafa Suleyman on getting Washington and Silicon Valley to tame AI (80,000 Hours, Sep 2023)
I think some of those voluntary commitments should become legally mandated.
Number one would be scale audits: What size is your latest model?
Number two: There needs to be a framework for harmful model capabilities, like bioweapons coaching, nuclear weapons, chemical weapons, [and] general bomb-making capabilities. Those things are pretty easy to document, and it just should not be possible to reduce the barriers to entry for people who don’t have specialist knowledge to go off and manufacture those things more easily.
The third one — that I have said publicly and that I care a lot about — is that we should just declare that these models shouldn’t be used for electioneering. They just shouldn’t be part of the political process.
- The AI Power Paradox (in Foreign Affairs, Aug 2023)
- “Scale & Capabilities Audits” tweet (Jul 2023)
It’s time for meaningful outside scrutiny of the largest AI training runs. The obvious place to start is “Scale & Capabilities Audits”
There are two ways I see this working. Firstly an industry funded consortium that everyone voluntarily signs up to. In some ways this might be [a] quicker and easier route, but the flaws are also obvious.
It would almost immediately be accused of capture, and might be tempted to softball the audit process. More robust would be a new government agency of some kind, with a clear mandate to audit every model above certain scale and capability thresholds.
This would be a big step change, fundamentally at odds with the old skool culture of the tech industry. But it’s the right thing to do and [it’s] time for a culture shift. We in AI should welcome third party audits.
The critical thing now is to design a sensible system, and agree [on] the benchmarks that will actually offer real oversight, and ensure that oversight is tied to delivering AI that works in the interests of everyone. Let’s get started right away.
Others
Mistral AI strongly lobbied against the EU AI Act. Mensch said safety is not their job. He also said “We have constantly been saying that regulating foundational models did not make sense and that any regulation should target applications, not infrastructure.”4
I have not collected information on Meta’s or other companies’ advocacy.
----------
EU AI Act
The EU AI Act is mostly relevant to AI safety via its provisions on “general-purpose AI.” I don’t have a good summary of those provisions.
OpenAI, Google, Meta, Microsoft, and others (but not Anthropic, as far as I know) were caught lobbying against the relevant part of the EU AI Act. They seem to have avoided opposing it publicly. European AI companies Mistral AI and Aleph Alpha also convinced their home countries—France and Germany—to oppose the Act.
I have not collected published responses to official requests for feedback (example).
Miscellaneous reporting I haven’t really read:
- Don’t let corporate lobbying further water down the AI Act, lobby watchdogs warn MEPs
- Big Tech lobbying is derailing the AI Act | Corporate Europe Observatory
- Behind France’s stance against regulating powerful AI models – Euractiv
- The new EU AI Act is under threat from lobbyists, experts and the public agree | Euronews
- Lobbying for Loopholes: The Battle Over Foundation Models in the EU AI Act – Euractiv
- AI Act: French government accused of being influenced by lobbyist with conflict of interests – Euractiv
SB 1047
California’s 2023–2024 SB 1047 was a bill intended to improve AI safety.
SB 1047 was endorsed by xAI CEO Elon Musk.
SB 1047 was opposed, or at least not supported, by the major AI companies:
- Google opposed SB 1047 and registered a position of “Oppose Unless Amended”
- Meta opposed SB 1047 and registered a position of “Concerns”
- OpenAI opposed SB 1047
- Anthropic was neutral on SB 1047; previously it did not support SB 1047 and registered a position of “Support If Amended”
SB 1047 was opposed by trade groups representing major AI companies:
- TechNet, whose members include Anthropic, Google, Meta, and OpenAI (and Amazon and Apple)
- Computer & Communications Industry Association, whose members include Google and Meta (and Amazon and Apple)
- Chamber of Progress, whose members include Google and Meta (and Amazon and Apple)
- California Chamber of Commerce, whose membership is private
- (And others)
Opposition by trade groups:
- Letter from trade groups opposing SB 1047 to Assembly Appropriations committee (Aug 2024)
- Letter from trade groups opposing SB 1047 to Senate Appropriations committee (May 2024)
- Letter from trade groups opposing SB 1047 to Senate Judiciary committee (led by the California Chamber of Commerce) (Mar 2024)
- Senate Judiciary committee hearing (Apr 2024): California Chamber of Commerce, Chamber of Progress, TechNet, and Computer & Communications Industry Association opposed
- Senate Governmental Organization committee hearing (Apr 2024): California Chamber of Commerce and TechNet opposed
- Opposition in committee readings to the bill and amendments (1, 2, 3)
Reporting:
- California’s AI Safety Bill Is a Mask-Off Moment for the Industry (The Nation, Aug 2024)
- A California Bill to Regulate A.I. Causes Alarm in Silicon Valley (New York Times, Aug 2024)
- California is a battleground for AI bills, as Trump plans to curb regulation (Washington Post, Jul 2024)
- An ambitious San Francisco lawmaker is in the middle of a battle for AI’s future (Politico, May 2024)
US Congressional testimony
OpenAI
- Sam Altman Senate testimony (May 2023)
- Written testimony (before the hearing)
- See excerpt above
- Questions for the Record (after the hearing)
- See excerpt above
- Hearing transcript
- See excerpt above
- Written testimony (before the hearing)
- Greg Brockman House testimony (Jun 2018): written testimony
- See excerpt above
- Greg Brockman Senate testimony (Nov 2016)
Anthropic
- Dario Amodei Senate testimony (Jul 2023)
- Written testimony
- See excerpt above
- Hearing transcript
- Written testimony
- Jack Clark written House testimony (Feb 2024)
- Supports the National AI Research Resource. Mentions “the impact of AI might be comparable to that of the industrial and scientific revolutions. We expect this level of impact could start to arrive soon—perhaps in the coming decade.”
- Jack Clark written Senate testimony (Sep 2022)
- Supports policy to support AI progress in America.
- Jared Kaplan written statement for AI Insight Forum (Dec 2023) (not actually testimony)
- Mostly just praises Anthropic.
Meta
- Yann LeCun Senate testimony (Sep 2023)
Microsoft
- Brad Smith Senate testimony (Sep 2023)
- Supports the Blumenthal-Hawley framework
Responses to US requests for information
OpenAI:
- Comment on NTIA: AI Accountability (Jun 2023)
- openai.com version: Comment on NTIA AI Accountability Policy (Jun 2023)
- Mostly just praises OpenAI’s practices; no very substantive recommendations, but mentions “We support the development of registration and licensing requirements for future generations of the most highly capable foundation models. . . . AI developers could be required to receive a license to create highly capable foundation models which are likely to prove more capable than models previously shown to be safe.”
- Comment on NTIA: Open Model Weights (Mar 2024)
- openai.com version: OpenAI’s comment to the NTIA on open model weights (Mar 2024)
- Mostly just praises OpenAI’s practices; no very substantive recommendations.
- Comment on NIST: Executive Order (Jun 2024)
- Comment on NIST: Executive Order (Dec 2023): mostly discusses red-teaming
- USCO: Comment (Oct 2023) and Reply Comment (Dec 2023): copyright stuff
Anthropic:
- Comment on NTIA: AI Accountability (Jun 2023)
- anthropic.com version: AI Accountability Policy Comment (Jun 2023); blogpost version: Charting a Path to AI Accountability (Jun 2023)
- Recommends that the government require model evals for dangerous capabilities, with risk thresholds linked to deployment decisions, external red-teaming before release, and pre-registration for large training runs. This is great. Unfortunately, it also recommends “lightweight assessments that catch threats without undermining US competitiveness.”
- Comment on NIST: Study To Advance a More Productive Tech Economy (Feb 2022)
- Comment on NTIA: Open Model Weights (Mar 2024):
- Recommends “a standardized safety testing regime conducted by an independent third party” for “powerful general purpose AI models.” Unfortunately, says “a testing regime must be easy to understand, have a low barrier for participation, and not pose an onerous cost to relatively small organizations seeking to openly and broadly disseminate models,” rather than emphasizing rigorous testing for the most powerful models. And cautions against “constraining the open dissemination of AI research and AI systems.”
- Comment on OSTP: National Priorities for AI (Jun 2023)
- Mostly makes unambitious recommendations like “Model cards” and “Acceptable Use Policy.” Mentions model evals for dangerous capabilities but does not connect them to deployment decisions. On “national security risks,” mostly discusses prosaic threats, and the recommendations are quite weak: “Our recommendations for mitigating AI risks include adopting stringent software standards, developing and evaluating potential harms by third-party bodies such as NIST, funding interpretability research, spinning up government-led horizon scanning efforts, and monitoring the global development of frontier AI systems. With concerted efforts across government, industry, and research institutions, national security challenges from advanced AI can be addressed to align technological progress with democratic values.”
- Comment on NIST: Executive Order (Feb 2024)
- Comment on USCO (Oct 2023): copyright stuff
- Comment on ETA (May 2024): immigration stuff: supports expanding immigration of AI talent, of course
- Comment on USCIS (Dec 2023): immigration stuff: supports expanding immigration of AI talent, of course
Google & Google DeepMind:
- Comment on NTIA: AI Accountability (Google and Google DeepMind, Jun 2023)5
- Comment on NTIA: Open Model Weights (Google, Mar 2024)
- On a skim: says reasonable things; the only notable recommendation is “Use a high evaluation bar [for potentially risky open models]”
- Comment on NIST: AI RMF (DeepMind, Sep 2021)
- Comment on Commerce: Executive Orders (Google, Apr 2024)
- On a skim, seems narrow
- Comment on OSTP: National Priorities for AI (Google, Jul 2023)6
- Comment on CISA (Google, Jun 2023): secure software development
- Comment on USCO (Google, Oct 2023): copyright stuff
- Comment on PTO (Google, May 2023): patent stuff
- Comment on PTO (Google, Oct 2021): patent stuff
Microsoft, Meta, and others: not yet collected.
Engagement with UK Parliament
This is like responses to requests for information in the US, not like congressional testimony.
- UK House of Lords inquiry on LLMs (2023 [written evidence submitted in September, except OpenAI’s in December]): written evidence: Microsoft, Meta, Google and Google DeepMind, OpenAI.
- UK House of Lords AI committee (responses written 2017, published 2018): written evidence including DeepMind, Google, and Microsoft. (Associated oral evidence doesn’t include the labs.)
Lobbying
Labs’ lobbying is almost all secret. Based on private information, rumors, and the rare times such lobbying becomes public, it seems that in private the labs (including Anthropic) oppose real regulation.
Some private lobbying on the EU AI Act became public; see above.
Sources on AI lobbying generally:
- There’s an AI Lobbying Frenzy in Washington. Big Tech Is Dominating (TIME, Apr 2024)
in closed door meetings with Congressional offices . . . companies are often less supportive of certain regulatory approaches, according to multiple sources present in or familiar with such conversations. In particular, companies tend to advocate for very permissive or voluntary regulations. “Anytime you want to make a tech company do something mandatory, they’re gonna push back on it,” said one Congressional staffer.
- Federal lobbying on artificial intelligence grows as legislative efforts stall (OpenSecrets, Jan 2024)
- In DC, a new wave of AI lobbyists gains the upper hand (POLITICO, May 2024) (little substance)
- Artificial Intelligence Lobbyists Descend on Washington DC (Public Citizen, May 2024)
Miscellaneous
Sometimes lab staff publish research on AI policy, e.g. Frontier AI Regulation (openai.com link). In general these are not really endorsed by the labs.
Various organizations support the US AI Safety Institute.7
----------
Related resources
AI labs’ statements on governance (Stein-Perlman 2023) (this page is a superior successor to that post).
Lab Statements on AI Governance (GovAI: Wei et al. 2023).
Maybe this page should link to sources on existing and proposed laws.
On (government-led) voluntary commitments, see “Commitments by several companies” in the “Commitments” page.
-
The past decade of AI development charts a future course of increasingly large, high performing industry models that can be adapted for a wide variety of applications. Without intervention or investment however, we risk a future where AI development and oversight is controlled by a handful of actors, motivated primarily by commercial priorities. To ensure these systems drive a more productive and broadly beneficial economy, we must expand access and representation in their creation and evaluation.
A robust assurance ecosystem would help increase public confidence in AI technology, enable a more competitive R&D environment, and foster a stronger U.S. economy.
The federal government can support this by:
- Increasing funding for academic researchers to access compute resources through efforts such as the National AI Research Resource (NAIRR) and the University Technology Center Program proposed in the United States Innovation and Competition Act (USICA)
- Providing financial grants to researchers, especially those currently underrepresented, who are developing assurance indicators in areas such as bias and fairness or novel forms of AI system oversight
- Prioritizing the development of AI testbeds, centralized datasets, and standardized testing protocols
- Identifying evaluations created by independent researchers and creating a catalog of validated tests
- Standardizing the essential components of self-designed evaluations and establishing norms for how evaluation results should be disclosed
-
Longer quote:
I’ve come to believe that in policy “a little goes a long way” - it’s far better to have a couple of ideas you think are robustly good in all futures and advocate for those than make a confident bet on ideas custom-designed for one specific future - especially if it’s based on a very confident risk model that sits at some unknowable point in front of you.
Additionally, the more risk-oriented you make your policy proposal, the more you tend to assign a huge amount of power to some regulatory entity - and history shows that once we assign power to governments, they’re [loath] to subsequently give that power back to the people. Policy is a ratchet and things tend to accrete over time. That means whatever power we assign governments today represents the floor of their power in the future - so we should be extremely cautious in assigning them power because I guarantee we will not be able to take it back.
For this reason, I’ve found myself increasingly at odds with some of the ideas being thrown around in AI policy circles, like those relating to needing a license to develop AI systems; ones that seek to make it harder and more expensive for people to deploy large-scale open source AI models; shutting down AI development worldwide for some period of time; the creation of net-new government or state-level bureaucracies to create compliance barriers to deployment (I take as a cautionary lesson, the Nuclear Regulatory Commission and its apparent chilling effect on reactor construction in the USA); the use of the term ‘safety’ as a catch-all term to enable oversight regimes which are not - yet - backed up by quantitative risks and well developed threat [] models, and so on.
I’m not saying any of these ideas are without redeeming qualities, nor am I saying they don’t nobly try to tackle some of the thornier problems of AI policy. I am saying that we should be afraid of the power structures encoded by these regulatory ideas and we should likely treat them as dangerous things in themselves. I worry that the AI policy community that aligns with longterm visions of AI safety and AGI believes that because it assigns an extremely high probability to a future AGI destroying humanity that this justifies any action in the present - after all, if you thought you were fighting for the human race, you wouldn’t want to [compromise]! But I think that along with this attitude there comes a certain unwillingness to confront just how unpopular many of these ideas are, nor how unreasonable they might sound to people who don’t have similar intuitions about the technology and its future - and therefore an ensuing [blindness] to the costs of counterreaction to these ideas. Yes, you think the future is on the line and you want to create an army to save the future. But have you considered that your actions naturally create and equip an army from the present that seeks to fight for its rights?
-
We explore four complementary institutional models to support global coordination and governance functions:
- An intergovernmental Commission on Frontier AI could build international consensus on opportunities and risks from advanced AI and how they may be managed. This would increase public awareness and understanding of AI prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and be a source of expertise for policymakers.
- An intergovernmental or multi-stakeholder Advanced AI Governance Organisation could help internationalise and align efforts to address global risks from advanced AI systems by setting governance norms and standards and assisting in their implementation. It may also perform compliance monitoring functions for any international governance regime.
- A Frontier AI Collaborative could promote access to advanced AI as an international public-private partnership. In doing so, it would help underserved societies benefit from cutting-edge AI technology and promote international access to AI technology for safety and governance objectives.
- An AI Safety Project could bring together leading researchers and engineers, and provide them with access to computation resources and advanced AI models for research into technical mitigations of AI risks. This would promote AI safety research and development by increasing its scale, resourcing, and coordination.
-
AI safety and security are key elements of building public trust in the technology. A National AI Strategy should advance (and balance) three pillars: (1) unlocking opportunity through innovation and inclusive economic growth, (2) ensuring responsibility and enabling trust, and (3) enhancing US and international security. Our recommendations include:
A. Opportunity
- Prepare the American workforce for an AI-driven job transition and promote opportunities to broadly share AI’s productivity benefits for organizations of all sizes.
- Implement comprehensive programs to educate government employees on how to use AI, manage and secure implementations, and adopt procurement-related best practices.
- Authorize and fund a National AI Research Resource to provide compute infrastructure for AI development, foster public-private partnerships, and implement best practices for high-quality datasets.
- Maintain the balance of the current pro-innovation intellectual property laws, while investing in the technical capacity of and modernizing the US Patent and Trademark Office and Copyright Office.
B. Responsibility
- Use and emphasize flexible, risk-based approaches to ensure trustworthy AI development and deployment, relying on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF) and international standards.
- Collaborate with the private sector and international partners to support R&D on the safety of advanced AI models.
- Support the development of benchmarks and audit frameworks to be used by system designers and developers throughout AI development and deployment.
C. Security
- Enforce and update, as needed, policies to limit the export of certain types of AI-powered software and AI-enabling, high-performance chips.
- Reform government acquisition and authorization policies that invest in future-focused capabilities and broaden the range of companies able to contribute to AI.
- Facilitate the deployment of appropriate security controls and measures to protect AI systems.
-
Anthropic didn’t sign the letter but also supports the US AISI.