Surveys

Public surveys

This section has not been updated to include recent surveys.

Surveys on what Americans say they want labs to do or the government to require:1

  • Ipsos (survey completed Jul 24, 2023)
    • “AI companies committing to internal and external security testing before their release”: 77% support, 14% oppose.
    • “AI companies committing to sharing information across the tech industry, government, civil society, and academia to help manage AI risk”: 69% support, 20% oppose.
    • “AI companies agreeing to investing in cybersecurity and insider threat safeguard to protect model weights, the most important part of an AI system”: 72% support, 16% oppose.
    • “AI companies committing to using third-parties to help discover and report system vulnerabilities after AI is released”: 66% support, 20% oppose.
    • “AI companies developing systems so that users know when content is created by AI”: 76% support, 15% oppose.
    • “AI companies publicly reporting their systems capabilities and their appropriate and inappropriate use”: 75% support, 15% oppose.
    • “AI companies committing to conduct research on risk[s] that AI can pose to society, like bias, discrimination, and protecting privacy”: 70% support, 17% oppose.
    • “AI companies committing to developing AI systems to help address society’s biggest challenges”: 64% support, 23% oppose.
  • The public supports slowing AI development
    • Public First: 11% say we should accelerate AI development, 33% say we should slow it down, 39% say we should continue at around the same pace.
    • Data for Progress: 56% agree with a pro-slowing message, 35% agree with an anti-slowing message.
    • Ipsos: 75% say “Unchecked development of AI” is the bigger risk; 21% say “Government regulation slowing down the development of AI” is the bigger risk.
    • AI Policy Institute / YouGov (1, 2):
      • 82% “We should go slowly and deliberately,” 8% “We should speed up development” (after reading brief arguments).
      • “It would be a good thing if AI progress was stopped or significantly slowed”: 62% agree, 26% disagree.
      • 72% “We should slow down the development and deployment of artificial intelligence,” 12% “We should more quickly develop and deploy artificial intelligence.”
  • The public supports pausing some AI development (but this is easy to overstate, since these surveys don’t all ask about the same kinds of AI development, much less about the kind of AI that is actually dangerous)
    • YouGov (Apr 2023): A six-month pause on some kinds of AI development: 58% support, 23% oppose.
    • Rethink Priorities: Pause on AI research: 51% support, 25% oppose (after reading pro- and anti- messages).
    • YouGov (Jun 2023): A six-month pause on some kinds of AI development: 30% strongly support, 26% somewhat support, 13% somewhat oppose, 7% strongly oppose.
    • AI Policy Institute / YouGov: “A legally enforced pause on advanced artificial intelligence research”: 49% support, 25% oppose.
    • YouGov (Aug 2023): A six-month pause on some kinds of AI development: 58% strongly support, 22% somewhat support, 11% somewhat oppose, 8% strongly oppose.
  • AI Policy Institute / YouGov (survey completed Jul 21, 2023; online survey of voters, 1001 responses)
    • Policy proposal: “any advanced AI model should be required to demonstrate safety before they are released”: 65% support, 11% oppose (after reading brief arguments).
    • Policy proposal: “any organization producing advanced AI models must require a license, [] all advanced models must be evaluated for safety, and [] all models must be audited by independent experts”: 64% support, 13% oppose (after reading brief arguments).
    • Policy proposal: regulate supply chain for specialized chips: 56% support, 18% oppose (after reading brief arguments).
    • Policy proposal: “require all AI generated images to contain proof that they were generated by a computer”: 76% support, 9% oppose (after reading brief arguments).
    • Policy proposal: international regulation of military AI: 60% support, 17% oppose (after reading brief arguments).
  • Sharing model weights
    • AI Policy Institute / YouGov Blue (survey completed Sep 6, 2023; online survey of voters, 1118 responses): 23% say “we should open source powerful AI models,” 47% say we should not (after reading brief arguments).

Note that the AI Policy Institute is a pro-caution advocacy organization, and Public First, Data for Progress, and Rethink Priorities may have agendas too.

Expert surveys

  • AI Impacts 2023 (online survey of authors of papers at NeurIPS, ICML, ICLR, AAAI, JMLR, or IJCAI 2022, 2778 responses)
    • Average respondent says 50% chance that high-level machine intelligence2 will exist by 2047.
    • Three questions about AI causing human extinction: median 5%, 10%, and 5%.3
    • “How much should society prioritize AI safety research, relative to how much it is currently prioritized?” 70% say more; 8% say less.4
  • AI Impacts 2022 (online survey of authors of papers at ICML or NeurIPS 2021, 738 responses)
    • Average respondent says 50% chance that high-level machine intelligence2 will exist by 2059.
    • Two questions about AI causing human extinction: median 5% and 10%.5
    • “How much should society prioritize AI safety research, relative to how much it is currently prioritized?” 69% say more; 11% say less.6
  1. Drawing on Surveys of US public opinion on AI (AI Impacts: Stein-Perlman 2023). 

  2. “High-level machine intelligence” was defined by:

    High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.

    Respondents were told to assume “human scientific activity continues without major negative disruption.”

  3. “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?” Median 5%.

    “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?” Median 10%.

    “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species within the next 100 years?” Median 5%. 

  4. “AI safety research” was defined by:

    Let ‘AI safety research’ include any AI-related research that, rather than being primarily aimed at improving the capabilities of AI systems, is instead primarily aimed at minimizing potential risks of AI systems (beyond what is already accomplished for those goals by increasing AI system capabilities).

    Examples of AI safety research might include:

    • Improving the human-interpretability of machine learning algorithms for the purpose of improving the safety and robustness of AI systems, not focused on improving AI capabilities
    • Research on long-term existential risks from AI systems
    • AI-specific formal verification research
    • Policy research about how to maximize the public benefits of AI
    • Developing methodologies to identify, measure, and mitigate biases in AI models to ensure fair and ethical decision-making

  5. “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?” Median 5%; 44% at least 10%.

    “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?” Median 10%; 56% at least 10%. 

  6. “AI safety research” was defined by:

    Let ‘AI safety research’ include any AI-related research that, rather than being primarily aimed at improving the capabilities of AI systems, is instead primarily aimed at minimizing potential risks of AI systems (beyond what is already accomplished for those goals by increasing AI system capabilities).

    Examples of AI safety research might include:

    • Improving the human-interpretability of machine learning algorithms for the purpose of improving the safety and robustness of AI systems, not focused on improving AI capabilities
    • Research on long-term existential risks from AI systems
    • AI-specific formal verification research
    • Policy research about how to maximize the public benefits of AI