Report

Taking Further Agency Action on AI

How Agencies Can Deploy Existing Statutory Authorities To Regulate Artificial Intelligence

This joint report from Governing for Impact and the Center for American Progress maps select agencies’ existing statutory authority to protect consumers, workers, and families from potential artificial intelligence harms.

U.S. Vice President Kamala Harris attends the U.K. AI Safety Summit at Bletchley Park, England, on November 2, 2023. (Getty/Daniel Leal)


Read the fact sheets

The accompanying fact sheets list all of the recommendations detailed in the five chapters of this report.

Introduction and summary

In response to the surge of attention, excitement, and fear surrounding AI developments since the release of OpenAI’s ChatGPT in November 2022,1 governments worldwide2 have rushed to address the risks and opportunities of AI.3 In the United States, policymakers have sharply disagreed about the necessity and scope of potential new AI legislation.4 By contrast, stakeholders ranging from government officials and advocates to academics and companies seem to agree that it is essential for policymakers to utilize existing laws to address the risks and opportunities of AI where possible, especially in the absence of congressional action.5

What this means in practice, however, remains murky. What are the statutory authorities and policy levers available to the federal government in the context of AI? And how should policymakers use them? To date, there has been no comprehensive survey to map the federal government’s existing ability to impose guardrails on the use of AI across the economy. In 2019, the Trump administration issued Executive Order 13859,6 which directed agencies to “review their [regulatory] authorities relevant to applications of AI.”7 Subsequent 2020 OMB guidance further required: “The agency plan must identify any statutory authorities specifically governing agency regulation of AI applications, as well as collections of AI-related information from regulated entities.”8 Unfortunately, it appears the U.S. Department of Health and Human Services (HHS) was the only agency to respond in detail.9


Since taking office, the Biden administration has taken critical strides to prepare the federal government for the potential proliferation of AI. Its 2023 executive order on AI10 and the subsequent 2024 OMB memo on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence”11 (OMB M-24-10 AI guidance) directed agencies to undertake specific AI-related tasks and provided guidance on federal agency use of AI.

But what comes next? These Biden administration actions hardly represent the culmination of the federal government’s interventions into or involvement with AI. As technologies advance, new AI risks and benefits will emerge, sometimes demanding new federal responses. Agencies must be ready to deploy every tool at their disposal to ensure that the AI revolution benefits everyday Americans, rather than just the tech giants developing new models.

With that in mind, Governing for Impact (GFI) and the Center for American Progress (CAP) undertook research to identify existing authorities that can be used to address AI. In the interest of keeping this initial report to a reasonable length, only a sample of federal agencies was selected, including:

  • The White House and its subordinate agencies, including the OMB and the Office of Information and Regulatory Affairs (Chapter 1)
  • The Department of Labor (Chapter 2)
  • The Department of Education (Chapter 3)
  • The housing regulators (Chapter 4)
  • Financial regulatory agencies (Chapter 5):
    • The Treasury Department
    • The Office of the Comptroller of the Currency
    • The Board of Governors of the Federal Reserve System
    • The Federal Deposit Insurance Corporation
    • The Commodity Futures Trading Commission
    • The National Credit Union Administration
    • The Securities and Exchange Commission
    • The Consumer Financial Protection Bureau
    • The Financial Stability Oversight Council

This report is structured to include a chapter for each of the above agencies, covering:

  • An overview of the agency and its intersection with AI
  • AI risks and opportunities within the specific agency and its jurisdiction
  • The current state of the agency and its efforts to address AI
  • The specific relevant authorities the agency could invoke to regulate AI risks
  • Recommendations for how the agency could use each identified authority to regulate AI
  • An accompanying fact sheet summarizing all of the recommendations in that chapter for the specific agency or agencies

Recognizing that many readers may only be interested in a specific agency or agencies, each chapter is designed to be read and understood independently of the other chapters. The report is accessible online and in PDF form. Finally, the report includes fact sheets detailing all recommendations from each chapter, available both online and in PDF form.

Background


When OpenAI released its ChatGPT large language model (LLM) generative AI chatbot to the public in November 2022,12 it quickly became one of the fastest-growing consumer technology applications ever.13 Generative AI, with its ability to generate synthetic text, images, audio, and video, represents the most user-accessible form of AI, and new generations of AI are poised to interface with and control our devices and programs directly.14 Meanwhile, behind the scenes, automated systems increasingly control health care, finance, and housing decisions. In the finance sector, lenders deploy AI-based systems to make lending decisions or depend on third-party models to guide their lending processes. Similarly, in the housing sector, AI is now employed in both public and private housing screening. AI is set to affect almost every sector of our economy. As Bill Gates has suggested, we may well be living in the “Age of AI”—a technological inflection point as momentous as the invention of the personal computer, the internet, and the mobile phone.15

The explosive growth of this new AI technology raised immediate concern among the public, lawmakers, and regulators about how society and government can and should best respond. The immediate opportunities and challenges of AI are clear to many; how to secure those benefits and address those harms is far less clear, yet critically important, as this technology spreads with a rapidity not seen in recent history. It is imperative to examine all the tools in the toolkit for addressing AI, from legislation that may take years to draft, pass, and implement to existing authorities that agencies can exercise now.

Federal government action


New AI legislation, though vital, does not appear to be forthcoming from Congress anytime soon. In 2023, Senate Majority Leader Chuck Schumer (D-NY) hosted a series of eight closed-door AI insight forums with senators and leading experts16 that culminated in a May 2024 AI white paper.17 Sen. Schumer has announced that, in the 118th Congress, his approach to AI legislation will run through the regular order committee process.18 Meanwhile, the House of Representatives did not announce a bipartisan AI task force until February 2024,19 with no clear legislative path outlined. To date, the most Congress has done is hold numerous hearings on AI,20 and the prospects for comprehensive AI legislation in the 118th Congress appear distant.

As a result, the primary federal actor in the AI policy space has been and will likely continue to be the executive branch. The Trump administration issued two executive orders on AI21 and OMB guidance that included requiring agencies with regulatory authorities to “identify any statutory authorities specifically governing agency regulation of AI applications”22 and submit them to the OMB, with which only HHS complied.23 In the wake of ChatGPT’s release, the Biden administration immediately began to announce a series of steps to address AI—building on its 2022 “Blueprint for an AI Bill of Rights”24—which started with voluntary commitments from leading AI companies.25 This culminated with the October 2023 executive order on AI26 and the subsequent March 2024 release of the OMB M-24-10 memorandum, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” for federal government use of AI.27 A crucial task for federal regulators moving forward will be to scope their existing ability to regulate AI in the absence of new AI legislation.

Support for existing authorities

As the International Association of Privacy Professionals noted, “[A]t least in the short term, AI regulation in the U.S. will consist more of figuring out how existing laws apply to AI technologies, rather than passing and applying new, AI-specific laws.”28 This commitment to enforcing existing laws has been affirmed repeatedly and emphatically by the administration, enforcement agencies and regulators, and Congress.

As made clear by its 2023 executive order on AI,29 the unambiguous position of the Biden administration is to “ensure that AI complies with all Federal laws and to promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation.”30 The order further notes: “The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI.”31 Additionally, Vice President Kamala Harris stated: “[E]ven now, ahead of congressional action, there are many existing laws and regulations that reflect our nation’s longstanding commitment to the principles of privacy, transparency, accountability, and consumer protection. These laws and regulations are enforceable and currently apply to AI companies.”32

This was echoed early by federal enforcement agencies, including the U.S. Department of Justice (DOJ), Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission (EEOC), in an April 2023 joint statement on AI that clearly said: “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.”33 And in her press statement accompanying the joint statement, FTC Chair Lina Khan said: “There is no AI exemption to the laws on the books.”34 In April 2024, 10 federal enforcement agencies issued the “Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems,” which declared: “We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”35

The bipartisan Senate AI Working Group—which was led by Senate Majority Leader Chuck Schumer (D-NY) along with Sens. Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN)—noted in its May 2024 AI white paper: “The AI Working Group believes that existing laws, including related to consumer protection and civil rights, need to consistently and effectively apply to AI systems and their developers, deployers, and users.”36


About this report

Despite consensus around the need to apply existing laws to novel AI applications, more work remains to be done. While existing statutes may allow agencies to regulate the use of AI, the regulations implementing those statutes may still need to be updated accordingly. As a result, a central challenge will be identifying with specificity how agencies may need to adapt or revise their regulatory regimes for an AI era.

Inspired by the HHS response to the 2020 OMB request for a catalog of agencies’ existing authorities to address AI37 and recognizing the need for a deeper examination of existing authorities as they relate to AI, GFI and CAP have undertaken extensive research to outline potential statutory authorities that selected federal agencies could leverage to address the challenges and opportunities presented by AI. This joint report outlines those potential statutory authorities and offers initial recommendations on utilizing those authorities.

About GFI

Governing for Impact is a regulatory policy organization dedicated to ensuring the federal government works on behalf of everyday Americans, not corporate lobbyists. The policies it designs and the legal insights it develops help increase opportunity for those not historically represented in the regulatory policy process: working people.

For additional information about GFI, please visit https://governingforimpact.org/.

GFI and CAP engaged in an intensive effort to canvass existing authorities and identify potential recommendations to address AI. This included extensive analysis of existing statutes, consultation with numerous subject matter experts, and review by various stakeholders. This report does not purport to be perfectly comprehensive, even on the agencies selected for consideration. Instead, it aims to highlight the authorities where the strongest intersection exists between existing authority and actionable recommendations.

Initial research revealed that some agencies were already making significant progress. For example, the FTC has led the way among agencies in considering how to apply its existing authorities to address AI.38 Similarly, the Department of Commerce is actively exploring and utilizing its existing authorities to address AI-related concerns.39 Of course, there are more federal agencies than those covered in this report, and every state has agencies and authorities that could be leveraged to address AI.40 A similar analysis of other federal or state agencies’ statutory authorities to effectively mitigate AI-related harms via regulation could be valuable. As noted above, GFI and CAP also encourage agencies that have yet to do so to respond to OMB Memorandum M-21-06 with an inventory of their regulatory authorities applicable to AI.41

Authors’ note: For this report, the authors use the definition of artificial intelligence (AI) from the 2020 National Defense Authorization Act, which established the National Artificial Intelligence Initiative.42 This definition was also used by the 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”43 Similarly, this report makes repeated reference to “Appendix I: Purposes for Which AI is Presumed to be Safety-Impacting and Rights-Impacting” of the 2024 OMB M-24-10 memo, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”44

A note on the Supreme Court and Congress

At the time of this report’s publication, several pending U.S. Supreme Court cases could affect federal agencies’ ability to regulate.45 Chevron deference has served as the foundation of agency rulemaking for nearly 40 years, enshrining the simple but critical legal maxim that federal agencies, because of their expertise, should be given deference in interpreting and implementing laws passed by Congress. If, as seems likely, the Supreme Court severely limits or overturns Chevron, it will be more important than ever for agencies and policy advocates to ground regulatory policy proposals in the sort of statutory analysis that is undertaken throughout this report. The authors have not, however, analyzed the litigation risk associated with each of the policy recommendations included in this report, which is a necessary precursor to action, particularly considering this court’s anti-regulatory bent. This report’s focus on existing authorities aims to illustrate the tools agencies currently have at their disposal. Understanding the strengths and limitations of these statutes is essential in helping Congress understand what may be needed in future legislation. CAP has advocated for and continues to believe that AI legislation46 and broader regulation of online services47 will be necessary to address the growing risks and challenges of technology.

Conclusion

For the past 30 years, Congress has largely failed to take meaningful action on technology policy, with the recent exception of banning a single application.48 While the authors believe this to be an unsustainable status quo, current congressional dysfunction does not inspire confidence that legislative action is imminent. In the event of continued congressional inaction, existing statutory authorities, executive action, and voluntary measures at the federal level, along with existing state regulations and new state laws, will remain the sole tools for addressing the risks and opportunities of AI in America.

The 2023 executive order on AI was detailed and prescriptive in its initial tasking to agencies, outlining eight policies and principles in an ambitious attempt to direct government action toward the challenges and opportunities of AI. This report details more than 80 recommendations for actions that agencies can take using existing authorities to address AI in furtherance of those AI policies and principles, representing a starting point in thinking about a subsequent stage of AI regulation. The goal of this report is not to set a definitive regulatory policy agenda for AI, but rather to put forward a range of potential proposals for consideration that agencies could assemble into a future roadmap. Some may prove especially effective; others may not be worth pursuing. Ultimately, the hope is that feedback from policymakers, academics, civil society groups, and private firms will help to identify the most promising recommendations for more exhaustive research—an important step before the federal government begins adopting any proposal contained in this report. Examining federal agencies’ existing authorities and developing regulatory proposals that utilize those authorities is thus essential to address the immediate risks and opportunities of AI. GFI and CAP hope this report helps spur the next phase of discussion by providing initial analysis and recommendations for immediate action on AI.

Acknowledgements

While the opinions and proposals put forward in this report solely represent the views of the authors, we would like to thank the following individuals for their feedback and helpful contributions: Aiha Nguyen of Data & Society, Alvin Velasquez, Brian Chen of Data & Society, David Brody of the Lawyers’ Committee for Civil Rights Under Law, Jesse Lehrich of Accountable Tech, Kate Dunn of the National Fair Housing Alliance, Kristin Woelfel of the Center for Democracy and Technology, Mariah De Leon of Upturn, Matt Scherer of the Center for Democracy and Technology, Michael Akinwumi of the National Fair Housing Alliance, Michele Evermore of The Century Foundation, Michelle Miller of the Center for Labor and a Just Economy, Natasha Duarte of Upturn, Sarah Myers West of the AI Now Institute, Snigdha Sharma of the National Fair Housing Alliance, and Tanya Goldman of Workshop.

We would also like to thank the following CAP staff for their help: Maggie O’Neill, Zachary Geiger, Alice Lillydahl, David Madland, Karla Walter, Marc Jarsulic, Alex Thornton, Lilith Fellowes-Granda, Jared Bass, Lisette Partelow, Molly Weston Williamson, Veronica Goodman, Rozina Kiflom, Mona Alsaidi, Audrey Juarez, Nicolas Del Vecchio, Christian Rodriguez, Carl Chancellor, Beatrice Aronson, Steve Bonitatibus, Shanée Simhoni, Bill Rapp, Chester Hawkins, Keenan Alexander, Sam Hananel, and Billy Flanagan.

Endnotes

  1. OpenAI, “Introducing ChatGPT,” November 30, 2022, available at https://openai.com/blog/chatgpt.
  2. Reuters, “OpenAI’s ChatGPT breaches privacy rules, says Italian watchdog,” January 30, 2024, available at https://www.reuters.com/technology/cybersecurity/italy-regulator-notifies-openai-privacy-breaches-chatgpt-2024-01-29; European Parliament, “Artificial Intelligence Act: MEPs adopt landmark law,” Press release, March 13, 2024, available at https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law.
  3. Prime Minister’s Office, Foreign, Commonwealth & Development Office, and Department for Science, Innovation and Technology, “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023,” Press release, November 1, 2023, available at https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
  4. Brendan Bordelon and Mohar Chatterjee, “‘It’s got everyone’s attention’: Inside Congress’s struggle to rein in AI,” Politico, May 4, 2023, available at https://www.politico.com/news/2023/05/04/congresss-scramble-build-ai-agenda-00095135.
  5. Vice President Kamala Harris, “Remarks by Vice President Harris on the Future of Artificial Intelligence|London, United Kingdom,” The White House, November 1, 2023, available at https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/11/01/remarks-by-vice-president-harris-on-the-future-of-artificial-intelligence-london-united-kingdom/; Office of Sen. Mike Rounds, “Rounds Delivers Opening Remarks at Banking Hearing on AI,” Press release, September 21, 2023, available at https://www.rounds.senate.gov/newsroom/press-releases/rounds-delivers-opening-remarks-at-banking-hearing-on-ai.
  6. Executive Office of the President, “Executive Order 13859: Maintaining American Leadership in Artificial Intelligence,” Federal Register 84 (31) (2019): 3967–3972, available at https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence.
  7. Ibid.
  8. Russell T. Vought, “M-21-06 Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications” (Washington: Office of Management and Budget, November 17, 2020), available at https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf.
  9. U.S. Department of Health and Human Services, “OMB M-21-06 (Guidance for Regulation of Artificial Intelligence Applications),” available at https://www.hhs.gov/sites/default/files/department-of-health-and-human-services-omb-m-21-06.pdf (last accessed February 2024).
  10. Executive Office of the President, “Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
  11. Shalanda D. Young, “M-24-10 Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (Washington: Office of Management and Budget, 2024), available at https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
  12. OpenAI, “Introducing ChatGPT.”
  13. Andrew R. Chow, “How ChatGPT Managed to Grow Faster Than TikTok or Instagram,” Time, February 8, 2023, available at https://time.com/6253615/chatgpt-fastest-growing/.
  14. Stephanie Palazzolo and Amir Efrati, “OpenAI Shifts AI Battleground to Software That Operates Devices, Automates Tasks,” The Information, February 7, 2024, available at https://www.theinformation.com/articles/openai-shifts-ai-battleground-to-software-that-operates-devices-automates-tasks?rc=cwjrji.
  15. Bill Gates, “The Age of AI has begun,” GatesNotes, March 21, 2023, available at https://www.gatesnotes.com/The-Age-of-AI-Has-Begun.
  16. Sen. Chuck Schumer, “Majority Leader Schumer Delivers Remarks To Launch SAFE Innovation Framework For Artificial Intelligence At CSIS,” Press release, Senate Democrats, June 21, 2023, available at https://www.democrats.senate.gov/news/press-releases/majority-leader-schumer-delivers-remarks-to-launch-safe-innovation-framework-for-artificial-intelligence-at-csis#.
  17. Bipartisan Senate AI Working Group, “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate” (Washington: U.S. Senate, 2024), available at https://www.schumer.senate.gov/imo/media/doc/Roadmap_Electronic1.32pm.pdf.
  18. Sen. Chuck Schumer, “Majority Leader Schumer Floor Remarks On The Senate’s Bipartisan Commitment To Making Progress On Artificial Intelligence Legislation,” Press release, Senate Democrats, January 11, 2024, available at https://www.democrats.senate.gov/news/press-releases/majority-leader-schumer-floor-remarks-on-the-senates-bipartisan-commitment-to-making-progress-on-artificial-intelligence-legislation.
  19. Office of House Democratic Leader Hakeem Jeffries, “House Launches Bipartisan Task Force on Artificial Intelligence,” Press release, February 20, 2024, available at https://democraticleader.house.gov/media/press-releases/house-launches-bipartisan-task-force-artificial-intelligence.
  20. U.S. Senate Committee on Homeland Security and Governmental Affairs, “Artificial Intelligence in Government,” May 16, 2023, available at https://www.hsgac.senate.gov/hearings/artificial-intelligence-in-government/; U.S. Senate Committee on the Judiciary, “Oversight of A.I.: The Future of Journalism,” January 10, 2024, available at https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-the-future-of-journalism; U.S. Senate Committee on Homeland Security and Governmental Affairs, “Harnessing AI to Improve Government Services and Customer Experience,” January 10, 2024, available at https://www.hsgac.senate.gov/hearings/harnessing-ai-to-improve-government-services-and-customer-experience/; U.S. Senate Committee on Homeland Security and Governmental Affairs, “The Philosophy of AI: Learning From History, Shaping Our Future,” November 8, 2023, available at https://www.hsgac.senate.gov/hearings/the-philosophy-of-ai-learning-from-history-shaping-our-future/; U.S. Senate Committee on Banking, Housing, and Urban Affairs, “Artificial Intelligence in Financial Services,” September 20, 2023, available at https://www.banking.senate.gov/hearings/artificial-intelligence-in-financial-services.
  21. Executive Office of the President, “Executive Order 13859: Maintaining American Leadership in Artificial Intelligence”; Executive Office of the President, “Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” Federal Register 85 (236) (2020): 78939–78943, available at https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government.
  22. Vought, “M-21-06 Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications.”
  23. U.S. Department of Health and Human Services, “OMB M-21-06 (Guidance for Regulation of Artificial Intelligence Applications).”
  24. The White House, “Blueprint for an AI Bill of Rights” (Washington: 2022), available at https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
  25. The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” Press release, July 21, 2023, available at https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/; The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI,” Press release, September 12, 2023, available at https://www.whitehouse.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
  26. Executive Office of the President, “Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
  27. Young, “M-24-10 Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”
  28. Müge Fazlioglu, “US federal AI governance: Laws, policies and strategies,” International Association of Privacy Professionals, available at https://iapp.org/resources/article/us-federal-ai-governance/ (last accessed May 2024).
  29. Executive Office of the President, “Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
  30. Ibid.
  31. Ibid.
  32. Vice President Harris, “Remarks by Vice President Harris on the Future of Artificial Intelligence | London, United Kingdom.”
  33. Rohit Chopra and others, “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” Consumer Financial Protection Bureau and others, Press release, April 25, 2023, available at https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
  34. Federal Trade Commission, “FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI,” Press release, April 25, 2023, available at https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai.
  35. Rohit Chopra and others, “Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems,” Consumer Financial Protection Bureau and others, Press release, April 4, 2024, available at https://www.dol.gov/sites/dolgov/files/OFCCP/pdf/Joint-Statement-on-AI.pdf.
  36. Bipartisan Senate AI Working Group, “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate.”
  37. Vought, “M-21-06 Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications”; U.S. Department of Health and Human Services, “OMB M-21-06 (Guidance for Regulation of Artificial Intelligence Applications).”
  38. Michael Atleson, “Keep your AI claims in check,” Federal Trade Commission, February 27, 2023, available at https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check; Staff in the Bureau of Competition and Office of Technology, “Generative AI Raises Competition Concerns,” Federal Trade Commission, June 29, 2023, available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/06/generative-ai-raises-competition-concerns; Michael Atleson, “Can’t lose what you never had: Claims about digital ownership and creation in the age of generative AI,” Federal Trade Commission, August 16, 2023, available at https://www.ftc.gov/business-guidance/blog/2023/08/cant-lose-what-you-never-had-claims-about-digital-ownership-creation-age-generative-ai; FTC’s Office of Technology and Division of Marketing Practices, “Preventing the Harms of AI-enabled Voice Cloning,” Federal Trade Commission, November 16, 2023, available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning; Staff in the Office of Technology, “AI Companies: Uphold Your Privacy and Confidentiality Commitments,” Federal Trade Commission, January 9, 2024, available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/01/ai-companies-uphold-your-privacy-confidentiality-commitments; Federal Trade Commission, “A few key principles: An excerpt from Chair Khan’s Remarks at the January Tech Summit on AI,” February 8, 2024, available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/few-key-principles-excerpt-chair-khans-remarks-january-tech-summit-ai; Staff in the Office of Technology and the Division of Privacy and Identity Protection, “AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive,” Federal Trade Commission, February 13, 2024, available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/ai-other-companies-quietly-changing-your-terms-service-could-be-unfair-or-deceptive.
  39. U.S. Department of Commerce, “At the Direction of President Biden, Department of Commerce to Establish U.S. Artificial Intelligence Safety Institute to Lead Efforts on AI Safety,” Press release, November 1, 2023, available at https://www.commerce.gov/news/press-releases/2023/11/direction-president-biden-department-commerce-establish-us-artificial.
  40. National Association of State Chief Information Officers, “Your AI Blueprint: 12 Key Considerations as States Develop Their Artificial Intelligence Roadmaps” (Lexington, KY: 2023), available at https://www.nascio.org/resource-center/resources/your-ai-blueprint-12-key-considerations-as-states-develop-their-artificial-intelligence-roadmaps/.
  41. Vought, “M-21-06 Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications.”
  42. Office of the Law Revision Counsel, “15 USC 9401(3): Definitions,” available at https://uscode.house.gov/view.xhtml?req=(title:15%20section:9401%20edition:prelim) (last accessed May 2024).
  43. Executive Office of the President, “Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
  44. Young, “M-24-10 Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”
  45. Josh Gerstein and Alex Guillén, “Supreme Court move could spell doom for power of federal regulators,” Politico, May 1, 2023, available at https://www.politico.com/news/2023/05/01/supreme-court-chevron-doctrine-climate-change-00094670; Devon Ombres, Jeevna Sheth, and Sydney Bryant, “How the Supreme Court Could Limit Government’s Ability To Serve Americans in All Areas of Life,” Center for American Progress, January 10, 2024, available at https://www.americanprogress.org/article/how-the-supreme-court-could-limit-governments-ability-to-serve-americans-in-all-areas-of-life/; Jeevna Sheth and Devon Ombres, “Loper Bright and Relentless: Ending Judicial Deference To Cement Judicial Activism in the Courts” (Washington: Center for American Progress, 2024), available at https://www.americanprogress.org/article/loper-bright-and-relentless-ending-judicial-deference-to-cement-judicial-activism-in-the-courts/.
  46. Megan Shahi and Adam Conner, “Priorities for a National AI Strategy,” Center for American Progress, August 10, 2023, available at https://www.americanprogress.org/article/priorities-for-a-national-ai-strategy/.
  47. Erin Simpson and Adam Conner, “How To Regulate Tech: A Technology Policy Framework for Online Services” (Washington: Center for American Progress, 2021), available at https://www.americanprogress.org/article/how-to-regulate-tech-a-technology-policy-framework-for-online-services/.
  48. Cristiano Lima-Strong, “Biden signs bill that could ban TikTok, a strike years in the making,” The Washington Post, April 24, 2024, available at https://www.washingtonpost.com/technology/2024/04/23/tiktok-ban-senate-vote-sale-biden/; Adam Conner, Dr. Alondra Nelson, and Ben Olinsky, “Congress Must Take More Steps on Technology Regulation Before It Is Too Late,” Center for American Progress, May 13, 2024, available at https://www.americanprogress.org/article/congress-must-take-more-steps-on-technology-regulation-before-it-is-too-late/.



Authors

Will Dobbs-Allsopp, Policy Director, Governing for Impact

Reed Shaw, Policy Counsel, Governing for Impact

Anna Rodriguez, Policy Counsel, Governing for Impact

Todd Phillips, Assistant Professor of Law, Robinson College of Business at Georgia State University

Rachael Klarman, Executive Director, Governing for Impact

Adam Conner, Vice President, Technology Policy, Center for American Progress

Nicole Alvarez, Senior Policy Analyst, Center for American Progress

Ben Olinsky, Senior Vice President, Structural Reform and Governance; Senior Fellow, Center for American Progress

