Center for American Progress

Fact Sheet: Recommendations for the White House To Take Further Action on AI

This fact sheet offers recommendations for how the White House, including the Office of Management and Budget and the Office of Information and Regulatory Affairs, can utilize its authorities to address artificial intelligence (AI).

U.S. President Joe Biden meets with AI experts and researchers in San Francisco on June 20, 2023. (Getty/Jane Tyska/East Bay Times)

This fact sheet collects the recommendations from Chapter 1: “The White House” of the joint report from Governing for Impact (GFI) and the Center for American Progress, “Taking Further Agency Action on AI: How Agencies Can Deploy Existing Statutory Authorities To Regulate Artificial Intelligence.” The chapter outlines how the White House and its subordinate agencies, including the Office of Management and Budget (OMB) and the Office of Information and Regulatory Affairs (OIRA), should consider addressing potential artificial intelligence (AI) risks and opportunities beyond the October 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”1 The White House can use existing regulations and executive actions—including the administration of federal grants and federal contracts, the Defense Production Act, and emergency powers such as the International Emergency Economic Powers Act (IEEPA)—to do so. The goal of these recommendations is to provoke a generative discussion about the following proposals rather than to outline a definitive executive action agenda. This menu of potential recommendations demonstrates that agencies have more options to explore beyond their current work and that they should immediately utilize existing authorities to address AI.


The Office of Management and Budget

Uniform guidance for federal awards

The OMB could consider the following actions:

  • Develop guidance that adapts the recent OMB M-24-10 AI guidance2 to apply to AI use by other recipients of federal funds, including grants, loans, and other forms of financial assistance. The guidance could establish a framework, similar to that of the OMB M-24-10 AI guidance,3 for agencies to assess safety-impacting and rights-impacting uses of AI and to mitigate the harmful consequences of the applicable risks, using minimum practices for AI risk management. The guidance could urge agencies to impose conditions on federal funds to the extent the statutory sources of those funds allow such conditions.
  • Update the uniform guidance for federal awards at 2 C.F.R. Part 200, pursuant to 31 U.S.C. §§ 6307 and 503(a)(2), to incorporate AI risk assessment—and the steps that applicants are taking to mitigate risks—into agencies’ consideration of applications for federal funding, as permitted by the statutory sources for such funding. Specifically, the OMB could update 2 C.F.R. § 200.206(b)(2) to include an assessment of AI risk within its risk evaluation requirements; update 2 C.F.R. § 200.204(c) to require or suggest that the full text of funding opportunity announcements include any AI risk evaluation requirements; and update 2 C.F.R. § 200.211 to require or recommend that federal award publications include the results of AI risk analyses produced during the application process. The current risk evaluation section permits a federal agency to consider the “applicant’s ability to effectively implement statutory, regulatory, or other requirements imposed on non-Federal entities.”4 A revised uniform guidance could explicitly suggest that federal agencies consider the potential for grantees’ use of AI to impact their ability to comply with such requirements and the impact AI use could have on the other categories of risk specified in the current guidance.

Updates to regulatory review

The president, OMB, and OIRA could consider the following actions:

  • Issue a new requirement in the regulatory review process that would require agencies to include a brief assessment of 1) the potential effects of significant regulatory actions on AI development, risks, harms, and benefits, and 2) an assessment of the current and anticipated use of AI by regulated entities and how that use is likely to affect the ability of any proposed or final rule to meet its stated objectives. This requirement could follow the format of the benefit-cost analysis required by the current Executive Order 12866. The modification to the regulatory review process could take the form of a new executive order, a presidential memorandum,5 or an amendment to Executive Order 12866 that adds a subsection to §1(b) and/or §6(a).
  • Issue a presidential memorandum directing agencies and encouraging independent agencies to review their existing statutory authorities to address known AI risks and consider whether addressing AI use by regulated entities through new or ongoing rulemakings would help ensure that this use does not undermine core regulatory or statutory goals. Such a presidential memorandum would primarily give general direction, similar to the Obama administration’s behavioral sciences action,6 rather than require a specific analysis on every regulation.

    The presidential memorandum could direct executive departments and agencies, or perhaps even the chief AI officer established in the 2023 executive order on AI and further detailed in the OMB M-24-10 AI guidance,7 to:

    • Identify whether their policies, programs, or operations could be undermined or impaired by the private sector use of AI tools.
    • Comprehensively complete the inventory of statutory authorities first requested in OMB Memorandum M-21-06,8 which directed agencies to evaluate their existing authorities to regulate AI applications in the private sector.
    • Outline strategies for deploying such statutory authorities to achieve agency goals in the face of identified private sector AI applications.

Federal contracting

Federal procurement policy and Federal Property and Administrative Services Act (FPASA)

As the OMB prepares the forthcoming procurement guidance mentioned in the OMB M-24-10 AI guidance,9 it may also want to consider whether it can include standards that:

  • Ensure baseline levels of competition and interoperability, such that agencies do not get locked into using the services of a single AI firm.

Under its FPASA authority, the Federal Acquisition Regulatory Council,10 which is chaired by OMB’s administrator for federal procurement policy, can promulgate a rule that outlines AI-related protections for all employees at firms that hold a federal contract, including potentially through the following actions:

  • Extend the presumed safety-impacting and rights-impacting uses of AI from the OMB M-24-10 AI guidance to federal contractors and their use of AI systems for workplace management.11
  • Require federal contractors employing automated systems to conduct predeployment testing and ongoing monitoring to ensure safety, ensure that workers are paid for all compensable time, and mitigate other harmful impacts.
  • Establish specific requirements regarding pace of work, quotas, and worker input to reduce the safety and health impacts of electronic surveillance and automated management.
  • Mandate disclosure requirements when employees are subject to automation or other AI tools.
  • Provide discrimination protections related to algorithmic tools, including ensuring that automated management tools can be adjusted to make reasonable accommodations for workers with disabilities.
  • Ensure privacy protections for employees and users of AI.

The Executive Office of the President

International Emergency Economic Powers Act (IEEPA), the Communications Act, and Federal Procurement Policy

To prepare the government to use the above powers in the event of an AI system posing emergency threats to the United States, the White House could consider the following actions:

  • Direct the National Security Council to develop a memorandum that outlines scenarios wherein AI applications could pose an emergency threat to the country and identifies actions that the president could take to resolve the threat through existing statutory schemes and the president’s inherent executive authority under Article II of the Constitution. The memorandum should study the landscape of imaginable AI applications and devise criteria that would trigger emergency governmental action. Such a memorandum could complement or be incorporated as part of the National Security Memorandum required by the October 2023 executive order on AI.12 The memorandum’s design could echo the National Response Plan, originally developed after 9/11 to formalize rapid government response to terrorist attacks and other emergency scenarios.13 The memorandum could consider authorities:
    • Inherent to the president’s constitutional prerogative to protect the nation: For example, the memorandum could identify when it could be appropriate for the president to take military or humanitarian action without prior congressional authorization when immediate action is required to prevent imminent loss of life or property damage.14
    • Under the IEEPA: For example, the memorandum could consider the administration’s authority to expand the policies established in the August 2023 IEEPA executive order, using the statute to freeze assets associated with AI technologies and countries of concern that contribute to the crisis at hand.15 Follow-up executive action could identify new countries of concern as they arise. As another example, the memorandum could identify triggers for pursuing sanctions under 50 U.S.C. § 1708(b) on foreign persons who support the use of proprietary data to train AI systems or who steal proprietary AI source code from sources in the United States. The memorandum could also explore the president’s authority to investigate, regulate, or prohibit certain transactions or payments related to runaway or dangerous AI models in cases where the models are trained or operate on foreign-made semiconductors and the president determines that such action is necessary to “deal with” a national security threat. Even if that model is deployed domestically or developed by a domestic entity, it may still fall within reach of the IEEPA’s potent § 1702 authorities if, per 50 U.S.C. § 1701, the model: 1) poses an “unusual or extraordinary threat,” and 2) “has its source in whole or substantial part outside the United States.” The administration can explore whether AI models’ dependence on foreign-made semiconductors for training and continued operation meets this second requirement. Indeed, scholars have previously argued that the interconnectedness of the global economy likely subjects an array of domestic entities to IEEPA in the event sufficiently exigent conditions arise.16
    • Under the Communications Act: For example, the memorandum could identify scenarios in which the president could consider suspending or amending regulations under 47 U.S.C. § 606(c) regarding wireless devices to respond to a national security threat.17 The bounds of this authority are quite broad, covering an enormous number of everyday devices, including smartphones that can emit electromagnetic radiation.18
    • To modify federal contracts: For example, the memorandum could identify possibilities for waiving procurement requirements in a national emergency if quickly making a federal contract with a particular entity would help develop capabilities to combat a rapidly deploying and destructive AI.19
    • To take other statutorily or constitutionally authorized actions: The memorandum could organize a process through which the White House and national security apparatus would, upon the presence of the criteria outlined in the memorandum, assess an emergent AI-related threat, develop a potential response, implement that response, and notify Congress and the public of such a response.20 It could also request a published opinion from the Office of Legal Counsel on the legality of the various response scenarios and decision-making processes drawn up pursuant to the recommendations above. This will help ensure that the president can act swiftly but responsibly in an AI-related emergency.
  • Share emergency AI plans with the public: The administration should share such emergency processes and memoranda they develop with Congress, relevant committees, and the public where possible.

Endnotes

  1. Executive Office of the President, “Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Federal Register 88 (210) (2023): 75191–75226, available at https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.
  2. Shalanda D. Young, “M-24-10 Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” (Washington: Office of Management and Budget, 2024), available at https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
  3. Ibid., Appendix I.
  4. Legal Information Institute, “2 CFR 200.206 – Federal awarding agency review of risk posed by applicants,” available at https://www.law.cornell.edu/cfr/text/2/200.206 (last accessed February 2024).
  5. Executive orders and presidential memoranda differ mostly in form, not substance or effect. See John Contrubis, “Executive Orders and Proclamations” (Washington: American Law Division of the Congressional Research Service, 1999), available at https://sgp.fas.org/crs/misc/95-772.pdf. See also, Abigail A. Graber, “Executive Orders: An Introduction” (Washington: Congressional Research Service, 2021), p. 20, available at https://crsreports.congress.gov/product/pdf/R/R46738; Todd Garvey, “Executive Orders: Issuance, Modification, and Revocation” (Washington: Congressional Research Service, 2014), p. 1–2, available at https://crsreports.congress.gov/product/pdf/RS/RS20846.
  6. Executive Office of the President, “Executive Order 13707: Using Behavioral Science Insights To Better Serve the American People,” Federal Register 80 (181) (2015): 56365–56367, available at https://www.federalregister.gov/documents/2015/09/18/2015-23630/using-behavioral-science-insights-to-better-serve-the-american-people.
  7. Executive Office of the President, “Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”; Young, “M-24-10 Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”
  8. Russell T. Vought, “M-21-06 Memorandum for the Heads of Executive Departments and Agencies: Guidance for Regulation of Artificial Intelligence Applications” (Washington: Office of Management and Budget, 2020), available at https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf. Most agencies either failed to comply with this directive or did so incompletely. Compare HHS’ response with the Department of Energy’s. U.S. Department of Health and Human Services, “OMB M-21-06 (Guidance for Regulation of Artificial Intelligence Applications),” available at https://www.hhs.gov/sites/default/files/department-of-health-and-human-services-omb-m-21-06.pdf (last accessed 2024); U.S. Department of Energy, “DOE AI Report to OMB regarding M-21-06” (Washington: 2021), available at https://www.energy.gov/articles/m-21-06-regulations-artificial-intelligence.
  9. Young, “M-24-10 Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” p. 24.
  10. Acquisition.gov, “Federal Acquisition Regulatory Council: FAR Council Members,” available at https://www.acquisition.gov/far-council-members (last accessed February 2024).
  11. Young, “M-24-10 Memorandum for the Heads of Executive Departments and Agencies: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”
  12. Executive Office of the President, “Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Section 4.8.
  13. Joshua L. Friedman, “Emergency Powers of the Executive: The President’s Authority When All Hell Breaks Loose,” Journal of Law and Health 25 (2) (2012): 265–306, available at https://www.law.csuohio.edu/sites/default/files/academics/jlh/friedman_final_version_of_article-2.pdf.
  14. Friedman, “Emergency Powers of the Executive: The President’s Authority When All Hell Breaks Loose.”
  15. Legal Information Institute, “50 U.S.C. Chapter 35 – International Emergency Economic Powers,” § 1701 et seq., available at https://www.law.cornell.edu/uscode/text/50/chapter-35 (last accessed May 2024).
  16. Cristopher A. Casey, Dianne E. Rennack, and Jennifer K. Elsea, “The International Emergency Economic Powers Act: Origins, Evolution, and Use” (Washington: Congressional Research Service, 2024), available at https://sgp.fas.org/crs/natsec/R45618.pdf.
  17. See Legal Information Institute, “47 U.S.C. § 606(d) – War powers of President,” available at https://www.law.cornell.edu/uscode/text/47/606 (last accessed May 2024).
  18. Government of Canada, “Everyday things that emit radiation,” available at https://www.canada.ca/en/health-canada/services/health-risks-safety/radiation/everyday-things-emit-radiation.html (last accessed February 2024); Michael J. Socolow, “In a State of Emergency, the President Can Control Your Phone, Your TV, and Even Your Light Switches,” Reason Magazine, February 15, 2019, available at https://reason.com/2019/02/15/in-a-state-of-emergency-the-president-ca/.
  19. Federal Procurement Policy, “41 U.S.C. Subtitle I – Federal Procurement Policy,” § 3304, available at https://www.law.cornell.edu/uscode/text/41/3304 (last accessed May 2024).
  20. See Legal Information Institute, “19 U.S.C. § 1862(b)-(c) – Safeguarding national security,” available at https://www.law.cornell.edu/uscode/text/19/1862 (last accessed May 2024).

The positions of American Progress, and our policy experts, are independent, and the findings and conclusions presented are those of American Progress alone. A full list of supporters is available here. American Progress would like to acknowledge the many generous supporters who make our work possible.

Authors

Reed Shaw

Policy Counsel

Governing for Impact

Will Dobbs-Allsopp

Policy Director

Governing for Impact

Anna Rodriguez

Policy Counsel

Governing for Impact

Adam Conner

Vice President, Technology Policy

Center for American Progress

Nicole Alvarez

Senior Policy Analyst

Center for American Progress

Ben Olinsky

Senior Vice President, Structural Reform and Governance; Senior Fellow

Center for American Progress

Team

Technology Policy

Our team envisions a better internet for all Americans, advancing ideas that protect consumers, defend their rights, and promote equitable growth.
