
NTUF Comments to OMB on AI Governance

December 5, 2023
 
Office of Management and Budget
Executive Office of the President 
725 17th Street NW
Washington, DC 20503
 
Re: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum
 
Ryan Nabil
Director and Senior Fellow, Technology Policy
National Taxpayers Union Foundation
122 C St NW, Suite 700
Washington, DC 20001

 
On behalf of the National Taxpayers Union Foundation (NTUF), I welcome the opportunity to submit the following written comments in response to the Office of Management and Budget’s (OMB) request for comments on the U.S. government’s draft memorandum on advancing governance, innovation, and risk management for agency use of artificial intelligence (AI). Located in Washington, DC, National Taxpayers Union is the oldest taxpayer advocacy organization in the United States. Its affiliated think tank, NTUF, conducts evidence-based research on economic and technology policy issues of interest to taxpayers, including U.S. and international approaches to artificial intelligence, emerging technologies, and data protection. 
 
NTUF appreciates the recognition by the Biden and Trump administrations of the need to create a well-balanced approach to AI governance that seeks to promote AI innovation while reducing potential AI risks and AI misuse by federal agencies. As the U.S. government seeks to develop the U.S. AI strategy further, it has an opportunity to craft well-designed AI policies that will contribute to America’s position as a global center of responsible AI innovation. We believe that the U.S. government’s guidelines for agency use of AI would benefit from the following recommendations: 
 
1. The U.S. government should develop institutional mechanisms to support the implementation of the U.S. AI strategy (Question 2).
 
2. The U.S. government should design risk assessment and coordinating mechanisms to continuously monitor and mitigate future AI risks. Non-sensitive, non-confidential information from such a framework should be made available to the public (Question 2 and Question 8).
 
3. U.S. federal agencies should consider developing artificial intelligence sandboxes to understand context-specific AI risks and promote AI innovation (Question 3).
 
4. The U.S. government should consider pursuing closer bilateral cooperation and stronger multilateral engagement in AI governance to develop responsible AI norms and rules in the United States and internationally (Question 2).
 

I. Developing Coordinating Mechanisms to Support the Implementation of the U.S. AI Strategy (Question 2) 

The U.S. government should consider developing mechanisms to support the implementation of the U.S. AI framework and help ensure that AI principles and guidelines are applied uniformly across different sectors. While a principles-based, context-specific approach to AI would allow the United States to develop flexible and well-calibrated rules for AI in different sectors, this strategy comes with specific challenges that would need to be addressed in the U.S. AI framework. 
 
A central challenge is that, because individual regulators have the flexibility to issue guidelines and adjust rules based on broader AI principles, such guidelines may not be applied uniformly across different sectors. Such differences will not only create market uncertainties but will also pose a particular challenge when certain AI applications come under the jurisdiction of multiple regulators. A hypothetical example is an AI-enabled investment advisory product that processes users’ personal data, which could be subject to the overlapping jurisdiction of the Federal Trade Commission, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and even state regulators. The U.S. AI framework should, therefore, include preemptive mechanisms to address the regulatory inconsistencies that could arise under a more flexible, decentralized approach to AI governance. 
 
The UK’s proposed mechanisms for the implementation of its AI framework could provide a useful starting point for U.S. policymakers to think more analytically about such issues and design policies accordingly. The UK’s AI White Paper proposes supporting mechanisms for seven objectives: i) monitoring the overall effectiveness of the AI framework; ii) supporting the coherent application of AI principles across the economy; iii) assessing and addressing cross-sectoral risks from AI applications; iv) providing support and guidance to businesses; v) improving business and consumer awareness of trustworthy AI; vi) conducting horizon scanning for emerging risks and regulatory trends; and vii) monitoring global regulatory developments. 
 
While the precise mechanisms would need to be calibrated and adapted to U.S. policy objectives and regulatory architecture, these proposals point to important challenges that U.S. lawmakers should consider while pursuing a more decentralized approach to AI regulation. The table below provides some potential mechanisms – based on the UK government’s AI White Paper – that Congress and the Biden administration could consider while designing the U.S. AI framework (Table 1). 
 
Table 1. Functions to Support the Implementation of a Potential U.S. AI Framework 
 
Functions and Potential Activities

1) Monitoring, Assessment, and Feedback
i) Develop and maintain monitoring and evaluation mechanisms to assess the economic impacts of the U.S. AI framework across different sectors and for the entire economy.
ii) Collect data and stakeholder input from regulators, the private sector, think tanks, and academic institutions to evaluate the U.S. AI framework’s overall effectiveness.
iii) Monitor the framework’s effectiveness in maintaining a proportionate approach to AI.
iv) Assess the effectiveness of regulatory coordination between different agencies in regulating AI.

2) Coherent Implementation of AI Principles
i) Develop guidelines to support regulators in implementing the U.S. AI framework.

3) Cross-Sectoral Risk Assessment
i) Create a risk register of potential AI risks to evaluate different risks and support the development of the cross-sector risk assessment framework.

4) Support for Innovators
i) Identify potential regulatory barriers to AI innovation in different sectors.

5) Education and Awareness
i) Provide informal guidance to businesses on navigating the U.S. AI regulatory landscape.
ii) Advise start-ups and companies on identifying and applying to the appropriate sandbox.
iii) Improve consumer awareness of, and public trust in, how AI is regulated in the United States.
iv) Support the creation of innovation hubs, which are typically launched by regulators to provide start-ups and companies with information about AI-related legal obligations, identify business opportunities, and invest in the U.S. AI ecosystem. Innovation hubs can also help start-ups identify and apply to the appropriate sectoral AI sandbox.

6) Horizon Scanning
i) Monitor emerging trends in U.S. and global AI governance, new technological developments, and emerging AI risks.

7) International Regulatory Frameworks
i) Monitor AI-related foreign legislation and global regulatory developments and evaluate their potential implications for the U.S. regulatory approach and the broader AI ecosystem.
Source: Author, based on recommendations by the UK Department for Science, Innovation and Technology and the Office for AI (2023).
 

II. Developing Risk Assessment and Coordinating Mechanisms to Monitor and Mitigate Future AI Risks Continuously (Question 2)

A major challenge in AI governance is to develop and implement a proportionate risk-management framework to identify, prioritize, and mitigate potential risks. The differences in how various jurisdictions seek to evaluate and mitigate such risks can provide insights into how U.S. lawmakers could develop an agile, multi-stakeholder framework to identify and mitigate future risks. At the risk of oversimplification, the EU’s proposed AI Act classifies AI systems into four categories of risk: i) “minimal-risk” AI systems, which require AI developers to comply with a code of conduct; ii) “limited-risk” AI systems, which require providers to comply with certain transparency requirements; iii) “high-risk” AI systems, which must undergo a more rigorous conformity assessment; and iv) AI systems with “unacceptable risks,” which are banned across the EU. The EU also provides lists of AI uses that fall into the “limited-risk” and “high-risk” categories (Table 2).
 
Table 2. Categories of AI Risks Under the European Union’s Proposed AI Act 
 
Category and Requirement | Examples | Statutory Basis
Unacceptable risk: Prohibited | Social scoring, facial recognition, dark-pattern AI, and manipulation | Art. 5
High risk: Conformity assessment | Education, employment, justice, immigration, and law | Art. 6 & ss.
Limited risk: Transparency | Chatbots, deep fakes, and emotional recognition | Art. 52
Minimal risk: Code of conduct | Spam filters and video games | Art. 69
Source: Lilian Edwards, Ada Lovelace Institute (2022)
 
While the European Union’s risk-based approach appears reasonable prima facie, it has two major problems. First, unlike the UK’s context-specific approach, the EU’s one-size-fits-all approach to risk assessment fails to distinguish adequately between the risks associated with different AI applications within the same sector. For instance, whereas the EU’s AI Act would treat all AI-enabled tasks related to the operation and management of critical infrastructure as “high risk,” the UK’s context-specific approach recognizes that, even within high-risk sectors like critical infrastructure, not all AI-enabled tools carry the same risks and therefore should not all be subject to uniform compliance and liability standards. 
 
Under the European Union’s proposed AI Act, low-risk AI applications within sectors classified as “high-risk” – such as education, employment, and law – would be subject to much more restrictive regulations than would be the case under the UK’s AI framework (Table 2). For example, since the EU considers the use of AI in education as high risk, AI-enabled language proficiency examinations by online platforms – which often provide a much cheaper and more accessible alternative to traditional language tests like the TOEFL and IELTS – would be subject to the same compliance standards as the use of AI in other high-risk areas like medical diagnostics and critical infrastructure. Such a restrictive approach risks hampering innovation in online learning platforms, legal services, and other areas that the EU classifies as “high risk” under the AI Act. 
 
Second, the AI Act’s risk assessment framework may struggle to remain flexible in addressing future risks. Although generative AI applications like ChatGPT have become widespread since late 2022, the pace and scope of their rapid development would have been difficult to predict even five years ago. Likewise, despite the best efforts of lawmakers, regulators, and technologists alike, predicting future AI risks remains a highly uncertain endeavor. As such, it is difficult to anticipate the AI landscape ten years from now and the unique set of risks and challenges it will pose. If the U.S. adopts a similar approach of classifying prespecified AI uses as “high risk” in statute, it risks constraining innovation while remaining less flexible in identifying and mitigating future AI risks. 
 
Compared to the EU’s approach, the UK’s proposed strategy of continuously monitoring AI risks and enabling public-private collaboration to identify emerging risks represents a more flexible form of risk management. Instead of classifying a list of AI applications as high risk, the UK has proposed a principles-based risk assessment framework, which sectoral regulators will use to evaluate risks within their regulatory remit. Furthermore, the UK government has proposed the creation of “central risk functions” – separate from sectoral regulators – that would play a central role in monitoring the effectiveness of the AI framework, tracking current and future AI risks, and advising the government on which risks should be prioritized. With closer cooperation between the government, regulators, and the private sector, this approach is more likely to enable robust monitoring of potential AI risks, as well as the timely introduction or calibration of appropriate statutory instruments as future risks emerge. 
 
A comparable U.S. mechanism – involving Congress and the federal government, sectoral regulators, the private sector, and independent risk evaluators – could be designed to identify and respond to future AI risks (Table 3). As part of this arrangement, Congress and the federal government would establish the overall U.S. AI framework and clarify risk management guidelines for sectoral regulators based on that framework. In turn, the sectoral regulators would enforce such guidelines within their regulatory remit, address prioritized AI risks, calibrate rules based on regulatory experience and stakeholder input, and recommend whether the U.S. AI framework should prioritize other emerging risks (Table 3). The central risk function – ideally comprising experts, officials, and private sector representatives independent of the sectoral regulators – would evaluate the effectiveness of this framework, identify emerging AI risks, and advise Congress and the federal government on whether an intervention is required to address such risks and, if so, which regulators are best suited to do so (Table 3). 
 
Table 3. Designing a U.S. Central Risk Function Mechanism for Artificial Intelligence Risks 
 
Congress and the Federal Government
Identification*: i) Creates the U.S. AI framework to identify AI risks; ii) decides which risks to tolerate, regulate, and prioritize.
Enforcement: Delegates the enforcement of the AI framework to sectoral regulators.
Monitoring*: Updates the statutory framework to address new risks as they are identified.

Central Risk Function Mechanism
Identification*: i) Identifies and prioritizes new AI risks; ii) provides recommendations on whether new risks require government intervention.
Enforcement: i) Recommends which regulator(s) should address those risks; ii) creates overall risk assessment frameworks; iii) provides advice to regulators on technical aspects of regulation; iv) shares AI regulatory best practices.
Monitoring*: Monitors risks and reports them to Congress and the Executive.

Sectoral Regulators
Identification*: Identifies and prioritizes sector-specific AI risks.
Enforcement: i) Creates regulatory guidance for businesses based on the central risk function’s risk assessment framework; ii) updates regulatory guidelines and rules based on stakeholder feedback on how effectively they are working; iii) enforces actions against companies for violations.
Monitoring*: Reports on the effectiveness of addressing AI risks.

Businesses
Identification*: Provides information to sectoral regulators and the central risk function, as necessary and appropriate.
Enforcement: Complies with regulatory guidance and rules and incorporates the risk assessment framework into internal practice.
Monitoring*: Informs the relevant regulator(s) and the central risk function mechanism if risk mitigation measures fail to address the risks.

* The Identification and Monitoring functions marked with an asterisk comprise a regulatory feedback loop between the federal government, sectoral regulators, the central risk function, and businesses subject to the AI framework to identify and mitigate emerging AI risks. 

Source: Author, based on the UK Department for Science, Innovation and Technology and the Office for AI (2023).
 
While such proposals would need to be carefully evaluated and adjusted to suit the unique features of the U.S. regulatory architecture and policy objectives, they provide a useful starting point for thinking more strategically about ways to address future AI risks while maintaining a flexible regulatory approach. Furthermore, developing mechanisms to identify and address emerging AI risks would help improve public trust in AI. 
 

III. Developing Artificial Intelligence Sandboxes to Understand Context-Specific AI Risks and Promote AI Innovation (Question 3)

The U.S. government should create sectoral AI sandboxes to maximize the benefits of a flexible, innovation-focused approach to AI regulation. Such programs would allow companies to offer innovative AI products under close regulatory supervision for a limited period and to receive regulatory waivers, expedited registration, and/or guidance on compliance with relevant laws. Meanwhile, regulators would gain a more in-depth understanding of how emerging AI technologies and business models interact with existing sectoral rules. Based on such insights, policymakers can craft better rules that help promote AI innovation while minimizing potential risks. 
 
Recognizing the innovation potential of AI sandboxes, the OECD recommends the creation of such programs at the national level. Following its recently concluded consultation, the UK government is currently evaluating different models for designing AI sandbox programs. Likewise, as outlined in the draft EU AI Act, the European Commission encourages the creation of national AI sandboxes in member states; Spain launched the first such sandbox last year. However, such programs need to be designed appropriately to maximize their innovation potential, as pointed out in our recent letter to the White House and consultation response to the UK government. 
 
More specifically, U.S. lawmakers should consider creating sector-specific sandboxes to promote AI innovation in specific sectors and update sectoral legal frameworks accordingly. Furthermore, while sandboxes should entail close regulatory supervision and appropriate consumer protection provisions, they must also provide regulatory relief and guidance to make such programs attractive to innovative businesses. Finally, making AI sandboxes open to non-U.S. companies could help attract innovative foreign businesses to the United States and promote AI innovation. 
 

IV. Strengthened Bilateral Cooperation and Multilateral Engagement in Responsible AI (Question 2)

Beyond AI sandboxes, the United States should consider other mechanisms – such as joint declarations, executive agreements, and joint research programs – to strengthen technology cooperation at the bilateral level. In this context, the Joint U.S.-UK Declaration on Cooperation in AI Research and Development in September 2020 and the Atlantic Declaration in June 2023 were steps in the right direction. Likewise, the U.S.-EU Trade and Technology Council represents another forum through which the United States could pursue closer economic and technological cooperation with the EU and EU member states. Similar opportunities also exist for bilateral cooperation with Switzerland and Japan, which seek to adopt a flexible, light-touch approach to AI governance. Establishing research partnerships – similar to Canada and the UK’s arrangements with Japan and the EU – could also help deepen U.S. technology cooperation with like-minded nations.
 
Ultimately, the U.S. government must look beyond bilateral relationships and strengthen its multilateral engagement in global AI governance. Although the United States is part of several multilateral fora and institutions that are active in AI governance – such as the OECD and the Global Partnership on AI – the U.S. appears to punch below its weight in shaping AI norms through these organizations. By participating more actively in such organizations – as Japan and the UK have done through their multi-stakeholder, multilateralist approaches to AI governance – the U.S. government can contribute more actively to the development of international AI norms and technical standards.  
 
The development of such norms could be particularly beneficial for emerging-market and developing countries, many of which lack a robust AI governance infrastructure and look to international institutions to develop best practices in responsible AI. Along with like-minded partners – such as the EU, the UK, and Japan – the United States could play a more critical role in establishing multilateral platforms for AI governance dialogues between the governments of industrialized and emerging-market countries. As officials and lawmakers in various jurisdictions seek to develop national AI strategies, the United States and partner countries can play a more significant role in advocating a principles-based, innovation-focused AI approach that promotes economic growth and innovation while mitigating current and future AI risks.