Unveiling the Landscape of AI Regulation in the United States: A Glimpse into 2023

As we navigate the much-hyped yet still golden age of artificial intelligence (AI), the need for careful regulation grows in tandem with its ubiquity in our lives. This is a hot topic on a global scale, but today let’s zoom in on the United States, where the regulatory landscape is less settled than in regions such as Europe, but is undeniably evolving.

The European Precedent: A Guiding Light or a Contrasting Path?

In Europe, the regulatory landscape is defined by several enacted and impending legislative acts, such as the anticipated EU AI Act. The United Kingdom, however, indicates a possible deviation from the European blueprint. The US, in contrast, has yet to seriously consider an analogous measure or sweeping federal legislation to govern AI use. While state privacy laws may extend to AI systems handling specific personal data types, no substantial state legislation is currently in effect.

Despite this, the seeds of AI regulation in the US are being sown, with initial sector-specific activities providing hints about the federal government’s views on AI and its future regulation.

NIST: Crafting a Framework for AI Risk Management

A key player in this evolving landscape is the National Institute of Standards and Technology (NIST). As part of the US Department of Commerce, NIST released its AI Risk Management Framework 1.0 (RMF) in January 2023. This voluntary guide targets technology companies involved in designing, developing, deploying, or using AI systems. It aims to manage AI’s multifaceted risks and promote responsible and trustworthy development and use of AI systems.

NIST’s role as the federal AI standards coordinator places it in a strategic position to collaborate with US and international government and industry leaders. It contributes to developing technical standards to encourage AI adoption, as detailed in the “Technical AI Standards” section on its website. Furthermore, the National Defense Authorization Act for Fiscal Year 2021 directed NIST to develop a voluntary risk management framework for trustworthy AI systems, resulting in the RMF.

The RMF provides useful insight into the considerations likely to shape future AI regulation. As the framework matures, it could even be adopted as an industry standard. The RMF is built on the understanding that people often perceive AI systems as objective and highly capable, a perception that can lead to unintended harm when systems fall short of those expectations. Enhancing an AI system’s trustworthiness mitigates these risks. The RMF outlines seven key characteristics of trustworthiness:

  1. Safe: AI systems should have real-time monitoring, backstops, or interventions to prevent physical or psychological harm, or endangerment of human life, health, or property.
  2. Secure and resilient: AI systems should have protocols to protect against attacks and withstand adverse events.
  3. Explainable and interpretable: The mechanisms and outputs of an AI system should be understandable and contextualized properly.
  4. Privacy-enhanced: AI systems should protect human autonomy by preserving anonymity, confidentiality, and control.
  5. Fair, with harmful bias managed: AI systems should promote equity and equality and manage systemic, computational, statistical, and human-cognitive biases.
  6. Accountable and transparent: AI systems should provide information to individuals interacting with them at various stages of their lifecycle and maintain practices and governance to reduce potential harms.
  7. Valid and reliable: AI systems should demonstrate through ongoing testing or monitoring that they perform as intended.

Do these sound familiar? You might have read about these aspects in our piece on How to prevent biases in AI.
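If you want a quick way to reason about these characteristics for your own systems, here is a minimal, purely illustrative sketch in Python. The characteristic keys mirror the list above, but the scoring scale, threshold, and function name are our own assumptions, not anything NIST prescribes.

```python
# Illustrative self-assessment over the RMF's seven trustworthiness
# characteristics. The keys mirror the list above; the 1-5 scoring scale
# and the threshold are our own assumptions, not NIST guidance.
TRUSTWORTHINESS_CHARACTERISTICS = [
    "safe",
    "secure_and_resilient",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
    "accountable_and_transparent",
    "valid_and_reliable",
]

def trust_gaps(assessment: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the characteristics scored below the threshold (scale 1-5)."""
    return [c for c in TRUSTWORTHINESS_CHARACTERISTICS
            if assessment.get(c, 0) < threshold]

# Example: a hypothetical chatbot scored by its own team.
scores = {"safe": 4, "privacy_enhanced": 2, "valid_and_reliable": 3}
print(trust_gaps(scores))  # characteristics that still need attention
```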

The RMF also outlines risks unique to AI systems, including potential privacy risks arising from data aggregation capabilities, copyright concerns surrounding training datasets, data quality issues that undermine trustworthiness, and the lack of consensus on robust and verifiable measurement methods and metrics.

To manage these risks, the RMF suggests four core functions to be employed throughout an AI system’s life cycle: map, measure, govern, and manage.

  1. Map: Gather enough information about an AI system to inform decisions about its design, development, or deployment.
  2. Measure: Implement testing, evaluations, verifications, and validation processes to inform management decisions.
  3. Govern: Develop an organizational culture that incorporates AI risk management into its policies and operations, effectively implements them, and encourages accountability, diversity, equity, and inclusion.
  4. Manage: Monitor and prioritize AI system risks and respond to and recover from risk incidents.

The RMF’s companion Playbook offers actionable items to aid companies in implementing these core functions.
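To make the core functions a little more concrete, here is a lightweight sketch of how a team might track RMF-style activities for a single AI system. It is purely illustrative: the class names, fields, and sample entries are our own assumptions, not an official NIST schema or part of the Playbook.

```python
# Illustrative only: a lightweight way to track activities under the four
# RMF core functions. Structure and sample entries are our own assumptions.
from dataclasses import dataclass, field
from enum import Enum


class CoreFunction(Enum):
    MAP = "map"          # gather context about the system and its impacts
    MEASURE = "measure"  # test, evaluate, verify, and validate
    GOVERN = "govern"    # policies, accountability, and culture
    MANAGE = "manage"    # prioritize, respond to, and recover from risks


@dataclass
class RiskActivity:
    core_function: CoreFunction
    description: str
    owner: str
    completed: bool = False


@dataclass
class AISystemRiskRegister:
    system_name: str
    activities: list[RiskActivity] = field(default_factory=list)

    def open_items(self, function: CoreFunction) -> list[RiskActivity]:
        """Return the unfinished activities for one core function."""
        return [a for a in self.activities
                if a.core_function is function and not a.completed]


# Example usage with made-up entries for a hypothetical chatbot project.
register = AISystemRiskRegister("support-chatbot")
register.activities += [
    RiskActivity(CoreFunction.MAP, "Document intended users and misuse cases", "product"),
    RiskActivity(CoreFunction.MEASURE, "Run a bias evaluation on the response model", "ml-team"),
    RiskActivity(CoreFunction.GOVERN, "Assign accountability for incident response", "legal"),
    RiskActivity(CoreFunction.MANAGE, "Define a rollback procedure for harmful outputs", "ops"),
]

for item in register.open_items(CoreFunction.MEASURE):
    print(f"[open] {item.description} (owner: {item.owner})")
```

However a company chooses to record this, the point is the same: the four functions are meant to be revisited continuously across the system’s life cycle, not checked off once.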

FTC and FDA: Emerging Regulatory Players

Apart from NIST, other federal bodies have been offering their input on AI. The Federal Trade Commission (FTC) has signaled increased scrutiny of businesses using AI. The FTC has published blog posts warning businesses to avoid unfair or deceptive practices, including advice to keep AI claims in check and a warning against using AI tools for deception.

For businesses considering AI technologies for healthcare-related decision-making, the Food and Drug Administration (FDA) has announced its intention to regulate many AI-powered clinical decision-support tools as devices.

A Glimmer of the Future: Senator Chuck Schumer’s Initiative

While the steps taken by NIST, FTC, and FDA provide some early indicators of what future US AI regulation might look like, the current landscape lacks concrete rules that US AI companies can rely on to guide their conduct. Regulation in some form seems inevitable, but the timeline remains uncertain.

In a promising development, reports emerged on April 13, 2023, that US Senator Chuck Schumer is leading a congressional effort to establish US regulations on AI. Although details are scarce, the regulations will reportedly focus on four guardrails: identifying who trained the algorithm and who its intended audience is, disclosing its data source, explaining how it arrives at its responses, and ensuring transparent and strong ethical boundaries.

While the timeline for this framework to become established law is unclear, it represents a significant step toward the regulation of AI in the United States.

The Road Ahead

As we move into the future, the need for AI regulation will only become more urgent. It’s a complex and multifaceted issue that demands a nuanced, thoughtful, and cooperative approach. While we may not know exactly what lies ahead, it’s clear that a regulatory landscape for AI in the US is slowly taking shape. We will continue to follow these developments closely and look forward to a future where AI can continue to innovate while being effectively and responsibly managed.

What do you think? Which approach is better suited for the future of AI and the impact it will inevitably have on a global scale? Surely it cannot be the Italian way of simply banning ChatGPT. Do you prefer the European approach, with tiers of regulation based on a tool’s potential impact, or the US approach?

Let us know.

Bernd // agorate
