With a rapidly evolving AI regulatory landscape and a patchwork of laws across different countries and U.S. states, AI providers are concerned that innovation could be stifled. In our final installment of Navigating the AI Boom, our research analysts explore notable AI regulatory frameworks, how the U.S. can keep up with innovation, and more.

Will Regulations Hold Up the AI Market?

While AI is expected to deliver many societal benefits, such as better healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper energy, policymakers have expressed concerns about potential risks. These include misinformation; fake images, video, and audio; cybersecurity attacks; biased outputs; and misuse of private data or intellectual property. To guard against these risks while supporting responsible AI innovation, governments worldwide have begun introducing AI regulation. However, different countries have taken different approaches, which could complicate AI providers’ efforts to introduce their technology across borders.

Perhaps the most notable law thus far is the European Union’s AI Act, on which lawmakers reached political agreement in 2023 and which has been touted as the world’s first comprehensive AI law. It establishes obligations for providers and users depending on the level of risk posed by an AI system. The Act defines several risk tiers; the two strictest are:

  • Unacceptable risk: AI systems considered a threat to people are banned. This includes real-time remote biometric identification in public spaces, social scoring (classifying people based on behavior or personal characteristics), and cognitive behavioral manipulation (e.g., voice-activated toys that may encourage dangerous behavior in children)
  • High risk: AI systems that could negatively affect safety or fundamental rights. This classification includes AI used in products (toys, aviation, cars, or medical devices) and in systems (critical infrastructure operations, education and vocational training, worker management, law enforcement, or the administration of justice)

In the U.S., states such as California and Colorado have introduced AI-focused legislation, but differences between these laws have prompted calls for a unified federal framework to address potential threats from the technology. In contrast, other states continue to rely on existing privacy, intellectual property, consumer protection, and other laws to regulate AI.

The Pros and Cons of AGI

OpenAI defines AGI (artificial general intelligence) as “highly autonomous systems that outperform humans at most economically valuable work.” Essentially, AI is a current technology with specialized applications, while AGI is a prospective technology that aims to replicate human-level intelligence. Before the advent of LLMs (large language models), AI mainly comprised virtual assistants, recommendation engines, facial recognition technology, and computer vision systems. The earlier consensus among researchers was that AGI remained many years away, but recent advancements have prompted them to revise this view, with some industry leaders arguing that AGI could arrive as soon as this year.

Given the technology’s potentially powerful impact on humanity, U.S. lawmakers increasingly find themselves at the center of the AGI debate, focused in particular on open-source models and fears that models developed abroad are advancing past those built in the U.S. In response, the U.S. has adopted strict policies to curb China’s AI advancements, including export bans on the hardware used to develop AI. The bottom line is that AGI has the potential to revolutionize many fields, including scientific research, healthcare, and defense technology; understandably, government leaders want to ensure the technology is used responsibly.