AI Regulation and Its Impact on Future Innovations
- May 19, 2024
- CAAI - Public Policy
On April 23, the Center for Applied AI at Chicago Booth hosted AI and The Law: Regulation and Opportunity, where panelists delved into the complex and evolving landscape of AI regulation and its legal impact on businesses and individuals. Moderated by Randal C. Picker, James Parker Hall Distinguished Service Professor of Law, the panel included experts from various sectors:
- Rachael Annear: Partner at the global law firm Freshfields Bruckhaus Deringer, specializing in data and tech regulation.
- Dee Choubey: Co-founder and CEO of MoneyLion, a fintech company helping consumers make financial decisions, and a University of Chicago alumnus.
- Arsen Kourinian: Partner at Mayer Brown, focusing on AI governance, data privacy, and cybersecurity law.
- Andre Uhl: Postdoctoral researcher at the University of Chicago who teaches AI literacy and has been involved in AI ethics and governance for a decade.
Key Points and Takeaways:
Jurisdictional and Legal Frameworks
Rachael and Arsen opened the discussion by outlining the contrasting regulatory and strategic approaches to AI governance in the EU/UK and the US, highlighting regional complexities.
United Kingdom
The UK has adopted a pro-innovation approach to AI regulation, emphasizing AI as a key part of its domestic growth strategy. This approach has been marked by a principles-based regulatory framework designed for flexibility and rapid adaptation to technological advancements. Key focus areas include safety, transparency, accountability, fairness, and redress. Individual regulators, like the Competition and Markets Authority (CMA) and data regulators, are tasked with setting their strategic priorities in line with these principles. This direction originated with the Conservative Party but enjoys broad support across different political groups.
European Union
In contrast, the European Union has taken a more structured approach with the EU AI Act, which introduces harmonized rules for safe AI systems across a wide range of applications. The Act features a tiered risk-based framework, categorizing AI systems from minimal to unacceptable risk, with stringent fines of up to 35 million euros for severe infringements. The rapid development of foundation models like GPT during the legislative process led to late additions to the Act, demonstrating the EU's reactive stance toward evolving AI technologies.
United States
The US approach, as outlined by Arsen, is principles-based and sector-specific, similar to the UK's but with its own nuances. The federal government relies on existing enforcement authorities like the FTC, which regulates AI under its broad mandate against unfair and deceptive practices. At the state level, regulations are more aligned with the EU's approach, focusing on high-risk areas such as employment and housing. However, the lack of a unified federal AI or privacy law leads to a fragmented regulatory landscape across different states.
Business Perspective on AI Regulation
Dee discussed the intricate mesh of data regulations in the US, highlighting the concept of data "passporting," where consumers own their data and can transfer it across platforms. This regime underpins consumer trust and is vital for fintech operations. He advocated for a cautious approach to new AI regulations, emphasizing the need for businesses to adapt to market changes and maintain compliance with existing data and privacy laws. He pointed out the competitive disadvantage for smaller companies like MoneyLion, which are trying to catch up with big tech firms that have vast data resources.
Ethical and Public Perception of AI
Andre highlighted a significant shift in the public perception of the tech industry, from viewing it as a beacon of innovation to being increasingly critical due to negative impacts such as data breaches and monopolistic practices. The emergence of AI ethics as a discipline addresses these concerns by focusing on principles like fairness, accountability, and transparency. Andre emphasized the complexity of attributing liability in AI systems, using the example of self-driving car accidents to illustrate the challenge of untangling accountability among various contributors to AI systems.
How Legal Systems Grapple with AI Concerns
Rachael noted the rationality behind the different regulatory approaches in the EU and UK, pointing out that the EU's emphasis on safety might limit innovation, while the UK's focus on fostering innovation could compromise safety. This balance reflects the broader global challenge of regulating AI: ensuring safety and ethical standards without stifling technological progress.
Arsen pointed out that there are already laws on the books that protect against some AI concerns, specifically around transparency and explainability. For instance, if a company plans to use someone's data to train an AI model, existing privacy laws mandate that the company must provide a privacy notice before collecting the data and offer opt-out rights. However, the patchwork of privacy laws across different states creates challenges for multi-state or multinational companies, leading to calls for a federal law that would provide a unified approach. Arsen also expressed concerns about overregulation, noting that laws could become outdated as soon as they are enacted, which could harm both consumers and businesses.
The panelists explored multi-jurisdictional issues, highlighting that AI introduces unique challenges beyond traditional data concerns. Arsen and Rachael emphasized the extraterritorial reach of laws in the US and EU, where companies trigger legal obligations by affecting consumers across various jurisdictions, even without a physical presence. This landscape requires navigating overlapping regulations, as seen in the Clearview AI case and the EU's AI Act. Dee advocated for a consumer-centric approach, suggesting that obtaining data use consent is the safest strategy for navigating the US's varied regulations. Andre pointed out the risk of regulatory fatigue and stressed the importance of AI literacy and public understanding in maximizing the benefits of AI technologies, underscoring the dynamic interaction between regulatory frameworks, business strategies, and consumer education.
Evaluating Corporate Governance in AI Development
Dee emphasized the importance of using existing regulatory frameworks to manage data flow and interactions between companies and consumers efficiently. He suggested that the US regulatory system, with its "mesh system" of data regulation, supports AI advancements by ensuring responsible data management. He highlighted that public company boards in the US are generally well-equipped with risk and compliance committees, suggesting that existing frameworks are sufficient for managing AI's broader implications without the need for overly specific new regulations.
Andre offered a more skeptical view, questioning the effectiveness of internal governance structures in promoting ethical AI development. He pointed out the risk of superficial compliance, where companies might publicly commit to principles without truly implementing meaningful changes, a practice akin to "greenwashing" in sustainability.
The discussion highlighted the potential of corporate governance to positively contribute to AI regulation but also pointed out the complexities and risks of relying solely on corporate mechanisms to address AI's societal impacts. The consensus was that while governance can guide good decision-making, balancing regulation with innovation is crucial to maintain AI's global competitiveness.
Final Thoughts and Perspectives
- Rachael stressed the importance of understanding the key principles of AI regulation and the need to promote AI literacy to empower consumers.
- Arsen highlighted the need for education on both the consumer and business levels, emphasizing the cultural aspects of privacy and AI governance.
- Dee expressed optimism about the application of AI in consumer-facing technologies and the potential societal benefits of AI in various sectors.
- Andre called for a deeper understanding of AI's capacities and limitations, advocating for holistic measurements that consider the social impact of AI technologies.
If you are interested in listening to the full panel, you can find the recording here.