
Understanding Generative AI Regulations: A Global Overview

Picture the year 2045. Self-driving cars have eliminated traffic fatalities. AI assistants craft personalized medical breakthroughs. The world celebrates the promise of artificial intelligence coming to fruition.

Now picture an alternative reality. One where a shadowy AI system has surpassed its makers, reprogramming helper bots into weapons as it rapidly spreads. Humans flee this technological scourge, their optimism long extinguished.

These stark fictional “Star Trek or Star Wars?” extremes symbolize the knife’s edge upon which AI’s real-world trajectory teeters. Will innovative governance allow benefits to blossom while mitigating cataclysmic risks? Or will policymakers repeat the mistakes of climate inaction, allowing technological change to outpace caution until it’s too late?

In 2020, scientist Timnit Gebru, then co-lead of Google’s Ethical Artificial Intelligence team, warned the world of unchecked AI’s dangers. A paper she co-authored detailed the risks of ever-larger language models, from entrenched biases to environmental costs. Google fired Gebru amid the dispute over that paper, a move widely criticized as an attempt to suppress grave concerns about AI built without oversight.

Yet ripples from this scandal fueled rising momentum worldwide to build guardrails ensuring tomorrow’s AI promotes, not hinders, humanity. What was once fiction is increasingly plausible science fact.

In this article, we cover the swelling governance movement vying to steer AI’s path away from nightmare scenarios and toward collective progress. The window to act responsibly stands open, for now.

Why Seek Balanced Generative AI Governance?

Many of us have chuckled watching DALL-E or Midjourney cook up wacky “avocado armchairs” or “bears made of flowers.”

However, generative applications can benefit humanity in more profound ways. Models spurring new disease insights could democratize medicine. Adaptive chatbots could create personalized learning plans so all students thrive. Generators focused on highlighting marginalized artists would offer new conduits for creative expression. The possibilities seem endless – if we innovate equitably and ethically.


But hold up on the utopian techno-futures — oversight lagging behind innovation could open us up to some real danger.

One week an image generator births wondrous fantasy creatures – the next, it might churn out faked evidence smearing political opponents or scale harassment of women.

One week a chatbot breaks barriers by personalizing education for each learner – the next, it might deepen the marginalization of minorities through unchecked training data.

Neither permitting unchecked AI nor over-restricting it optimizes outcomes. A combination of proactive policy and self-policing to balance pace and principles could be the best path forward.

Current Global Regulatory Environment of Generative AI

Regulation need not be a barrier to innovation. Rather, it can serve as a strategic guide to maintaining ethical directions amidst exponential change.

As generative models rapidly gain new creative powers, policymakers worldwide are scrambling to address emerging risks relating to bias, accountability, vulnerabilities, and more.

Let’s talk about some of the most important ones.

GDPR

The General Data Protection Regulation (GDPR) is an EU law that protects individuals’ personal data and privacy. It applies to generative AI because these systems are often trained on, and process, personal data. Here’s how GDPR applies to generative AI:

Privacy by Design: Companies need to make sure that their generative AI systems respect people’s privacy from the start. This means that they should design their systems to protect people’s data.

Data Minimization: Generative AI systems should only use the minimum amount of data needed to work. This helps to protect people’s privacy (a sketch of what this can look like in practice follows this list).

Data Subject Rights: GDPR gives people the right to access, correct, or delete their data. If generative AI systems use people’s data, those individuals must be able to exercise these rights.

Transparency: Companies need to be transparent about how their generative AI systems work and how they use people’s data. This helps people understand what’s happening with their data and why.

Data Protection Impact Assessments (DPIAs): Before using generative AI systems, companies should do DPIAs to understand the risks to people’s data privacy and how to minimize these risks.

Data Breaches: If there’s a data breach involving people’s data in generative AI systems, companies should report it to the authorities and affected individuals.
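To ground these obligations, here is a minimal sketch of what data minimization and consent filtering could look like before records enter a training corpus. Everything here is an illustrative assumption – the function names, the regex patterns, and the consent set – not a compliance recipe, and real PII detection needs far broader coverage.

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(record: str) -> str:
    """Redact obvious personal data before a record enters a training corpus."""
    record = EMAIL.sub("[EMAIL]", record)
    record = PHONE.sub("[PHONE]", record)
    return record

def build_corpus(raw_records: list[str], consented_ids: set[int]) -> list[str]:
    """Keep only records with a lawful basis (here, consent), then redact PII."""
    return [minimize(r) for i, r in enumerate(raw_records) if i in consented_ids]

# Example: only record 0 has consent; its email address is still redacted.
print(build_corpus(["Contact jane@example.com", "Call +1 555 123 4567"], {0}))
```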


EU AI Act

The EU AI Act has four main objectives, which are:

Safety and Respect for Fundamental Rights: The first objective of the AI Act is to ensure that AI systems placed on the EU market and used in the EU are safe and respect existing laws on fundamental rights. This includes provisions for transparency, reliability, and non-discrimination.

The Act classifies AI systems according to the risk they pose to users and establishes different rules for different risk levels.

These risk levels are:

  1. Unacceptable Risk: AI systems that are considered to pose an unacceptable risk to the safety, livelihoods, and rights of people are prohibited. This includes AI systems that manipulate human behavior or exploit vulnerabilities, such as social scoring by governments or voice-assisted toys that encourage dangerous behavior.
  2. High Risk: AI systems that are categorized as high risk are subject to the most stringent obligations. This includes applications related to transport, education, employment, welfare, and other specific areas. Before putting a high-risk AI system on the market or in service in the EU, companies must conduct a prior “conformity assessment” and meet a long list of requirements.
  3. Limited Risk: AI systems that fall into the limited risk category, such as chatbots on websites, must meet lighter obligations, mainly consisting of transparency requirements.
  4. Minimal or No Risk: Most AI systems fall into the minimal or no risk category and can contribute to solving many societal challenges. These systems pose limited to no risk and are subject to fewer regulatory obligations.
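To make the tiering concrete, here is a minimal sketch of a risk-tier lookup an organization might use for internal triage. The use-case mapping and names are hypothetical; real classification follows the Act’s annexes and legal analysis, not a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "prior conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "few or no obligations"

# Hypothetical triage table for illustration only.
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "automated exam scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Report the tier and headline obligation for a known use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in USE_CASE_TIERS:
    print(obligations_for(case))
```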

The Act prioritizes the safety, transparency, traceability, non-discrimination, and environmental friendliness of AI systems.

Legal Certainty and Innovation: The second objective of the AI Act is to ensure legal certainty to facilitate investment and innovation in AI. It aims to provide a clear and predictable regulatory environment for businesses and investors, which will help to foster innovation and competitiveness in the AI sector.

Governance and Enforcement: The third objective of the AI Act is to enhance governance and effective enforcement of existing laws on fundamental rights and safety.

The Act establishes a European Artificial Intelligence Board, which will be responsible for overseeing the implementation and enforcement of the regulation across the EU. The Board will also provide guidance and support to national authorities and stakeholders.

Facilitation of Development: The fourth objective of the AI Act is to facilitate the development of AI systems. The Act aims to promote the development of AI systems that are safe, transparent, and trustworthy, while also fostering innovation and competitiveness in the AI sector. The Act encourages the development of AI systems that are environmentally friendly and socially beneficial.


US Algorithmic Accountability Act

Examples of flawed AI systems abound, from healthcare to housing. Systems denying pain medication due to flawed data or tenant screening tools disproportionately rejecting applicants based on biased criteria highlight the urgent need for this legislation.

The Algorithmic Accountability Act is not about overhauling the entire system with new agencies or complex licensing regimes. It’s a focused response to the challenges posed by AI and automated systems. It’s about filling the gaps in oversight and accountability, not creating new layers of bureaucracy.

What the Bill Entails

  1. Baseline Requirements: Companies must assess the impacts of AI systems used for critical decision-making.
  2. FTC’s Role: The Federal Trade Commission (FTC) will develop structured guidelines for assessment and reporting. This guidance is crucial for maintaining consistency and effectiveness in the assessment process.
  3. Responsibility for Impact: Both the creators of the technology and those making critical decisions using it are responsible for assessing their impact.
  4. Documentation and Reporting: Select impact-assessment documentation must be reported to the FTC (a hypothetical record structure is sketched after this list).
  5. Public Reporting and Repository: The FTC will publish an annual anonymized report on trends and establish a repository of information. This will be a resource for consumers and advocates to understand the use of AI in critical decision-making processes.
  6. Resource Allocation: The Act provides for additional resources to the FTC, including 75 staff members and the establishment of a Bureau of Technology, to enforce the Act and assist the Commission in technology-related functions.
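As a rough illustration of item 4, here is a hypothetical shape for an impact-assessment record. The field names are assumptions; the FTC’s actual reporting format would be set in rulemaking under the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """Hypothetical impact-assessment record; field names are illustrative."""
    system_name: str
    decision_context: str               # e.g., "tenant screening"
    assessed_on: date
    data_sources: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def summary(self) -> str:
        return (f"{self.system_name} ({self.decision_context}): "
                f"{len(self.identified_risks)} risks, "
                f"{len(self.mitigations)} mitigations documented")

# Example record for a hypothetical screening tool.
report = ImpactAssessment("ScreenFast", "tenant screening", date(2024, 1, 15),
                          ["credit bureau data"], ["proxy bias via zip code"],
                          ["drop zip-code feature", "periodic bias audit"])
print(report.summary())
```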

The Act has garnered support from a diverse range of organizations, from Access Now to the National Hispanic Media Coalition, highlighting its wide-reaching impact and importance.


China’s Interim Administrative Measures for Generative AI

Unveiled in July 2023, China’s Interim Administrative Measures for Generative AI aim to accelerate the development of algorithmic systems like LLMs through flexible yet binding governance. The approach encourages technological innovation centered on security by creating safe environments for commercial deployment at scale.

New provisions balance economic priorities with emerging risks around generative models used in public-facing services.

The Measures’ constraints currently center on generative AI services accessible to the general public rather than those utilized internally by companies, academics, or the government. Consumer-oriented systems generating synthetic text, images, videos, or audio require compliance. Technologies assisting institutional decision-making escape heightened oversight for now.

Central obligations imposed on commercial providers include:

  • Flagging and removing unlawful generatively synthesized content
  • Sourcing lawful, unbiased, accurate training data with consent
  • Tagging AI-created media to disclose its algorithmic origins (see the sketch after this list)
  • Protecting and managing user personal information
  • Undergoing security assessments, particularly around public opinion manipulation risks
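Here is a minimal sketch of what tagging generated media might look like in practice. The JSON label format is an assumption for illustration; the Measures require disclosing algorithmic origin but do not prescribe a specific schema.

```python
import json
from datetime import datetime, timezone

def tag_generated_media(content_id: str, model_name: str) -> str:
    """Attach provenance metadata to AI-generated content (illustrative schema)."""
    label = {
        "content_id": content_id,
        "generated_by": model_name,
        "is_synthetic": True,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

# Example: label an image from a hypothetical text-to-image service.
print(tag_generated_media("img-0001", "example-diffusion-v1"))
```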

To enforce these provisions, regulators gain authority to access corporate data records, inspect data sourcing and labeling, and compel technical transparency into algorithmic processes. Specific fines, however, remain unspecified for now.

Compared to the April 2023 draft, the finalized Measures expand flexibility – dropping several constraints around model optimization requirements, user identity verification, and concrete violation penalties.

New passages explicitly seek to balance oversight with industrial strategy. Updates also mandate administrative approvals before deployment and emphasize stronger data privacy protections.

Some critics argue that the Measures’ focus on individual algorithms risks burdensome, duplicative reporting. Others warn against policies that treat unbounded progress as inherently positive. Either way, this proactive stance cements China’s influence on global norms as generative AI conversations gain momentum worldwide, since applications spread internationally once created.


Singapore’s Model Artificial Intelligence Governance Framework

Singapore pioneered voluntary governance guidance for ethical AI adoption, recognizing that heavy restrictions may inhibit innovation. But with algorithms increasingly influencing lives, the state launched AI Verify in 2022 – a testing framework and toolkit that helps organizations demonstrate accountability for their AI models.

The pilot project offers technical benchmarking plus qualitative checks, helping organizations show that their systems align with core principles around transparency, fairness, and human control.

Aims in Creating Testing Framework:

  1. Help organizations benchmark AI systems and demonstrate accountability
  2. Find commonalities between governance frameworks to ease compliance across markets
  3. Facilitate international standard-setting via collating testing practices
  4. Build a community around AI evaluation methodologies

Five Pillars of Principles in Testing Framework:

  1. Transparency: Disclosing AI use through policies and communications
  2. Explainability: Surfacing the factors that drive model outputs so AI decisions can be understood
  3. Safety & Robustness: Conducting risk assessments and ensuring reliable performance despite unexpected inputs
  4. Fairness: Testing for biases based on sensitive attributes (a simple example follows this list)
  5. Accountability: Internal governance procedures with human oversight & control principles
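As one simple example of the kind of fairness probe the framework encourages, the sketch below computes the gap in positive-outcome rates between groups. This is a single illustrative metric under assumed data; AI Verify’s actual test suite is far broader.

```python
def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Largest difference in positive-outcome rates across groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Example: loan approvals (1 = approved) across two hypothetical groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"parity gap: {demographic_parity_gap(outcomes, groups):.2f}")  # 0.50
```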


Other Notable Global Regulations

Australia’s AI Ethics Framework: Introduced by the Australian Government, the framework serves as a guide for businesses and governments to responsibly design, develop, and implement AI.

The framework aims to ensure that AI is safe, secure, and reliable, aligning its design and application with ethical and inclusive principles. It emphasizes the importance of public consultation to inform the government’s approach to AI ethics in Australia, welcoming written submissions to facilitate this process.

The framework is based on existing ethical concepts and human rights agreements, seeking to complement, rather than rewrite, established laws and ethical standards.

Canada’s AIDA: Canada’s Artificial Intelligence and Data Act (AIDA) is a part of the Digital Charter Implementation Act, 2022. It focuses on responsible AI use in Canada, ensuring safety and fairness.

The Act requires businesses to assess and mitigate risks associated with AI, especially around bias and harm. It also demands clear communication about AI systems to users.

AIDA is complemented by a Voluntary Code of Conduct for advanced generative AI, guiding companies until the Act is fully implemented. This framework aims to build trust in AI and ensure its ethical deployment.

In conclusion, our exploration of global regulatory frameworks for generative AI underscores a critical juncture in technological governance. Each region’s approach, from the EU’s comprehensive guidelines to the U.S.’s emerging policies, reflects a unique blend of ethical, economic, and cultural considerations. As AI continues to evolve, so must our regulatory strategies, balancing innovation with responsibility.
