McKinsey
What the draft European Union AI regulations mean for business

European regulatory proposal on the usage of artificial intelligence

In this article, McKinsey provides an overview of the European Union's (EU) draft regulations aimed specifically at the development and use of artificial intelligence (AI). The proposed regulation would apply to any AI system used within the EU, not only to systems built by European companies.

According to McKinsey, many organizations still have considerable work to do to comply with this regulation and to address the risks associated with AI (regulatory-compliance risk as well as risks around reputation, privacy, and fairness in commercial activity…). In McKinsey's view, the proposal offers insight into how AI regulation is likely to develop around the world and into the potential implications for companies.

The article provides an overview of the main aspects of the draft European AI regulations, including the following:

  • Based on the risks AI poses to individuals, AI systems would be classified into three categories:
    - Unacceptable-risk AI systems: include (1) subliminal, manipulative, or exploitative systems that cause harm; (2) real-time, remote biometric identification systems used in public spaces for law enforcement; and (3) all forms of social scoring, such as AI or technology that evaluates an individual’s trustworthiness based on social behavior or predicted personality traits.
    - High-risk AI systems: include those that evaluate consumer creditworthiness, assist with recruiting or managing employees, or use biometric identification… Under the proposed regulation, the EU would update the list of systems included in this category on an annual basis.
    - Limited- and minimal-risk AI systems: include many of the AI applications currently used throughout the business world, such as AI chatbots and AI-powered inventory management.
     
  • AI systems would face different requirements depending on their level of risk. High-risk AI systems would be subject to the largest set of requirements (transparency and provision of information to users, implementation of risk-management systems, data-quality governance, and monitoring and reporting obligations…), while limited- and minimal-risk AI systems would face significantly fewer requirements, primarily specific transparency obligations such as making users aware that they are interacting with a machine. Systems in the unacceptable-risk category would no longer be permitted in the EU. (This tiered structure is sketched below.)
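To make the tiered structure concrete, the sketch below encodes the three risk categories and the broad obligation sets summarized above as a simple Python module, the kind of mapping a compliance team might use when inventorying its AI systems. It is a minimal, illustrative sketch and not part of the McKinsey article or the draft regulation itself; the names RiskTier, USE_CASE_TIERS, OBLIGATIONS, and obligations_for, along with the example use-case strings, are hypothetical.

  from enum import Enum

  class RiskTier(Enum):
      """Hypothetical encoding of the three risk categories in the draft EU AI regulation."""
      UNACCEPTABLE = "unacceptable"       # e.g. social scoring, manipulative systems: prohibited
      HIGH = "high"                       # e.g. creditworthiness scoring, recruiting, employee management
      LIMITED_OR_MINIMAL = "limited"      # e.g. chatbots, AI-powered inventory management

  # Illustrative mapping of use cases to tiers, based on the examples cited in the article.
  USE_CASE_TIERS = {
      "social_scoring": RiskTier.UNACCEPTABLE,
      "realtime_public_biometric_id_law_enforcement": RiskTier.UNACCEPTABLE,
      "consumer_creditworthiness": RiskTier.HIGH,
      "recruiting_and_employee_management": RiskTier.HIGH,
      "customer_service_chatbot": RiskTier.LIMITED_OR_MINIMAL,
      "inventory_management": RiskTier.LIMITED_OR_MINIMAL,
  }

  # Broad obligation sets per tier, paraphrasing the draft's tiered requirements.
  OBLIGATIONS = {
      RiskTier.UNACCEPTABLE: [
          "prohibited: may not be placed on the EU market",
      ],
      RiskTier.HIGH: [
          "transparency and provision of information to users",
          "implementation of a risk-management system",
          "data-quality governance",
          "monitoring and reporting obligations",
      ],
      RiskTier.LIMITED_OR_MINIMAL: [
          "specific transparency obligations (e.g. disclose that users are interacting with a machine)",
      ],
  }

  def obligations_for(use_case: str) -> list:
      """Return the illustrative obligation list for a known use case (hypothetical helper)."""
      tier = USE_CASE_TIERS.get(use_case)
      if tier is None:
          raise ValueError(f"Unclassified use case: {use_case!r}; requires a case-by-case assessment")
      return OBLIGATIONS[tier]

  if __name__ == "__main__":
      for case in ("consumer_creditworthiness", "customer_service_chatbot"):
          print(case, "->", obligations_for(case))

Running the module prints the illustrative obligations for a high-risk use case (consumer creditworthiness) and a limited-risk one (a customer-service chatbot); any use case not in the mapping raises an error, prompting a case-by-case assessment.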
