The EU’s AI Act: What You Need to Know
The EU is finalizing the first comprehensive law on AI, the “AI Act.” Last week, negotiators held a fifth round of talks to settle the final version of the text.
The law still needs to be voted on by the European Parliament in early 2024.
The law will impose strict obligations on companies that build and deploy foundation models: “The companies building foundation models will have to draw up technical documentation, comply with EU copyright law and detail the content used for training. The most advanced foundation models that pose ‘systemic risks’ will face extra scrutiny, including assessing and mitigating those risks, reporting serious incidents, putting cybersecurity measures in place, and reporting their energy efficiency.”
The law will also ban some AI uses that could hurt people’s rights and democracy, such as:
- “Biometric categorization systems that use sensitive characteristics (e.g., political, religious, philosophical beliefs, sexual orientation, race);
- “Untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases;
- “Emotion recognition in the workplace and educational institutions;
- “Social scoring based on social behavior or personal characteristics;
- “AI systems that manipulate human behavior to circumvent their free will;
- “AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).”
The law will also require high-risk AI systems, such as those that could influence elections or financial decisions, to undergo a fundamental rights impact assessment and to be transparent and accountable to the public.
The law is being praised as a breakthrough, but some observers doubt its effectiveness.
Adam Satariano writes in The New York Times: “Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for AI development. And up until the last minute of negotiations, policymakers and countries were fighting over its language and how to balance the fostering of innovation with the need to safeguard against possible harm.”
The law’s final text is still being worked out and awaits formal approval.
European Commission president Ursula von der Leyen welcomed the act, saying: “Our AI Act will make a substantial contribution to the development of global guardrails for trustworthy AI. We will continue our work at international level, in the G7, the OECD, the Council of Europe, the G20, and the UN. Just recently, we supported the agreement by G7 leaders under the Hiroshima AI process on International Guiding Principles and a voluntary Code of Conduct for Advanced AI systems.”
Scott Roxborough writes in The Hollywood Reporter that in its transparency demands for the large general-purpose programs, “The EU has used the standard applied by US President Joe Biden in his October 30 executive order, requiring only the most powerful large-language models, defined as those that use foundational models that require training upwards of 10²⁵ flops (a measure of computational complexity) to abide by new transparency laws. Companies that violate the regulations could face fines of up to 7 percent of their total global sales.”
The law is also good news for the book publishing industry: “Earlier versions of the bill decreed that companies using generative AI or foundation AI models like OpenAI’s ChatGPT or Anthropic’s Claude 2 would be required to provide summaries of any copyrighted works, including music, that they use to train their systems.”
Source: Publishing Perspectives