EU Introduces Voluntary Code for AI and Copyright Compliance

Jul 22, 2025
Interview

Introducing Vernon Yai, a leader in data protection and governance, whose insights into the EU’s latest draft code of practice for artificial intelligence offer clarity on a complex regulatory landscape. With a focus on protecting copyright, addressing systemic risks, and giving companies a roadmap to compliance, Vernon unpacks the implications of this voluntary initiative.

Can you explain the main objectives of the EU’s draft code of practice for artificial intelligence?

The main objectives of the EU’s draft code of practice are to help companies comply with AI rules by focusing on safeguarding copyright-protected content and mitigating systemic risks. It provides a framework that emphasizes transparency, safety, and adherence to the ethical standards set by the EU for artificial intelligence applications.

How does the EU’s voluntary code of practice benefit companies that choose to sign up?

Companies that sign up can benefit from legal certainty, as the voluntary code offers a structured pathway to compliance with forthcoming AI regulations. This means companies have a clearer understanding of their obligations and can avoid potential non-compliance issues, which could arise if they choose not to engage with the code.

What are the potential disadvantages for companies that do not sign up for the code of practice?

For companies that choose not to participate, the significant disadvantage is a lack of legal certainty. These firms might find themselves ill-prepared when the rules become mandatory, leading to potential legal and financial risks as they scramble to meet the necessary requirements.

Who were the main contributors in drafting this code of practice?

The draft code was crafted by a team of 13 independent experts. These individuals collectively brought a wealth of experience and insights into AI regulations, which helped in shaping a code that aims to address both the technical and ethical challenges of AI.

Which companies are specifically mentioned as being impacted by the EU’s AI rule book?

Some of the prominent companies impacted include Alphabet, Meta, OpenAI, Anthropic, and Mistral. These firms are leaders in the development and deployment of AI technologies, making them prime candidates for the application of the EU’s AI regulations.

What requirements do signatories need to fulfill in relation to the content used for training AI models?

Signatories are required to publicly disclose summaries of the content used in training their general-purpose AI models. They must also ensure that any web crawlers used to gather copyright-protected content respect rights reservations, such as machine-readable opt-outs, and that measures are in place to reduce the risk of producing outputs that infringe on copyrights.

How does the code of practice address the issue of copyright-protected content?

The code specifically mandates that companies use copyright-protected content responsibly, for example by configuring web crawlers to honor copyright reservations rather than scraping indiscriminately. This helps protect the intellectual property of content creators while ensuring that AI outputs are legally sound.
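The code does not prescribe a particular mechanism for this, but a common building block is the robots.txt protocol, which lets sites declare what automated agents may fetch. Below is a minimal sketch of an opt-out-aware fetcher using Python’s standard library; the site URL and the MyTrainingDataBot user-agent string are illustrative assumptions, not anything named in the code.

```python
import urllib.robotparser
import urllib.request

SITE = "https://example.com"          # illustrative site
USER_AGENT = "MyTrainingDataBot"      # assumed crawler identity

# Fetch and parse the site's robots.txt once up front.
rp = urllib.robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

def fetch_if_allowed(url: str) -> bytes | None:
    """Fetch a page only when robots.txt permits our user agent."""
    if not rp.can_fetch(USER_AGENT, url):
        return None  # the site has reserved its rights: skip this URL
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

page = fetch_if_allowed(f"{SITE}/articles/sample.html")
```

A real pipeline would layer further checks on top, such as honoring other machine-readable rights-reservation signals, but the principle is the same: consult the opt-out before fetching, not after.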

What steps are companies required to take to mitigate the risk of copyright infringement in AI outputs?

To mitigate copyright infringement risks, companies need to implement technical and managerial measures that reduce the likelihood of their models generating content that violates copyright laws. This involves a thorough analysis of training data and continuous monitoring of AI outputs.
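What those technical measures look like in practice is left to the companies. As a rough illustration, one common approach is to flag outputs that reproduce long verbatim spans from a reference set of protected works before they reach users. The sketch below is a simplified version of that idea; the 12-word n-gram threshold and the reference corpus are assumptions for illustration, not anything the code specifies.

```python
def ngrams(text: str, n: int = 12) -> set[tuple[str, ...]]:
    """Return the set of word n-grams appearing in a text."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def build_index(protected_texts: list[str], n: int = 12) -> set[tuple[str, ...]]:
    """Index every n-gram found in the protected reference corpus."""
    index: set[tuple[str, ...]] = set()
    for text in protected_texts:
        index |= ngrams(text, n)
    return index

def flag_output(output: str, index: set[tuple[str, ...]], n: int = 12) -> bool:
    """True if the output shares any long verbatim span with the corpus."""
    return bool(ngrams(output, n) & index)

# Usage: screen a generation before it is returned to the user.
corpus_index = build_index(["text of licensed or protected reference works"])
if flag_output("candidate model generation", corpus_index):
    print("Potential verbatim reproduction detected; route for review.")
```

Production systems typically rely on scalable approximations such as hashed n-grams or Bloom filters, but the monitoring principle is the same.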

What measures are proposed in the code to tackle systemic risks associated with AI?

To address systemic risks, companies must establish frameworks that identify, assess, and manage these risks. This involves constant evaluation and the implementation of strategies that ensure AI technologies operate safely within established norms.

How are the transparency and copyright guidelines different for general-purpose AI versus advanced models?

For general-purpose AI, the guidelines focus on transparency regarding data usage and on ensuring copyright compliance. For the most advanced models, such as those developed by OpenAI and Google, the code imposes additional safety and security requirements to ensure these powerful tools are used responsibly.

Can you give examples of the most advanced AI models that the code specifically targets for safety and security measures?

The code targets advanced AI models such as OpenAI’s ChatGPT, Google’s Gemini, Meta’s Llama, and Anthropic’s Claude. These models are at the forefront of AI development, necessitating additional oversight to manage their broader influence and potential risks.

What are the transparency obligations placed on high-risk AI systems under the EU’s AI Act?

High-risk AI systems under the AI Act face stringent transparency obligations, which include detailed documentation and reporting on their operational processes. These measures aim to ensure accountability and foster trust in AI systems that have significant societal impacts.

How do the regulatory requirements differ for military, crime, and security AI applications compared to general-purpose models?

For military, crime, and security AI applications, the regulations impose stricter controls due to their potential impact on society. These applications must meet heightened standards for transparency, oversight, and ethical use compared to general-purpose AI models.

When do the AI rules for large language models become legally binding?

The rules for large language models will become legally binding on August 2. Companies should be working toward full compliance from that date, although enforcement is phased in over the following two years, as I’ll explain.

What’s the timeline for the enforcement of these AI rules for new and existing models?

For new models, the rules will be enforced starting August 2, 2026, giving companies one year to comply. Existing models are granted until August 2, 2027, to align with these regulations.

How does the design of the code align with the needs of AI stakeholders, according to EU tech chief Henna Virkkunen?

Henna Virkkunen highlights that the code was co-designed with input from AI stakeholders, ensuring it meets their needs. The collaborative effort aims to provide a clear path to compliance, reducing uncertainty and encouraging adoption.

What are the next steps for the code of practice before it can be implemented, and when is this expected to happen?

Before implementation, the code of practice requires approval from EU countries and the Commission. It’s expected that the green light will be given by the end of the year, paving the way for its official adoption and impact.

What is your forecast for the impact of the EU’s AI regulations?

Looking ahead, these regulations will likely become a benchmark for global AI governance, influencing how other regions approach AI oversight. As companies adapt to these guidelines, we can anticipate a shift towards more responsible AI development and deployment worldwide.
