David Altenschmidt
June 23, 2023

EU Issues Tougher Guidelines for Controlling Fake News From AI

“Mark my words, AI is far more dangerous than nukes … why do we have no regulatory oversight?” – Elon Musk [1]

In the digital age, the European Union is taking action to address the challenge of fake news generated by Artificial Intelligence (AI). A strengthened EU Code of Practice on Disinformation was recently launched and requires signatories such as Google, Facebook, and YouTube to label AI-generated content.

At the Code's launch on Monday, June 5, Commission Vice-President Věra Jourová urged signatories to deploy technologies that recognize AI-generated content and clearly label it for users.

The Code was originally presented in 2022 and has since gathered 56 signatories, including Google, Facebook, YouTube, and TikTok. Twitter, however, decided last month to withdraw from the agreement. The Code itself is voluntary, but under the Digital Services Act (DSA), very large online platforms that fail to comply can face fines of up to six percent of their global revenue.

The new rules are being implemented ahead of the 2024 European elections. They aim to combat Russian disinformation, expand fact-checking teams, increase researchers' access to platform data, and introduce safeguards against the malicious use of AI to spread disinformation. The Commission is urging signatories to start labeling AI-generated content as soon as possible and to present their labeling measures by July.

Beyond the new rules, Vice-President Jourová also remarked that she sees no right to freedom of expression for machines. The EU is clearly taking serious measures to control AI-generated disinformation, a frontier that is difficult to police.

In conclusion, the European Union is moving decisively against disinformation generated by Artificial Intelligence, particularly ahead of the 2024 European elections. The strengthened Code has been signed by several large online platforms and comes with incentives for labeling AI-generated content, expanded fact-checking teams, and safeguards against malicious AI use. With stricter rules now in force, the EU is working to protect its digital integrity and the safety of its citizens. [2]

At the Center for Deep Tech Innovation, one of our fields of expertise is Artificial Intelligence. Tougher guidelines for AI-generated disinformation are important because they help ensure that users have access to accurate and truthful information online. The Center's Dr. Marcel Müller comments on the topic:

“I have been in AI for 10 years, and you always see the same cycle: innovation, a shock moment, people calling for regulation. There is a delicate balance between regulating and harming innovation in the process. Innovators and regulators have to work hand in hand to get the maximum out of this for the European Union.” – Dr. Marcel Müller, Director, Center for Deep Tech Innovation

Follow us on LinkedIn or get in touch directly if this topic is close to you.
