The European Union’s (EU) new rules on artificial intelligence (AI) enter into force in August 2024, bringing significant changes to how digital content and data are managed. A key element of the regulation is the restriction it places on the availability of data for AI development, in particular the use of text databases and the protection of online content.
As a result, many content providers, including major news portals, are increasingly restricting access for search bots: by 2024, around 30% of websites block such automated data collection, up from only a few percent previously. In an interview, AI expert Levente Szabados explains that these measures aim to strengthen privacy, transparency and user security.
The importance of transparency and the responsibility of Member States
Szabados stresses that the EU AI regulation makes transparency a priority: the regulation requires all AI-based developments to document what data and methods have been used to train algorithms. Individual Member States must strictly follow the EU requirements, while enforcement remains a national responsibility.
The EU’s central body supports national enforcement authorities by developing good practices and guidelines that help developers and operators of AI systems comply with the legislation. According to Szabados, a key element of transparency is that consumers know when content, such as an image, has been created using AI. This labelling also serves as a protective mechanism, as it allows a clear distinction between original and generated content.
Challenges in complying with child protection and data protection rules
The new regulation also introduces stricter rules on the use of social media, in particular to protect children’s rights. However, Szabados noted that obliging large platforms to comply is a challenge, as services constantly appear and disappear on the internet. This rapid turnover often makes it difficult for authorities to monitor such content and leads to enforcement backlogs.
Automated AI systems to detect illegal content
Szabados also highlighted initiatives by UN child protection agencies that use AI to detect and filter out illegal content, such as child sexual abuse material. Such tasks are extremely demanding for human reviewers, so AI-based automation can greatly assist effective filtering. In situations where there is clear consensus around regulation, AI can support child protection measures quickly and efficiently.
The start of a new era in digital content protection
While the EU’s new AI regulation focuses on transparency, accountability and consumer rights, one of its effects is that more and more data is becoming inaccessible for AI development. According to Levente Szabados, the regulation is a step in the right direction that can help make the online space safer in the long term, but a number of challenges remain in its practical implementation.