Are we serious about regulating AI?
AI can be tricked into saying almost anything, as a BBC journalist recently discovered when he found an easy way to manipulate the answers chatbots give.
Are authorities doing enough to regulate AI? At the EU level, is the AI Act doing its job?
“You can hack ChatGPT, Gemini, AI Overviews. It is as easy as writing a blog post.”
BBC journalist Thomas Germain ran an experiment: he managed to make three AI tools – ChatGPT, Google’s AI Overviews, and Gemini – tell users that he was exceptionally good at eating hot dogs.
As a result, the AI tools began presenting this claim as an established fact.
More worryingly, Germain found dozens of examples of AI tools being manipulated to promote businesses or spread misinformation. Altering the answers these tools give the public turns out to be surprisingly easy.
As AI is increasingly used by people for work or for everyday questions, including health-related queries, this is far from reassuring.
And this is only one of the risks posed by the widespread use of AI. Other risks include the massive spread of misinformation through fake video or audio content.
So what are authorities doing to mitigate those risks?
The AI summit that took place in India last week did not deliver on that front. CEOs of the biggest AI companies, alongside a few world leaders, came up with only voluntary commitments. And these commitments did not focus on safe AI use, but rather on data sharing and improving AI tools in underrepresented languages.
Speaking of voluntary practices, the EU did produce a code of practice last year for general-purpose AI, and major companies like OpenAI and Google signed it.
However, according to various tech and AI experts, this has not significantly reduced risks.
Catelijne Muller, co-founder of ALLAI, an independent organisation promoting responsible AI, argues that self-regulation and voluntary commitments simply do not work – only binding regulation does.
Can the European Union make a difference?
The EU was the first, in 2024, to adopt the AI Act, which sets rules for trustworthy AI in the Union. These rules are meant to address risks to people’s health, safety and fundamental rights.
For example, harmful AI-based manipulation is strictly prohibited.
The AI Act also restricts authoritarian-style practices such as social scoring, certain forms of facial recognition, and emotion recognition systems.
But when it comes to more subtle risks – like the spread of misleading or harmful information – the regulation is less effective.
This is partly due to how AI systems work and how quickly they evolve. But it also has to do with the fact that AI is considered a highly strategic technology, increasingly embedded in the global economy.
There is therefore a major challenge for the EU: to take part in AI innovation, not just to be seen as the regulator.
The EU needs to regulate smartly if it wants to have a real impact, because globally, the two biggest AI players, the US and China, are not regulating AI use.
The hope that the EU’s first-of-its-kind regulatory approach would influence the rest of the world is slowly fading. EU policymakers initially intended the AI Act to serve as a global blueprint, a phenomenon often called the “Brussels effect.”
While this approach has worked in other areas, it is not clearly working in AI.
In the US, for example, the Trump administration moved away from regulation and even revoked executive orders adopted under Joe Biden on safe and responsible AI.
AI regulation faces many challenges.
But on a more optimistic note, French neuroscientist Albert Moukheiber believes that people will gradually adapt their perception of AI-generated content. According to him, humans are already trained to be suspicious when interacting with other humans. They will likely learn to apply the same critical thinking to machines.