Poland Sounds Alarm Over TikTok’s AI-Generated Content: Is Europe’s Democracy at Risk?
In a move sparking intense debate, Poland has called on the European Commission to investigate TikTok after the platform hosted AI-generated content advocating for Poland’s exit from the European Union. Polish officials say the content is almost certainly Russian disinformation, raising questions about TikTok’s compliance with the EU’s Digital Services Act (DSA). Could this be a wake-up call for how AI-driven content threatens democratic processes across Europe?
A TikTok profile featuring videos of young women dressed in Polish national colors gained traction in recent weeks, urging Poland to leave the EU. The profile has since vanished, but the fallout continues. Deputy Digitalization Minister Dariusz Standerski warned in a letter to the Commission that such content poses a ‘threat to public order, information security, and the integrity of democratic processes’ in Poland and beyond. He argued that TikTok’s handling of synthetic audiovisual materials suggests it’s failing to meet its obligations as a Very Large Online Platform (VLOP).
The content reportedly contained syntax patterns characteristic of Russian, leading Polish officials to label it ‘undoubtedly Russian disinformation.’ TikTok responded by removing the content, stating that it violated the platform’s rules. But is that enough? The DSA requires platforms like TikTok to assess and mitigate risks associated with AI-generated content, and the Commission had already requested information from TikTok and other platforms on their AI risk-management measures in March 2024. This incident suggests enforcement may still be lagging.
The stakes are high. With EU countries increasingly wary of foreign interference in elections and local politics, TikTok’s role in amplifying potentially harmful content is under scrutiny. Last year, the Commission launched formal proceedings against TikTok over suspected election interference in Romania. Poland’s complaint now adds to that mounting pressure.
Here’s the bigger question: Are platforms like TikTok doing enough to combat AI-generated disinformation, or are they inadvertently becoming tools for foreign manipulation? The DSA empowers the Commission to fine non-compliant platforms up to 6% of their global turnover, but will that be enough to deter misuse? As AI technology advances, the line between free expression and harmful content grows blurrier. What do you think? Is TikTok a victim of its own algorithms, or does it bear greater responsibility for the content it hosts? Let’s discuss in the comments: your perspective could shape the conversation.