Poland Asks EU to Probe TikTok’s Use of AI-Generated Content Over Misinformation Fears

Key Points
  • Poland urges the EU to investigate TikTok’s handling of AI-generated content amid misinformation concerns
  • Warsaw cites potential breaches of the Digital Services Act’s requirements on risk and algorithm management
  • A probe could force changes or sanctions if TikTok fails to meet EU digital safety standards

Poland has urged the European Commission to open a formal investigation into TikTok over concerns about the spread of AI-generated content on its platform. Warsaw’s appeal highlights rising alarm among EU governments about how generative artificial intelligence can fuel misinformation and influence public discourse ahead of elections.

Polish officials say TikTok’s algorithms may be amplifying deepfakes and other AI-crafted content without sufficient safeguards. They argue this trend could distort political debate and mislead voters. Those concerns have gained urgency as the platform’s popularity has surged among young users, who increasingly rely on it for news and information rather than traditional media.

Poland’s request points to perceived gaps in how TikTok moderates AI-generated material. Warsaw believes current content rules fall short of adequately identifying and labeling machine-made media. Officials want Brussels to determine whether TikTok is complying with the EU’s Digital Services Act, which requires large platforms to manage risks tied to algorithmic amplification of harmful content.

Under the Digital Services Act, very large online platforms must assess systemic risks, introduce measures to mitigate them, and report on compliance. These obligations include curbing the spread of manipulated media that could harm “democratic processes.” Poland’s appeal asks EU regulators to verify whether TikTok’s policies meet these standards or warrant enforcement action.

The move underscores a broader European push to regulate artificial intelligence across social platforms. EU lawmakers have been drafting rules demanding stronger transparency, risk management, and accountability for AI systems. Poland’s request adds urgency, suggesting member states want active enforcement rather than delayed rulemaking.

TikTok has faced similar scrutiny in other regions. Regulators in the United States and United Kingdom have previously expressed concerns about algorithmic recommendations and content moderation. Critics argue that unchecked AI output can trick users, spread false narratives, and undermine trust in legitimate news sources.

EU officials have not yet publicly confirmed whether they will launch a probe. If the European Commission opens an investigation, it could lead to fines or compulsory changes to how TikTok’s AI systems operate in Europe. The commission has broad powers under the Digital Services Act to sanction platforms that fail to meet requirements.

TikTok representatives have defended the platform’s safety measures, saying the company has invested in content moderation and AI detection tools. TikTok insists it works with independent fact-checkers and uses both human reviewers and automated systems to limit the reach of deceptive content.

Despite these assurances, EU member states remain wary. The rapid advance of generative AI has outpaced many existing regulatory frameworks. Policymakers worry that without stricter oversight, social media platforms could become conduits for coordinated manipulation, especially around key political events such as elections and referendums.

Poland’s appeal arrives as the EU prepares to finalize more comprehensive AI rules. The bloc’s Artificial Intelligence Act is expected to set strict standards for high-risk AI applications, including those used in public information ecosystems. Enforcement of these laws may intersect with ongoing digital safety efforts.

Critics of TikTok say the platform’s growth and design make it particularly susceptible to AI misuse. Short, algorithmically tailored video feeds can rapidly spread engaging but misleading content before moderators intervene. This “attention economy” concern amplifies calls for transparency in how recommendation systems work.

As the debate unfolds, EU regulators will weigh the need to protect free expression against the imperative to curb disinformation. Poland’s complaint could trigger closer examination of how AI-generated media is managed on the continent’s most popular social platforms, shaping future tech policy enforcement across Europe.