Trump Opposes EU Digital Services Act

Tensions are once again mounting between Brussels and Washington over the European Union’s Digital Services Act (DSA) — a sweeping regulatory framework that, while framed as a safeguard for online “safety” and “civic integrity,” is being viewed in Washington as something far more intrusive: a threat to free speech.

At the heart of the dispute is the claim — stated plainly by U.S. lawmakers like Rep. Jim Jordan and echoed by Vice President JD Vance — that the DSA, under the guise of mitigating “systemic risks,” may be erecting a global infrastructure for censorship. And despite the EU’s insistence that it’s doing no such thing, the law’s wording and enforcement mechanisms tell a more troubling story.

The European Commission’s Executive Vice-President for Tech Sovereignty, Security and Democracy, Henna Virkkunen, pushed back in a letter this month, saying the DSA “does not regulate speech” and that Europe remains “deeply committed to protecting and promoting free speech.” But the U.S. isn’t convinced — and with good reason.

The concern isn’t just about Europe. It’s about what happens when the world’s largest digital platforms make compliance decisions for one region that end up affecting users everywhere. Jim Jordan’s warning is straightforward: when global platforms face vague, expansive, and heavily punitive legal mandates like the DSA, they don’t silo those rules to one geography. They apply them globally — because it’s easier, safer, and cheaper than building and enforcing region-specific policies.

We’ve seen this before. The EU’s General Data Protection Regulation (GDPR) didn’t stay confined to Europe. It reshaped digital privacy practices worldwide. Now, the DSA is poised to do the same for speech — only this time, it’s not about securing personal data, it’s about defining what people are allowed to say, read, and post.

Articles 34 and 35 are the fulcrum. They require the largest platforms to assess and mitigate “systemic risks,” but offer little legal clarity on what that actually means. “Civic discourse,” “electoral processes,” “public health” — these are not criminal categories. They are sprawling, subjective concepts that change with every news cycle and political climate. And yet, under the DSA, platforms that fail to manage these risks face fines of up to six percent of global annual revenue — a penalty large enough to bring even the biggest companies to heel.

Faced with that kind of exposure, platforms won’t wait to be punished. They’ll preemptively remove anything that might be interpreted as problematic. The result isn’t enforcement of law. It’s corporate risk aversion dressed up as content moderation. It’s the silent deletion of the lawful-but-controversial — the very kind of speech that courts in liberal democracies have long considered vital to public discourse.

Virkkunen argues that the DSA is “content-agnostic,” simply a matter of procedure. But in practice, procedural rules with vague standards and heavy consequences are indistinguishable from direct content regulation. Telling a platform to “mitigate risks to civic discourse” without telling them what that means is a license for preemptive censorship.

And the real irony? The European Commission, which answers to no electorate outside its borders, is poised to become the de facto global moderator of online content. Not by force, but by regulatory weight — the so-called “Brussels Effect” that leads companies to adopt the strictest global standards across the board.

For the U.S., this raises fundamental constitutional concerns. The First Amendment protects not just polite or popular speech but also, as the English judge Lord Justice Sedley once put it of free expression, “the irritating, the contentious, the eccentric, the heretical, the unwelcome and the provocative.” That’s not a loophole. It’s the point.
