LEFT-WING RESEARCHERS WANT TO LIMIT RIGHT-WING DIALOGUE!
The EU's DSA requirement for researcher data access (Article 40) gives "vetted" researchers—typically academics or non-profits approved by national regulators—easier access to public X data such as post engagement, views, and networks. The official goal is studying "systemic risks" (e.g., disinformation spread). Critics argue this can chill or deter honest, open dialogue in several ways:
Broad and subjective labeling of "disinformation" or "harmful" speech: Researchers studying political topics can flag dissenting or unpopular views (e.g., on immigration, elections, gender issues, or COVID policies) as "misinformation" if they don't align with mainstream narratives, producing reports that pressure platforms or governments to suppress those views.
Doxxing and harassment risks: Detailed data (e.g., who engages with controversial posts) can reveal user networks or identities, even if posts are public. Ideologically motivated researchers or leaks could expose people to real-world harassment, job loss, or threats—similar to past cases where academics have publicized lists of "hate speech" accounts.
Chilling effect on everyday users: Knowing that researchers (often from left-leaning universities or NGOs) can deeply analyze political posts makes people self-censor. Users may avoid blunt criticism, jokes, or minority opinions for fear that their activity will be profiled in a study portraying them as part of a "toxic" or "extremist" group.
Selective targeting of one side: Vetted researchers tend to focus on right-wing or conservative content (as seen in pre-DSA studies on "hate speech" or "election interference"). This creates uneven scrutiny—progressive views rarely get the same treatment—discouraging open debate from certain perspectives.
Amplification into policy pressure: Research outputs often feed into EU reports or demands for more moderation, creating a feedback loop where "unfavorable" speech is indirectly penalized without direct censorship.
In practice, no widespread "hunting down" of individuals has been documented under the DSA yet (it's still early), but critics (including Elon Musk and free-speech advocates) see it as a tool that empowers biased academics to monitor and stigmatize lawful but controversial speech, making platforms feel less safe for honest political talk. Supporters say safeguards (vetting, public results) prevent abuse, but the concern is real for anyone skeptical of who gets to define "harmful."
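For context, the "public data" at issue is engagement metadata of the kind X already exposes through its v2 API. Below is a minimal sketch, assuming you hold a valid bearer token; the /2/tweets endpoint and the public_metrics field are documented parts of X's v2 API, though the exact base URL and response shape may change. The DSA dispute is about how much friction X may put between researchers and this kind of data at scale.

```python
import os
import requests

# Minimal sketch: fetch public engagement metrics for a few posts via
# X's v2 API. Assumes X_BEARER_TOKEN is set in the environment and that
# the documented /2/tweets endpoint is available to your access tier.
BASE_URL = "https://api.twitter.com/2/tweets"

def fetch_public_metrics(post_ids):
    """Return {post_id: public_metrics} for the given public post IDs."""
    resp = requests.get(
        BASE_URL,
        headers={"Authorization": f"Bearer {os.environ['X_BEARER_TOKEN']}"},
        params={
            "ids": ",".join(post_ids),
            "tweet.fields": "public_metrics",  # likes, reposts, replies, views
        },
        timeout=10,
    )
    resp.raise_for_status()
    return {t["id"]: t["public_metrics"] for t in resp.json().get("data", [])}

if __name__ == "__main__":
    # Post IDs here are placeholders, purely for illustration.
    for post_id, metrics in fetch_public_metrics(["20", "21"]).items():
        print(post_id, metrics)
```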
Separately, EU regulators have now ruled against X on three counts:
Paid blue checkmarks are "deceptive": the EU says allowing anyone to buy a blue tick (via X Premium) misleads users into thinking it signals real identity verification, as under the old system – increasing risks of scams, impersonation, and manipulation.
Lack of ad transparency: X must maintain a fully accessible, searchable public repository of all ads (including who paid and targeting details) so researchers and users can spot scams or illegal ads – the EU ruled X's version falls short. (A hypothetical sketch of what querying such a repository could look like appears at the end of this post.)
Blocking researcher data access: X is required to provide vetted researchers with public data (e.g., post views, engagement metrics) to study systemic risks like disinformation – the EU says X imposes unnecessary barriers.
These three violations just led to a €120 million fine (announced Dec 5, 2025) under the Digital Services Act (DSA).
Ongoing investigations (no fines yet):
Handling of illegal/harmful content and user reporting tools.
Effectiveness against disinformation and manipulation.
Transparency/explainability of algorithms (how content is recommended/promoted).
In short: The EU wants more transparency, less user deception, and better tools for oversight – not direct content censorship in these cases. X can appeal the fine, and bigger probes are still open.
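To make the ad-repository requirement concrete, here is a sketch of the kind of query such a repository is meant to support. Everything below is hypothetical: the endpoint, parameter names, and response fields are invented for illustration and are not X's actual ads-repository API.

```python
import requests

# Hypothetical sketch only: the URL, parameters, and fields below are
# invented to illustrate what a searchable public ad repository could
# expose. They are NOT X's real ads-repository API.
ADS_REPO_URL = "https://example.com/dsa/ads-repository/search"  # hypothetical

def search_ads(keyword, country="DE", limit=20):
    """Search a (hypothetical) public ad repository by keyword and country."""
    resp = requests.get(
        ADS_REPO_URL,
        params={"q": keyword, "country": country, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    for ad in resp.json().get("ads", []):
        # A compliant repository would disclose who paid for each ad and
        # how it was targeted, which is what regulators say is missing.
        print(ad.get("sponsor"), ad.get("targeting"), ad.get("run_dates"))

if __name__ == "__main__":
    search_ads("election")
```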