Senate’s Bold Push: Ending Big Tech’s Free Ride

Washington’s rare bipartisan fury at Big Tech is now targeting the legal shield that helped build the modern internet—and critics say it has outlived its purpose.

Quick Take

  • Lawmakers from both parties are pressing to narrow or sunset Section 230, the law that generally protects platforms from liability for user posts.
  • Proposals focus on carving out accountability for child exploitation, suicide-related harms, and algorithm-driven amplification, not merely “speech.”
  • Supporters argue today’s platforms behave less like neutral message boards and more like powerful publishers through recommendation systems.
  • Skeptics warn sweeping changes could entrench Big Tech by crushing smaller competitors and encouraging broad censorship to reduce legal risk.

Why Section 230 Is Back in the Crosshairs

Section 230, enacted in 1996 as part of the Communications Decency Act, generally prevents social media and other online platforms from being treated as the “publisher or speaker” of content posted by users. That legal immunity helped the internet grow, but it also limited victims’ options to sue when harm is linked to content hosted—or promoted—by major platforms. As the law reaches its 30-year mark, courts and lawmakers are revisiting how far immunity should extend.

Sen. Lindsey Graham and Sen. Richard Blumenthal have become prominent voices arguing that what once protected early internet services now shields powerful companies from accountability, including in cases involving child exploitation and self-harm. Sen. Josh Hawley has also advocated reforms that would allow victims to bring lawsuits. The basic argument is that immunity has become so broad that it can insulate companies even when critics believe platform design choices contribute to real-world damage.

The Algorithm Question: Speech vs. Amplification

A central fault line is whether platform algorithms should be treated like protected speech or like product design that can create foreseeable harm. Some lawmakers argue that recommending, ranking, and amplifying content is not the same thing as simply hosting a user’s post. Rep. Ro Khanna has drawn a sharp distinction between human speech and algorithmic curation, arguing that the First Amendment does not automatically protect how platforms route attention for profit. That framing aims to preserve open discussion while targeting engineered amplification.

The distinction matters because modern platforms do not operate like the 1990s message boards Section 230 was written for. Recommendation engines can steer users toward increasingly extreme or addictive material, and critics contend those systems can expose children to sexual content or predatory behavior. At the same time, courts have historically interpreted Section 230 broadly, which is why the legal and policy debates often come down to where “content” ends and “conduct” begins.

Where the Bipartisan Coalition Aligns—and Where It Splits

Republicans and Democrats are arriving at similar conclusions for different reasons. Many Republicans remain focused on claims that platforms have suppressed viewpoints, from COVID-era debates to election controversies, and they argue that federal pressure on platforms can blur into censorship. Many Democrats emphasize youth safety, mental health, and corporate incentives that reward engagement over well-being. The overlap is a shared belief that concentrated tech power, backed by legal immunity, has produced outcomes voters never consented to.

The disagreement is over what reform should look like. Some proposals would sunset Section 230 altogether unless Congress reauthorizes it, while others seek narrower carve-outs—especially around child sexual abuse material or algorithmic promotion. Sen. Ron Wyden, a key architect of Section 230, has warned that overly broad reforms could punish smaller sites and nonprofits that rely on moderation to keep communities usable. That concern has become a major brake on sweeping legislation, even when outrage is bipartisan.

What Changes Could Mean for Speech, Safety, and Competition

Reform could expand opportunities for civil lawsuits, especially for families alleging platforms enabled exploitation or amplified harmful content. Supporters say that threat of liability is the missing incentive to force serious safety engineering, not just public-relations moderation. Critics counter that companies would respond by deleting more lawful content, limiting user access, or over-filtering to reduce risk—potentially shrinking online speech. The experience with earlier carve-outs has fueled warnings about overcorrection and collateral censorship.

Economically, new liability exposure could hit smaller competitors hardest, because compliance costs favor firms with large legal teams and deep cash reserves. That is the paradox facing conservative reformers: rules meant to discipline Big Tech could unintentionally reinforce Big Tech's dominance if they are written without a clear small-platform safeguard. With Republicans controlling Congress and the White House in 2026, the practical test will be whether lawmakers can craft a narrower accountability regime, focused on illegal exploitation and algorithmic conduct, without turning Section 230 reform into a backdoor speech police.
