Edited by: TJVNews.com
Senator Ted Cruz (R-Texas), Ranking Member of the Senate Committee on Commerce, Science, and Transportation and a member of the Senate Judiciary Committee, on Monday sent a letter to the social media companies Meta, Google, Twitter, and TikTok, launching an oversight investigation into the companies' use of recommendation algorithms and their reported use of "blacklists," "de-emphasizing," and other means of "reduced distribution" of content from users, including many conservatives.
In his letter to these Big Tech companies, Sen. Cruz wrote:
“As you are well aware, social media companies rely on algorithms to not only moderate content, but also to surface personalized recommendations to users. Recommendation systems play an increasingly ubiquitous role in selecting content for individual consumption, including by promoting some content, using product design elements to prominently display recommendations, and down ranking or filtering disfavored content and accounts[…]
“The design of these systems is especially important in light of the Gonzalez v. Google LLC case before the U.S. Supreme Court this term, which concerns whether Section 230 immunizes platforms when they make targeted recommendations of third-party information.
“Recommendation systems are separate and distinct from algorithms that rank or otherwise organize content that a user is already following or subscribed to. Taken as a whole, these systems have an outsized impact—whether positive or negative—on the reach of content and accounts and, by extension, speech[…]
“At their best, recommendations help users discover interesting or relevant content that they might not otherwise find on a platform. However, recommendation systems can also fuel platform addiction by feeding users an essentially infinite stream of content. This can be especially dangerous when recommendations make it easier for vulnerable users, especially teenagers, to find objectively harmful content, such as content that promotes eating disorders and self-harm[…]
“In addition to my concerns about the addictive nature of these systems, I am equally concerned with how censorship within recommendations impacts the distribution of speech online. In a world where seven out of ten Americans receive their political news from social media, the manner in which content is filtered through recommendation systems has an undeniable effect on what Americans see, think, and ultimately believe.”
He added, "For example, the Twitter Files revealed—and Meta CEO Mark Zuckerberg also confirmed—that both platforms heavily censored the New York Post story about emails on Hunter Biden's laptop just two weeks before the 2020 election. This censorship ostensibly included suppressing the story in recommendations."
Today’s behemoth social media platforms appear to have adopted the view that a user’s ability to post content does not entitle the user to distribute content. In a 2018 article for WIRED, a liberal academic groused that “politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online” needed to be reminded “that free speech does not mean free reach.”
In other words, as the theory goes, platforms are not restricting speech when they throttle a social media poster’s otherwise benign content, including via recommendations. This kind of soft censorship is still censorship. Likewise, manual and algorithmic interventions that reduce the reach of content—including filtering content from recommendations—are analogous to other interventions, such as content removals, in that they still restrict the poster’s legitimate speech.
The U.S. Supreme Court has also recognized that the First Amendment protects both speakers and their audiences. In Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council, Justice Harry Blackmun, writing for the majority, stated, “[w]e are aware of no general principle that freedom of speech may be abridged when the speaker’s listeners could come by his message by some other means, such as seeking him out and asking him what it is.”
Senator Cruz concluded his letter by saying, “this letter also serves as a formal request to preserve any and all documents and information, inclusive of e-mails, text messages, internal message system messages, calls, logs of meetings, and internal memoranda, related to the development, deployment, scope, and impact of any current, former, or planned recommendation systems on your platforms.”
As part of his investigation, Sen. Cruz is asking Meta, Google, Twitter, and TikTok to provide information and documents regarding the scope of these companies' content recommendation systems, their effect on the distribution of content, manual intervention in the recommendations process, how political speech is treated, and what protocols for transparency and due process currently exist regarding these algorithms.

Background:
Many Americans are rightly concerned about Big Tech’s pervasive deployment of viewpoint censorship online. As the technology has evolved, so too has the arsenal of tools by which social media companies can conduct censorship.
In addition to deleting content and accounts, companies like Meta, Google, Twitter, and TikTok also do things like:
Placing users in so-called “jail” for posting content that offends,
Shadow-banning users so that their posts are not seen by others,
Pushing users’ content down in ranked feeds, and
Removing users and their content from recommendations.
The suppression of speech that happens across the world’s largest social media platforms is breathtaking in its scope, near-uniformity, and sheer scale. The “Trust and Safety” apparatuses at these companies were originally brought in to tackle truly harmful and dangerous content.
Indeed, the goal of permitting some level of content moderation to keep users safe is part of the reason that Congress in 1996 passed legislation to provide a safe harbor from civil liability for "good faith" actions to restrict access to content that is "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."
Notably, nowhere does Section 230 provide a good faith carve-out for Trust and Safety teams to censor opinion. However, the moderation that “Trust and Safety” teams do today strays far outside the good faith boundaries originally prescribed in Section 230.
There are countless examples of content moderation that has explicitly targeted right-leaning—or simply not-mainstream—views:
Facebook employees routinely suppressed news stories of interest to conservatives,
Google demonetized The Federalist because of comments posted by third-party users, and
Twitter labeled factually accurate content about COVID as "misleading."
As Sen. Cruz’s letter references, in Gonzalez v. Google, the Supreme Court has been asked to determine whether platforms are immunized under Section 230 when they make targeted recommendations of third-party content.
Previously, Sen. Cruz joined an amicus brief in the Gonzalez v. Google case arguing that courts have incorrectly expanded the scope of Section 230 immunity from civil liability, allowing Big Tech companies to escape scrutiny for their targeted recommendations.