Push to rein in social media sweeps the states
Lawmakers in 34 red and blue states want to crack down on how online companies handle users' content. But those efforts are colliding with the First Amendment.
By REBECCA KERN
Efforts to police speech on social media are spreading across the country, with lawmakers in 34 states pushing bills that are already setting up court battles with tech giants over the First Amendment.
State legislators have introduced more than 100 bills in the past year aiming to regulate how social media companies such as Facebook and Twitter handle their users’ posts, according to POLITICO’s analysis of data from the National Conference of State Legislatures. However, only three bills have become law, including statutes in Texas and Florida aimed at punishing platforms that Republicans accuse of censoring conservatives — and federal courts have blocked those two states’ measures from taking effect.
Blue states are joining the trend as well, though Democrats’ emphasis is on pressing social media companies to establish policies for reporting hate speech, violent content and misinformation.
The states’ efforts — in the absence of federal action — could test governments’ ability to regulate speech, while forcing some of the nation’s wealthiest tech companies to fight an array of legal battles against laws that could upend their business models. These fights will also present courts with a fundamental debate about how the First Amendment plays out in the online age, including the companies’ own rights to decide what content they host on their platforms.
Many legal scholars see glaring flaws in some states’ approaches. “The government cannot tell a private company what speech it can or cannot carry, provided that speech is constitutionally protected,” said Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy who has written two books about online speech.
Industry groups have warned that some of the laws — especially the ones in Texas and Florida — could wreak havoc on how they handle content worldwide.
“You cannot have a state-by-state internet,” Kosseff said. “When you step back and look at the possibility of having 50 different state laws on content moderation — some of which might differ or might conflict — that becomes a complete disaster.”
The bills fall into four major categories: More than two dozen, pushed by Republicans, seek to prevent companies from censoring content or blocking users. Others, pushed by Democrats, aim to require companies to provide mechanisms for reporting hate speech or misinformation. Lawmakers of both parties support proposals to protect children from addiction to social media. A fourth, also with bipartisan support, would impose transparency requirements.
Here is POLITICO’s look at the state of play:
Banning censorship
Conservatives’ efforts to ban social media from restricting users’ content ramped up last year, after the major social media platforms booted then-President Donald Trump following his supporters’ Jan. 6 attack on the Capitol.
Since then, legislatures in more than two dozen states — the vast majority Republican-led — have introduced bills aimed at preventing social media companies from censoring users’ viewpoints or kicking off political candidates.
Two of those have become law: Florida Gov. Ron DeSantis signed a bill (SB 7072) into law in March 2021, later updated this past April, prohibiting tech platforms from ousting political candidates. Texas followed suit last September with a law (HB 20) banning social media companies from restricting online viewpoints.
Now those laws are going through the courts, where tech companies have succeeded so far with arguments that the measures infringe on their First Amendment right to decide what content to host. The 11th U.S. Circuit Court of Appeals ruled in May that Florida’s law was largely unconstitutional, and the Supreme Court blocked the Texas law while an appellate court considers an industry challenge against the statute.
Proponents of the laws say they protect individuals’ free speech rights to share their views on the platforms. But Scott Wilkens, a senior staff attorney at the Knight First Amendment Institute at Columbia University, said the Texas and Florida laws are “pretty clear violations of the platforms’ First Amendment rights to speak themselves by actually deciding what they will and won’t publish.”
Social media companies have argued that if the Texas law goes back into effect, it may make it harder to remove hate speech, such as a racist manifesto allegedly posted online by the perpetrator of a mid-May mass shooting in Buffalo, N.Y. The major platforms eventually removed that posting after the shooting.
Additionally, the Texas and Florida laws — had they been in effect — could have left Facebook open to lawsuits for its decision in June to remove an ad from Missouri Republican Senate candidate Eric Greitens calling for the “hunting” of so-called “Republicans In Name Only.” Facebook took down the ad because the company said it violated policies prohibiting the incitement of violence. Twitter labeled the ad as violating its policy against abusive behavior, but left it visible to users, citing the “public’s interest.”
Republican-led legislatures in Ohio, Georgia, Tennessee and Michigan have introduced similar bills that would prohibit social media companies from censoring religious or political speech, or would ban platforms from removing political candidates.
Reporting ‘hateful’ content
Democrats have long pushed social media companies to do more to take down misinformation and disinformation, as well as posts attacking people along lines of race, gender or sexual orientation. Legislatures in primarily Democratic-run states — including New York and California — have introduced bills requiring social media companies to establish mechanisms for users to report hate speech to the platforms.
New York is the only state where such a proposal has successfully been enacted. Democratic Gov. Kathy Hochul signed S. 4511 in early June as part of a package of 10 bills aimed at curbing gun violence after the Buffalo shooting. The new law requires social media networks to make it possible for individuals to report hate speech on the platforms in a publicly accessible way and says the companies must directly respond to anyone who reports such speech. Companies could face fines of up to $1,000 a day if they don’t comply.
The law takes effect in December.
Democratic New York state Sen. Anna Kaplan introduced the bill last year in hopes of curbing the radicalizing effects of social media. “We are not in any way telling social media what policy to put in,” she said in an interview. “It’s not about violating the First Amendment. It’s about just empowering the users to be able to report hateful content.”
But NetChoice and the Computer and Communications Industry Association, lobbying groups representing tech companies such as Facebook, Twitter and Google, are analyzing whether the new New York law could lead to First Amendment infringements. Both groups have already filed lawsuits against the Florida and Texas laws.
“We’re concerned about the law’s constitutionality, and are raising those concerns with state lawmakers,” Chris Marchese, NetChoice’s counsel, said in an interview after the New York law was signed.
He said the New York law could violate the First Amendment because its definition of “hateful conduct” is too broad, and covers speech that’s protected by the Constitution. He added that even though New York is different from Texas and Florida, “the temptation for the government to step in is incredibly high no matter where you live.”
In California, Republican Assemblyman James Gallagher of Yuba City introduced a bill (AB 1114) that would require social media companies to explain how they handle content that involves obscenity, threats and incitements of violence that are not constitutionally protected. The bill failed to advance this session.
New York also has several pending bills that would require social media companies to provide ways to report election- and vaccine-related misinformation.
Regulating addictive algorithms
Legislation addressing children’s safety on social media platforms has some bipartisan support. Several bills have been introduced following last year’s revelations from Facebook whistleblower Frances Haugen that Instagram’s algorithms were pushing unhealthy body images on young girls.
Legislators from both parties in California and Minnesota have introduced bills to address the addictive nature of social media.
The California Assembly passed a bipartisan bill (AB 2408) in late May aiming to protect kids from addictive social media features by exposing platforms to lawsuits and fines if they knowingly harm children under the age of 18. A child user, or the child’s parent or guardian, would be able to sue a platform if the child becomes addicted to it. Penalties in a successful class action brought under the bill would be at least $1,000 per individual, potentially adding up to very large sums given the number of children using social media in California.
The bill advanced through a California Senate committee in June and is expected to go to the floor in August.
Tech advocates are raising free-speech objections about the measure.
“This has really serious First Amendment problems,” said David Greene, the civil liberties director of the digital rights nonprofit Electronic Frontier Foundation.
Dylan Hoffman, a California lobbyist for tech trade group TechNet, said the bill goes directly after platforms’ algorithms — which are used to moderate user content — and therefore infringes on their First Amendment speech rights.
“It’s clearly about the content and seeking to regulate any feature that you claim as addictive — well, what’s more addictive than showing good content?” he said. “That’s the inherent problem with this bill because you can’t divorce those two ideas.”
The bill’s sponsor, Republican Assemblyman Jordan Cunningham, disputed that argument. “It doesn’t touch or regulate content at all,” he said in an interview. “Nothing in the bill tells any social media company what they can or cannot allow users to post on their platform.”
Kosseff said ultimately he doesn’t believe “that going after algorithms gets rid of the free speech issue.” He added, “If you’re restricting the ability for speech to be distributed, then you’re restricting speech.”
However, Wilkens, of the Knight First Amendment Institute, said that while the bill may “implicate the First Amendment, it doesn’t mean that it violates the First Amendment.” He said that while it’s still up for interpretation, the legislation, if it became law, may “be held constitutional because the state’s interest here in protecting young girls seems to be a very strong interest.”
A bill (HF 3724) in Minnesota’s Democratic-controlled House would bar social media platforms with more than 1 million users from using algorithms to target content at individuals under the age of 18, but it failed to advance this session. Companies could face fines of up to $1,000 per violation.
Mandating transparency
Legislators in Mississippi, Tennessee, New York and California have introduced bills this year requiring platforms to provide transparency reports on their content moderation decisions. Both the Florida and Texas social media laws have provisions requiring such reports. The 11th Circuit upheld the disclosure and transparency requirements in Florida’s social media law in its May decision striking down other parts of the law.
“We have made the argument that there is room for government regulation in disclosure requirements,” Wilkens said. He said he thinks those bills “may very well be constitutional under the First Amendment.”
Federal legislators are contemplating emulating this bipartisan, state-level approach. Sens. Chris Coons (D-Del.) and Rob Portman (R-Ohio) have drafted a bill to mandate that companies disclose some of their data and explain how algorithms amplify certain content.
“It won’t solve the problem, but it will help us identify what the problem might actually be, and increase the chances that Congress might responsibly legislate,” Coons said in an interview.