A place where I can write...

My simple blog of pictures of travel, friends, activities and the Universe we live in as we go slowly around the Sun.



March 04, 2019

Fighting Fake News in 2020

What a Kamala Harris Meme Can Teach Us About Fighting Fake News in 2020

I tracked the spread of a bogus claim about the candidate. Here’s what I learned.

By BENJAMIN T. DECKER

Will fake news tarnish the 2020 election as much as it did in 2016? It’s tempting to think that we’ve started to solve the problem. After the 2016 election, the big social media platforms pledged to root out mis- and disinformation through self-regulation. Facebook is tripling the size of its safety and security teams to help protect election integrity. YouTube promised to reduce the spread of “borderline content.” Twitter published large data sets of potential foreign information operations for researchers to analyze. And Reddit increased the scope of its quarantine and ban policies for content such as Holocaust denial, conspiracy theories and misogyny.

But if you look at how fake news really works—how ideas and memes take root and spread across the digital universe—it’s becoming clear that these measures won’t be enough.

As the 2020 campaign gets underway, manipulated, disruptive and misleading content has already emerged. Researchers, for instance, have identified a group of Twitter accounts that have begun coordinating negative messaging—laced with racist and sexist stereotypes—directed at various Democrats. One of the key problems with policing this kind of content is that even if individual platforms do their part, misinformation is platform-agnostic; it jumps from one site to another, sometimes bursting from the darkest corners of the internet to the most open public squares too quickly for any one company to intervene. A meme that is “downranked” on Facebook for, say, a false headline can still find its way to Twitter, Instagram, YouTube or Reddit.

For a window into how this works—and what we’re missing—consider one particularly widespread meme: an attack on Kamala Harris preposterously comparing the senator, who is the daughter of Jamaican and Indian immigrants, to Rachel Dolezal, a white woman who made false claims about being black. To find out where this meme came from, I tracked down its first appearance on the internet through a series of exhaustive reverse-image searches; I then mapped its spread by conducting an analysis of all open-source image- or text-based claims tying together Harris and Dolezal. On social media, I also identified a loose network of hyperpartisans and conspiracists who worked to amplify the claim into the mainstream.
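The tooling behind that kind of trace is simpler than it sounds. As a rough illustration only (not the author's actual workflow), one common approach is to compute a perceptual hash of the earliest known copy of the meme and flag near-duplicates among images collected from other platforms. The Python sketch below assumes the `imagehash` and `Pillow` libraries and uses hypothetical file names and thresholds.

```python
# Minimal sketch: flag near-duplicate copies of a known meme image across a
# folder of images scraped from different platforms, using perceptual hashing.
# File names, folder layout and the distance cutoff are illustrative assumptions.
from pathlib import Path

import imagehash           # pip install imagehash
from PIL import Image      # pip install Pillow

REFERENCE_IMAGE = "reference_meme.jpg"   # hypothetical: earliest known copy
SCRAPED_DIR = Path("scraped_posts")      # hypothetical: images saved from other platforms
MAX_DISTANCE = 8                         # Hamming-distance cutoff; lower = stricter match

reference_hash = imagehash.phash(Image.open(REFERENCE_IMAGE))

matches = []
for path in sorted(SCRAPED_DIR.glob("**/*")):
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".gif"}:
        continue
    try:
        candidate_hash = imagehash.phash(Image.open(path))
    except OSError:
        continue  # skip unreadable or truncated files
    distance = reference_hash - candidate_hash  # Hamming distance between the two hashes
    if distance <= MAX_DISTANCE:
        matches.append((path, distance))

for path, distance in matches:
    print(f"possible copy: {path} (distance {distance})")
```

Cropped, recolored or lightly edited variants still land within a small hash distance, which is why this kind of matching surfaces re-posts that a plain reverse-image search can miss.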

The first recorded appearance of the meme was a May 2018 Reddit post on the r/The_Donald message board, a popular hub for disinformation. From there, the idea hopped to Pinterest, Twitter, 4chan’s “politically incorrect” message board, as well as Gab and Voat, two “alternative” platforms that emerged as hubs of toxic information after the 2016 election. By the time Harris announced her presidential run earlier this year, the meme had been shared more than 100 times, including across conspiracy-themed Facebook groups, several websites and a known neo-Nazi web forum.

The Harris/Dolezal meme was part of a larger ecosystem of anti-Harris memes and stories questioning her identity. The best-known is the false claim that Harris, who was born in Oakland, California, is not eligible to run for president because she was born to foreign nationals and was raised partly outside the United States.

This idea, which was traced by Media Matters, first percolated in a July 2017 tweet from an anonymous account. The claim popped up now and again for more than a year after that—in a series of May 2018 Reddit memes and on the blogs of two known conspiracists who in the past have pushed birther theories about Barack Obama and others. Memes about this attack on Harris re-emerged at the time of her presidential announcement in January, fueled largely by two tweets—one from a since-suspended QAnon conspiracy account and the other from a since-suspended Twitter user named Jacob Wohl, who had 185,000 followers and whose false claims about Harris’ “naturalization status” were retweeted more than 6,000 times. Within hours of Harris’ announcement, both recycled and new memes had flooded the feeds of conspiracy-adjacent Facebook groups and on 4chan message boards.

The shadow campaign burst into legitimate conversation when CNN anchor Chris Cuomo, who has more than 1.3 million followers on Twitter, seemed to call for Harris to produce proof of her citizenship. In a since-deleted January 22 tweet, Cuomo said, “hopefully there will be no games where the issue keeps changing for righty accusers...and...the legit info abt Harris comes out to deal with the allegation ASAP. The longer there is no proof either way, the deeper the effect.” Perhaps unintentionally, Cuomo (who later apologized) pushed the story into the national news cycle. Journalists and talking heads jumped on the topic, as did more conspiracy theorists. Media Matters, as well as PolitiFact and Snopes, published fact-checks that, while well-intentioned, gave these falsehoods more longevity.

Disinformation agents—whether domestic political operatives, far-right trolls or those acting purely for the “lulz”—operate a bit like brushfire arsonists. They set small blazes of false information in places such as 4chan, Reddit and Gab, where it is easy for sparks to jump over the firebreaks and move to more mainstream platforms. More bad actors stand at the ready to fan the flames once a meme is in wider circulation.

How can we stop this? It’s not that YouTube, Twitter and other platforms can’t police conspiracies like those about Harris on their own sites, or that fact-checkers shouldn’t continue to correct false information as it pops up. But because these kinds of memes often exist on less high-profile sites before they go viral, we can and should start to target the origin of suspect content and its dissemination across platforms. This is best achieved through collaboration and information-sharing.

What I propose is a cross-platform hub in which independent researchers and journalists, as well as members of major social media outlets’ security teams, would work together to inform one another of emerging threat intelligence by parsing through hashtags, memes, videos and websites. For example, the soft spike in Harris-related birther content in the spring and summer of 2018 could have been flagged by researchers and reported to social media platforms’ trust and safety teams. These teams would in turn monitor the meme on the internet and share information about its spread. If the bad content made it to their own platforms, they would be prepared to counter it, whether through fact-checking, down-ranking, allowing users to filter content or removing content altogether.
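To make the idea concrete, here is one minimal sketch of the kind of structured record a researcher might file and a platform's trust and safety team might consume. It is purely an assumption about what such a hub could exchange, not a description of any existing system; the field names and example values are invented for illustration.

```python
# Illustrative sketch only: one possible shared record format for the proposed
# cross-platform hub. Field names and example values are assumptions, not an
# existing standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List, Optional
import json


def _utc_now() -> str:
    return datetime.now(timezone.utc).isoformat()


@dataclass
class ThreatReport:
    claim: str                          # plain-language summary of the false claim
    content_type: str                   # e.g. "meme", "video", "hashtag", "website"
    first_seen_url: str                 # earliest known appearance
    first_seen_date: str                # ISO 8601 date
    image_phash: Optional[str] = None   # perceptual hash, if the content is an image
    related_hashtags: List[str] = field(default_factory=list)
    platforms_observed: List[str] = field(default_factory=list)
    reported_by: str = "independent-researcher"
    reported_at: str = field(default_factory=_utc_now)


# Example record a researcher might file about the Harris/Dolezal meme.
report = ThreatReport(
    claim="Meme falsely comparing Sen. Kamala Harris to Rachel Dolezal",
    content_type="meme",
    first_seen_url="https://www.reddit.com/r/The_Donald/<post-id>",  # placeholder
    first_seen_date="2018-05-01",                                    # placeholder
    image_phash="<perceptual hash of the image>",                    # placeholder
    platforms_observed=["reddit", "pinterest", "twitter", "4chan", "gab", "voat"],
)

print(json.dumps(asdict(report), indent=2))
```

A shared schema along these lines, however simple, is what would let a flag raised about content on one platform be matched against the same content surfacing on another.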

There is some precedent for this. Social media outlets already work together to fight explicitly illegal internet content, such as child pornography. And in 2017, Facebook, YouTube, Twitter and Microsoft partnered for an industry-led initiative called the Global Internet Forum to Counter Terrorism to “substantially disrupt terrorists’ ability to promote terrorism, disseminate violent extremist propaganda, and exploit or glorify real-world acts of violence using our platforms.” The partnership has been fruitful—there is significantly less terrorist content available across social media websites, especially content from the Islamic State, al-Qaida and Boko Haram.

To apply the same kind of collaboration to political disinformation, the social media platforms would have to agree to some standards for what constitutes problematic information—a tricky question in a realm where fiction, satire, news and opinion all legitimately co-exist. But these platforms are already developing policies to identify such information and remove it. Academic researchers are also building better methods to understand and anticipate fake-news narratives. We can’t wait much longer to try new, collaborative tactics—not when our politics and our democracy are at stake.
