A Hunt for Ways to Combat Online Radicalization – The New York Times

Interesting approach, applicable to extremists and radicals, whether on the right, the left or elsewhere:

Law enforcement officials, technology companies and lawmakers have long tried to limit what they call the “radicalization” of young people over the internet.

The term has often been used to describe a specific kind of radicalization — that of young Muslim men who are inspired to take violent action by the online messages of Islamist groups like the Islamic State. But as it turns out, it isn’t just violent jihadists who benefit from the internet’s power to radicalize young people from afar.

White supremacists are just as adept at it. Where the pre-internet Ku Klux Klan grew primarily from personal connections and word of mouth, today’s white supremacist groups have figured out a way to expertly use the internet to recruit and coordinate among a huge pool of potential racists. That became clear two weeks ago with the riots in Charlottesville, Va., which became a kind of watershed event for internet-addled racists.

“It was very important for them to coordinate and become visible in public space,” said Joan Donovan, a scholar of media manipulation and right-wing extremism at Data & Society, an online research institute. “This was an attempt to say, ‘Let’s come out; let’s meet each other. Let’s build camaraderie, and let’s show people who we are.’”

Ms. Donovan and others who study how the internet shapes extremism said that even though Islamists and white nationalists have different views and motivations, there are broad similarities in how the two operate online — including how they spread their message, recruit and organize offline actions. The similarities suggest a kind of blueprint for a response — efforts that may work for limiting the reach of jihadists may also work for white supremacists, and vice versa.

In fact, that’s the battle plan. Several research groups in the United States and Europe now see the white supremacist and jihadi threats as two sides of the same coin. They’re working on methods to fight both, together — and slowly, they have come up with ideas for limiting how these groups recruit new members to their cause.

Their ideas are grounded in a few truths about how extremist groups operate online, and how potential recruits respond. After speaking to many researchers, I compiled this rough guide for combating online radicalization.

Recognize the internet as an extremist breeding ground.

The first step in combating online extremism is fairly obvious: recognizing the extremists as a threat.

For the Islamic State, that began to happen in the last few years. After a string of attacks in Europe and the United States by people who had been indoctrinated in the swamp of online extremism, politicians demanded action. In response, Google, Facebook, Microsoft and other online giants began identifying extremist content and systematically removing it from their services, and have since escalated their efforts.

When it comes to fighting white supremacists, though, much of the tech industry has long been on the sidelines. This laxity has helped create a monster. In many ways, researchers said, white supremacists are even more sophisticated than jihadists in their use of the internet.

The earliest white nationalist sites date back to the founding era of the web. For instance, Stormfront.org, a pioneering hate site, was started as a bulletin board in 1990. White supremacist groups have also been proficient at spreading their messages using the memes, language and style that pervade internet subcultures. Beyond setting up sites of their own, they have more recently managed to spread their ideology to online groups that were once largely apolitical, like gaming and sci-fi groups.

And they’ve grown huge. “The white nationalist scene online in America is phenomenally larger than the jihadists’ audience, which tends to operate under the radar,” said Vidhya Ramalingam, the co-founder of Moonshot CVE, a London-based start-up that works with internet companies to combat violent extremism. “It’s just a stunning difference between the audience size.”

After the horror of Charlottesville, internet companies began banning and blocking content posted by right-wing extremist groups. So far their efforts have been hasty and reactive, but Ms. Ramalingam sees it as the start of a wider effort.

“It’s really an unprecedented moment where social media and tech companies are recognizing that their platforms have become spaces where these groups can grow, and have been often unpoliced,” she said. “They’re really kind of waking up to this and taking some action.”

Engage directly with potential recruits.

If tech companies are finally taking action to prevent radicalization, is it the right kind of action? Extremism researchers said that blocking certain content may work to temporarily disrupt groups, but may eventually drive them further underground, far from the reach of potential saviors.

A more lasting plan involves directly intervening in the process of radicalization. Consider The Redirect Method, an anti-extremism project created by Jigsaw, a think tank founded by Google. The plan began with intensive field research. After interviews with many former jihadists, white supremacists and other violent extremists, Jigsaw discovered several important personality traits that may abet radicalization.

One factor is a skepticism of mainstream media. Whether drawn to the far right or to ISIS, people who are susceptible to extremist ideologies tend to dismiss outlets like The New York Times or the BBC, and they often go in search of alternative theories online.

Another key issue is timing. There’s a brief window between initial interest in an extremist ideology and a decision to join the cause — and after recruits make that decision, they are often beyond the reach of outsiders. For instance, Jigsaw found that when jihadists began planning their trips to Syria to join ISIS, they had fallen too far down the rabbit hole and dismissed any new information presented to them.

Jigsaw put these findings to use in an innovative way. It curated a series of videos showing what life is truly like under the Islamic State in Syria and Iraq. The videos, which weren’t filmed by news outlets (and so were harder for media-skeptical recruits to dismiss), offered a credible counterpoint to the fantasies peddled by the group: they show people queuing up for bread, fighters brutally punishing civilians, and women and children being mistreated.

[Video: Experiencing the Caliphate]

Then, to make sure potential recruits saw the videos at the right time in their recruitment process, Jigsaw used one of Google’s most effective technologies: ad targeting. In the same way that a pair of shoes you looked up last week follows you around the internet, Jigsaw’s counterterrorism videos were pushed to likely recruits.
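A rough sketch of that targeting idea, purely illustrative and not Jigsaw’s actual system: a hypothetical list of high-risk search phrases is matched against incoming queries, and a hit serves a counter-narrative playlist instead of an ordinary ad. The phrase list, playlist URL and matching logic below are all invented for the example.

```python
# Illustrative sketch of keyword-triggered counter-messaging in the spirit of
# the Redirect Method; NOT Jigsaw's real implementation. Phrases and URL are
# hypothetical placeholders.
RISK_PHRASES = {
    "how to join isis",
    "travel to syria to fight",
    "white ethnostate movement",
}

COUNTER_PLAYLIST_URL = "https://example.org/redirect-playlist"  # hypothetical


def choose_ad(search_query: str) -> str | None:
    """Return a counter-narrative target if the query looks high-risk."""
    normalized = " ".join(search_query.lower().split())
    for phrase in RISK_PHRASES:
        if phrase in normalized:
            return COUNTER_PLAYLIST_URL
    return None  # fall through to the ordinary ad auction


if __name__ == "__main__":
    print(choose_ad("How to join ISIS from Europe"))  # -> counter-narrative URL
    print(choose_ad("best running shoes"))            # -> None (normal ads)
```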

Jigsaw can’t say for sure if the project worked, but it found that people spent lots of time watching the videos, which suggested they were of great interest, and perhaps dissuaded some from extremism.

Moonshot CVE, which worked with Jigsaw on the Redirect project, put together several similar efforts to engage with both jihadists and white supremacist groups. It has embedded undercover social workers in extremist forums who discreetly message potential recruits to dissuade them. And lately it’s been using targeted ads to offer mental health counseling to those who might be radicalized.

“We’ve seen that it’s really effective to go beyond ideology,” Ms. Ramalingam said. “When you offer them some information about their lives, they’re disproportionately likely to interact with it.”

What happens online isn’t all that matters in the process of radicalization. The offline world obviously matters too. Dylann Roof — the white supremacist who murdered nine people at a historically African-American church in Charleston, S.C., in 2015 — was radicalized online. But as a new profile in GQ Magazine makes clear, there was much more to his crime than the internet, including his mental state and a racist upbringing.

Still, just about every hate crime and terrorist attack these days is planned or in some way coordinated online. Ridding the world of all of the factors that drive young men to commit heinous acts isn’t possible. But disrupting the online radicalization machine? With enough work, that may just be possible.


How to Know What Donald Trump Really Cares About: Look at What He’s Insulting – The New York Times

This is a truly remarkable analysis of social media and Donald Trump, rich in data and beautifully charted by Kevin Quealy and Jasmine Lee.

Well worth reading, both in terms of the specifics as well as a more general illustration of social media analysis:

Donald J. Trump’s tweets can be confounding for journalists and his political opponents. Many see them as a master class in diversion, shifting attention to minutiae – “Hamilton” and flag-burning, to name two recent examples – and away from his conflicts of interest and proposed policies. Our readers aren’t quite sure what to make of them, either.

For better or worse, I’ve developed a deep expertise in what he has tweeted about in the last two years. Over the last 11 months, my colleague Jasmine C. Lee and I have read, tagged and sorted more than 14,000 tweets. We’ve found that about one in every nine was an insult of some kind.
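As a purely illustrative aside, the arithmetic behind a figure like “one in every nine” is just a tally over hand-applied tags; the records and tag names in this sketch are invented, since the real dataset was read and tagged manually.

```python
# Toy sketch of tallying hand-tagged tweets; data and tag names are invented.
from collections import Counter

tagged_tweets = [
    {"text": "...", "tags": ["insult", "media"]},
    {"text": "...", "tags": ["policy"]},
    {"text": "...", "tags": ["insult", "opponent"]},
    # ... roughly 14,000 entries in the real dataset
]

tag_counts = Counter(tag for tweet in tagged_tweets for tag in tweet["tags"])
insult_share = sum("insult" in t["tags"] for t in tagged_tweets) / len(tagged_tweets)

print(tag_counts.most_common(3))
print(f"Insult share: {insult_share:.0%}")  # ~11% in the real data = one in nine
```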

This work, mundane as it sometimes is, has helped reveal a clear pattern – one that has not changed much in the weeks since Mr. Trump’s victory.

First, Mr. Trump likes to identify a couple of chief enemies and attack them until they are no longer threatening enough to interest him. He hurls insults at these foils relentlessly, for sustained periods – weeks or months. Jeb Bush, Marco Rubio, Ted Cruz and Hillary Clinton have all held Mr. Trump’s attention in this way; nearly one in every three insults in the last two years has been directed at them.

If Mr. Trump continues to use Twitter as president the way he did as a candidate, we may see new chief antagonists: probably Democratic leaders, perhaps Republican leaders in Congress and possibly even foreign countries and their leaders. For now, the news media – like CNN and The New York Times – is starting to fill that role. The chart at the top of this page illustrates this story very clearly.

That’s not to say that the media is necessarily becoming his next full-time target. Rather, it suggests that one has not yet presented itself. The chart below, which shows the total number of insults per day, shows how these targets have come and gone in absolute terms. An increasing number of insults are indeed being directed at the media, but, for now, those insults are still at relatively normal levels.

[Chart: insults per day]

Second, there’s a nearly constant stream of insults in the background directed at a wider range of subjects. These insults can be a response to a news event, unfavorable media coverage or criticism, or they can simply be a random thought. These subjects receive short bursts of attention, and inevitably Mr. Trump turns to other things in a day or two. Mr. Trump’s brief feuds with Macy’s, Elizabeth Warren, John McCain and The New Hampshire Union Leader fit this bucket well. The election has not changed this pattern either.

Facebook’s AI boss: Facebook could fix its filter bubble if it wanted to – Recode

While Zuckerberg is correct that we all have a tendency to tune out other perspectives, the role that Facebook and other social media have in reinforcing that tendency should not be downplayed:

One of the biggest complaints about Facebook — and its all-powerful News Feed algorithm — is that the social network often shows you posts supporting beliefs or ideas you (probably) already have.

Facebook’s feed is personalized, so what you see in your News Feed is a reflection of what you want to see, and people usually want to see arguments and ideas that align with their own.

The term for this, often associated with Facebook, is a “filter bubble,” and people have written books about it. A lot of people have pointed to that bubble, as well as to the proliferation of fake news on Facebook, as playing a major role in last month’s presidential election.
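To make the mechanism concrete, here is a minimal sketch, in no way Facebook’s actual ranking code, of how scoring posts purely by predicted engagement concentrates agreeable content, and how a single diversity term could push back. All fields, numbers and weights are illustrative assumptions.

```python
# Toy feed ranker: diversity_weight = 0 reproduces the "bubble" ordering;
# raising it trades some predicted engagement for viewpoint diversity.
# This is an illustration, not Facebook's News Feed algorithm.
from dataclasses import dataclass


@dataclass
class Post:
    source: str
    predicted_engagement: float  # how likely this user is to click or like
    viewpoint_distance: float    # 0 = matches the user's usual views, 1 = far away


def rank_feed(posts: list[Post], diversity_weight: float = 0.0) -> list[Post]:
    def score(p: Post) -> float:
        return p.predicted_engagement + diversity_weight * p.viewpoint_distance
    return sorted(posts, key=score, reverse=True)


feed = [
    Post("friend_who_agrees", 0.9, 0.1),
    Post("news_outlet_other_camp", 0.4, 0.9),
    Post("neutral_page", 0.6, 0.5),
]
print([p.source for p in rank_feed(feed)])                        # bubble ordering
print([p.source for p in rank_feed(feed, diversity_weight=0.6)])  # other camp surfaces
```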

Now the head of Facebook’s artificial intelligence research division, Yann LeCun, says this is a problem Facebook could solve with artificial intelligence.

“We believe this is more of a product question than a technology question,” LeCun told a group of reporters last month when asked if artificial intelligence could solve this filter-bubble phenomenon. “We probably have the technology, it’s just how do you make it work from the product side, not from the technology side.”

A Facebook spokesperson clarified after the interview that the company doesn’t actually have this type of technology just sitting on the shelf. But LeCun seems confident it could be built. So why doesn’t Facebook build it?

“These are questions that go way beyond whether we can develop AI technology that solves the problem,” LeCun continued. “They’re more like trade-offs that I’m not particularly well placed to determine. Like, what is the trade-off between filtering and censorship and free expression and decency and all that stuff, right? So [it’s not a question of if] the technology exists or can be developed, but … does it make sense to deploy it. This is not my department.”

Facebook has long denied that its service creates a filter bubble. It has even published a study defending the diversity of people’s News Feeds. Now LeCun is at the very least acknowledging that a filter bubble does exist, and that Facebook could fix it if it wanted to.

And that’s fascinating because while it certainly seemed like a fixable problem from the outside — Facebook employs some of the smartest machine-learning and language-recognition experts in the world — it once again raises questions around Facebook’s role as a news and information distributor.

Facebook CEO Mark Zuckerberg has long argued that his social network is a platform that leaves what you see (or don’t see) to computer algorithms that use your online activity to rank your feed. Facebook is not a media company making human-powered editorial decisions, he argues. (We disagree.)

But is showing its users a politically balanced News Feed Facebook’s responsibility? Zuckerberg wrote in September that Facebook is already “more diverse than most newspapers or TV stations” and that the filter-bubble issue really isn’t an issue. Here’s what he wrote.

“One of the things we’re really proud of at Facebook is that, whatever your political views, you probably have some friends who are in the other camp. … [News Feed] is not a perfect system. Research shows that we all have psychological bias that makes us tune out information that doesn’t fit with our model of the world. It’s human nature to gravitate towards people who think like we do. But even if the majority of our friends have opinions similar to our own, with News Feed we have easier access to more news sources than we did before.”

So this, right here, explains why Facebook isn’t building the kind of technology that LeCun says it’s capable of building. At least not right now.

There are some benefits to a bubble like this, too, specifically user safety. Unlike Twitter, for example, Facebook’s bubble is heightened by the fact that your posts are usually private, which makes it harder for strangers to comment on them or drag you into conversations you might not want to be part of. The result: Facebook doesn’t have to deal with the level of abuse and harassment that Twitter struggles with.

Plus, Facebook isn’t the only place you’ll find culture bubbles. Here’s “SNL” making fun of a very similar bubble phenomenon that has come to light since election night.

Research based on social media data can contain hidden biases that ‘misrepresent real world,’ critics say

Good article on some of the limits of using social media for research, as compared with IRL (In Real Life):

One is ensuring a representative sample, a problem that is sometimes, but not always, solved by ever greater numbers. Another is that few studies try to “disentangle the human from the platform,” to distinguish the user’s motives from what the media are enabling and encouraging him to do.

Another is that data can be distorted by processes not designed primarily for research. Google, for example, stores only the search terms used after auto-completion, not the text the user actually typed. Another is simply that many social media are largely populated by non-human robots, which mimic the behaviour of real people.
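On the representativeness point, one standard corrective is post-stratification reweighting, sketched below with invented demographic shares purely to show the mechanics.

```python
# Toy post-stratification: reweight an unrepresentative sample toward known
# population shares. All groups and proportions here are invented.
population_share = {"18-29": 0.20, "30-49": 0.33, "50+": 0.47}  # e.g. from a census
sample_share     = {"18-29": 0.55, "30-49": 0.35, "50+": 0.10}  # e.g. platform users

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Each respondent then counts with their group's weight, shrinking the
# over-represented young users and boosting the under-represented 50+ group.
print(weights)  # {'18-29': 0.36..., '30-49': 0.94..., '50+': 4.7}
```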

Even the cultural preference in academia for “positive results” can conceal the prevalence of null findings, the authors write.

“The biases and issues highlighted above will not affect all research in the same way,” the authors write. “[But] they share in common the need for increased awareness of what is actually being analyzed when working with social media data.”
