How to Know What Donald Trump Really Cares About: Look at What He’s Insulting – The New York Times

This is a truly remarkable analysis of social media and Donald Trump, rich in data and beautifully charted by Kevin Quealy and Jasmine Lee.

Well worth reading, both for its specifics and as a more general illustration of social media analysis:

Donald J. Trump’s tweets can be confounding for journalists and his political opponents. Many see them as a master class in diversion, shifting attention to minutiae – “Hamilton” and flag-burning, to name two recent examples – and away from his conflicts of interest and proposed policies. Our readers aren’t quite sure what to make of them, either.

For better or worse, I’ve developed a deep expertise in what he has tweeted about in the last two years. Over the last 11 months, my colleague Jasmine C. Lee and I have read, tagged and sorted more than 14,000 tweets. We’ve found that about one in every nine was an insult of some kind.

This work, mundane as it sometimes is, has helped reveal a clear pattern – one that has not changed much in the weeks since Mr. Trump’s victory.

First, Mr. Trump likes to identify a couple of chief enemies and attack them until they are no longer threatening enough to interest him. He hurls insults at these foils relentlessly, for sustained periods – weeks or months. Jeb Bush, Marco Rubio, Ted Cruz and Hillary Clinton have all held Mr. Trump’s attention in this way; nearly one in every three insults in the last two years has been directed at them.
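To make the tally concrete, here is a minimal sketch of the kind of counting that yields a figure like “one in three.” It assumes a hand-tagged list of (date, target) records; the data below is an invented stand-in for the authors’ actual dataset of roughly 1,500 tagged insults.

```python
from collections import Counter

# Hypothetical hand-tagged records in the spirit of the Times' methodology:
# one (date, target) pair per insult tweet. Invented stand-in data, not the
# authors' actual dataset.
insults = [
    ("2015-12-30", "Jeb Bush"),
    ("2016-02-26", "Marco Rubio"),
    ("2016-05-03", "Ted Cruz"),
    ("2016-10-12", "Hillary Clinton"),
    ("2016-10-13", "Hillary Clinton"),
]

by_target = Counter(target for _, target in insults)
total = sum(by_target.values())

# Share of all tagged insults aimed at each target
for target, count in by_target.most_common():
    print(f"{target:18s} {count:3d}  ({count / total:.0%} of all insults)")
```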

If Mr. Trump continues to use Twitter as president the way he did as a candidate, we may see new chief antagonists: probably Democratic leaders, perhaps Republican leaders in Congress and possibly even foreign countries and their leaders. For now, the news media – like CNN and The New York Times – is starting to fill that role. The chart at the top of this page illustrates this story very clearly.

That’s not to say that the media is necessarily becoming his next full-time target. Rather, it suggests that a new chief antagonist has not yet presented itself. The chart below, which plots the total number of insults per day, shows how these targets have come and gone in absolute terms. An increasing number of insults are indeed being directed at the media, but, for now, those insults remain at relatively normal levels.

[Chart: Insults per day]

Second, there’s a nearly constant stream of insults in the background directed at a wider range of subjects. These insults can be a response to a news event, unfavorable media coverage or criticism, or they can simply be a random thought. These subjects receive short bursts of attention, and inevitably Mr. Trump turns to other things in a day or two. Mr. Trump’s brief feuds with Macy’s, Elizabeth Warren, John McCain and The New Hampshire Union Leader fit this bucket well. The election has not changed this pattern either.
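The two patterns are easy to tell apart in data like this: a chief antagonist draws insults on many distinct days over weeks or months, while a background target spikes and vanishes. A rough sketch of that distinction, with an invented threshold and toy data:

```python
# Toy (date, target) records as in the sketch above; invented data.
insults = [
    ("2016-10-10", "Hillary Clinton"), ("2016-10-11", "Hillary Clinton"),
    ("2016-10-12", "Hillary Clinton"), ("2016-10-13", "Hillary Clinton"),
    ("2015-11-12", "Macy's"),
]

def insult_days(records, target):
    """Distinct days on which a target was insulted."""
    return {day for day, t in records if t == target}

def is_sustained_feud(records, target, min_days=4):
    """Crude, invented heuristic: 'sustained' means insults on at least
    min_days distinct days; the real feuds ran for weeks or months."""
    return len(insult_days(records, target)) >= min_days

for target in ("Hillary Clinton", "Macy's"):
    kind = "sustained feud" if is_sustained_feud(insults, target) else "short burst"
    print(f"{target}: {kind}")
```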

Facebook’s AI boss: Facebook could fix its filter bubble if it wanted to – Recode

While Zuckerberg is correct that we all have a tendency to tune out other perspectives, the role that Facebook and other social media play in reinforcing that tendency should not be downplayed:

One of the biggest complaints about Facebook — and its all-powerful News Feed algorithm — is that the social network often shows you posts supporting beliefs or ideas you (probably) already have.

Facebook’s feed is personalized, so what you see in your News Feed is a reflection of what you want to see, and people usually want to see arguments and ideas that align with their own.

The term for this, often associated with Facebook, is a “filter bubble,” and people have written books about it. A lot of people have pointed to that bubble, as well as to the proliferation of fake news on Facebook, as playing a major role in last month’s presidential election.

Now the head of Facebook’s artificial intelligence research division, Yann LeCun, says this is a problem Facebook could solve with artificial intelligence.

“We believe this is more of a product question than a technology question,” LeCun told a group of reporters last month when asked if artificial intelligence could solve this filter-bubble phenomenon. “We probably have the technology, it’s just how do you make it work from the product side, not from the technology side.”

A Facebook spokesperson clarified after the interview that the company doesn’t actually have this type of technology just sitting on the shelf. But LeCun seems confident it could be built. So why doesn’t Facebook build it?

“These are questions that go way beyond whether we can develop AI technology that solves the problem,” LeCun continued. “They’re more like trade-offs that I’m not particularly well placed to determine. Like, what is the trade-off between filtering and censorship and free expression and decency and all that stuff, right? So [it’s not a question of if] the technology exists or can be developed, but … does it make sense to deploy it. This is not my department.”
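For a sense of what “more of a product question than a technology question” could mean in practice, here is a toy re-ranker that reserves a share of feed slots for posts from outside the user’s viewpoint cluster. Everything in it (the Post fields, the cluster labels, the 25 percent share) is an invented assumption for illustration; this is not Facebook’s News Feed algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    relevance: float  # assumed personalized engagement score, higher = better
    viewpoint: str    # assumed coarse viewpoint-cluster label

def rerank_with_diversity(posts, user_viewpoint, feed_size=4,
                          out_of_bubble_share=0.25):
    """Reserve a fixed share of feed slots for the best-scoring posts
    from outside the user's own viewpoint cluster, then re-sort."""
    ranked = sorted(posts, key=lambda p: p.relevance, reverse=True)
    outside = [p for p in ranked if p.viewpoint != user_viewpoint]
    inside = [p for p in ranked if p.viewpoint == user_viewpoint]

    n_outside = min(int(feed_size * out_of_bubble_share), len(outside))
    feed = outside[:n_outside] + inside[:feed_size - n_outside]
    return sorted(feed, key=lambda p: p.relevance, reverse=True)

feed = rerank_with_diversity(
    [Post("a", 0.9, "left"), Post("b", 0.8, "left"), Post("c", 0.7, "left"),
     Post("d", 0.65, "left"), Post("e", 0.4, "right")],
    user_viewpoint="left",
)
print([p.post_id for p in feed])  # ['a', 'b', 'c', 'e']: 'e' displaces 'd'
```

The trade-off LeCun describes shows up directly in the out_of_bubble_share parameter: raise it and you trade predicted engagement for exposure to other viewpoints.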

Facebook has long denied that its service creates a filter bubble. It has even published a study defending the diversity of people’s News Feeds. Now LeCun is at the very least acknowledging that a filter bubble does exist, and that Facebook could fix it if it wanted to.

And that’s fascinating because, from the outside, this certainly seemed like a fixable problem (Facebook employs some of the smartest machine-learning and language-recognition experts in the world), and the admission once again raises questions about Facebook’s role as a news and information distributor.

Facebook CEO Mark Zuckerberg has long argued that his social network is a platform that leaves what you see (or don’t see) to computer algorithms that use your online activity to rank your feed. Facebook is not a media company making human-powered editorial decisions, he argues. (We disagree.)

But is it Facebook’s responsibility to show its users a politically balanced News Feed? Zuckerberg argued in September that Facebook is already “more diverse than most newspapers or TV stations” and that the filter bubble really isn’t an issue. Here’s what he wrote:

“One of the things we’re really proud of at Facebook is that, whatever your political views, you probably have some friends who are in the other camp. … [News Feed] is not a perfect system. Research shows that we all have psychological bias that makes us tune out information that doesn’t fit with our model of the world. It’s human nature to gravitate towards people who think like we do. But even if the majority of our friends have opinions similar to our own, with News Feed we have easier access to more news sources than we did before.”

So this, right here, explains why Facebook isn’t building the kind of technology that LeCun says it’s capable of building. At least not right now.

There are some benefits to a bubble like this, too, specifically user safety. Unlike Twitter, for example, Facebook’s bubble is heightened by the fact that your posts are usually private, which makes it harder for strangers to comment on them or drag you into conversations you might not want to be part of. The result: Facebook doesn’t have to deal with the level of abuse and harassment that Twitter struggles with.

Plus, Facebook isn’t the only place you’ll find culture bubbles. Here’s “SNL” making fun of a very similar bubble phenomenon that has come to light since election night.

Research based on social media data can contain hidden biases that ‘misrepresent real world,’ critics say

A good article on some of the limits of using social media for research, as compared to studying behaviour IRL (in real life):

One is ensuring a representative sample, a problem that is sometimes, but not always, solved by ever greater numbers. Another is that few studies try to “disentangle the human from the platform,” to distinguish the user’s motives from what the media are enabling and encouraging him to do.

Another is that data can be distorted by processes not designed primarily for research. Google, for example, stores only the search terms used after auto-completion, not the text the user actually typed. Another is simply that many social media are largely populated by non-human robots, which mimic the behaviour of real people.
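The bot problem in particular is something researchers typically try to mitigate before analysis begins. Below is a minimal sketch of the kind of crude pre-filter involved; the thresholds are invented assumptions, and real bot detection draws on far richer signals (timing entropy, content similarity, follower networks):

```python
def looks_automated(account):
    """Flag accounts whose posting behaviour is implausibly prolific."""
    posts_per_day = account["posts"] / max(account["age_days"], 1)
    no_audience = account["followers"] == 0 and account["posts"] > 1_000
    return posts_per_day > 100 or no_audience

# Invented example accounts
accounts = [
    {"id": "u1", "posts": 120_000, "age_days": 300, "followers": 2},
    {"id": "u2", "posts": 850, "age_days": 900, "followers": 210},
]

sample = [a for a in accounts if not looks_automated(a)]
print([a["id"] for a in sample])  # only 'u2' survives the filter
```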

Even the cultural preference in academia for “positive results” can conceal the prevalence of null findings, the authors write.

“The biases and issues highlighted above will not affect all research in the same way,” the authors write. “[But] they share in common the need for increased awareness of what is actually being analyzed when working with social media data.”