As bad as Facebook’s experiment on its members seems, the reality was worse

What’s the worst thing about the news that Facebook hosted what The Atlantic aptly called a “secret mood manipulation experiment,” conducted on 689,000 unwitting members of its network? That’s hard to say. There’s so much bad to choose from.

First, the nature of the experiment: It was cruel. Researchers from Facebook itself, Cornell, and the University of California, San Francisco, were looking into whether emotional states could be spread via news shared by online media, that is, without direct personal contact.

The vehicle for this “emotional contagion” was to be Facebook’s News Feed: By tinkering with the feed, researchers altered the balance of positive and negative items people were exposed to. Then, by peeking at the messages that those same people subsequently posted—some 3 million during the week in January 2012 when the experiment was conducted—and categorizing them as positive or negative, the researchers could determine to what degree the members had been influenced.
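
To make the mechanics concrete: the study gauged each post’s tone by counting positive and negative words (the published paper used the LIWC word-counting dictionaries). The snippet below is a toy sketch of that kind of categorization; the word lists and the categorize function are made-up stand-ins for illustration, not the study’s actual tooling.

```python
# Toy sketch of word-list sentiment categorization, in the spirit of the
# word-counting approach the study described. These word lists are
# hypothetical stand-ins, not the actual LIWC dictionaries.

POSITIVE = {"happy", "great", "love", "wonderful", "fun"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "lonely"}

def categorize(post: str) -> str:
    """Label a post 'positive', 'negative', or 'neutral' by word counts."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Comparing the mix of labels in what subjects later posted, between the
# manipulated and control groups, is in essence how influence was measured.
posts = ["What a wonderful, happy day!", "I hate this awful weather."]
print([categorize(p) for p in posts])  # ['positive', 'negative']
```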

Not surprisingly, researchers found what they called “experimental evidence for massive-scale contagion via social networks.”

Now, this isn’t a very nice thing to do to people—deliberately goad them into an emotional response just to show that you can. Worse still, half of the lab rats in this exercise were given an overabundance not of joyous news to make them happy, but of stories featuring dreary, gloomy, negative things. That sounds very much like intentional infliction of emotional distress.

(Finding evidence that people share such misery, that it constitutes a “contagion,” apparently interests scholars. Why? I can’t imagine. Everybody’s been saddened by dismal news and told others about it. But some social scientists build careers by confirming the obvious.)

Second, not only was this cruel, it was deceptive. The whole appeal of Facebook’s News Feed is that it’s supposed to constitute a filtered selection of news that has been endorsed by the member’s community of “friends.” It’s meant to be what the people you care about seem to care about.

This feed wasn’t that at all. It wasn’t what your friends said was worth knowing. If you attached importance to it because you believed your network of like-minded souls had commended it to your attention, you were being tricked. Other considerations trumped peer interest (and one can only wonder how often Facebook does that).

So not only was the feed surreptitiously manipulated, it was represented fraudulently.

Third, this whole sham was visited on a vast population of trusting strangers who didn’t know they were being used as subjects in an experiment.

That ignorance matters. By and large, scientific research protocols have, as a core ethical tenet, the notion of informed consent: Participants should be fully aware they’re taking part in an experiment, know about potential risks to themselves or others and, of course, have the ability to opt out.

True, some experiments can’t work without deception. A famous example was Yale psychologist Stanley Milgram’s controversial work on obedience from the early ’60s.

There, the experiment subjects were outright lied to. They were told they’d be helping run a test of learning and memory by administering electric shocks to test-takers who answered a set of questions incorrectly.

That wasn’t what was happening. Instead, it was the helpers who were the subjects of the experiment, and what was being tested was their willingness to follow directions from researchers in lab coats, who were telling them to ignore the screams of what were actually paid actors in the next room.

Milgram’s work had the unmistakable potential to traumatize participants, who had signed on as paid laboratory aides without expecting to have their values and moral courage put to the test; some purportedly left distraught that they had caused real pain to faceless strangers.

Milgram claimed to have “dehoaxed” his subjects by exposing the fraud to them after the sessions in which they participated, as American Psychological Association guidelines prescribe. That’s something nobody connected with the Facebook chicanery has even asserted.

Moreover, an even more fundamental APA recommendation—that experiment deceit be “justified by the study’s significant prospective scientific, educational, or applied value”—has even less bearing on this affair.

Indeed, that may be the most disturbing element of the Facebook plunge into mass fraud: What it was all for, and how casually it apparently was undertaken.

I’m not surprised that behavioral scientists seek quantitative support for propositions that ordinary folk regard as self-evident. Is anybody without a Ph.D. surprised to learn that depressing news depresses people?

But Facebook isn’t in the business of expanding the frontiers of social knowledge, however loosely those frontiers are defined. It’s in the business of expanding its business, and the takeaway from this affair is how audaciously the Facebook brass define the company’s license to target, trick, and tweak the people who trust the network, to which millions of them devote such staggering time and intellectual energy.

You have to wonder, if Facebook would hand over nearly 700,000 members to be inveigled into such a membrane-thin scientific cause, what else it’s cavalierly and routinely doing to poke, prod, and harvest that same population of users for purposes closer to its real business.

-30-
