The ethics of “the facebook experiment”

Recently, Internet commentators have been discussing the ethics of an experiment conducted on Facebook. The paper was published this month in the prestigious journal Proceedings of the National Academy of Sciences (PNAS). You can read one account of the ethics of this study in Slate, here, and read other versions all over the Internet, including here, here, and here. You can read the abstract of the study, which is open to public comment, here.

What was the study about? The researchers took a sample of Facebook users and randomly assigned them to one of three conditions. In one condition, a certain percentage of posts containing positive emotional content were prevented from appearing on the person's News Feed. In a second condition, posts containing negative emotional content were prevented from appearing. And in a control condition, a random selection of posts (that is, regardless of emotional content) was prevented from appearing.

The dependent measure was the positivity and negativity of the words participants posted in their status updates. Specifically, the researchers used a computer program to count the positive and negative words (such as "happy" or "angry") that participants used after their News Feeds were manipulated. The results showed that after having positive content removed from their News Feeds, participants used more negative words and fewer positive words in their status updates compared to the control group. And after having negative content removed from their News Feeds, participants used more positive words and fewer negative words in their status updates.
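The word-counting step is easy to picture in code. The actual study used automated text-analysis software (LIWC) with large, validated dictionaries; the tiny word lists and the `emotion_rates` function below are purely illustrative, a minimal Python sketch of the idea:

```python
import re

# Toy word lists -- purely illustrative. The real study used the LIWC
# dictionaries, which contain thousands of categorized entries.
POSITIVE = {"happy", "love", "great", "nice"}
NEGATIVE = {"angry", "sad", "hate", "awful"}

def emotion_rates(status_update):
    """Return the percentage of positive and negative words in a post."""
    words = re.findall(r"[a-z']+", status_update.lower())
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100 * pos / len(words), 100 * neg / len(words)

pos_pct, neg_pct = emotion_rates("So happy today, life is great!")
```

In the study, percentages like these, computed over each person's posts during the experimental period, served as the dependent measures.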

What ethical problems are people raising about this study? One issue people have raised is informed consent. The study included 689,003 people who used Facebook in English. The researchers did not ask the participants to read or sign an informed consent form. According to Slate's article:

“If you are exposing people to something that causes changes in psychological status, that’s experimentation,” says James Grimmelmann, a professor of technology and the law at the University of Maryland. “This is the kind of thing that would require informed consent.”

Ah, informed consent. Here is the only mention of “informed consent” in the paper: The research “was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

a) What does informed consent typically involve in research? Do you think that this statement in the PNAS article allows people full informed consent?

Another issue in the Facebook experiment seems to fall under possible harm done to the participants. According to the results, people who were randomly assigned to have positive emotional content removed from their News Feeds used fewer positive words in their status updates. As Slate put it,

Facebook intentionally made thousands upon thousands of people sad.

Other commentators, however, correctly noted that the effect size of the result was very small: while the results were statistically significant, the largest effect size reported was d = 0.02.
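Cohen's d expresses a mean difference in standard-deviation units, so d = 0.02 means the groups' average word use differed by about two-hundredths of a standard deviation. A minimal Python sketch, using made-up numbers (not values from the paper), shows just how small such a difference is:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: the mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical numbers: if a control group used positive words 5.20% of the
# time (SD = 2.0) and an experimental group 5.16% (SD = 2.0), d is 0.02.
d = cohens_d(5.20, 2.0, 100_000, 5.16, 2.0, 100_000)
```

With these hypothetical means, the groups differ by only 0.04 percentage points against a standard deviation of 2.0, which is why some commentators questioned whether meaningful harm was done, even though the huge sample made the difference statistically significant.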

b) What do you think? What's your evaluation of the beneficence of this research? Was true harm done to participants?

Finally, other commentators, such as this one, have pointed out that Facebook manipulates people's content all the time. For example, Facebook shows people different advertisements depending on their interests, and shows people content from some of their friends more than others. This writer noted:

So the folks who are outraged about Facebook’s complicity in this experiment seem to basically be arguing that it’s okay when Facebook manipulates their emotions to get them to click on stuff more, or for the sake of in-house experiments about how to make content “more engaging” (that is, to find out how to get them to click on stuff more), but not when that manipulation is done in service of a psychological experiment.

c) What's your view? When, if ever, is it ethical for a commercial site like Facebook to manipulate content and measure your (clicking) behavior, and when, if ever, is it not ethical?