Last week, some researchers from Facebook, Cornell, and UCSF published a paper in PNAS showing that greater or lesser exposure to positive or negative status updates on Facebook made the people who saw those posts more likely to post positive or negative things themselves. Since its publication, the paper has attracted a fair bit of popular press (here and here), criticism and concerns from academics, academics defending the paper and its methods, and academics hoping that the controversy does not discourage future industry research. The way I see it, it was a perfectly decent study that made use of an experimental method that could only be done in collaboration with someone on the inside; it was a great example of industry-academic collaboration. The authors did a good job of making clear that the effect sizes were not very big, and of explaining how they manipulated and measured emotion (through linguistic analysis - basically counting positive and negative words).
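(For the curious, here is a minimal sketch of what that kind of word counting might look like. This is an illustrative toy in Python with made-up word lists, not the study's actual method; the paper used the LIWC word-count software, whose dictionaries are far more extensive.)

```python
# Toy word-count sentiment scoring, loosely in the spirit of LIWC-style analysis.
# The word lists below are illustrative assumptions, not the study's dictionaries.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "terrible", "hate", "awful", "angry"}

def score_status_update(text: str) -> dict:
    """Return the share of positive and negative words in a status update."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    total = len(words) or 1  # avoid division by zero for empty posts
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return {"pct_positive": pos / total, "pct_negative": neg / total}

print(score_status_update("So happy and excited about this wonderful weekend!"))
# {'pct_positive': 0.375, 'pct_negative': 0.0}
```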
Why did this controversy arise? I think there are three contributing factors.
The first is that the abstract, which gives a high-level summary of the methods and findings, can paint a very distorted picture for someone who is not familiar with how academic studies are written and presented. It is very common for the abstract to describe the findings at a very generic level, because the details are all in the paper. People who don’t read the paper, however, may be quite alarmed by this short description and begin to speculate about things that are just untrue. I have had this experience myself, having been criticized for using tax dollars to conduct a “ridiculous” study about Farmville (it was a study about games as a low-barrier means of relationship maintenance).
The second factor is the ethics of the experiment. I am not actually shocked by the manipulation; websites do A/B testing all the time, but I was somewhat surprised that it was emotional content that was being manipulated. Thankfully the effect size of the manipulation wasn’t very large, but if the authors were genuinely hypothesizing that emotion contagion would occur, was it the most ethical thing to expose people to less positive content? Commonly in lab experiments, one would debrief the participants if they were exposed to any kind of deception, so it would have been nice if the researchers had sent a message notifying people who were in the experimental condition. However, given that Facebook is fiddling with the Newsfeed algorithm all the time, the potential emotional damage caused to users was probably minuscule. Of course, I was somewhat surprised that the institutions’ IRBs were okay with this, but my experience with IRBs is that they tend to be really picky and very careful about upholding the rights and well-being of human subjects, so at some point you have to acknowledge that the decision to permit this study was made by a group of very cautious individuals. Still. Hm. One thing to note, however, is that the word “manipulation,” as psychologists use it in the context of manipulating experimental conditions, does not carry the negative connotation that it has in everyday usage.
The third factor contributing to this controversy is more of a framing issue: the labeling of this phenomenon as “emotion contagion.” Contagion is a word that does not have a very positive nuance. As others have pointed out, the researchers did not actually measure anyone’s mood; they measured the positive/negative sentiment reflected in people’s status updates. And while there is much evidence that linguistic analysis of text reflects people’s actual moods, there could be an alternative explanation: social norms. In other words, if people saw more positive posts, they may have felt more inclined to conform to the social norm of posting positive things, and vice versa. (Whether conformity and contagion are the same is a larger scholarly debate that I dare not open.) For example, when the majority of people in my Newsfeed are in a festive mood, I may refrain from posting something pessimistic, or join in on the positive posts even though I feel terrible, because I don’t want to be perceived as a Debbie Downer (a phenomenon also known as the “spiral of silence”). Perhaps a better test of emotion contagion would be to look at the actual pattern of emotions that people post with emoticons in the “Feeling~” section one can add to a status update. Even if there is a social desirability bias, it may be less likely that people post emoticons that are incongruent with their actual emotions.
Anyway, the study and its results are perfectly legit, but whether the results should be interpreted as emotion contagion or as conformity to social norms is something the study itself cannot determine. The problem is that, from a scholarly perspective, I feel it is okay to label this as emotion contagion because the researchers were testing a very specific theory of that name. However, I feel the more accurate title of this paper for the public would have been “Patterns of Emotional Expression,” because at the end of the day, that is what the study actually measured (but probably a less sexy title). That brings us to the question of who the audience of academic papers is. Somewhere between the academic paper and the popular press is a journalist or PR person who should be making the appropriate rewrite for public consumption. But now that more papers are open access and academics are directly promoting and discussing papers through blogs and social media, I wonder if academics need to rethink who their audience is and be especially careful about making statements that could easily be misinterpreted.
On a final note, I am not surprised, but somewhat sad, that people are blaming Facebook for this study. Given that companies are constantly running experiments on users (without really telling them what they are doing, and then even selling that information), I personally think it is great that such research findings were made available to the public. Doing so, even if it raises controversy, is important in educating the public about what social science is, how it is implemented in our everyday lives, and how to correctly interpret data. For people who are concerned about Facebook having too much power over our emotions, I would like to point out that emotion contagion (or transfer, or whatever you would like to call it) is a fairly well-known phenomenon that happens everywhere, not just on Facebook; if anything, this Facebook experiment was a good way to quantitatively document that it indeed happens. Moreover, if the emotional content you see via Facebook is having a serious impact on your psychological well-being, perhaps it is time to tone down your Facebook use a bit because, shockingly, there is also life outside of Facebook.