Showing remarkable staying power in this world of the 24/7 news cycle, the Facebook mood study saga continues. In the latest twist, more than 30 bioethicists have actually signed on to a defense of the study, in what has otherwise largely been a deluge of criticism.
For a refresher: in June the public learned of a Cornell University study, conducted in partnership with Facebook, that changed the social media site’s algorithm for determining the content that appears in users’ timelines. The goal? To measure how changes in the amount of positive and negative content a person sees affect the mood of that user’s own posts. Of course, Facebook users were kept in the dark about being unwitting guinea pigs, and all kinds of outrage followed: The study unfairly infringed on users’ implicit rights; it unethically screwed with people’s emotions; it made people sad!
And since various academics, researchers and members of the public have lodged these complaints in a steady drumbeat, this recent slew of ethicists playing devil’s advocate presents an interesting thickening of the plot.
Stop Worrying and Love the Study
So what are their arguments?
* That the backlash to this study could have “a chilling effect on valuable research” and “will drive social trials underground.”
* That outrage over the study perpetuates the assumption that research is dangerous.
* That the study did not violate privacy; it just changed the proportion of content types shown.
* That Facebook often tweaks its algorithms to improve its business model, and the ultimate purpose of the experiment was to improve users’ experience.
* And that while the study probably did “purposefully [mess] with people’s minds,” it didn’t do so any more than run-of-the-mill marketing efforts by businesses, non-profits and the government sector (they’ve got a point), and it posed no real risk to users.
The bioethicists even argue that Facebook and academic collaborators should do more studies, to better understand how and why people act the way they do on social media. They do concede, however, that the researchers should have sought an ethical review prior to the study and debriefed Facebook users afterward.
And so it is that we again find ourselves facing the awkward push-and-pull between the desire to understand human behavior (ostensibly for the good of all, but implicitly for the good of profit margins and academic résumés) and the desire to curb inappropriate intrusion by government, business and academic entities into regular people’s lives. Which has definitely happened, a lot.
But Really, Psychology Sucks
But I can’t help but wonder if the bigger issue here is really the tendency of psychology as a discipline (and of fields that mimic it) to choose dumb topics to study, and then to study those dumb topics poorly. Do we really need more of these simplistic negativity studies? To measure yet again whether hearing the word “old” makes you walk more slowly, whether seeing flash cards about race makes you more guarded with money, or whether viewing more of your Facebook friends’ negative statuses gives you a short-lived Debbie Downer effect?
And of course there’s a second critical layer: major methodological and interpretive problems (including in the Facebook study) with how psychologists have gone about analyzing these seemingly unhelpful topics.
Of course emotions are a significant part of the human experience, and understanding how they make us tick is worthwhile (back to the understanding-vs.-intrusion argument). But color me skeptical that these fairly artificial experiments reveal much that’s meaningful. Psychology’s fetish for scientific-seeming but useless experiments is getting in the way of real analysis.
Looking Forward
It’s high time the discipline realized that the lab-coat technique that works for the hard sciences just doesn’t make much sense in the social sciences. The less clinical alternatives: qualitative research, in the form of asking questions in conversation and treating subjects like (gasp!) human beings; and quantitative research, in the form of analyzing measurable data about people’s backgrounds and the choices they make. Wouldn’t this be better than super-convoluted experiments of limited applicability to real life?
Anyone care to sign onto my psychology manifesto?
Image: ThinkStock