Surprise! Facebook is making headlines again, and not for anything good. Earlier this month, the Proceedings of the National Academy of Sciences (PNAS) published a study, conducted by Facebook and two outside researchers, that essentially treats Facebook users as lab rats. The study was designed to test whether positive or negative content affects users' emotions and their subsequent Facebook updates. To run it, Facebook manipulated the News Feeds of almost 700,000 users. The results showed that when users were shown more negative content in their News Feed, their ensuing posts tended to be negative. Beyond confirming something we already know (hang around depressed people and you become depressed), what's the big uproar about? Once again, data privacy. Users feel violated because they never knowingly consented to the study and their emotions were manipulated.
I'm not going to dive into whether this study was "right or wrong," but I will touch on something I think is even more important (especially for Facebook): I believe Facebook is looking at the data conversation all wrong and acting as its own worst enemy.
The Trust Factor
I think it's safe to say that internet users have minimal trust (if any) when it comes to their online data. On the data-collection side, the usual defense is that companies are "making the world a better place" by offering contextual advertising or using the data to make smarter business decisions. If we take that stance at face value (i.e., the more data we have, the better we can serve people), then we must acknowledge that full data transparency is the ultimate endgame. In other words, to reach the level of personalization every business desires, we must have access to all of the data.
Facebook's latest actions only hinder the movement toward data transparency. The ONLY way we get to a place where we freely and happily share our data is if there is strong trust and transparency. Online users need to know that their data is safe and what it's being used for. We need to be able to track how our data is used, from beginning to end.
How do we get there?
I think the first step is realizing we have an ethical responsibility for what we do with data. This reminds me of a study from several years ago that assessed the moods of New Yorkers based on their Twitter posts. The study suggested that Hunter High School was the "saddest tweeting high school in New York," a finding that was devastating to students, faculty, and alumni. A later report proved the study wrong: the original researchers had simply picked the wrong location.
We may think that more data is better (and it may be true), but we must stop and consider what we're doing with the data and how it affects the lives of those around us.
Why Facebook is killing itself
The long-term health of Facebook hinges on the satisfaction of its user base. To stay profitable, Facebook needs users (for advertising revenue); to keep those users, Facebook must keep their trust. Any action Facebook takes that damages that trust is counterproductive. Trust is Facebook's lifeline.