California Management Review
by Kimberly Rotter
Over the past two weeks, three words have been on almost everyone’s screens: “Facebook,” “Privacy,” and “Scandal.” Whether you are one of the 87 million users whose data was captured by Cambridge Analytica or one of the more than one billion people who have used Facebook this past year, your information has been recorded in a database and is being used to target you. Under the spotlight of this scandal, Facebook is being pressured to balance its need for ad revenue against the importance of its users’ privacy.
When the Facebook data scandal began, several Facebook friends alerted me that I could see how Facebook had characterized and targeted me. In a small section of my settings, a list of tags showed how Facebook picked which ads and news stories to show me, and how it had placed me in an advertising bubble. Every tag rang true: I am an expat, away from my family, and, by US political standards, very liberal. None of this information would be hard to find, and I am sure it helped Facebook give me a more relevant user experience. Still, it is hard to trust that the billions of data points Facebook keeps on all its users are kept safe.
Targeted ads are a form of advertising tailored to demographics, consumers’ previous buying history, or behavior. Facebook says its targeting algorithm helps keep the social media site free, and it brought in $39.94 billion of ad revenue last year. Clearly, targeted ads are a successful revenue model, but with the recent news that the information and tracked data of millions of Facebook users were secretly harvested by a political consulting firm, Cambridge Analytica, it may not be a sustainable one. The scandal has opened up conversations about our privacy and our data, and has led to broader questioning of the premise of targeted ads.
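To make the mechanism concrete, here is a minimal sketch of how a targeting system of this kind might rank ads against a user’s profile. The tags, names, and scoring rule are hypothetical illustrations for this post, not Facebook’s actual (proprietary) system.

```python
from dataclasses import dataclass, field

@dataclass
class Ad:
    name: str
    target_tags: set  # demographics, interests, and behaviors the advertiser picked

@dataclass
class UserProfile:
    tags: set = field(default_factory=set)  # tags inferred from the user's activity

def relevance(ad: Ad, user: UserProfile) -> int:
    """Score an ad by how many of its target tags match the user's profile."""
    return len(ad.target_tags & user.tags)

def pick_ads(ads: list, user: UserProfile, k: int = 2) -> list:
    """Return the k ads whose targeting best matches this user."""
    return sorted(ads, key=lambda ad: relevance(ad, user), reverse=True)[:k]

# An expat's profile surfaces expat-oriented ads first.
user = UserProfile(tags={"expat", "liberal", "travel"})
ads = [
    Ad("Local gym membership", {"fitness"}),
    Ad("International calling plan", {"expat", "travel"}),
]
print([ad.name for ad in pick_ads(ads, user, k=1)])  # ['International calling plan']
```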
*“By giving people the power to share, we’re making the world more transparent.”*
*- Mark Zuckerberg*
Targeted ads have been standard practice online for years, tailored to give users a personalized advertising experience that keeps them on the site. Whether this now-ubiquitous process has been transparent is up for debate. All Facebook users agree to the Terms of Service, but most barely read them. Google states that advertisers pay based on how their ads perform, so why wouldn’t online advertisers target the audience most likely to click? Our logged data lets these programs design the most relevant experience for us, but as the scandal has made clear, users want privacy too. No one wants to feel watched, with their information sold to third parties they have never heard of.
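A back-of-the-envelope calculation shows why performance-based pricing pushes advertisers toward targeting. The figures below are made up for illustration; only the arithmetic matters.

```python
# Illustrative numbers only: if advertisers pay per click, a higher
# click-through rate means more revenue from the same impressions.
impressions = 1_000_000   # ad views
cpc = 0.50                # assumed price an advertiser pays per click, in dollars

untargeted_ctr = 0.001    # 0.1% of a random audience clicks
targeted_ctr = 0.004      # 0.4% of a well-matched audience clicks

for label, ctr in [("untargeted", untargeted_ctr), ("targeted", targeted_ctr)]:
    clicks = impressions * ctr
    print(f"{label}: {clicks:,.0f} clicks -> ${clicks * cpc:,.0f} in revenue")
# untargeted: 1,000 clicks -> $500 in revenue
# targeted: 4,000 clicks -> $2,000 in revenue
```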
Beyond privacy concerns, targeted ads have also been linked to discriminatory practices. Facebook is currently facing a lawsuit for violating the Fair Housing Act by allowing landlords to tailor their advertisements to certain demographics while excluding others. Using Facebook’s vast preset categories, advertisers can keep women, families with children, disabled veterans, and others from ever seeing their listings. While targeted ads make some users’ online experience more relevant, the same technology is used to discriminate against others, and most users have no idea how “liking” a page on English as a second language could keep them from finding housing.
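The exclusion mechanism the lawsuit describes can be sketched in a few lines: the same profile tags that make ads relevant can silently remove people from an audience. The tags and names here are hypothetical, not Facebook’s actual code.

```python
# Hypothetical audience filtering: names and tags are made up for illustration.
def eligible_audience(users, excluded_tags):
    """Drop any user whose profile tags overlap the advertiser's exclusion list."""
    return [u for u in users if not (u["tags"] & excluded_tags)]

users = [
    {"name": "A", "tags": {"english_second_language", "renter"}},
    {"name": "B", "tags": {"renter", "fitness"}},
]

# A housing advertiser excluding this tag never reaches user A,
# and user A never learns the listing existed.
audience = eligible_audience(users, excluded_tags={"english_second_language"})
print([u["name"] for u in audience])  # ['B']
```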
Facebook did implement a policy in 2015 forbidding app developers from selling users’ data to third parties, but that move has not been enough. Facebook now needs to find the balance between serving users relevant ads and not infringing on their privacy. While it remains in the spotlight of this scandal, it could make good on its commitment to privacy by changing its policies to let users opt out of targeted advertising, or by being more transparent about what their data is used for.
Sites like Facebook provide a free service that becomes ingrained in users’ lives and holds a great deal of their private information. Yes, targeted ads work, are profitable, and can keep free a site that lets people from all over the world connect. But if Facebook wants to be true to its mission to “focus every day on how to build real value for the world in everything they do,” major changes need to happen. Facebook must balance ad revenue with users’ privacy, ensuring that targeted ads are not used to exclude minorities and that a user’s personal information stays private unless they explicitly and clearly allow otherwise. UC Berkeley Haas Professor Zsolt Katona explores a similar necessary rebalancing between consumer concerns and online ad revenue in his newest case, “Eyeo’s Adblock Plus: Consumer Movement or Advertising Toll Booth?”
Between the thousands of data points on 87 million people that Cambridge Analytica obtained from Facebook and the continuing talk of government regulation, the social media world is under pressure to change. It needs to strike a better balance between collecting billions of valuable, vulnerable data points and keeping our information safe. Because our information is worth more than this. And we are worth more than our information.