4 January 2019

Predatory Facebook is an illusory friend that uses you

Focus: Mark Zuckerberg spent two days telling Congress how users' personal information is collected and used. (Jim Watson/AFP)

SOCIAL MEDIA

Once upon a time, there was a book called Thefacebook (it later became just Facebook). It was unlike any other book. It existed on the internet and allowed friends from all over the world to connect and share information. That is where the fairy tale ends, although the story that Facebook likes to tell is that it’s a neutral platform with good intentions.

Lately, there has been no shortage of media reporting the opposite. The Cambridge Analytica scandal may have been the most high-profile misuse of Facebook’s systems, but it’s far from the only one. The history of Facebook and its founder, Mark Zuckerberg, is that of a company that has deliberately deceived the public since its inception. If you prefer a more generous interpretation and accept Zuckerberg’s 14-year apology tour as sincere, the product of mere unintended consequences, think again.

Since the Cambridge Analytica scandal, Facebook has moved more than 1.5-billion users’ accounts in Africa, Asia, Australia and Latin America out of the reach of the European Union’s General Data Protection Regulation (GDPR), despite a promise by Zuckerberg to apply the “spirit of the legislation globally”.

A timely report by the Norwegian Consumer Council, titled Deceived by Design, illustrates that, even if the law protects you, Facebook makes sure that opting out of its default data settings is extremely difficult. The user interface is designed to authorise data collection with one click, but opting out of it is a cumbersome multistep process.

Deceptive or “dark” design patterns are nothing new, but they become important when Zuckerberg says the platform will promote “time well spent” (a phrase borrowed from Tristan Harris, co-founder of the Centre for Humane Technology, which was established to address, among other things, these very design nudges).

Even if Facebook wanted to change, it could not. The power of the company’s platform comes from the ability to extract, analyse and aggregate vast amounts of data, in turn delivering personalised advertising on an unprecedented scale. The more data it collects, the more value it creates for advertisers. This means it has no innate incentive to address its porous privacy policies sincerely.

Facebook has developed a business model almost entirely supported by advertising. In the first quarter of 2018, nearly 99% of Facebook’s revenue came from advertising. Six million advertisers use Facebook’s extensive data infrastructure, which reaches more than 2.23-billion (and growing) monthly active users; its nearly 1.5-billion daily users amount to almost 40% of the global internet population.

Reaching and maintaining a critical mass of users is no easy task. Facebook has addressed this over the years by adding new tools and features to its platform, attracting a wider audience while extending the time users spend on the site. Promising new platforms that threaten these lofty ambitions are either copied or acquired outright. Beyond the power of cross-platform data amalgamation, its notable acquisitions WhatsApp and Instagram, along with its own Facebook Messenger, are influential data merchants in their own right.

According to sociology professor Zeynep Tufekci, Facebook can infer a user’s race, ethnicity, religious and political views, personality traits, intelligence, happiness, addictive behaviour, age and gender by merely analysing their likes.
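
To see how such inference works in principle, here is a minimal sketch, loosely in the spirit of the likes-based prediction research Tufekci draws on. Everything in it, the likes, the trait labels and the new user, is invented for illustration; the point is only that a trait disclosed by some users can be statistically predicted for users who never disclosed it.

```python
# Minimal sketch: predicting an undisclosed trait from public likes.
# All data below is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a user; each column an (invented) page; 1 = liked it.
likes = np.array([
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
])
# The trait to infer (say, a political leaning), known for these
# training users only because they disclosed it somewhere.
trait = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(likes, trait)

# A new user who never disclosed the trait, only their likes:
new_user = np.array([[1, 0, 1, 0, 0]])
print(model.predict_proba(new_user)[0][1])  # inferred probability of the trait
```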

Creepy friend suggestions and uncanny advertising should not come as a surprise, considering Facebook’s history of recording users’ every like across the web, location ping, friend request, friend rejection, break-up, make-up, high and low. Facebook knows who you are, where you are and what you like by tracking your physical and online behaviour.

Users have filled in the remaining blanks themselves, willingly uploading their personal information and, unwittingly one hopes, supplying Facebook with a digital signature of themselves and shadow profiles of others.

Until recently, Facebook indiscriminately collected, and allowed third-party apps to collect, not only users’ personal information but also their contacts — phone numbers, email and physical addresses, pictures, and any other associated contact information.

Every time a newly downloaded game or app requests access to your address book, the data in that address book can be shared, even when the people in it have asked that their information remain private. The reality is, “even if you’ve never signed up for Facebook, the company still has a general sense of who you are”, says Russell Brandom, summarising remarks made by United States Congressman Ben Luján.

Even before there was Facebook, its early incarnation, Facemash, programmed by Zuckerberg himself, allowed users to upload and judge the relative attractiveness of students attending Harvard University. It quickly gained momentum, attracting 22 000 photo views in just four hours. The network was shut down after a few days because of copyright and privacy violations, but its legacy of profiling and judging people remains.

Transparency of online data, for better or worse, is extensively used as a real-world profiling tool by individuals, groups and countries alike: from seemingly insignificant personal judgments, to nefarious social profiling that could affect your chances of getting a job, to outright human rights abuses by oppressive states.

No likes: The group ‘Raging Grannies’ called for better consumer protection and online privacy in the wake of Cambridge Analytica’s access to users’ data. (Justin Sullivan/Getty Images/AFP)

China, no stranger to monitoring its citizens, has been leading the charge in developing a “social credit” system to analyse its citizens’ trustworthiness by merging their online activity with their offline behaviour. Its 170-million surveillance cameras run advanced facial recognition software that tracks and cross-references citizens with its national databases. By 2020, the country hopes to have 570-million cameras: roughly one camera for every two-and-a-half citizens.

Facebook, not surprisingly, but also not exclusively, already works with third-party data brokers to merge users’ online activity and profiles with offline behaviour. Before the invention of the personal computer and, subsequently, the internet, personally identifiable data had quite limited utility, given the limitations of direct marketing.

But the digital age allowed data brokers direct access to individuals, and Facebook became the perfect vehicle with which advertisers could target individuals. Combine that with the information you have already given Facebook through your profile and your clicks, and you end up with what is arguably the most extensive consumer profile on Earth.

Eduardo Porter, a columnist for The New York Times, cautions users against the illusion of control. “Even if we were to know precisely what information companies like Facebook have about us and how it will be used, which we don’t, it would be hard for us to assess the potential harms. Could we face higher prices online because Amazon has a precise grasp of our price sensitivities? Might our online identity discourage banks from giving us a loan? What else could happen? How does the risk stack up against the value of a targeted ad, or a friend’s birthday reminder?”

Concurrently, Facebook has steadily expanded its use of facial recognition over the years.

Facebook’s advanced facial recognition algorithms, ironically trained by its users supplying and annotating faces, have identified and labelled more than four million faces. The technology indiscriminately scans and identifies people by name without their knowledge or consent.
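
Mechanically, such systems typically reduce each face to a numeric “embedding” produced by a neural network and suggest the closest already-tagged identity. The sketch below illustrates only that general principle; the vectors, names and threshold are invented, and this is not Facebook’s actual pipeline.

```python
# Sketch of name suggestion by nearest-neighbour matching on face
# embeddings. All vectors, names and thresholds are invented.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Embeddings of faces that users have already tagged with names.
tagged = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.1, 0.8, 0.4]),
}

# A face detected in a newly uploaded photo, tagged by no one.
unknown = np.array([0.85, 0.15, 0.35])

# Suggest the closest tagged identity if it is similar enough.
name, score = max(((n, cosine(unknown, e)) for n, e in tagged.items()),
                  key=lambda t: t[1])
if score > 0.95:  # arbitrary cut-off for the sketch
    print(f"Suggested tag: {name} (similarity {score:.3f})")
```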

Facebook promotes this violation of privacy as a feature that protects users’ identities. Facebook told its users that “face recognition technology allows us to help protect you from a stranger using your photo to impersonate you”.

Jennifer Lynch, a senior staff attorney at the Electronic Frontier Foundation, points out that this does not change the fact that Facebook is scanning every photograph.

If anything, the public needs to be protected from Facebook’s mass surveillance ambitions.

The New York Times reported: “The social network has applied for various patents, many of them still under consideration, which show how it could use the technology to track its online users in the real world.” Recent patents filed by the company describe using its facial recognition software to verify shoppers’ trustworthiness by analysing their friends.

Another alarming patent includes a facial tracking system that correlates what you are currently looking at with your emotions. Rochelle Nadhiri, a Facebook spokesperson, said Facebook has often sought patents for technology it has never put into effect and that patent filings were not an indication of the company’s plans.

If Facebook’s history is any measure, the Electronic Privacy Information Centre, a nonprofit research institution, suggests a more realistic explanation: “Facebook’s patent applications attest to the company’s primary commercial purposes in expanding its biometric data collection and the pervasive uses of facial recognition technology that it envisions for the near future.”

Facebook’s research and development into augmented reality, and its acquisition of the pioneering virtual reality company Oculus VR in 2014, support the latter view. The company’s trajectory indicates a push to collapse the distinction between users’ virtual and physical selves by mediating their perception of, and interaction with, the physical world.

Augmented reality allows users to overlay virtual objects on to their everyday environment, blending the real world with a virtual counterpart. The technology is rightfully heralded for its potential consumer benefits, facilitating new ways to share, communicate and experience the world. The flipside is also clear: for every consumer benefit augmented reality provides, it will undoubtedly create unparalleled violations of users’ privacy. Facebook will be able to track users’ real-time facial expressions and deliver targeted advertising based on their emotional states.

In addition to monitoring macro-expressions such as big smiles and frowns, recent artificial intelligence developments can analyse subtle micro-expressions, which are nearly impossible to suppress and last only a fraction of a second. Because micro-expressions can reveal emotions that people may be trying to hide, recognising them can be advantageous for intelligence agencies, providing clues to predict dangerous situations. Or it could be used by Facebook and nefarious governments to manipulate millions of people.

Regardless of the application, the result would be a total loss of autonomy. People’s philosophical views on the degrees of individual freedom or agency might not align, but surely none of us wishes to be manipulated.

Facebook’s rise to the top was by no means accidental. Its unprecedented size, demographic reach and the ease with which information is transmitted allow the platform to run social experiments on its users. Over the years, the social network has refined its choice architecture to hijack users’ psychological vulnerabilities with deceptive design and psychological nudges.

“Dark” design patterns are crafted to steer users away from data protection and towards spending more time online. The colour, size and wording of interfaces all contribute to giving users the illusion of control. Accepting data collection is a single click facilitated by bright colours, with big buttons and simple text. Managing your data, in contrast, is a multistep process designed to overwhelm users with granular choices and wording that suggests a loss of account functionality or deletion if users tamper with the default settings. The phenomenon even has a name: the “control paradox”.
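
A toy calculation shows why flow length alone is a powerful nudge. Assume, purely for illustration, that 90% of users complete any single step of a settings flow; the per-step rate is invented, but the multiplicative drop-off is the point.

```python
# Toy model: each extra step in a flow sheds users multiplicatively.
completion = 0.9   # assumed chance a user finishes any one step
accept_steps = 1   # one bright button to accept data collection
optout_steps = 6   # a multistep privacy-settings flow

print(f"accept flow finished by {completion ** accept_steps:.0%} of users")
print(f"opt-out flow finished by {completion ** optout_steps:.0%} of users")
# With these assumptions: 90% accept, but only ~53% ever finish opting out.
```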

Here’s Facebook’s pitch for its intrusive face-recognition feature: it “lets us know when you’re in other photos or videos so that we can create a better experience”. By framing the use of face recognition in a solely positive manner, deliberately leaving out any possible negative consequences, Facebook nudged users toward enabling the option without fully informing them.

Dark patterns are described in the Deceived by Design report as “ethically problematic, because they mislead users into making choices that are not in their interest and deprive them of their agency”.

The control paradox is by no means the only psychological quirk for the social network to exploit. The fields of behavioural economics and psychology describe how users’ decision-making and behaviour can be influenced by appealing to their psychological biases. Studies have found that individuals overestimate their ability to make unadulterated decisions.

In reality, individuals are in constant flux between states of rationality and cognitive fallibility. Yet most of us believe we are more rational than the average individual, fittingly illustrating the Dunning-Kruger effect, whereby people overestimate their own abilities.

For example, individuals form temporary preferences for small rewards that arrive sooner over more substantial long-term gains, and prefer choices and information that confirm their pre-existing beliefs. Facebook exploits these and other human tendencies and triggers such as social approval, the need to belong, the fear of missing out, intermittent variable rewards, reciprocal expectations and other biological vulnerabilities to keep users hooked on the platform.
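
The first of those tendencies, preferring small immediate rewards, is often modelled with hyperbolic discounting, where a reward’s subjective value is its amount divided by (1 + k × delay). A quick sketch, with an invented discount rate k, shows the characteristic preference reversal:

```python
# Hyperbolic discounting: value = amount / (1 + k * delay).
# The discount rate k is invented for illustration.
def value(amount, delay_days, k=0.05):
    return amount / (1 + k * delay_days)

# Small-sooner versus larger-later, judged today:
print(value(50, 0), value(100, 30))     # 50.0 vs 40.0: grab the small reward now
# The same pair pushed a year into the future:
print(value(50, 365), value(100, 395))  # ~2.6 vs ~4.8: now the larger reward wins
```

Up close the small reward wins; from a distance the larger one does, and it is exactly this instability that well-timed prompts and notifications can exploit.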

Sandy Parakilas, a former Facebook operations manager, says the company is generating economic value by using data about you “to predict how you’re going to act and manipulate you”.

Jennifer King, the director of consumer privacy at the Centre for Internet and Society at Stanford Law School, expressed a similar view. “As long as Facebook keeps collecting personal information, we should be wary that it could be used for purposes more insidious than targeted advertising, including swaying elections or manipulating users’ emotions,” she told The New York Times.

If the neo-Luddite tone of this article appears simplistic, you’re right.

Facebook’s algorithms are optimised to exploit what traditional media has done for centuries. Its ad-supported business model competes for our finite attention by optimising for negative emotions such as outrage and hate in a zero-sum race to the bottom. As the saying goes: “If it bleeds, it leads.”

Even if users are interested in a broad range of news across different political perspectives, Facebook’s algorithms will favour articles that confirm their political prejudices. The fact of the matter is that negative emotions are more accessible and therefore more cost-effective.
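
A minimal sketch of the incentive at work: if a feed is sorted purely by predicted engagement, and outrage reliably generates the most engagement, outrage floats to the top without anyone choosing it as a goal. The posts and scores below are invented.

```python
# Toy feed ranked purely by predicted engagement (invented numbers).
posts = [
    ("Measured policy analysis", 0.02),
    ("Post confirming your side is right", 0.08),
    ("Outrage-bait about the other side", 0.15),
]

for headline, engagement in sorted(posts, key=lambda p: p[1], reverse=True):
    print(f"{engagement:.2f}  {headline}")
# Outrage ranks first, not because anyone chose "outrage" as a goal,
# but because it maximises the metric being optimised.
```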

How then is Facebook any different from traditional media or other technology companies?

Besides the business model that underpins the company’s every decision, it is also the most powerful communications and media company in the world by every available measure. Robyn Caplan, a research analyst at Data & Society, points out that Facebook has no rival in size, popularity and functionality. When Facebook introduced its News Feed in 2006, it blindsided its users, shifting from merely connecting friends to controlling what friends see.

Fast-forward 10 years and the shift is more evident than ever. News Feed’s filtered stream of social content has captured the market and emerged as the most significant distributor of news in the world.

Arguably, none of this is unusual. Traditional mainstream media also has considerable influence; however, the differences are significant. In addition to being constrained by rigorous industry rules and norms, competition in the mainstream press allows content to be comparatively assessed across different news outlets for possible bias. Potential prejudice is mitigated by regulations that limit the power, reach and ownership of any single outlet, thus safeguarding the diversity of content.

The personalisation of Facebook’s News Feed makes these comparative studies nearly impossible. Even if you could establish an information pattern for Facebook’s users, what would you compare it to?

In his puzzling testimony before the United States House of Representatives and Senate, Zuckerberg attempted to dismiss the monopoly label. He said: “Consumers have lots of choices over how they spend their time.”

By this logic, reporter Paul Blumenthal notes that “Facebook can never be a monopoly in Zuckerberg’s eyes because its competition is every other form of human activity” and by this measure its “biggest competitors are work and sleep”.

Technology companies have managed to convince people that algorithms produce some kind of data-mined objective truth, unadulterated by human fallibility. This is not the case; humans are involved in every step of the process. From the training data humans provide, to the design of the models, to the analysis and tweaking of the results, all these boundary conditions are set by humans, whose conscious and unconscious biases may be expressed in the results.
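
A short sketch of where humans enter a supposedly neutral pipeline. Every commented line below is a judgment call made by a person; the data and numbers are invented for illustration.

```python
# Sketch: human choices at every step of a "neutral" pipeline.
from sklearn.linear_model import LogisticRegression

# 1. Humans choose which examples to collect and how to label them.
samples = [[5, 1], [7, 0], [2, 1], [1, 0]]  # human-selected features
labels = [1, 1, 0, 0]                       # human-assigned labels

# 2. Humans choose the model family and its knobs.
model = LogisticRegression(C=1.0)           # human-chosen regularisation
model.fit(samples, labels)

# 3. Humans choose the threshold that turns a score into an action.
THRESHOLD = 0.5                             # human-chosen cut-off
score = model.predict_proba([[4, 1]])[0][1]
print("flagged" if score > THRESHOLD else "not flagged")
```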

Possible malfeasance or corruption aside, it is difficult for the “many good people working there” at Facebook to create a platform that even resembles neutrality. Algorithms are trained on our past behaviours to predict our future. By definition, the past is not the future: societies and individuals are constantly changing, so training data needs to represent the current population and account for gradual social drift. We have to acknowledge that even well-designed algorithms can have profound implications for society. An algorithm designed to connect like-minded people will, at the same time, isolate them.
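
And a sketch of the drift problem: a model trained on yesterday’s population keeps confidently describing yesterday’s world. The synthetic data below is invented; only the shape of the failure matters.

```python
# Sketch: a model trained on the past degrades as the population drifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Yesterday's population: behaviour x predicted the outcome via x > 0.
x_old = rng.normal(0, 1, (500, 1))
y_old = (x_old[:, 0] > 0).astype(int)
model = LogisticRegression().fit(x_old, y_old)

# Today's population has drifted: the true boundary moved to x > 1.
x_new = rng.normal(0, 1, (500, 1))
y_new = (x_new[:, 0] > 1).astype(int)

print("accuracy on the past:   ", round(model.score(x_old, y_old), 2))
print("accuracy on the present:", round(model.score(x_new, y_new), 2))
# The model still answers for yesterday's world; retraining on data that
# represents the current population is what keeps it honest.
```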

The reality is that most people are far less likely to engage with viewpoints that challenge their preconceived views, even in the absence of social media. If the polarisation of communities is a product of our biology, then perhaps social media companies’ neutral-platform defence, that they merely track user preferences and connect like-minded people, would be credible.

But algorithms are not passively monitoring users’ preferences; they actively steer behaviour and thoughts. Measured online conversations are relegated to the fringes and drowned out by radicalised views fomented by unsubstantiated rumours, mistrust and paranoia. Echo chambers are not the result of free association on the false premise of platform neutrality; they are the result of optimising outrage for profit. Facebook knows that outraged users are engaged users. Digital misinformation has become so pervasive online that the World Economic Forum has classified it as one of the biggest threats to our society.

Unfortunately, because of the mismatch between the speed of technological development and the gradual grind of accountability, whether moral, ethical or legal, technology companies can exploit the technological landscape unchecked. Regulation cannot keep up with the speed of invention and, when it does catch up, companies find ways to circumvent new laws.

A case in point: it took years for the government and the public to begin to understand that Facebook was mining vast amounts of data about its users. Facebook’s motto, “move fast and break things”, attests to an attitude of arrogant carelessness. It is quite happy to ask for forgiveness rather than permission.

Facebook has indicated some willingness to change by adjusting the News Feed algorithm to address these issues and prioritise posts from friends and family over viral videos, news and other content. Zuckerberg announced a significant overhaul of Facebook’s News Feed algorithm that would prioritise “meaningful social interactions” over “relevant content” after pledging to spend 2018 “making sure that time spent on Facebook is time well spent”.

Perhaps these changes should carry more weight, or perhaps they deserve only the blip of attention Zuckerberg has given them. I’m not buying what he’s selling. Sam Lester, a consumer privacy fellow at the Electronic Privacy Information Centre, points out that we are “looking to the company that caused these problems to fix them”.

Facebook cannot close the Pandora’s box it opened a decade ago, allowing external apps to collect user data indiscriminately. The public may never know the extent to which these companies have copied and shared their personal information with potentially nefarious and destructive forces. We should not conflate our understanding of the natural world with its digital counterpart. Deleting information online is not equivalent to burning a note.

Our public understanding of humanity’s potential for change has produced laws that honour redemption in the real world by clearing a person’s record after a fixed period of time. The right to be forgotten, however, does not extend to decentralised digital information that can easily be shared and stored on millions of devices. Our choices today will affect the rest of our lives and those of the next generation.

We are like naive children hiding behind our hands, and the grown-ups are perfectly content to play along.

Social media itself isn’t going away. It has become an integral part of our lives, satisfying a basic human need to connect and share information. Yes, we have come to depend on social networks, but should we accept our virtual makeshift community? Are we destined to wander among the virtual identities of friends and pseudo-friends who project idealised versions of themselves, making us feel inadequate and mediocre? The already muddied water between fiction and reality will become even more ambiguous when artificial intelligence-enhanced video and audio forgeries become commonplace.

Huge concern: Facebook, not surprisingly, but also not exclusively, already works with third-party data brokers to merge users’ online activity and profiles with offline behaviour. (Frank Hoermann/AFP)

The human mind is incredibly susceptible to forming false memories. This tendency will only be exacerbated by artificial intelligence-enhanced forgeries on the internet, where false ideas spread like viruses among like-minded people.

A big part of the danger of this technology is that, unlike older photo and video editing techniques, it will be more widely accessible to people without great technical skill. “I’m more worried about what this does to authentic content,” said Hany Farid, a professor of computer science at Dartmouth College. “Think about Donald Trump. If that audio recording of him saying he grabbed a woman was released today, he would have plausible deniability.”

You should delete Facebook for the reasons already mentioned, but you probably won’t because of them. We have already run this experiment. No sooner had #DeleteFacebook gone viral earlier this year than droves of users signed back up. Users have come to rely on the platform to socialise, organise, procrastinate and hide behind virtual identities.

Conceivably, the most common reason individuals joined Facebook in the first place was to connect with friends. Without any social engineering on its part, initially at least, Facebook was able to convince users to share personal information by connecting friends who trust each other.

Sharing information online is not novel, but creating an environment in which to share personally identifiable information is. Perhaps users do not trust Facebook inherently, but they inherently trust their friends, and by proximity conflate the two. If the word “trust” raises more questions than answers, substitute “familiarity”. You joined Facebook because it feels familiar, and you stay or come back for the same reason.

Humans are intrinsically social animals and seek out intimacy, whether we want to or not. “A considerable part of Facebook’s appeal stems from its miraculous fusion of distance with intimacy, or the illusion of distance with the illusion of intimacy,” Stephen Marche wrote in The Atlantic.

Antonio García Martínez, an author and tech engineer who formerly worked at Facebook, elaborates on the illusion, describing Facebook as a cheap digital knock-off: Facebook is to real community what porn is to real sex. “Unfortunately, in both instances use of the simulacrum fries your brain in ways that prevent you from ever experiencing the real version again.”

Individuals are always going to be at a disadvantage given the information asymmetry that exists between Facebook and its users. Tim Wu, the author of The Attention Merchants, outlines a potential path forward. “What we most need now is a new generation of social media platforms that are fundamentally different in their incentives and dedication to protecting user data.”

The French cultural theorist Paul Virilio, best known for his writings about technology, appropriately stated that “the invention of the ship was also the invention of the shipwreck”, describing the inevitable cost that is associated with progress.

His eloquent account of the casualties of progress permeates almost every aspect of Facebook. Facebook, and social networks like it, will indeed provide a makeshift community for those whose worlds are being destroyed around them, and at the same time provide a megaphone for the destroyers.

The suggestion is that we are dealing with an immovable force. That is surely true, considering Facebook’s social media monopoly and its power to influence billions of people daily. Believing that we are somehow immune to the platform’s psychological nudges is naive, and the sooner we accept its absolute power, the sooner we can choose to move on.

Facebook may be one of the first social media companies to emerge alongside the internet; it need not be the last. Facebook is the sum of its users: you are Facebook, and you can also choose not to be. The critical mass of users that ensured its rapid network effect can be just as powerful a force in the opposite direction.

Baratunde Thurston, an adviser at Data & Society, says: “Since companies value us collectively, we must restore balance with a collective response that is based on the view that we’re in this together; that our rights and responsibilities are shared.”

Let’s move fast and break Facebook.

Pieter Henning is an artist and designer who lives and works in Cape Town