In the world of individual privacy and data security, this could be the ultimate irony. Facebook, the company that has taken more fire than any other for misusing, abusing and losing user data, has become the last line of defence in the fight against government access to that same user data. Led by the U.S. and U.K., Facebook is under increasing pressure to delay plans to expand encryption across its platforms until backdoors can be added to give government agencies access to user content.
But is everything as it seems? Has Facebook really turned away from its casual approach to user data to become the poster child for user privacy? Facebook has its own dilemma around monetising data on an encrypted platform, so what motive does it have to promote security at the expense of its own access? Unsurprisingly, the answer is that Facebook’s agenda is not quite the surprise it may have seemed.
Quick recap. Despite building a business around the monetisation of user data, Facebook also owns WhatsApp, the world’s preeminent messaging platform, now used by some 1.6 billion users monthly. Back in 2016, WhatsApp completed its deployment of end-to-end encryption. For the first time, a universally popular messaging service had given up the ability to access the content it was transmitting.
That proved to be a game-changer. Suddenly a level of content security that had relied on more specialist apps or user-applied encryption was available to all. Rather than merely encrypting traffic between users and WhatsApp's servers, the platform assured users that with its end-to-end encryption "only the recipient and you have the keys needed to unlock and read your messages—every message you send has a unique lock and key." And this was available to everyone, everywhere. "All of this happens automatically—end-to-end encryption is always activated. There's no way to turn it off."
Because users hold the keys, WhatsApp has no way to access the content or break the encryption, even if it wanted to. Accessing content requires a hack applied to an endpoint—as has been seen in certain nation-state attacks, where smart devices are infected with malware that attacks the messaging apps. From a platform perspective, though, sitting in WhatsApp HQ, the content cannot be accessed.
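The reason the platform is locked out comes down to how the keys are agreed. A toy Diffie-Hellman exchange illustrates the principle: the two endpoints each derive the same shared secret, while a server relaying their public values never learns it. To be clear, this is a simplified sketch, not the Signal protocol WhatsApp actually uses, which adds X25519 keys, ratcheting and authentication on top of the same core idea.

```python
# Toy illustration of end-to-end key agreement (Diffie-Hellman).
# NOT the Signal protocol WhatsApp deploys; real systems use X25519
# with key ratcheting, but the lock-out property is the same.
import secrets

# RFC 3526 group 14: a well-known 2048-bit safe prime and generator.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
    "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
    "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
    "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
    "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
    "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
    "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFF"
    "FFFFFFFF", 16)
G = 2

# Each endpoint keeps a private key; only the public values travel
# across the network (and through the platform's servers).
alice_priv = secrets.randbelow(P - 2) + 1
bob_priv = secrets.randbelow(P - 2) + 1
alice_pub = pow(G, alice_priv, P)  # visible to the server
bob_pub = pow(G, bob_priv, P)      # visible to the server

# Both sides compute the same secret from the other's public value.
# The server, holding only alice_pub and bob_pub, cannot derive it.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret
```

That shared secret then encrypts the messages themselves, which is why no court order served on the platform can produce plaintext: the platform never held the key.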
So what’s the issue? Put simply, it’s that if WhatsApp cannot access the content, then law enforcement and government agencies cannot access it either. Not without compromising an endpoint, a smartphone. There is no level of pressure that can be applied to the platform to make it relent; no court order or warrant can help, because the platform cannot crack its own encryption without a hack.
Politicians and security officials around the world have flocked to WhatsApp, and its more specialist competitors Signal and Wickr, relying on those secure platforms to communicate with one another, safe from “lawful intercept” within national telecoms systems that compromise calls, SMS messages and data traffic.
But many of those same politicians and security officials have also lamented the fact that they are now “going dark,” arguing that criminals, terrorists and pedophiles can send messages safe from government snooping, relying on the platforms to keep their secrets away from prying eyes. That, say the officials, is a nightmare.
And that nightmare is set to get worse. Under fire for user data abuses and privacy scandals, struggling to recover user confidence in the wake of Cambridge Analytica, Facebook changed its strategy. Privacy would now come first, and the encrypted messaging that has revolutionised WhatsApp would be applied to its other services, in particular Facebook Messenger, with its 1.3 billion monthly users.
Back in June, there were reports that the U.S. government was debating legislating to mandate backdoors into such messaging platforms. In July, U.K. Home Secretary Priti Patel accused Facebook of frustrating the fight against terrorists and child abusers. Also in July, U.S. Attorney General William Barr argued that technology companies must not stand in the way of backdoors being added to their platforms.
The direction of travel had been set. And now, in an open letter, government officials from the U.S., U.K. and Australia have asked Facebook to delay further deployment of encryption without “including a means for lawful access to the content of communications to protect our citizens.” Essentially, backdoors.
The government letter to Facebook says that “companies should not deliberately design their systems to preclude any form of access to content, even for preventing or investigating the most serious crimes,” arguing that extending encryption from WhatsApp to Facebook is more dangerous because Facebook’s services, unlike WhatsApp, are used by children—making encrypted messaging to those users a higher-risk environment for child exploitation.
And on all counts the governments have a point. There is no doubt that preventing law enforcement from intercepting content shared between criminals or terrorists, or among pedophile groups or between pedophiles and their victims, is not a good place to find ourselves. But Facebook’s argument, echoed by others in the technology community, is that a backdoor is a backdoor. If you weaken the defences around the system, you cannot maintain control of how those weaknesses are exploited.
EFF described the government letter as an “all-out attack on encryption… a staggering attempt to undermine the security and privacy of communications tools used by billions of people,” and urged Facebook not to comply. The organization warned the move would endanger activists and journalists, and could be used by “authoritarian regimes… to spy on dissidents in the name of combatting terrorism or civil unrest.”
And this is the crux. The stakes are high in the U.S. and Europe, where, despite legal protections against government snooping, many are now seriously concerned. But in other parts of the world, the risks could literally be a matter of life and death. If you build a secret door into the back of your house and tell some friends, or give a bunch of neighbours spare sets of keys, you cannot claim your house is as secure as it was beforehand.
“I believe the future of communication,” Facebook’s Mark Zuckerberg wrote in March, “will shift to private, encrypted services where people can be confident what they say to each other stays secure and messages and content won’t stick around forever.”
Responding to the latest government entreaties and the open letter, a Facebook spokesperson said “we strongly oppose government attempts to build backdoors because they would undermine the privacy and security of people everywhere.”
But there’s a twist. In parallel with the encryption battle, even more pressure is being applied to Facebook to moderate user content. The post-office defence, where Facebook argues it cannot be responsible for what is posted on its network, is falling away. Australia and New Zealand, the European Union and the U.K. are legislating to mandate exactly that responsibility, with the threat of sanctions facing the industry. But guess what—you cannot moderate encrypted content.
Much of the focus on the government calls for encryption backdoors has been on lawful intercept, warranted law enforcement access to communications between named individuals. But there is a much broader issue—the automated monitoring of all content within messages to flag any issues and remove prohibited content.
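That automated monitoring typically works by matching content against databases of hashes of known prohibited material. A minimal sketch of the idea, using a plain SHA-256 digest for illustration (real systems such as NCMEC-style matching use perceptual hashes like PhotoDNA, which this does not implement, and the blocklist entry here is hypothetical):

```python
# Minimal sketch of server-side hash matching: the kind of automated
# scanning that only works while the server can see plaintext.
# Plain SHA-256 stands in for the perceptual hashes real systems use.
import hashlib

# Hypothetical database of digests of known prohibited files.
BLOCKLIST = {
    hashlib.sha256(b"known-prohibited-file").hexdigest(),
}

def server_can_flag(message_bytes: bytes) -> bool:
    """Return True if the message matches the blocklist.

    With end-to-end encryption the server only ever sees ciphertext,
    whose digest never matches the digest of the underlying file, so
    this check silently stops working."""
    return hashlib.sha256(message_bytes).hexdigest() in BLOCKLIST

print(server_can_flag(b"known-prohibited-file"))  # plaintext: True
print(server_can_flag(b"\x8a\x1f\x03ciphertext"))  # encrypted blob: False
```

Move the encryption to the endpoints and the scanning must move there too, or disappear entirely—which is exactly what worries the signatories of the open letter.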
As the recent open letter to Facebook from the U.S., U.K. and Australia says, nothing should be done that “erodes Facebook’s ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism—preventing the prosecution of offenders and safeguarding of victims.”
The letter applauds Facebook’s current efforts to monitor content—“16.8 million reports to the U.S. National Center for Missing & Exploited Children (NCMEC),” last year, “2,500 arrests by U.K. law enforcement and almost 3,000 children safeguarded in the U.K.,” as well as Facebook acting “against 26 million pieces of terrorist content between October 2017 and March 2019.” And the critical fact—“more than 99% of the content Facebook takes action against—both for child sexual exploitation and terrorism—is identified by your safety systems, rather than by reports from users.”
The governments caution that “much of this activity… will no longer be possible if Facebook implements its proposals as planned… this would significantly increase the risk of child sexual exploitation or other serious harms.”
The issue with Facebook’s content came to the fore with the terrorist attacks in Christchurch, after which waves of media coverage exposed extremist content, racism and anti-semitism. Step by step, Facebook has nibbled away at its past stances: banning nationalist groups and individuals, sanctioning Facebook Live misbehaviour, recruiting armies of moderators. But given Facebook’s scale, and despite vast investments in AI, it’s an unwinnable battle. And if that content can’t be seen, it can’t be policed.
On the core platform, the secrets of the moderation business have hit the headlines in recent months. Policing posts and sanctioning users is a dirty business. Unencrypted messages are already checked, and AI could be applied across the platform with the right security architecture in place. That would bring WhatsApp’s 1.6 billion users into the moderation remit. And that would push the onus back onto Facebook.
And so the suggestion is that there is a self-serving set of motives behind Facebook’s stance on encryption; it hasn’t simply become the world’s leading privacy advocate. And that leads to a different irony playing out here. Facebook is generating strong support from the technology community for its defence of encryption. But the drivers behind that stance are more likely a defence against forced moderation than privacy advocacy, and that same technology community is the first to slam Facebook’s avaricious business model at the expense of its users.
Something has to give.