
Big Tech vs Big Data — why privacy is at greater risk than before

We should all be worried by tech firms’ failure to protect our data – but not for the reasons many observers believe, writes Robert Amsterdam

The seemingly bottomless scandal that Facebook has found itself in since the Cambridge Analytica story broke has prompted a moment of reckoning for the technology industry and, perhaps ever so briefly, greater consumer awareness of these services' appalling data-privacy standards. Not since Microsoft's anti-trust adventures or Google's brief moments of scrutiny before the European Commission in years past has Big Tech come under such a cloud of political oversight, with the spectre of asphyxiating government regulation on the horizon.

What is interesting is the realisation that had Hillary Clinton won the election in 2016, these recent revelations would likely be a complete non-story. Instead, in our relentless pursuit to manufacture theories rationalising events such as the rise of Trump and the surprise of Brexit, we are indulging in the fantasy that these micro-targeted political advertisements ‘manipulated’ voters to make decisions beyond their free will, bringing potentially disastrous political outcomes.

Nonsense. Study after study has found that social media does very little to sway political beliefs, while the effectiveness of Cambridge Analytica’s ‘psychographic’ profiling method was almost comically exaggerated by the company’s colourful chief executive.

While we must learn to accept that we get the politics we deserve, there is no mistaking the deeply concerning handling of private data that this scandal uncovered. As revealed by Channel 4 News, the Observer and the New York Times, Facebook allowed this company – and possibly others – to come, rather easily, into possession of the private data of more than 87 million users. When Facebook discovered this breach in 2015, it neither notified the affected users nor did anything to limit the barrage of politically pointed paid ads that capitalised on that stolen data.

For years, Big Tech has displayed a paternalistic, cavalier attitude when confronted with public concerns over data privacy. When the product is free, you are the product, we are breezily told, as though we are at all times fully aware of every way in which the data we 'freely give' to these companies is being used. Yet most of us are very rarely aware of all the ways our data is collected and used, and most tech firms do everything possible to obfuscate and complicate any attempt to better understand and control it. We are rarely informed or reminded of the nature of the data collected about us, because if we were, most would find it highly invasive and distasteful.

The major disconnect is that Big Tech doesn't feel compelled to take privacy complaints seriously, because it doesn't believe it's doing anything wrong. After all, collecting this data and targeting the most relevant advertisements to users should in theory improve the experience. But it is this failure to separate normal commercial marketing content from sometimes incendiary political ads placed by anonymous (sometimes foreign) sponsors that is deeply worrying. Campaign advertising is highly regulated in every other medium, yet Facebook and Twitter failed to notice anything strange about Russian accounts spending more than $1 million on ads promoting US protests over divisive racial issues. That, or they didn't care.

Some tech firms are anticipating the EU’s implementation of the General Data Protection Regulation (GDPR). For example, Google has stopped data mining from its email service, and both Google and Facebook have announced new privacy dashboards. Data broker Acxiom, meanwhile, is promising to update its user interface to show people what personal information it holds.

While data protection regulation is a very good idea, the misbehaviour of Big Tech in these matters has me worried that a much less sensible form of regulation could be coming: one that attempts to halt the spread of fake news on digital media. The German government has passed new hate speech laws which are vaguely crafted and can force publishers and social media networks to remove certain content. The EU, UK, and eventually the US are all exploring similar responses, but this poses a major threat. Any attempt by the state to make itself the arbiter of what information the public can and can't see represents a dangerous game for freedom of speech.

The technology giants have a very limited window of opportunity to make changes to regain trust and hold the regulators at bay. So far, their response falls far short of expectations. Let’s hope they can turn it around, because the only thing worse than tech companies irresponsibly handling massive amounts of private data and information would be for governments to try to do it.

Robert Amsterdam is the founding partner of the international law firm Amsterdam & Partners LLP

This article first appeared in the May/June edition of Spear's magazine.

 
