The Great Hack: Documentary reflects on data abuse and privacy boundaries

Courtesy: Netflix

The new Netflix documentary, The Great Hack, delves into the story behind the Cambridge Analytica scandal, which involved the misuse of 87 million Facebook users’ personal data. The revelations took down Cambridge Analytica, a British data intelligence company that worked on high-profile political campaigns all over the world, and forced Facebook’s founder and CEO Mark Zuckerberg to testify before the US Congress.

The film follows the experience of US college professor David Carroll, who enters a legal battle to recover his personal data from Cambridge Analytica after the scandal breaks. Led by Alexander Nix, Cambridge Analytica was headquartered in London, and throughout its existence worked on political campaigns in many countries using its data expertise to target voters.

The storyline encompasses one of the greatest challenges of this day and age and calls into question the power of big tech companies, as well as their lack of accountability in harmful data abuse cases. The business model of social media companies like Facebook, where access and services seem to be free at the point of use, is in fact a cash cow built on information about their users, which can be commercialised to anyone willing to pay the right price.

Cambridge Analytica claimed to have created a profile for virtually every voter in the US, each one containing 5,000 data points. As these data points were collated from information gathered via Facebook user interactions, the main question emerging is: are users aware of how their data is exploited?

While judgements on the conduct of Cambridge Analytica’s CEO Alexander Nix and the ‘propaganda machine’ deployed by his team require further reflection, the documentary plainly shows the wide array of data that technology companies have access to, and the dearth of ethical protocols around how they intend to use it. It confronts us with the uncomfortable realisation that we are constantly being watched, and that our own behaviour can be used to manipulate our decisions.

The Cambridge Analytica scandal marked a turning point in the history of big tech: before the scandal, the big tech companies had an image of being cool, built by young people bringing connectivity and free information to billions. Since the revelations, however, this sense of awe has started to dissipate, alongside a general realisation that the supposedly free services provided by social media and technology companies actually come at a price. If, on the one hand, hyper-connectivity can bring us closer and provide fast, unpaid access to information, the flip side is a world in which users feel cheated and violated.

The scandal prompted a wake-up call for users and authorities alike, raising questions over the kind of regulations needed in a time when data has become the most valuable asset on the planet. The real question arising from the scandal was about how much information people are giving away to social media and technology companies without their consent or even knowledge.

Still, despite the important message it endeavours to get across, the documentary neither offers us real solutions towards finding justice, nor suggests any alternative with which to fight the big tech companies that have invaded every aspect of our lives. There is a clear urgency in recognising data rights, establishing boundaries between public and private, and clarifying terms and conditions for users so that, when using social media, they can be fully aware of the information they would be giving away.

If nothing else, the Cambridge Analytica scandal has highlighted the need to set up an inclusive, comprehensive and global regulatory framework to regulate uses of data and establish stricter privacy rules.

“Data protection is a structural problem. We don’t have effective ways to hold companies accountable and to enforce when they commit data crimes because we don’t even have a way to define, let alone prosecute, these data crimes. We can see that the existing tools we have are not succeeding at what they’re supposed to do,” said Professor Carroll in an interview with Business Insider about the documentary and its aim of raising awareness of the blurred lines between privacy and the use of social media.

The reality is indeed worrying and the clock is ticking. The debate about data rights, privacy and accountability is long overdue.

John Marshall, CEO of World Ethical Data Forum (WEDF), the only platform to embrace the full range of interrelated issues around the use and future of data, commented: “We’re fortunate such excellent work has been done since the outcry over Facebook and Cambridge Analytica to bring the issues around data and privacy to public awareness. The civil liberties implications of the new data technologies are very serious, and rightly or wrongly Cambridge has come to symbolise a tendency we may have caught just in time. But these questions go deep. We need to explore the ethical and practical questions arising from the uses and control of information quite generally too. It’s very easy to overlook the fact that we’re still struggling with the ethical implications of even apparently very basic technology, such as the press, let alone what Zuboff calls ‘behavioural futures markets.’ How we decide these questions will define our epoch.”

By addressing these concerns and encouraging the collaboration of apparently competing worlds, from technology and big business, government and security agencies, policy makers and the media, to human rights lawyers, whistleblowers, and privacy and transparency advocates, the World Ethical Data Forum offers a unique and crucial perspective for the future.

GDPR: The ground-breaking data regulation

The European Union is a pioneer in establishing data regulations. The implementation of the General Data Protection Regulation (GDPR) in May 2018 sparked a global conversation about data ownership, consent and use, as the regulation imposes a number of obligations on individuals and entities collecting personal data from EU residents. The GDPR, even in its early days, has established itself as an exemplary set of rules and a good place to start a serious conversation about protecting user privacy.

“In the charter of human rights that founded the EU, data protection rights are listed as a fundamental right that’s equivalent to freedom of speech, freedom to marry, all these other basic human rights. That’s why Europe has a 20-year lead on creating the infrastructure for businesses to provide for these rights,” said Carroll, highlighting the lack of protections in the US, for example, where the major technology companies are headquartered.

First-year reports on the GDPR show that many companies have problems handling data in a responsible manner. In a nine-month summary of the effects of the GDPR, the European Data Protection Board said that as of March 2019 there had been 206,326 complaints, of which nearly 100,000 related to data privacy. GDPR supervisory agencies in 11 countries issued fines totalling €55,955,871 (around $63 million). The Netherlands recorded the most data breach reports per capita, followed by Ireland and Denmark. The biggest European economies — the UK, Germany and France — rank tenth, eleventh and twenty-first respectively. Greece, Italy and Romania have reported the fewest breaches per capita.

Despite the regulation being new, compliance has been a problem, which can lead to breaches and damage to users. One year after its implementation, a survey by the International Association of Privacy Professionals (IAPP) shows that more than half of all companies are still not GDPR compliant, while 20 percent said they did not believe full compliance was even possible.

If you are interested in the next WEDF event, taking place in London in July 2020, please get in touch with us at: Cassiopeia@worldethicaldata.org


Privacy vs mass surveillance: an ongoing battle

In today’s age of surveillance capitalism, personal data has become a highly valuable asset. Whilst big tech companies such as Google, Amazon, Facebook and Apple have long been scrutinised over their role in the big data economy and their handling of users’ data, concerns about government surveillance have recently surged across the globe.


Surveillance capitalism is a term commonly used to denote a market-driven process in which the commodity for sale is your personal data. It centres on companies that provide us with free online services — Google and Facebook, for example — which gather information from individuals through mass surveillance of the internet.

Through the collection of online behaviours, such as likes, dislikes, searches, social networks, and purchases, these companies produce data that can be further used for commercial and even political purposes. And this is often done without us understanding the full extent of the surveillance.

The revelations from last year’s Cambridge Analytica scandal highlighted the extent to which internet companies surveil an individual’s online activity. Cambridge Analytica broke Facebook’s own rules by collecting and selling data under the pretence of academic research, possibly violating US election law.

Despite the questionable nature of Cambridge Analytica’s actions, the bigger players and leading actors in surveillance capitalism, Facebook and Google, are still legally amassing as much information as they can, making huge profits in the process.

The scandal raised significant privacy concerns and underlined the importance of discussions on ethical data surveillance and handling. Recent research shows that the private sector is not the only actor in question when it comes to these issues.

American market research and advisory company Forrester Research named India in a recent report as a country with minimal restrictions on data privacy and protection, where government surveillance is a cause for concern. China also featured in the report as a country with a high level of government surveillance.

“Government surveillance is a worldwide phenomenon that cuts across geographies, economic development, societal well-being, and institutional design, with alarming levels of government surveillance in countries such as Austria, Colombia, India, Kuwait and the UK,” the report said.

According to the 2019 Forrester Global Map of Privacy Rights and Regulations, “Regulations that allow governments to access personal data of citizens are still undermining the overall privacy protections that certain countries offer their citizens.”

A lack of constitutional provisions enabling oversight of government activity could be one of the primary reasons for the high level of government surveillance in India, industry experts say. As a result, surveillance practices may prove pervasive and out of step with the data privacy laws in force, undermining the data security of citizens.

Similarly, a new report by Human Rights Watch (HRW) reveals the extent to which everyday behaviour in China’s Xinjiang region is monitored by the authorities, contributing to a regime of constant surveillance and mass detention. The report reveals how a mobile app used by officials helps them collect vast amounts of personal data and prompts them to flag seemingly normal behaviour as suspicious.

HRW’s Maya Wang accessed the app when it became publicly available. Wang said the app was most likely never supposed to be made public: ‘It was a careless mistake that prompted some of the people who have this app to put it online,’ she explains.

Once accessed, HRW researchers were able to reverse engineer it, revealing that, under the cover of a counter-terrorism policy called the ‘Strike Hard Campaign against Violent Terrorism’, the app was meant to fulfil three functions: collecting personal information, reporting on activities deemed suspicious, and prompting investigations of people the system flags as problematic.

It enabled officials to collect an exhaustive amount of sensitive information on individuals, including blood type, digital records of their faces, height, car colour, ‘religious atmosphere’ and political affiliation. The report further details how this information is fed into a policing programme called the Integrated Joint Operations Platform (IJOP), one of the main systems Chinese authorities use for mass surveillance in Xinjiang. HRW’s findings suggest that every citizen in the region is subject to monitoring under this programme.


The HRW report notes that many, if not all, of these mass surveillance practices appear to contravene Chinese law and have no clear relationship to terrorism or extremism monitoring. According to Wang, this huge surveillance effort by the ruling party comes down to retaining power. ‘I think the goal is to ensure that the party stays in power forever, which is challenging for them to do,’ she explains. ‘The shift to a market-based economy has meant that the party has lost some of the old tools for social control, and so they decided that technology is going to be very good for them in achieving that purpose.’

Wang suggests, however, that these concerning levels of surveillance are not exclusive to China or any particular country: government surveillance is taking place all over the world.

‘There is no privacy in the first world,’ she says. ‘Even if you’re in Europe or the US you have to be very worried about where your data is going or whether it is protected.’ This makes the inherent risk and potential for data abuse and breaches an increasingly relevant discussion.
