Data ethics – why data processing is about more than data protection


When we talk about the processing of data we usually think about data protection or the GDPR and how it applies to the lawful processing of that data.

Of course, data protection regulation is all about the processing of personal data: data that can identify an individual. Whilst some of the complexities of data mean it’s not always clear cut whether a piece of data is personal or not, if the data links to an individual in any way (even when it appears to be anonymised) then it’s covered by the data protection rules.

But if we think about the wider implications of processing digital data as a whole, whether personal or not, there’s a lot more to be considered. Whether we talk about the rise of artificial intelligence (the so-called fourth industrial revolution), machine learning, processing data for good (such as analysing medical data to help identify possible genetic disorders), algorithms or tech in our homes (e.g. voice recognition like Amazon’s Alexa), data processing is involved in one way or another.

The GDPR is technology neutral. In theory it doesn’t matter whether personal data is being processed by an algorithm or for the delivery of a service by a human using a spreadsheet or CRM: the GDPR applies. But when technology is doing the processing (AI, machine learning, voice recognition, driverless cars, etc.), how is that controlled or managed? Enter the world of data ethics, or how to keep the machines (and the humans who control them) in check.

On the face of it, the concept of data ethics is no different from data protection. If your data is being processed, whether anonymised (if such a thing exists) or not, you want the processing to be:

  • fair
  • safe
  • secure
  • transparent
  • accountable

Whilst a lot of data is processed in anonymised form, the true value of data usually lies in being able to identify the actions or behaviours of individuals and determine bespoke outcomes – just look at Cambridge Analytica. Indeed, the ensuing Facebook/Cambridge Analytica scandal highlights the need for an ethical approach: did Facebook users know they were being targeted based on data gleaned from Facebook, and was it made clear to them?

For some, the concept of data ethics is about calling to account those organisations that have an unfair balance of power over our data (typically FAANG: Facebook, Apple, Amazon, Netflix and Google) – a look at what the tech giants are doing and whether it’s in everyone’s interest rather than just their own financial gain. However, data ethics covers much more than that, for example:

  • How do we make sure that bias is not programmed into data processing algorithms (where the bias may be consciously or unconsciously introduced by the algorithm’s coder)?
  • When driverless cars become mainstream, who decides, in the scenario of an inevitable accident, whom the car hits? Should the car decide to save the passenger, the person crossing the road or the driver of another car coming towards it? What if the car’s AI has access to data that allows it to make a decision about whom it’s more important to save, or less important to potentially kill?
  • Is there enough diversity in the data available?
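The bias question above can be made concrete with a toy sketch. The code below is purely illustrative (the dataset, group labels and approval rule are all invented for this example): a naive “model” trained on skewed historical decisions simply learns and reproduces the historical bias – bias in, bias out.

```python
# Illustrative sketch only: a toy "model" that learns from skewed
# historical decisions and reproduces their bias. All data is invented.

from collections import defaultdict

# Hypothetical historical decisions: (group, approved)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """Learn each group's historical approval rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def predict(rates, group, threshold=0.5):
    """Approve only if the group's historical rate clears the threshold."""
    return rates[group] >= threshold

rates = train(history)
print(rates)                 # {'A': 0.75, 'B': 0.25}
print(predict(rates, "A"))   # True  - the historical skew is now policy
print(predict(rates, "B"))   # False
```

Nothing in the code is malicious, yet the outcome discriminates, which is exactly why data ethics asks who checks the training data and the decision rules, not just whether the processing is lawful.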

Whether it involves personal data or not, concern about AI in particular has got the authorities worried, mainly because:

  • The big tech giants are typically the ones doing clever AI work with the data – who’s keeping them in check? (Although Google, for example, has just announced its AI ethics panel)
  • AI algorithms can be difficult to explain, let alone understand
  • Who’s making the decisions about how the algorithms work?

So, it’s no surprise that tech and data ethics have become buzzwords of late. And it should be no surprise that the ICO (Information Commissioner’s Office) also has AI on its radar: in the ICO’s Technology Strategy for 2018 – 2020, AI is one of its top three priorities. Indeed, this is why in March 2019 the ICO launched its work towards a framework for auditing AI. It has published an overview of the framework, which aims to look at both governance & accountability and AI-specific risk areas, with the risk areas mapping closely to the data protection principles and rules:

  • Fairness and transparency in profiling
  • Accuracy
  • Automated decision-making models
  • Security
  • Data minimisation
  • Exercise of data subject rights

Whilst the ICO are clearly looking at AI from the point of view of GDPR compliance and making sure that any AI personal data processing gets the full data protection and privacy treatment, they’re not the only organisation working on approaches to data ethics or ethics in technology. The UK government has set up a Centre for Data Ethics and Innovation (CDEI) to anticipate both “the opportunities and the risks posed by data-driven technology”. Their 2019-2020 work programme is set to look at:

  • Personalised and targeted messaging via online services
  • Algorithm bias
  • Identifying the highest priority opportunities and risks
  • Responding to existing “live” issues

There are some clues about what this work aims to achieve in the CDEI’s two-year strategy document, which essentially talks about identifying issues and how they’re going to deal with them, whilst at the same time identifying how AI and technology can actually make the world a better place.

So, next time you consider how you’re processing data, or perhaps how others are processing your data, have a think about how it’s being processed, what decisions are being made using that data and, most importantly of all, whether you’re happy for your data to be used and processed in that way.


