Data Privacy and AI: Is it even possible? Now, there’s a loaded question – and one with many more behind it: What do we mean by AI? Which aspects of what data are we referring to? Are ethical practices universal? Here are a few things to consider.

What data are we talking about?

Are we talking about the data going in to train an algorithm, or the output coming out of it? Does privacy mean protecting people from the data scientists themselves, or from external actors? Add into the mix large amounts of data from a multitude of disparate sources, and even data previously thought of as anonymous can end up identifying someone once ad hoc bits of information are linked together.
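To make that linking risk concrete, here is a minimal Python sketch (the datasets and column names are entirely made up): a dataset “anonymised” by removing names is joined to a public record on shared quasi-identifiers, and the names come straight back.

```python
# A minimal sketch of a linkage attack, using hypothetical data.
import pandas as pd

# "Anonymous" health records: names removed, but quasi-identifiers remain
health = pd.DataFrame({
    "postcode":   ["SW1A 1AA", "EC1A 1BB", "W1A 0AX"],
    "birth_date": ["1965-07-21", "1972-01-03", "1980-11-30"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["hypertension", "diabetes", "asthma"],
})

# Public records: names alongside the same quasi-identifiers
public = pd.DataFrame({
    "name":       ["J. Smith", "A. Jones", "M. Lee"],
    "postcode":   ["SW1A 1AA", "EC1A 1BB", "W1A 0AX"],
    "birth_date": ["1965-07-21", "1972-01-03", "1980-11-30"],
    "sex":        ["F", "M", "F"],
})

# Joining on the shared columns links a name to each diagnosis
reidentified = health.merge(public, on=["postcode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

No single column here identifies anyone; it is the combination, joined across sources, that does the damage.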

Transparency, awareness & risk

And yet, it’s not just about who has access to your data – it’s about transparency and awareness of what is happening to it, too. Are you happy for your data to be used as training data? Are you aware of why certain decisions have been made? That’s extremely important.

Then there’s risk appetite. Individuals, societies and businesses all have different risk tolerances. We all know that a company can be risk averse or more tolerant of risk, but the same applies to individuals. Some people would rather take the free version of a service and be advertised to. Others would rather pay a subscription than have their data shared with other companies and a profile built about them.

Cultural considerations

There are also cultural nuances at play. We often talk about data privacy as though it’s a fixed concept, but notions of it are not the same across the globe. Different societies place different weight on the rights of society versus those of the individual, which has a knock-on effect on what is viewed as acceptable. The COVID vaccine uptake is a good example of when society’s needs can come before those of individuals – and that’s before you even get to the laws: the GDPR, for example, is about governing the use of data, whereas for many US laws it’s more about how you share it.

Bias – good or bad?

We tend to discuss data bias as something inherently bad. And it can be, of course. The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative carried out research that showed how a platform’s tools allowed advertisers to specifically target ‘white, African American, Asian or Hispanic users’. This had the power to discriminate against and marginalise certain demographics – particularly in the areas of housing, employment and credit. In this way, ‘bad’ bias can creep into the training data and have a ripple effect.

But what if bias means an algorithm moving towards something unexpected that could teach us something we didn’t know? Data can move in ways that would never occur to us. Medicine illustrates how bias can be a force for good: certain drugs react in different ways depending on gender, race and ethnicity. If we try to take this bias out of the algorithm, the medicine may not be as effective for certain groups. Equally, if attention were paid to gender bias in research on car crashes, fewer women might be seriously injured: differences in height, weight, seatbelt usage and crash intensity would be considered, rather than relying on data from a 50th-percentile standard male dummy. In some cases, it’s not the bias in Machine Learning that needs to go; it’s the awareness and monitoring that needs to be ramped up.
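As a rough illustration of what that monitoring can look like in practice, here is a minimal Python sketch (the data and column names are entirely hypothetical): instead of stripping a sensitive attribute out of the pipeline, it is used to report a model’s accuracy separately for each group, so any gap becomes visible rather than hidden.

```python
# A minimal sketch of per-group model monitoring, on hypothetical data.
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical evaluation set: true labels vs. model predictions,
# alongside the sensitive attribute we want to monitor
results = pd.DataFrame({
    "sex":    ["F", "F", "F", "M", "M", "M"],
    "actual": [1, 0, 1, 1, 0, 1],
    "pred":   [0, 0, 1, 1, 0, 1],
})

# Accuracy reported separately for each group
for group, subset in results.groupby("sex"):
    score = accuracy_score(subset["actual"], subset["pred"])
    print(f"{group}: accuracy = {score:.2f}")

# A persistent gap between groups is a prompt to investigate the
# training data – not to simply drop the attribute, since dropping it
# removes the very signal needed to detect the harm.
```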

A top-down business issue

When it comes to AI and data ethics, we are all still on a journey to determine what good looks like. This topic isn’t just a technical issue or a legal issue – it’s a business issue. A lot of organisations across the world want to be considered ethical. Still more need to be perceived that way. But how can you be considered ethical in general if you aren’t being ethical with your data? You can’t have it both ways. Data ethics and AI may be a paradox, but that doesn’t mean it isn’t a powerful opportunity for businesses, which can build on foundations of trust – and grow.

 

PICCASO Podcast with Data Privacy Panellists from Calligo, Google and Shell

“AI and the Ethical Implications of Bias in Machine Learning (ML) Models”

Available to watch On-Demand

Calligo’s VP of Data Ethics and Privacy, Sophie Chase-Borthwick, will join:
– William Malcolm – Privacy Legal Director, Google
– Radha Gohil – Data Ethics Strategy Lead, Shell
– Anne W. – Security Specialist, Microsoft

Data Ethics is a major area for consideration in the world of data, governance, privacy and law. Artificial Intelligence (AI) can perform highly complex problem-solving (such as unravelling intricate cancer diagnoses), but it also carries major risks (such as the potential for racial discrimination).

AI is outperforming humans at narrowly defined, repetitive tasks – the space in which it excels. There are, however, risks associated with AI, and for our panel debate we have invited leading experts and thought leaders to help us navigate this complex area.

WATCH ON-DEMAND