In our latest Beyond Data podcast, co-hosts Sophie Chase-Borthwick (our Data Ethics & Governance Lead) and Tessa Jones (our Chief Data Scientist) invited Tomer Elias, Director of Product Management at BigID, to discuss how AI bias affects the LGBTQ+ community.

Here we explore some of the episode’s highlights – although you can also watch the full episode here.

Why is there bias?

When building an AI algorithm or solution, it is crucial to base it on data sets that are both unbiased and diverse – and, when it comes to the LGBTQ+ community, this often falls short. Whatever the sector – work, healthcare, entertainment – all are subject to bias if the LGBTQ+ community is not taken into consideration when an AI solution is created.

For Tessa Jones, one of the barriers to collecting sufficient data is that people might be reluctant to share information about their sexual orientation or their gender journey – particularly if they don’t know how this personal data will be used. Sophie Chase-Borthwick agrees that it quickly becomes a catch-22 situation:

“The biases that make you nervous of disclosing information are the very reason that you need to disclose said personal information in order to prevent bias and improve.”

Knock-on effects

Drawing on his experience as a board member of an organization that supports LGBTQ+ employees, Tomer Elias explains how candidates are being let down by AI recruitment solutions – and the consequences are significant.

“A lot of people in the LGBTQ+ community are unemployed and that’s not because they’re lacking the professionalism and passion.”

Meanwhile, medical advances in the LGBTQ+ community are constantly evolving, and many algorithms do not take these changes into account.

“People who are transitioning are not getting the right treatments because the treatment providers are not well educated about it and the data is not diverse enough,” explains Tomer.

Tessa also raises the issue of health apps that require a user to state whether they are male or female.

“Even though the equations could be written differently to use different inputs, they’re just not – and that means you either have to pretend you’re something different or just not use that tool.”
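
Her point is essentially one of input design: many of these formulas only appear to need a binary sex field. As a purely illustrative sketch (not something discussed on the podcast), the widely published Mifflin-St Jeor basal metabolic rate formula branches on male/female constants, while the Katch-McArdle formula reaches a comparable estimate from lean body mass alone – an input that applies to every user:

```python
# Illustrative sketch only: two published BMR formulas, one of which
# forces a binary sex field while the other does not.

def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age: int, sex: str) -> float:
    """Mifflin-St Jeor: branches on a male/female flag via different constants."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + 5 if sex == "male" else base - 161

def bmr_katch_mcardle(lean_body_mass_kg: float) -> float:
    """Katch-McArdle: driven by a physiological measurement, no sex field needed."""
    return 370 + 21.6 * lean_body_mass_kg

if __name__ == "__main__":
    print(bmr_mifflin_st_jeor(70, 175, 30, "male"))  # requires a binary answer
    print(bmr_katch_mcardle(56))                     # works for any user
```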

Potential of AI to help overcome bias

While AI bias is clearly affecting the LGBTQ+ community, there are also innovative ways AI can be used to overcome that bias – such as in recruitment.

“At the initial interview stage, AI could be used to scramble the voice so you would not know if the candidate was male or female or someone who has transitioned,” says Tomer.
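
As a rough illustration of what that scrambling might look like (our sketch, not a system Tomer describes), a simple pitch shift with the open-source librosa library masks one of the most obvious vocal cues – though a production-grade anonymization tool would need to alter far more than pitch. The file names are hypothetical:

```python
# Illustrative only: shift the pitch of a recorded interview answer so a
# listener cannot easily infer the speaker's vocal register. Real voice
# anonymization would also need to address timbre, prosody and pace.
import librosa
import soundfile as sf

def scramble_voice(in_path: str, out_path: str, n_steps: float = -3.0) -> None:
    y, sr = librosa.load(in_path, sr=None)                              # load the original recording
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)    # shift by n_steps semitones
    sf.write(out_path, shifted, sr)                                     # save the altered version

scramble_voice("candidate_answer.wav", "candidate_answer_anon.wav")     # hypothetical file names
```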

He also raises the possibility of AI helping with the retention of LGBTQ+ employees.

“Technology could help employers know that the employee is happy and feels a part of the organization.”

Time to step it up… 

There are already many examples of AI being a force for good – including recommendation systems that can help LGBTQ+ people feel more emotionally supported, and The Trevor Project, which uses AI to predict which callers are at the highest risk of suicide so that they get help quickly.

Much more needs to be done. But the fact that people are starting to think about AI bias and the LGBTQ+ community is a step in the right direction.

“Now that we’re talking about it and people are realizing the actual real-world implications, hopefully more people will feel comfortable expressing themselves and we can close some of that data gap so there is more information for the models to work off,” says our Data Ethics & Governance Lead, Sophie Chase-Borthwick.

“It’s also super critical that we have diverse AI developers who are knowledgeable about people and bias,” adds Calligo’s Tessa Jones.

To hear more of our fascinating discussion on AI bias and how it affects the LGBTQ+ community, tune in to our latest Beyond Data podcast episode below.