
Racial Bias in AI Restricts Vital Access to Healthcare, Says Data Scientist

by Colleen Fleiss on Mar 10 2023 12:38 AM

Racial bias in artificial intelligence has been found to impede vital access to healthcare services, says a data scientist.

Artificial intelligence (AI) used by healthcare organizations can entrench systemic racism, says a leading data scientist. This can negatively impact Black and ethnic minority people when they apply for a mortgage or seek healthcare, according to the industry expert.

Confronting Biases in Artificial Intelligence

Calvin D. Lawrence is a Distinguished Engineer at IBM. He has gathered evidence showing that technology used by policing and judicial systems contains in-built biases stemming from human prejudices and systemic or institutional preferences. But, he says, there are steps AI developers and technologists can take to redress the balance.

In his new book, Hidden in White Sight, published today, Lawrence explores the breadth of AI use in the United States and Europe, including healthcare services, policy, advertising, banking, education, and applying for and obtaining loans.

Hidden in White Sight reveals the sobering reality that AI outcomes can restrict those most in need of these services.

He said: “Artificial Intelligence was meant to be the great social equalizer that helps promote fairness by removing human bias, but in fact I have found in my research and in my own life that this is far from the case.”

Lawrence has been designing and developing software for the last thirty years, working on many AI-based systems for the U.S. Army, NASA, Sun Microsystems, and IBM.

Drawing on this expertise and experience, Lawrence advises readers on what they can do to fight against biased AI, and on how developers and technologists can build fairer systems.

These recommendations include rigorous quality testing of AI systems, full transparency of datasets, viable opt-outs, and an in-built ‘right to be forgotten’. Lawrence also suggests that people should be able to easily check what data is held against their names and be given clear access to recourse if that data is inaccurate.
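The kind of quality testing recommended here can be made concrete with a fairness audit. The sketch below is a hypothetical illustration (not from Lawrence's book): it computes the gap in approval rates between demographic groups, a common check known as demographic parity, using made-up model outputs and an illustrative threshold.

```python
# Hypothetical sketch of one quality test a team might run on an AI
# system's decisions: a demographic-parity check. The data and the
# 0.1 threshold below are illustrative assumptions.

def approval_rate(decisions):
    """Fraction of applicants approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model outputs for two applicant groups.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved
}

gap = demographic_parity_gap(outcomes)
print(f"Approval-rate gap: {gap:.3f}")  # prints 0.375
if gap > 0.1:  # illustrative fairness threshold
    print("Fails parity check: audit the model and its training data.")
```

A real audit would go further (larger samples, multiple fairness metrics, statistical significance), but even a check this simple surfaces the kind of disparate outcome the book describes.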

Lawrence added: “This is not a problem that just affects one group of people; this is a societal issue. It is about who we want to be as a society and whether we want to be in control of technology, or whether we want it to control us.

“I would urge anyone who has a seat at the table, whether you’re a CEO or tech developer or somebody who uses AI in your daily life, to be intentional with how you use this powerful tool.” 

Source: Eurekalert

