By Sen. Akilah Weber Pierson, M.D., Special to California Black Media Partners
Technology is sold to us as neutral, objective, and free of human flaws. We are told that computers remove emotion, bias, and error from decision-making. But for many Black families, lived experience tells a different story. When technology is trained on biased systems, it reflects those same biases and silently carries them forward.
We have seen this happen across multiple industries. Facial recognition software has misidentified Black faces at far higher rates than White faces, leading to wrongful police encounters and arrests. Automated hiring systems have filtered out applicants with traditionally Black names because past hiring data reflected discriminatory patterns. Financial algorithms have denied loans or offered worse terms to Black borrowers based on zip codes and historical inequities, rather than individual creditworthiness. These systems did not become biased on their own. They were trained on biased data.
Healthcare is not immune.
For decades, medicine promoted false assumptions about Black bodies. Black patients were told they had lower lung capacity, and medical devices adjusted their results accordingly. That practice was not broadly reversed until 2021. Up until 2022, a common medical formula used to measure how well a person’s kidneys were working automatically gave Black patients a higher score simply because they were Black. On paper, this made their kidneys appear healthier than they truly were. As a result, kidney disease was sometimes detected later in Black patients, delaying critical treatment and referrals.
These biases were not limited to software or medical devices. Dangerous myths persisted that Black people feel less pain, contributing to undertreatment and delayed care. These beliefs were embedded in modern training and practice, not distant history. Those assumptions shaped the data that now feeds medical technology. When biased clinical practices form the basis of algorithms, the risk is not hypothetical. The bias can be learned, automated, and scaled.
For us in the Black community, this creates understandable fear and mistrust. Many families already carry generational memories of medical discrimination, from higher maternal mortality to lower life expectancy to being dismissed or unheard in clinical settings. Adding AI biases could make our community even more apprehensive about the healthcare system.
As a physician, I know how much trust patients place in the healthcare system during their most vulnerable moments. As a Black woman, I understand how bias can shape experiences in ways that are often invisible to those who do not live them. As a mother of two Black children, I think constantly about the systems that will shape their health and well-being. As a legislator, I believe it is our responsibility to confront emerging risks before they become widespread harm.
That is why I am the author of Senate Bill (SB) 503. This bill aims to regulate the use of artificial intelligence in healthcare by requiring developers and users of AI systems to identify, mitigate, and monitor biased impacts in their outputs to reduce racial and other disparities in clinical decision-making and patient care.
Currently under consideration in the State Assembly, SB 503 was not written to slow innovation. In fact, I encourage it. But it is our duty to ensure that every tool we use in the healthcare field helps patients rather than harms them.
The health of our families depends on it.
About the Author
Sen. Akilah Weber Pierson (D–San Diego) is a physician and public health advocate representing California’s 39th Senate District.