The Senate Finance Committee recently addressed the pressing issue of regulating artificial intelligence (AI) in healthcare to prevent the amplification of bias and the improper denial of insurance coverage. The discussion focused on the use of commercial algorithms in the U.S. healthcare system, and in particular on a widely used algorithm that a study by Obermeyer et al. found to exhibit significant racial bias. The study showed that among patients assigned the same risk score by the algorithm, which is designed to guide health decisions, Black patients were often considerably sicker than their white counterparts, so that far fewer Black patients in need of extra care were identified. The bias stemmed from the algorithm’s reliance on health costs as a proxy for health needs, a flawed metric shaped by unequal access to care. The study suggested adjusting the algorithm to remove the racial bias by excluding costs as the indicator of health needs. This development raises concerns about the broader implications of algorithmic bias in healthcare and underscores the value of regulatory oversight.
The study focused on a live algorithm deployed at national scale, representative of a common industry approach and affecting millions of patients annually. The algorithm belongs to a family of commercial risk-prediction tools applied to approximately 200 million people in the U.S. and is used to target patients for “high-risk care management” programs, which aim to improve care for patients with complex health needs by providing additional resources. The study emphasized that the problem is not confined to this single algorithm but reflects a generalized pattern of algorithmic bias across the health sector.
The study’s comprehensive analysis reveals how racial disparities become embedded in algorithms and highlights the mechanisms that produce them. Centered on a widely used algorithm in the healthcare sector, the research uncovered a disconcerting reality: Black patients assigned the same risk score as white patients were consistently and considerably sicker. This finding raised concerns about the reliability of predictive algorithms, particularly when they are used to guide policy interventions, and echoed broader issues observed in other sectors where predicted risk forms the basis for targeted policy measures. The findings indicate that the algorithm’s reliance on health costs as a proxy for health needs contributes to perpetuating racial bias: because unequal access to care suppresses the costs generated by Black patients, the metric systematically underestimates their health needs, sharply reducing the number identified for additional care, as the simulation below illustrates. The study estimated that correcting this racial bias could increase the percentage of Black patients receiving extra help from 17.7% to 46.5%. This contrast underscores the importance of reassessing the algorithms used in the healthcare system to ensure equitable outcomes.
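The mechanism can be reproduced in a small, self-contained simulation. In the Python sketch below, the group labels, the Poisson distribution of health need, and the 0.7 “access factor” are all invented for illustration and are not figures from the study; only the 97th-percentile auto-referral cutoff mirrors the study’s setup. When cost stands in for need as the risk score, the group that generates less cost at the same level of sickness is flagged far less often, and those who are flagged are sicker on average than flagged patients from the other group.

```python
# Toy simulation of cost-as-proxy bias (all numbers are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Underlying health need (e.g., number of active chronic conditions) is drawn
# from the same distribution for both groups.
group = rng.choice(["A", "B"], size=n)        # "B" faces access barriers (assumption)
need = rng.poisson(lam=2.5, size=n)

# Observed cost grows with need, but group B converts need into cost at a lower
# rate because of unequal access to care (illustrative access factor of 0.7).
access = np.where(group == "B", 0.7, 1.0)
cost = need * access * 1_000 + rng.normal(0, 300, size=n)

# Use cost itself as the risk score (standing in for a model trained to predict
# cost) and auto-flag everyone above the 97th percentile, as in the study's setup.
flagged = cost >= np.quantile(cost, 0.97)

for g in ("A", "B"):
    in_group = group == g
    print(f"group {g}: share flagged = {flagged[in_group].mean():.2%}, "
          f"mean need among flagged = {need[flagged & in_group].mean():.2f}")
```

Running the sketch shows the same qualitative pattern the study reported: far fewer members of the disadvantaged group clear the referral threshold, and those who do are markedly sicker than flagged patients from the other group.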
The study also explored the broader implications of label-choice bias, recognizing that seemingly reasonable modeling choices can inadvertently introduce bias into algorithms. The authors’ engagement with the algorithm’s manufacturer reflected a proactive approach to resolving the identified issues: by proposing and experimenting with solutions, they demonstrated that label biases are fixable and that substantial reductions can be achieved by adjusting the labels the algorithm is trained to predict, as sketched below. This process, while challenging, requires a detailed understanding of the domain, iterative experimentation, and careful selection of data elements. The results underscore the importance of regulatory frameworks governing the use of artificial intelligence in healthcare; the Senate Finance Committee’s attention to these matters marks a key acknowledgment that algorithms must be deployed ethically and impartially to promote fairness, equity, and improved healthcare outcomes.
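What “adjusting the labels” can look like in practice is sketched below. The example is hypothetical: the two synthetic features (a claims-based prior-cost signal that carries the access disparity and a clinical signal that does not), the group effects, and all numbers are assumptions for illustration, not the study’s data or the manufacturer’s pipeline. The same linear model is trained twice, once against a cost label and once against a direct measure of health need, and the share of each group auto-flagged at the 97th percentile is compared.

```python
# Hypothetical illustration of label adjustment (not the study's data or code).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, size=n)           # 1 = group facing access barriers (assumption)
need = rng.poisson(2.5, size=n)              # true health need, same distribution in both groups
access = np.where(group == 1, 0.7, 1.0)      # unequal access suppresses realized costs

prior_cost = need * access * 1_000 + rng.normal(0, 300, n)   # claims-based feature: carries the bias
biomarker = need + rng.normal(0, 0.5, n)                     # clinical feature: does not
future_cost = need * access * 1_000 + rng.normal(0, 300, n)  # the original cost-based label
X = np.column_stack([prior_cost, biomarker])                 # race itself is never a feature

def share_flagged(label):
    """Train on the given label, auto-flag the top 3% of scores, return each group's flagged share."""
    score = LinearRegression().fit(X, label).predict(X)
    flag = score >= np.quantile(score, 0.97)
    return [round(flag[group == g].mean(), 4) for g in (0, 1)]

print("cost label -> share flagged (group 0, group 1):", share_flagged(future_cost))
print("need label -> share flagged (group 0, group 1):", share_flagged(need))
```

In this toy setting, switching the label from predicted cost to a direct measure of health need substantially narrows the gap between the two groups’ referral rates; consistent with the study’s framing, relabeling reduces rather than fully eliminates the bias, since features derived from biased utilization data can still leak some of the disparity through.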