
Reducing AI Biases in Healthcare: Follow These Four Steps - The HSB Blog 5/17/21




Key Takeaways:

  • With the increased interdependence of medicine and data science, new physician/data scientists are needed to help develop and audit AI models.

  • Most data fed into AI tools come from homogeneous patient populations, so companies must institute frameworks for responsible data use.

  • Minimizing racial and ethnic bias in AI requires auditing both the development and output of models to ensure their clinical accuracy and relevance.

  • Transparency and explainability, particularly around data privacy and security, will be key to ensuring trust in models.


The Problem:


AI systems contain biases for many reasons; two of the most common are 1) cognitive biases and 2) incomplete data sets.


According to Verywell Mind, a cognitive bias is a systematic error in thinking that occurs when people are processing and interpreting information in the world around them, and it affects the decisions and judgments they make. Cognitive biases often work as rules of thumb that help you make sense of the world and reach decisions with relative speed. They may manifest as feelings toward a person or a group based on their perceived group membership. More than 180 human biases have been defined and classified by psychologists, and each can affect the decisions we make about individuals. These biases can seep into machine learning algorithms when developers unknowingly introduce them into a model, or when the training data set itself includes those biases.


Data completeness refers to “a structured and documented process performed to ensure that any database is complete for its intended use” and that all of the data needed is included and available. In addition, in healthcare, “data are considered complete if a patient record contains all desired types of data (i.e., breadth), contains a specified number or frequency of data points over time (i.e., density), or has sufficient information to predict an outcome of interest (i.e., predictive)”. However, given the inequalities in access to healthcare for some underserved communities, data will often be incomplete.
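
To make the breadth and density notions concrete, here is a minimal sketch, assuming a pandas DataFrame of longitudinal patient records; the column names and values are hypothetical, purely for illustration.

```python
import pandas as pd

# Hypothetical patient-record table; column names and values are illustrative only.
records = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "visit_date": ["2021-01-04", "2021-03-02", "2021-02-10", "2021-01-20"],
    "blood_pressure": [120, 118, None, 135],
    "hba1c": [5.9, None, 7.1, None],
})

desired = ["blood_pressure", "hba1c"]  # the "desired types of data"

# Breadth: fraction of desired data types recorded at least once per patient.
breadth = records.groupby("patient_id")[desired].apply(
    lambda g: g.notna().any().mean()
)

# Density: total number of recorded data points per patient over time.
density = records.groupby("patient_id")[desired].count().sum(axis=1)

print(breadth)  # patients 2 and 3 have gaps in breadth
print(density)  # patient 1 has the densest record
```

Even a simple profiling pass like this can reveal whether gaps cluster in particular patient groups, which is exactly the pattern that underserved communities' records tend to show.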


According to a 2020 study entitled Ethics of Big Data and Artificial Intelligence in Medicine, “most data fed into AI tools tend to be homogeneous regarding patients’ characteristics. This may result in an under-representation or an over-representation of certain groups in the population”. For example, according to the COVID Racial Data Tracker, while Blacks accounted for over 15% of COVID deaths, they made up less than 10% of clinical trial participants for both the Pfizer and Moderna vaccines. If the data available for AI is gathered primarily from a White population, the resulting AI systems will know less about other populations and therefore will not benefit Black patients or patients from other ethnic groups to the same degree. As noted in Ethics of Big Data and Artificial Intelligence in Medicine, “common practice is that minority populations are often under-represented, which makes them vulnerable to erroneous diagnosis or faulty treatment procedures as a result”.
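
A simple first check for this kind of skew is to compare a training cohort's demographic mix against a population benchmark. The sketch below is a minimal illustration; the group labels, counts, and benchmark shares are hypothetical, not actual census or trial figures.

```python
from collections import Counter

# Hypothetical race/ethnicity labels for a training cohort.
cohort = ["White"] * 700 + ["Black"] * 80 + ["Hispanic"] * 150 + ["Asian"] * 70

# Benchmark population shares (illustrative values, not real statistics).
benchmark = {"White": 0.60, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06}

counts = Counter(cohort)
total = len(cohort)
for group, expected in benchmark.items():
    observed = counts[group] / total
    ratio = observed / expected
    flag = "UNDER-REPRESENTED" if ratio < 0.8 else ""
    print(f"{group:10s} observed={observed:.2f} expected={expected:.2f} "
          f"ratio={ratio:.2f} {flag}")
```

Running a comparison like this before training is far cheaper than discovering after deployment that a model performs poorly for the groups it saw least.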


The Backdrop:


Applying AI to clinical issues in healthcare is difficult. Healthcare data can be unstructured, with data privacy and security issues further complicating data sharing. In addition, as we have seen during the COVID pandemic, for a variety of reasons underserved populations are underrepresented in training data sets, and these data sets themselves contain elements of conscious and unconscious bias. As noted in Algorithms, Machines and Medicine in The Lancet, “training only on patients from one health service or region...runs the risk of overfitting to the training data, resulting in brittle degraded performance in other settings.”


Data scientists and clinicians may also approach data through different lenses. For example, data scientists may seek to optimize models without considering the ability of clinicians to affect the variables being optimized. Physicians, on the other hand, often struggle to balance their own clinical experience against treatment protocols derived from technologically complex, often unexplainable AI tools. Along those lines, patients and clinicians want to understand the factors that went into data-driven models, what factors these models consider, how these models arrive at their conclusions, and how clinically valid the treatment protocols derived from them are. If nothing else, patients want to be assured that they will not be harmed in any way by following the advice of AI-derived models.
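
One way to surface the brittleness The Lancet describes is to evaluate a model trained at one site against data from another. The sketch below, using synthetic data and scikit-learn, is a minimal illustration of that cross-site check, not a clinical pipeline; the distribution shift between "sites" is manufactured for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic patient features from two sites with slightly shifted distributions.
X_site_a = rng.normal(0.0, 1.0, size=(500, 5))
X_site_b = rng.normal(0.5, 1.2, size=(500, 5))
w = rng.normal(size=5)
y_site_a = (X_site_a @ w + rng.normal(size=500) > 0).astype(int)
y_site_b = (X_site_b @ w + rng.normal(size=500) > 0).astype(int)

# Train only on site A, as a single-region model would be.
model = LogisticRegression().fit(X_site_a, y_site_a)

# Compare in-distribution vs. external-site performance;
# a large gap signals the overfitting/brittleness problem.
print("site A AUC:", roc_auc_score(y_site_a, model.predict_proba(X_site_a)[:, 1]))
print("site B AUC:", roc_auc_score(y_site_b, model.predict_proba(X_site_b)[:, 1]))
```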


Implications:


As we merge medicine and data science, we must keep equity, transparency, explainability, and trust at the forefront; the following suggestions for reducing bias in AI are crucial. AI and machine learning tools are products of the human mind, and because human beings inherently carry biases, their products and creations are prone to contain many of those same biases. As the pandemic exposed the existing disparities within healthcare systems, efforts must be made going forward to consciously assess models to ensure that they do not contain bias, both conscious and unconscious.


First and foremost, data scientists should ensure that the data and training sets collected for research and treatment are heterogeneous enough to build deep learning algorithms that represent the diverse patient populations they are meant to serve. This will help ensure historically marginalized groups are treated fairly and accounted for in the algorithm development process, improving health outcomes. As outlined in Enhancing Trust in Artificial Intelligence: Audits and Explanations Can Help, “companies [should institute] a framework for responsible data use, particularly in the context of avoiding bias”.
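
Where collecting more representative data is not immediately possible, one common mitigation is to re-weight training samples so that under-represented groups are not drowned out during model fitting. The sketch below is a minimal illustration of that idea; the group labels are hypothetical.

```python
import numpy as np

def balancing_weights(groups):
    """Per-sample weights inversely proportional to group frequency,
    so each group contributes equal total weight during training."""
    groups = np.asarray(groups)
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / (len(values) * freq[g]) for g in groups])

# Illustrative cohort: group A heavily over-represented relative to group B.
groups = ["A"] * 900 + ["B"] * 100
weights = balancing_weights(groups)
print(weights[:3], weights[-3:])  # group B samples get ~9x group A's weight

# Many estimators accept these directly, e.g.:
# model.fit(X, y, sample_weight=weights)
```

Re-weighting is a stopgap, not a substitute for genuinely heterogeneous data collection: it cannot teach a model patterns that were never recorded.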


In addition, greater formal collaboration between physicians and data scientists is required to ensure that models are looking at the appropriate data and informing treatment plans correctly. In fact, a number of programs, including the Cleveland Clinic’s Center for Artificial Intelligence (CCAI) and the Inception Labs at the Medical College of Wisconsin, are pursuing interdisciplinary training and development to improve the application of AI initiatives in healthcare.


Audits should also become a consistent and required part of the AI development process. As noted in a 2018 report from the Information Systems Audit & Control Association (ISACA), audits “should focus on the controls and governance structures that are in place and determine that they are operating effectively”. Audits should occur both pre-development and post-training, prior to implementation, to guarantee that models do not have a disparate impact, that they follow all existing laws and regulations, and that they adhere to best practices. One approach may be to evaluate and map all such models for any potential disclosures of Protected Health Information (PHI).
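
As one concrete check such an audit might run, the sketch below computes a disparate impact ratio, the ratio of favorable-outcome rates between an unprivileged and a privileged group, with the commonly cited "four-fifths rule" as a review threshold. The predictions and group labels are illustrative; a real audit would examine many more metrics and the surrounding governance controls.

```python
import numpy as np

def disparate_impact(y_pred, groups, privileged):
    """Ratio of favorable-outcome rates, unprivileged vs. privileged group.
    Values below ~0.8 are commonly flagged for review (the four-fifths rule)."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rate_priv = y_pred[groups == privileged].mean()
    rate_unpriv = y_pred[groups != privileged].mean()
    return rate_unpriv / rate_priv

# Illustrative model outputs (1 = recommended for treatment).
y_pred = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["P", "P", "P", "P", "P", "U", "U", "U", "U", "U"])
ratio = disparate_impact(y_pred, groups, privileged="P")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.20 / 0.80 = 0.25 -> flag
```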


Data scientists and clinicians must take steps to ensure models are transparent and explainable in order to gain the trust of patients and clinicians. This must include explaining the factors that go into building models, including the demographic data used, the nature and types of training data, and the parameters the models are trying to optimize. In addition, to the extent possible, developers of AI models in healthcare must be able to answer patients’ questions surrounding data privacy and security, including the availability and exchange of data. “Patients must have the right to decide: who will own their data, where that data will be stored, and what that data will be used for.” These suggestions, coupled with incorporating evolving metrics for ‘fairness’ and equity, are non-negotiable for improving overall health outcomes. Ultimately, by correctly combining the processing capability of artificial intelligence with the experience and insights of human minds, healthcare systems can improve the quality of care, drive better patient outcomes, and reduce the burden on the healthcare system.
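
As a small illustration of what explainability can look like in practice, the sketch below uses scikit-learn's permutation importance on synthetic data to show which features a model actually relies on; the feature names are hypothetical, and real clinical models would require far richer explanation and validation than this.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# Synthetic data: only the first two (hypothetical) features drive the outcome.
feature_names = ["age", "blood_pressure", "noise_1", "noise_2"]
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt performance?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:15s} importance={score:.3f}")
```

Surfacing a ranking like this gives clinicians and patients a starting point for asking whether a model's conclusions rest on clinically sensible factors.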


