
Explainable AI: Making AI Understandable, Transparent, and Trustworthy - The HSB Blog 3/23/23



Our Take:

Explainable AI, or AI whose methodology, algorithms, and training data can be understood by humans, can address challenges surrounding AI implementation, including lack of trust, bias, fairness, accountability, and lack of transparency, among others. For example, a common complaint about AI models is that they are biased: if the data an AI system is trained on is biased or incomplete, the resulting model will perpetuate and even amplify that bias. By providing transparency into how an AI model was trained and what factors went into producing a particular result, explainable AI can help identify and mitigate bias and fairness issues. It can also increase accountability by making it easier for users, and those affected by models, to trace some of the logic and basis for algorithmic decisions. Finally, by enabling humans to better understand AI models and their development, explainable AI can engender more trust in AI, which could accelerate the adoption of AI technologies by helping to ensure these systems were developed with the highest ethical principles of healthcare in mind.


Key Takeaways:

  • AI algorithms continuously adjust the weight of inputs to improve prediction accuracy, but that can make it difficult to understand how a model reaches its conclusions. One way to address this problem is to design systems that explain how the algorithms reach their predictions.

  • GPT-4 is rumored to have around 1 trillion parameters, compared to the 175 billion parameters in GPT-3, both of which are well in excess of what any human brain could process and break down.

  • During the pandemic, the University of Michigan Hospital had to deactivate its AI sepsis-alerting model when differences in demographic data for patients affected by the pandemic created discrepancies and a series of false alerts.

  • AI models used to supplement diagnostic practices have been effective in biosignal analyses, and studies indicate physicians trust the results when they understand how the AI came to its conclusion.


The Problem:

The use of artificial intelligence (AI) in healthcare presents both opportunities and challenges. The complex and opaque nature of many AI algorithms, often referred to as "black boxes", can make it difficult to understand the logical processes behind AI's conclusions. This not only poses a challenge for regulatory compliance and legal liability but also impairs users' ability to ensure the systems were developed ethically and are auditable, and ultimately their ability to trust the conclusions and purpose of the model itself. However, the processes needed to make AI more transparent and explainable can be costly and time-consuming, and could result in a requirement, or at least a preference, that model developers disclose proprietary intellectual property that went into creating their systems. This is made even more complex in the U.S., where the lack of general legislation regarding the fair use of personal data and information can hamper the use of AI in healthcare, particularly in clinical contexts where physicians must explain how AI works and how it is trained to reach conclusions.


The Backdrop:

The concept of explainable AI is to provide the human beings who use, audit, and interpret models with a methodology to systematically analyze what data a model was trained on and which predictive factors are weighted more heavily, as well as to provide at least cursory insight into how the algorithms in particular models arrived at their conclusions or recommendations. This in turn allows the people interacting with the model to better comprehend and trust the results of a particular AI model, instead of the model being viewed as a so-called "black box" where there is limited insight into such factors.
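To give a sense of what such an analysis can look like in practice, the minimal Python sketch below uses permutation feature importance, one common, model-agnostic explainability technique. The dataset, feature names, and model choice here are purely illustrative assumptions, not any specific healthcare system's method; the point is simply to show how heavily weighted inputs can be surfaced for human review.

```python
# Minimal, hypothetical sketch of one explainability technique:
# permutation feature importance on a trained "black box" classifier.
# Dataset, feature names, and model choice are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical tabular data (e.g., vitals and labs).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["age", "heart_rate", "temperature",
                 "lactate", "wbc_count", "creatinine"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much held-out accuracy degrades;
# larger drops suggest the model relies more heavily on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} importance: {score:.3f}")
```

Output like this does not fully open the black box, but it gives clinicians and auditors a concrete, human-readable view of which inputs drive a model's predictions.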


In general, many AI algorithms, such as those that utilize deep learning, are often referred to as "black boxes" because they are complex, can have billions or even trillions of parameters upon which calculations are performed, and consequently can be difficult to dissect and interpret. For example, GPT-4 is rumored to have around 1 trillion parameters, compared to the 175 billion parameters in GPT-3, both well in excess of what any human brain could process and break down. Moreover, because these systems are trained by feeding vast datasets into models designed to learn, adapt, and change as they process additional calculations, the resulting algorithms often differ from their original design. As a result of the number of parameters these models work with and the adaptive nature of machine learning, the engineers and data scientists building these systems cannot fully understand the "thought process" behind an AI's conclusions or explain how these connections are made.


However, as AI is increasingly applied to healthcare in a variety of contexts, including medical diagnoses, risk stratification, and anomaly detection, it is important that AI developers have methods to ensure models are operating efficiently, impartially, and lawfully, in line with regulatory standards, both at the model development stage and when models are rolled out into use. As noted in an article published in Nature Medicine, starting the AI development cycle with an interpretable system architecture is necessary because inherent explainability is more compatible with the ethics of healthcare itself than methods that retroactively approximate explainability from black-box algorithms.


Explainability, although more costly and time-consuming to implement in the development process, ultimately benefits both AI companies and the patients they will eventually serve far more than forgoing it would. From a multi-stakeholder view, the layperson will find it difficult to make sense of the litany of data that AI models are trained on, and that they recite as part of their generated results, especially if the individual interpreting those results lacks knowledge and training in computer science and programming. By creating AI with transparency and explainability, developers also create responsible AI that may pave the way for larger-scale implementation of AI in a variety of industries, especially healthcare, where increasing digitization is generating more patient data than ever before along with the need to manage and protect this data in appropriate ways.


Creating AI that is explainable ultimately increases end-user trust, improves auditability, and creates additional opportunities for constructive use of AI in healthcare solutions. This is one way to reduce the hesitation and risks associated with traditional "black box" AI: it makes legal and regulatory compliance easier, enables detailed documentation of operating practices, and allows organizations to create or preserve reputations for trust and transparency. While many AI-enabled clinical decision support systems are used predominantly to provide supporting advice for physicians making important diagnostic and triage decisions, a study in the journal Scientific Reports found that this advice actually helped improve physicians' diagnostic accuracy, with physician plus AI performing better than when physicians received human advice concerning the interpretation of patient data (sometimes referred to as the "freestyle chess effect"). AI models used to supplement diagnostic practices have been effective in biosignal analyses, such as analysis of electrocardiogram results, detecting biosignal irregularities in patients as quickly and accurately as a human clinician can. For example, a study from the International Journal of Cardiology found that physicians are more inclined to trust the generated results when they can understand how the explainable AI came to its conclusion.


As noted in the Columbia Law Journal, however, although the most obvious way to make an AI model explainable would be to reveal the source code for the machine learning model, this "will often prove unsatisfactory (because of the way machine learning works and because most people will not be able to understand the code)," and commercial organizations will not want to reveal their trade secrets. As the article notes, another approach is to "create a second system alongside the original 'black box' model, sometimes called a 'surrogate model.'" However, a surrogate model only approximates the original model and does not use the same internal weights. As such, given the limited risk tolerance in healthcare, we doubt such a solution would be acceptable.
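To make the surrogate-model idea concrete, here is a simplified sketch of the general technique, not the specific approach described in the article: an interpretable decision tree is trained to mimic a black-box model's predictions. The data, models, and fidelity check are all hypothetical stand-ins; note that the surrogate only imitates the black box's behavior and never touches its internal weights, which is exactly the limitation raised above.

```python
# Hypothetical sketch of a "surrogate model": a small, interpretable
# decision tree trained to approximate a black-box model's predictions.
# It mimics behavior; it does not reuse the original model's weights.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)

# The opaque model we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

Even a high fidelity score only means the surrogate agrees with the black box most of the time; it offers no guarantee about the cases where the two diverge, which is why this approach may fall short of healthcare's risk tolerance.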


Implications:

As evidenced by all the buzz around ChatGPT, with the recent introduction of GPT-4 and its integration into products such as Microsoft's Copilot and Google's integration of Bard with Google Workspace, AI products will increasingly become ubiquitous in all aspects of our lives, including healthcare. As this happens, AI developers and companies will have to work hard to ensure that these products are transparent and do not purposely or inadvertently contain bias. Along those lines, when working in healthcare in particular, AI companies will have to ensure that they implement frameworks for responsible data use which include: 1) minimizing bias and discrimination against marginalized groups by enforcing non-discrimination and consumer laws in data analysis; 2) providing insight into the factors affecting decision-making algorithms; and 3) requiring organizations to hold themselves accountable to fairness standards and conduct regular internal assessments. In addition, as noted in an article from the Congress of Industrial Organizations, in Europe AI developers could be held to legal requirements surrounding transparency without risking IP concerns under Article 22 of the General Data Protection Regulation, which codifies an individual's right not to be subject to decisions based solely on automated processing and requires the supervision of a human in order to minimize overreliance and blind faith in such algorithms.


In addition, one of the issues with AI models is data shift, which occurs when machine learning systems underperform or yield false results because of mismatches between the datasets they were trained on and the real-world data they actually collect and process in practice. For example, as individuals' health conditions continue to evolve and new issues emerge, it is important that care providers consider population shifts in disease and how various groups are affected differently. During the pandemic, the University of Michigan Hospital had to deactivate its AI sepsis-alerting model when differences in demographic data gathered from patients affected by the pandemic created discrepancies with the data the AI system had been trained on, leading to a series of false alerts. As noted in an article in the New England Journal of Medicine, the pandemic fundamentally altered the way the AI viewed and understood the relationship between fevers and bacterial sepsis.
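One simple way teams attempt to catch this kind of shift is to routinely compare the distribution of incoming data against the training data. The sketch below is a hypothetical illustration of that idea using a two-sample Kolmogorov-Smirnov test; the feature names, synthetic data, and alert threshold are assumptions for demonstration, not the University of Michigan system's actual monitoring approach.

```python
# Hypothetical sketch of basic data-shift monitoring: compare each feature's
# distribution in newly collected data against the training data using a
# two-sample Kolmogorov-Smirnov test. Feature names and the alert threshold
# are illustrative assumptions, not a clinical standard.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
features = ["temperature", "heart_rate", "wbc_count"]

# Stand-ins for training data and for data arriving in production.
train_data = {f: rng.normal(loc=0.0, scale=1.0, size=5000) for f in features}
live_data = {
    "temperature": rng.normal(loc=0.6, scale=1.0, size=1000),  # shifted mean
    "heart_rate": rng.normal(loc=0.0, scale=1.0, size=1000),   # stable
    "wbc_count": rng.normal(loc=0.0, scale=1.4, size=1000),    # wider spread
}

ALERT_P_VALUE = 0.01  # illustrative threshold for flagging possible drift

for f in features:
    stat, p_value = ks_2samp(train_data[f], live_data[f])
    status = "POSSIBLE SHIFT" if p_value < ALERT_P_VALUE else "ok"
    print(f"{f:12s} KS={stat:.3f} p={p_value:.4f} -> {status}")
```

Checks like this do not explain why a model's behavior changed, but they can flag that the population feeding the model no longer looks like the one it was trained on, prompting the kind of human review and deactivation decision described above.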


Episodes like this underscore the need for high-quality, unbiased, and diverse data to train models. Given that the regulation of machine learning models and neural networks in healthcare continues to evolve, developers must continuously monitor and apply new regulations as they emerge, particularly with respect to adaptive AI and informed consent. Developers must also ensure that models are tested both in development and post-production to guard against model drift. With the use of AI models in health care, there are particular questions that repeatedly need to be asked and answered. Are AI models properly trained to account for the personal aspects of care delivery and to consider the individual effects of clinical decision-making, ethically balancing the needs of the many against the needs of the few? Is the data collected and processed by AI secure and safe from malicious actors, and is it accurate enough that the potential for harm is properly mitigated, particularly against historically underserved or underrepresented groups? Finally, what does the use of these models and these particular algorithms mean for the doctor-patient relationship and the trust vested in our medical professionals? How will decision-making and care be affected when using AI that may not be sufficiently explainable and transparent for doctors themselves to understand the reasoning behind, and therefore trust, the results that are generated? These questions will undoubtedly persist as long as the growth in AI usage continues, and it is important that AI is adopted responsibly and with the necessary checks and balances to preserve justice and fairness for the patients it will serve.

