
Navigating the Ethical Landmines of AI in Healthcare-The HSB Blog 8/25/23

Our Take:

Ethical concerns over the use of AI in healthcare are intricate and nuanced. While AI-based algorithms promise more personalized, effective, and efficient healthcare delivery, they also hold the potential to exacerbate the biases and disparities already present in the system while posing risks to data privacy and security. Protections will be imperfect and iterative as the use of AI, particularly generative AI, evolves in healthcare; even so, patients must be kept informed about the use of AI-based systems and technologies in their care and given clear information in a fashion that ensures informed consent. As AI continues to progress, maintaining ethical standards will require vital collaboration among professionals, developers, policymakers, and ethicists, along with ongoing updates to ethical guidelines that prioritize patient and societal welfare.

Key Takeaways:

  • As of January 2023, there were 520 FDA-cleared AI algorithms, approximately 396 of which were for radiology and 58 of which were for cardiology (Radiology Business)

  • One study found that a widely used health risk-prediction model reduced the number of Black patients identified for extra care by more than half due to racial bias (Science)

  • The first AI models for medical use were approved by the FDA in 1995, with only 50 approved over the first 18 years, while almost 200 were approved in 2023 alone (Encord)

  • AI applications have the potential to cut annual U.S. healthcare costs by $150 billion by 2026 as AI is used more for drug discovery and development and for improving medical research (Accenture)

The Problem:

Ethical issues around the use of AI in healthcare encompass a broad range of complex problems and dilemmas, including privacy and data security, bias, fairness, explainability, transparency, and job displacement. One of the most problematic and widely debated issues relates to bias and fairness. Since AI algorithms are developed by human beings, they can inherit biases from the humans who write the code that creates those algorithms and who select the data sets the models will be trained on. In fact, the problem often starts with those data sets, which are frequently limited in their societal representation. As noted in “Can AI Ever Overcome Built-In Human Biases?”, “AI systems absorb implicit biases from datasets that reflect existing societal inequities. And algorithms programmed to maximize accuracy propagate these biases rather than challenge them.”

For example, as noted in the article referenced above, two of the most common biases relate to race and gender. Facial recognition systems trained mostly on light-skinned faces will inevitably struggle with dark-skinned faces, and one AI recruiting tool was found to penalize resumes containing the word “women’s” and to downrank graduates of two all-women's colleges. As a result, models based on such data can lead to unequal or discriminatory treatment, undermining fairness in healthcare and perpetuating existing healthcare disparities.

In addition, ethical issues around AI in healthcare arise from concerns related to data privacy and security. The use of AI in healthcare often involves the processing and analysis of vast amounts of sensitive patient data, which are then applied to predictive analytics and precision medicine, among other uses. Given the ever-increasing digitization of healthcare data and the sheer number of data points available on patients through tools such as sensors, remote patient monitoring, and other wearable devices, this data will increasingly be at risk. As a result, as noted in “Enabling collaborative governance of medical AI”, “medical AI’s complexity, opacity, and rapid scalability to hundreds of millions of patients through commonplace EHRs demand centralized governance. Already, there are well documented case studies of commonly used medical AI systems potentially causing harm to millions or unnecessarily burdening clinicians, including…Epic’s sepsis model at Michigan Medicine and elsewhere.” Protecting the privacy and security of this data, and ensuring it is not misused or breached, will likely remain a significant ethical challenge in healthcare.

There are also significant concerns relating to transparency and explainability. In layman's terms, many AI algorithms, such as deep learning models, operate as "black boxes," where it is difficult if not impossible to determine what a decision or recommendation was based on. This creates issues specific to healthcare, where clinicians need to be able to explain the clinical basis for their recommendations and want to be able to evaluate any recommendations against existing or evolving treatment protocols. As pointed out in a recent article in JAMA Health Forum, “Patients expressed significant concerns about…the potential for artificial intelligence to misdiagnose and to reduce time with clinicians.” The article went on to highlight that “racial and ethnic minority individuals [expressed] greater concern than White people.” Clearly, lack of transparency can raise concerns about accountability and trust in these types of models.

One final concern surrounding the use of AI in healthcare which should not be minimized revolves around the potential for job displacement among clinicians and other staff in healthcare. This has become an even greater concern more recently with the evolution of generative AI and will be true both for so-called “lower-risk” non-clinical applications and, eventually, for more clinical applications. As AI systems become more capable of handling various tasks, there is the potential that certain roles traditionally performed by humans, such as initial triage or diagnostics, could be automated. Hence, as noted in “Enabling collaborative governance of medical AI”, “front-line clinicians must be made aware of medical AI’s indications for use and understand how and how not to use it.” As outlined in a recent article in the Lancet entitled “AI in medicine: creating a safe and equitable future”, “[AI] could change practice for the better as an aid—not a replacement—for doctors. But doctors cannot ignore AI. Medical educators must prepare health-care workers for a digitally augmented future.”

The Backdrop:

The landscape of the healthcare ecosystem is being significantly reshaped by the rapid advancements in artificial intelligence (AI) and machine learning, especially the rapid developments around generative AI. These technological strides have ushered in an era where AI can more rapidly and easily be applied to accelerate the digital transformation in healthcare. The capabilities of AI systems, especially in the domains of high-volume data analysis, predictive analytics, genomics and, increasingly, diagnosis and treatment recommendations, seem to be growing by the day. For example, as noted in the Lancet article “AI in medicine: creating a safe and equitable future”, “The Lancet Oncology recently published one of the first randomized controlled trials of AI-supported mammography, demonstrating a similar cancer detection rate and nearly halved screen-reading workload compared with unassisted reading. AI has [also] driven progress in infectious diseases and molecular medicine and has enhanced field-deployable diagnostic tools.” AI's ability to process vast datasets, recognize complex patterns, and provide insights that were previously unattainable has garnered substantial attention within the healthcare community. Consequently, despite several earlier periods of hyperbole, it appears that at least augmented intelligence (or AI light) is here to stay and will remain a pivotal force driving innovation and the practice of medicine.

In addition, with the ongoing digitization of healthcare data, healthcare organizations now have unprecedented access to an ever-increasing amount of patient and clinical research data. As noted in “Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare”, “the integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education.” As the article goes on to note, the convergence of health data and AI technologies has the potential not only to enhance the efficiency and precision of healthcare delivery but also to usher in a new era of data-driven, patient-centric healthcare solutions.

The dramatic increase in the use of data and AI in healthcare, however, cannot and should not occur in a vacuum; there need to be standards and guardrails in place to safeguard their application. Fortunately, numerous organizations and esteemed ethicists are actively engaged in the formulation and development of comprehensive guidelines and ethical frameworks. These initiatives represent a proactive response to the dynamic landscape of healthcare AI, aiming not only to regulate its application but to provide a principled and responsible framework for its deployment. As highlighted in a recent article in Nature entitled “Enabling collaborative governance of medical AI”, these frameworks proactively address potential challenges as AI evolves, guiding the ethical balance between innovation and responsibility. They must promote ongoing dialogue and collaboration among healthcare professionals, AI developers, policymakers, and patient advocates to align AI in healthcare with ethical principles and the best interests of all. As the article notes, “Policymakers must invest in human and technical infrastructure to facilitate that governance. Infrastructure might include technical investments (IT systems and processes for robust, low-cost medical AI evaluation), procedural developments (best practices for pre-implementation evaluation, care pathway integration and post-integration monitoring) or human training (training grants for clinical AI specialists).”


Implications:

Ethical issues and tensions in the application of AI in healthcare are far-reaching and have significant consequences for numerous stakeholders, including patients, healthcare providers, policymakers and society as a whole.

Ethical concerns in AI can erode patient trust in healthcare systems. Building trust and confidence in models that are derived and used ethically, while avoiding bias, is crucial in the development and deployment of artificial intelligence and machine learning systems. The steps and strategies to achieve this include solutions such as transparency and explainability, diverse and inclusive teams, and bias detection and mitigation.

In terms of transparency and explainability, solutions include making a model's decision-making process as transparent as possible by documenting data sources, preprocessing steps, model architecture, training sets and, where possible, hyperparameters. Developers can also utilize explainable AI techniques, such as LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations), to provide insights into how the model arrived at its predictions.
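To make the idea behind such attribution techniques concrete, the following is a minimal sketch of permutation importance, a simpler cousin of LIME and SHAP: it estimates how much each feature matters by shuffling that feature's values and measuring the drop in accuracy. The "risk model" here is a hypothetical toy, not any real clinical system.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Estimate each feature's importance by measuring how much accuracy
    drops when that feature's values are randomly shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model(X) == y)          # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])              # destroy feature j's signal
            drops.append(baseline - np.mean(model(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Hypothetical "risk model" that only looks at feature 0
model = lambda X: (X[:, 0] > 0.5).astype(int)
rng = np.random.default_rng(1)
X = rng.random((500, 3))
y = model(X)   # labels follow the same rule, so feature 0 is decisive

imp = permutation_importance(model, X, y)
# imp[0] is large; imp[1] and imp[2] are ~0, exposing what the model relies on
```

An explanation like this lets a clinician see which inputs a recommendation actually depends on, which is exactly the gap LIME and SHAP address for more complex models.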

In addition, to address and reduce the potential for bias, AI models should be developed by diverse and inclusive teams of varying backgrounds wherever possible. Research has consistently shown that diverse teams perform better (both in terms of productivity and quality of the final product), and diverse perspectives can help address bias more effectively. Consciously and unconsciously, diversity encourages ethical discussions and raises awareness of potential biases that could aggravate disparities. AI teams should also proactively implement bias detection tools to identify potential bias in data and model outputs, and the use of bias mitigation techniques, such as re-sampling, re-weighting, and adversarial training, to reduce bias in both the training data and the model's predictions should be standard practice.
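Two of the steps above, bias detection and re-weighting, can be sketched in a few lines. The example below measures a demographic parity gap (the difference in positive rates between two groups) and then computes classic re-weighting factors that make group membership and label statistically independent in training. The group sizes and label rates are invented toy data for illustration only.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive rates between two groups
    (0 means both groups are flagged at the same rate)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def reweight(y, group):
    """Re-weighting in the style of Kamiran & Calders: weight each
    (group, label) cell by expected/observed frequency so that group
    and label become independent in the weighted training data."""
    w = np.empty(len(y))
    for g in np.unique(group):
        for c in np.unique(y):
            mask = (group == g) & (y == c)
            expected = (group == g).mean() * (y == c).mean()
            w[mask] = expected / mask.mean()
    return w

# Toy data: group 1 is labeled positive far less often than group 0
group = np.array([0] * 50 + [1] * 50)
y = np.array([1] * 30 + [0] * 20 + [1] * 10 + [0] * 40)

gap = demographic_parity_gap(y, group)   # 0.6 - 0.2 = 0.4 gap in label rates
w = reweight(y, group)                   # weights that equalize the rates
```

Passing weights like `w` to a model's training loop (most libraries accept per-sample weights) is one standard way to keep a skewed data set from hard-coding the skew into predictions.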

AI’s use in healthcare must also be accompanied by structured and frequent audits, both post-training and post-deployment. In addition, audits should routinely assess the controls and procedures that have been developed and ensure they are being followed.
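A post-deployment audit can be as simple as comparing each subgroup's current positive-prediction rate against the rate recorded at validation time and flagging drift past a tolerance. The sketch below is illustrative only: the subgroups, baseline rates, and `ALERT_THRESHOLD` are all hypothetical values an audit team would set for its own system.

```python
import numpy as np

ALERT_THRESHOLD = 0.10  # hypothetical tolerance for prediction-rate drift

def audit_subgroup_rates(y_pred, group, baseline_rates):
    """Compare each subgroup's current positive-prediction rate to its
    validation-time baseline and flag any subgroup that has drifted."""
    findings = {}
    for g, baseline in baseline_rates.items():
        current = float(y_pred[group == g].mean())
        findings[g] = {
            "baseline": baseline,
            "current": round(current, 3),
            "flag": abs(current - baseline) > ALERT_THRESHOLD,
        }
    return findings

# Simulated month of predictions: subgroup "B" has drifted upward
rng = np.random.default_rng(0)
group = np.array(["A"] * 400 + ["B"] * 400)
y_pred = np.concatenate([
    rng.random(400) < 0.30,   # subgroup A: still near its 0.30 baseline
    rng.random(400) < 0.55,   # subgroup B: drifted from its 0.35 baseline
]).astype(int)

report = audit_subgroup_rates(y_pred, group, {"A": 0.30, "B": 0.35})
# report["B"]["flag"] is True; the drift would be escalated for review
```

Running a check like this on a schedule, and logging the findings, is one concrete way to make the "structured and frequent audits" described above routine rather than ad hoc.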

All of the above are crucial to avoiding the ethical lapses that can impact patient outcomes and lead to misdiagnosis, suboptimal treatments, and even potential harm to patients. Increasingly, ethical breaches can result in legal and regulatory actions against healthcare organizations and the developers of AI tools. Inadvertent data breaches or non-compliance with data protection rules like HIPAA, GDPR and state privacy regulations can lead to fines and legal liabilities. Addressing the ethical issues in the development and deployment of AI in healthcare is critical for realizing the full potential of AI while ensuring that patients and society as a whole realize the full benefit of these technological advancements.
