
Does AI Always Optimize Healthcare? You Need Data Access, Education & Oversight - The HSB Blog 2/8/21



AI Products Must Incorporate These Elements to Optimize Healthcare


Our Take: To fully realize the potential of AI in healthcare, policymakers must prioritize strategies that help providers ensure data access, improve interdisciplinary education, and establish a rigorous oversight regime for implementing AI. Without such a framework, AI is unlikely to achieve its full potential to transform healthcare.


Description: With dramatic advancements in massively parallel processing, data storage, and compute power, AI is increasingly being applied to solve issues that have bedeviled healthcare for years. AI tools are being applied to diagnostic, monitoring, administrative, and surgical uses, among others. As the Association for the Advancement of Medical Instrumentation (AAMI) notes, former Food and Drug Administration (FDA) Commissioner Scott Gottlieb has called AI "one of the most promising digital health tools." The AAMI also notes that 77% of organizations either already leverage AI to support clinical decision-making or are likely to, and 66% are using AI to extract meaning from big data. Providers are interested in AI and recognize its huge potential but face obstacles to implementing AI tools.

The most fundamental issue for AI integration faced by the healthcare system is a lack of uniform access to high-quality, representative data. The usefulness of AI hinges entirely on the quality of the data it works with. In addition, as noted by the AAMI, data concerns also center on patient privacy, the expense of data collection, and the challenge of interoperability between different provider databases. This is particularly true for electronic health record systems, as patient data are often incomplete or split across multiple databases. AI requires deep, complete data sets, in a format compatible with existing data, to train and run models. As a result, policymakers need to establish and expand high-quality data access mechanisms beyond the recently introduced interoperability standards. According to a report from the US Government Accountability Office (GAO) and the National Academy of Medicine (NAM), one way to achieve this is to create a data commons: a cloud-based data sharing platform that allows participants to access, edit, share, and store data in one digital space. According to the report, increasing high-quality data access and transparency can help developers eliminate bias in data by ensuring data are representative at larger scales. Without such data access mechanisms, AI tools could be vulnerable to bias from skewed data, which can affect both how decisions are made and the quality of those decisions, with negative consequences for health equity (a sketch of such a representativeness check appears at the end of this section).

In addition, policymakers must address the level of education and practical experience healthcare professionals have in developing AI models and applying their conclusions. Among other things, the GAO/NAM report suggests broadening interdisciplinary education to help providers use AI effectively. For example, the report recommends not only changing the curriculum in medical and nursing schools to include AI content, but also establishing new research grants to encourage participation in interdisciplinary projects that utilize AI. It highlights a model program offered at the Department of Veterans Affairs for postdoctoral fellows to gain experience working on AI and big data initiatives. Interdisciplinary education such as that outlined would enable providers to better understand and critically evaluate recommendations from AI tools and empower them to make more informed decisions.

Finally, policymakers must create standardized regulatory oversight to ensure the safety and efficacy of AI models as well as any recommended treatment protocols.
Among other recommendations, regulators have proposed an evolutionary approach to the regulation of adaptive AI to encourage consistent monitoring and improvement of tools throughout their life cycle. This includes reviewing and auditing both the inputs used to train and develop the models and any recommendations the models produce. One suggestion from the GAO/NAM report is that a multi-stakeholder body collaborate with policymakers to focus on AI tools the FDA does not oversee, akin to the Medical Device Regulators Forum, a trans-governmental regulatory organization that covers medical devices outside the FDA's purview.
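To make the representativeness point concrete, below is a minimal sketch (ours, not from the GAO/NAM report) of the kind of audit a data commons could run before releasing a cohort for model training. The demographic field name and the reference shares are hypothetical.

from collections import Counter

# Hypothetical reference distribution the training data should approximate,
# e.g., population shares for the community the model will serve.
REFERENCE_SHARES = {"White": 0.60, "Black": 0.13, "Hispanic": 0.19,
                    "Asian": 0.06, "Other": 0.02}
TOLERANCE = 0.05  # flag any group more than 5 points off its reference share

def flag_unrepresentative_groups(records, key="race"):
    """Compare each group's share of the dataset to its reference share."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in REFERENCE_SHARES.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if abs(observed - expected) > TOLERANCE:
            flags[group] = (observed, expected)
    return flags

# Example: a skewed cohort that over-samples White patients and
# under-samples Black and Hispanic patients.
cohort = ([{"race": "White"}] * 800 + [{"race": "Black"}] * 50
          + [{"race": "Asian"}] * 150)
for group, (obs, exp) in flag_unrepresentative_groups(cohort).items():
    print(f"{group}: {obs:.0%} of cohort vs. {exp:.0%} reference")

Under this sketch, the example cohort would be flagged for under-representing Black and Hispanic patients, exactly the kind of gap that shared, representative data sets are meant to close.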


Implications: While the potential applications and benefits that AI presents to the healthcare industry are clearly recognized, the policymakers, regulators, practitioners, and data scientists who develop these tools must all collaborate in applying the practices mentioned to address data access, education, and oversight. At a minimum, neglecting to do so would leave potential advancements in care unrealized or delayed. Failure to do so could also lead to the application of poorly trained models that produce inappropriate or even dangerous treatment recommendations. Moreover, such models could produce biased or inaccurate results, eroding trust in the models and hurting their adoption. Establishing consistent yet flexible frameworks for the regulation of AI will also be key, as adaptive AI could drive changes in treatment protocols as models become more precise and prediction algorithms improve with experience. Coordinating these efforts, not simply pursuing them, will be a demanding task. Once AI technology is adopted by larger, resource-rich providers, its benefits and usefulness should become even more broadly apparent, spurring a virtuous cycle of broader adoption and efficacy.




As the FDA Clears a Flood of AI Tools, Missing Data Raise Troubling Questions on Safety and Fairness

Event: A recent article in the STAT+ newsletter highlighted gaps in the FDA's regulatory and standards approval process. STAT+ examined data reported in hundreds of pages of documents filed with the FDA over the last six years by companies that ultimately gained approval for products that rely on AI. The examination found no consistent, standardized framework for assuring safety, efficacy, and fairness in FDA approvals. It also found a lack of information on whether AI products improved care or triggered unintended consequences such as an increase in incorrect diagnoses, unnecessary treatment, or racial disparities. The FDA has sought to refine and clarify its regulatory approach by releasing several guidance documents and establishing a new digital health unit to deploy regulatory standards for developing and managing such tools.

Description: FDA clearance helps determine whether a product is safe to deploy in healthcare facilities or to assist clinicians in making decisions about patient care. However, most AI products submitted to the FDA are not required to be approved under a premarket approval (PMA), the most stringent of the device marketing applications, but instead are reviewed through the FDA's 510(k) pathway. A 510(k) is a premarket submission made to the FDA to demonstrate that the device to be marketed is at least as safe and effective as (substantially equivalent to) one or more similar legally marketed devices (predicates). The submitter of a 510(k) must provide evidence to support its substantial equivalence claims. As a result, AI product manufacturers are not required to systematically document how the AI was developed or whether its performance was validated on an independent dataset. Such validation is crucial to ensuring the product will work on a wide range of patients across diverse characteristics such as age, sex, gender, race, and ethnicity. Machine learning (ML) systems, moreover, are designed to find hidden patterns in data that may improve care, a fundamental difference from the drugs and traditional devices the FDA is accustomed to reviewing.

STAT+ found that of the 161 products cleared by the FDA between 2012 and 2020, only 73 disclosed the amount of patient data used to validate their devices' performance, only 7 reported the racial makeup of their study populations, and only 13 provided a gender breakdown. Moreover, there is often a noticeable gap in demographic data, for example in breast imaging. These inconsistencies in FDA standards have led clinicians to question whether the tools will be useful and equitable for the patients they are intended to serve. Recognizing this, the agency has proposed a new action plan for increased regulation and monitoring of AI tools. The plan proposes a process for reviewing planned alterations or updates to a product before they are made, so that companies can iterate their products in ways the agency finds safe and useful. The agency is also exploring ways to alter its regulatory frameworks to use more real-world performance data. Finally, since much of the submitted documentation does not assess the demographics of training datasets, STAT+ recommends that manufacturers be transparent about their reference datasets at submission so that users understand the constraints of the AI tool (the sketch below illustrates the kind of subgroup reporting this would enable).
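As an illustration of the subgroup reporting STAT+ found missing, here is a minimal sketch (hypothetical field names, not any manufacturer's actual submission) that computes sensitivity and specificity separately for each demographic slice of an independent validation set.

from collections import defaultdict

def subgroup_performance(examples, group_key="race"):
    """examples: dicts with 'y_true' and 'y_pred' (0/1) plus a demographic key."""
    tallies = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for ex in examples:
        t = tallies[ex[group_key]]
        if ex["y_true"] and ex["y_pred"]:
            t["tp"] += 1        # disease present, flagged
        elif ex["y_true"]:
            t["fn"] += 1        # disease present, missed
        elif ex["y_pred"]:
            t["fp"] += 1        # disease absent, flagged anyway
        else:
            t["tn"] += 1        # disease absent, correctly cleared
    report = {}
    for group, t in tallies.items():
        pos, neg = t["tp"] + t["fn"], t["tn"] + t["fp"]
        report[group] = {
            "n": pos + neg,
            "sensitivity": t["tp"] / pos if pos else float("nan"),
            "specificity": t["tn"] / neg if neg else float("nan"),
        }
    return report

# Tiny illustrative validation set (invented records).
val_set = [
    {"y_true": 1, "y_pred": 1, "race": "Black"},
    {"y_true": 1, "y_pred": 0, "race": "Black"},
    {"y_true": 0, "y_pred": 0, "race": "White"},
    {"y_true": 1, "y_pred": 1, "race": "White"},
]
print(subgroup_performance(val_set))

Reporting a table like this alongside a submission would let clinicians see at a glance whether a tool's performance holds up across the populations they serve.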

Implications: The FDA has received an influx of demands from political and business leaders to speedily approve products financed with millions of dollars in private investment. The pressure is likely to grow as rapid software innovation and increasing computing power make it faster and easier to encode algorithms into medical products. Although regulations are a vital component of standardizing AI products, potential risks should be identified early, before they become rooted in healthcare delivery. For future AI products to succeed and be utilized by healthcare facilities and clinicians, inconsistencies in datasets have to be addressed. Diverse, unbiased patient cohorts are clearly necessary: they yield less biased data and help remove the clinical confounds that lead AI devices to make mistakes. In addition, incorporating a diverse population of patients in AI technologies will provide transparency and confidence in the products, especially for clinicians, to whom such technology can often be opaque. Experts argue that the FDA must treat AI more like a human, asking how and where it was trained and how the AI product will react when exposed to complicated situations in the real world. Moreover, the FDA should give attention to executives who initiate discussions with the agency, to ensure study designs are inclusive. Details such as the races and ethnicities represented in the data on which the machines were trained should be routinely disclosed and scrutinized by the FDA. Currently, the populations these systems must represent in order to produce unbiased outcomes, generally people of color, are often underrepresented. This lack of evidence and standardized regulation has put the burden on hospitals to determine whether the AI tools cleared by the FDA actually perform as indicated, because tools developed on unrepresentative data may not be accurate for the variety of populations whose care many hospitals are charged with.




Folx Health Raises $25 Million for Virtual Clinical Offerings and Care for the LGBTQIA+ Community


Event: Folx Health, a Boston-based startup that offers virtual care and prescriptions for hormone replacement therapy and sexual health, has raised $25 million in new funding. Designed for the queer and transgender community, Folx provides access to a network of queer and trans clinicians with a tailored focus on clinical offerings that are often marginalized in traditional health settings.

Description: Folx Health is a primary care practice with in-person and virtual options for the LGBTQIA+ (lesbian, gay, bisexual, pansexual, transgender, genderqueer, queer, intersex, agender, asexual and ally) community. The company announced the availability of its hormone replacement therapy for testosterone or estrogen, with monthly plans starting at $59. It will also begin releasing its sexual health and wellness offerings, starting with erectile dysfunction (ED) treatment, soon to be followed by at-home sexually transmitted infection (STI) testing and treatment and pre-exposure prophylaxis (PrEP), all customized for the specifics of queer and trans bodies. The services include unlimited on-demand clinical support with at-home lab testing (for most plans) and home-delivered medications (costs may vary based on medication). The company's services are now available in California, Connecticut, Delaware, Florida, Illinois, Massachusetts, North Carolina, New York, Texas, Virginia, and Washington. The company is also launching the Folx Library, which will serve as a content hub and resource for queer and trans health, written by Folx clinicians and its broader community.

Implications: With 2% of the population identifying as transgender and 10% to 20% identifying as part of the LGBTQIA+ community, there is a large market opportunity in digitally catering to this community. The community has been historically under-served in the healthcare market, and the pandemic further limited its access to care. The LGBTQIA+ community needs not only clinical services like hormone replacement therapy and sexual health and wellness care, but also community resources and a safe space with other members of the LGBTQIA+ community. Since its founding at the peak of the pandemic (Spring 2020), Folx Health has helped the LGBTQIA+ community meet those needs online and at home. In addition to Folx Health, a growing number of digital health companies are tackling health issues for the LGBTQIA+ community: Queerly Health is an online marketplace where LGBTQIA+ people can connect with vetted and trained providers, telehealth tools, and concierge health services; Violet Services is a mental healthcare startup run by and for the LGBTQIA+ community; and Plume is a digital health service focused exclusively on the transgender community that has expanded into employee benefits.




FDA Urged to Review Accuracy of Pulse Oximeters for People of Color


Event: On January 28th, the American Hospital Association reported that three senators are urging the FDA to examine the accuracy of pulse oximeters for minority patients, after recent studies concluded that the devices provide inaccurate readings for patients with darker skin. In an era where providing quality care has become essential in healthcare, it is imperative that this issue be addressed immediately to ensure positive health outcomes, reduce health disparities, and promote equity.


Description: As COVID has continued to surge through hospitals, many patients have been given pulse oximeters, devices that monitor blood oxygen levels. Because the virus has had disproportionate consequences for minorities, the use of pulse oximeters among minority patients has significantly increased. According to the letter sent to the U.S. Food and Drug Administration (FDA) by Senators Elizabeth Warren, Cory Booker, and Ron Wyden, studies suggest that these devices are biased against patients with "darkly pigmented" skin: they are less likely to detect low blood oxygen levels in Black patients than in White patients, leaving Black patients at higher risk of undetected hypoxemia. With the increased purchase of oximeters in retail pharmacies and their use in emergency departments and outpatient testing centers, it is important that these devices maintain the highest level of accuracy. Although these three senators have only recently shed light on this issue, the medical community has long acknowledged that there may be racial bias in these devices. In a 2005 publication, three pulse oximeter brands were tested on 11 Black participants and 10 White participants. All three assigned higher pulse oxygen saturation (SpO2) levels to the Black participants, and notably, the inaccuracy attributable to skin pigment increased linearly as SpO2 levels decreased. This means a patient could be more ill than the pulse oximeter reflected, placing the patient in jeopardy, clearly a serious failing and concern (an illustrative sketch of this pattern follows).
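To illustrate the pattern the 2005 study describes, the sketch below models a device whose readings overstate true saturation by an amount that grows as true SpO2 falls. The coefficient is invented for illustration; it is not a measurement from the study.

def displayed_spo2(true_spo2, dark_pigment):
    """Model a device that overstates saturation more as true SpO2 drops."""
    BIAS_SLOPE = 0.08  # hypothetical: extra points of bias per point below 100%
    bias = BIAS_SLOPE * (100 - true_spo2) if dark_pigment else 0.0
    return min(true_spo2 + bias, 100.0)

for true_spo2 in (97, 92, 87, 82):
    reading = displayed_spo2(true_spo2, dark_pigment=True)
    print(f"true SpO2 {true_spo2}% -> displayed {reading:.1f}%")

The output shows displayed readings drifting further above the true value as saturation drops, which is precisely when an accurate reading matters most clinically.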


Implications: Given the history of health disparities, trauma, and distrust among minorities, specifically African Americans, within the healthcare system, it is important that the issue with pulse oximeters be swiftly addressed. The senators have tasked the FDA with reviewing the issue and answering questions including the following:

1) Has the FDA reviewed data on the inaccuracy of pulse oximeters due to skin color, including those used in professional and at-home settings? If so, what has the FDA concluded, and what is the clinical significance of this inaccuracy?

2) For pulse oximeters currently used clinically and over-the-counter, did the FDA collect data on the accuracy of the product among subgroups, for example by sex, age, race, and ethnicity, before the product received FDA clearance or approval?

3) Has the FDA been monitoring these devices to ensure they are being marketed appropriately under the Federal Food, Drug, and Cosmetic Act (FDCA)?

4) To what degree are different cleared or approved pulse oximeters efficacious across racial and ethnic groups? Are readings consistently more accurate for some groups and less accurate for others?

5) Does the FDA plan to adjust accuracy requirements for future pulse oximeters seeking clearance or approval to ensure they are accurate for all patients regardless of skin color?

6) Is there evidence that other infrared medical devices that interact with a patient's skin pigment, such as vein visualization devices or thermometers, also vary in efficacy by the patient's race and ethnicity?

Accountability is key as the pandemic continues to significantly impact communities of color, and the accuracy of these devices can mean the difference between life and death for many of these individuals. Their right to quality care should not be infringed upon because of a medical device's bias related to darkly pigmented skin.




Biden Administration Awards $231M to Increase US Production of At-Home, OTC COVID-19 Test


Event: The U.S. Department of Defense, under the Biden administration, awarded $231.8 million to Australia's Ellume to expand U.S. production of a rapid at-home test for COVID. The contract is part of the Biden administration's pandemic response effort, which includes the purchase of 8.5 million antigen tests for nationwide distribution. According to Ellume, its rollout strategy for the COVID home test includes retail commercialization and partnerships with other public and private institutions.

Description: Throughout his presidential campaign, President Biden pledged to act rapidly and decisively on pandemic response, and the award follows through on his pledge to scale up testing capacity and invest in advanced technologies such as at-home and rapid tests. This is part of his team's unified national strategy to reduce the spread of the coronavirus. In addition, President Biden issued an executive order to establish a national pandemic testing board to coordinate federal efforts to expand test availability and use. Ellume's product is the first over-the-counter self-test for COVID to receive FDA emergency use authorization. The test was developed with a $30 million contract from the National Institutes of Health's Rapid Acceleration of Diagnostics (RADx) initiative. The Ellume test is an antigen test that can be performed in about 15 minutes using a nasal swab specimen from adults and children as young as 2 years old, and it is authorized for people with or without symptoms. While the FDA has acknowledged that antigen tests can be less sensitive and less specific than lab-run tests, Ellume's test correctly identified the presence of the virus (or lack of it) with over 95% accuracy in people with symptoms and over 90% accuracy in people without symptoms (see the sketch below for what those figures imply in practice).
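Headline accuracy figures are easier to interpret with a quick Bayes' rule calculation. The sketch below treats 95% as both the sensitivity and the specificity of the test (an assumption for illustration, not Ellume's exact figures) and varies the assumed prevalence of infection among those tested.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(infected | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed prevalence levels, chosen for illustration only.
for prevalence in (0.01, 0.05, 0.20):
    ppv = positive_predictive_value(0.95, 0.95, prevalence)
    print(f"prevalence {prevalence:.0%}: a positive result is correct {ppv:.0%} of the time")

At 1% prevalence, a positive result is correct only about 16% of the time, rising to roughly 83% at 20% prevalence. This is why, even for a highly accurate test, confirmatory testing of positive at-home results is often recommended when community spread is low.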

Implications: The convenience of an over-the-counter kit gives the general public the ability to test themselves and receive results within minutes. The availability of such tests through initiatives undertaken by the Biden administration will build confidence both in controlling the pandemic and in testing as a way to understand the spread of the virus. Widespread availability of testing can also help control the spread of the virus in rural and under-resourced communities where testing facilities and access to testing may be limited. In addition, testing solutions such as this will give people better information about their own exposure to the virus, enabling them to be more cautious about exposing others and to take required precautions when they have symptoms. Tests like these, which come with a smartphone application that walks users through how to perform the test and receive their results, should make the process user-friendly and help eliminate the current backlog in testing.

