
  • AI in Anesthesiology: Lowering the Risk of Surgical Complications and Adverse Outcomes

    Our Take: Artificial intelligence (AI) has the potential to transform the field of anesthesiology by improving patient safety and outcomes, offering innovative solutions to critical issues in the field. Pilot AI projects have already been used successfully to classify high-risk surgical patients, continuously administer medications throughout surgery, and aid in clinical decision making. Further opportunities exist for AI to improve the safety of critical procedures like tracheal intubation and nerve blocks. With the current shortage of anesthesiologists, a forecasted increase in the demand for surgical procedures, and an increasingly sick patient population, AI may allow anesthesiologists to provide safe, high-quality care for these complex patients and decrease the risks associated with invasive procedures. Despite the promising outlook, incorporating AI into anesthesiology requires a deliberate and responsible strategy, implemented carefully to navigate legal, ethical, and safety concerns.
Key Takeaways:
In 2020, workplace staffing shortages affected 35.1% of US anesthesiologists, a figure that rose to 78.4% in 2022 (Anesthesiology).
Experts project approximately a 100% increase in Americans aged 50 years and older with at least one chronic disease by 2050, leading to greater patient complexity and surgical demand (Frontiers in Public Health).
An automated machine learning model that analyzed a dataset of over 1M patient encounters effectively identified patients at high risk of postoperative adverse outcomes, showcasing the potential for AI to enhance risk prediction in surgical settings (Journal of the American Medical Association).
Anesthesiologists successfully used a closed-loop system to maintain blood pressure within 10% of a target range for >90% of the case time in patients undergoing abdominal surgery (Journal of Personalized Medicine).
The Problem: Artificial intelligence has emerged as a transformative force in the field of anesthesiology, presenting numerous applications that could greatly improve patient care. From optimizing preoperative patient conditions to calculating the risk of adverse events during the perioperative period, AI may also soon perform routine procedures while augmenting closed-loop systems for automated medication and fluid delivery. Despite the many benefits of utilizing AI in the field of anesthesiology, multiple critical issues must be tackled before anesthesiologists can fully embrace and trust this technology. First and foremost, the adaptability and efficiency of AI in anesthesiology may be impeded by safety considerations. Notably, Ethicon Inc's Sedasys employed an AI model to administer propofol – an anesthetic and sedative medication – aiming to achieve mild to moderate sedation during gastrointestinal (GI) endoscopic procedures.
This closed-loop sedation system maintained a continuous propofol infusion throughout a GI procedure, automatically adjusting the infusion based on vital signs. Due to safety measures, the Sedasys machine was unable to increase the dose of propofol if the patient were to become "light" during a procedure – a significant limitation that decreased the adaptability of this technology. Additionally, Sedasys' safety mechanism of administering fentanyl followed by a 3-minute waiting period before propofol administration compromised efficiency, particularly given that most diagnostic upper GI procedures are completed within 5 to 10 minutes. Secondly, the integration of AI in anesthesiology raises medicolegal concerns. Medicine, being a highly complex field with nuanced variables, relies on human judgment for clinical decision-making. The intricacies that human anesthesiologists consider, especially in novel clinical situations, may not be fully appreciated by AI systems. In the event of an anesthetic error during a surgery involving AI technology, determining responsibility becomes a complex matter – whether it lies with the anesthesiologist, the AI-developing company, or the hospital. Moreover, issues of patient consent arise, as individuals may not fully comprehend the implications, both positive and negative, of AI technologies and may harbor concerns regarding the security and privacy of their health data. Lastly, a crucial challenge in the utilization of AI in anesthesia pertains to the quality of the data it relies on for training and learning. While systems are generally trained on data from electronic health records, patient monitors, and anesthesia machines, these diverse data sets and methods of data acquisition, often subjectively documented by clinicians, pose challenges in ensuring the accuracy, consistency, and completeness of the database.
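The Sedasys-style behavior described above – a controller that may reduce or pause the infusion in response to vital signs but never escalate it on its own – can be pictured with a deliberately simplified sketch. All thresholds, rates, and function names here are hypothetical illustrations, not the actual Sedasys logic or real clinical values:

```python
# Illustrative sketch of a safety-clamped closed-loop infusion controller.
# All thresholds and rates are hypothetical examples, not clinical guidance.

def adjust_infusion(current_rate, baseline_rate, spo2, resp_rate):
    """Return a new propofol infusion rate (mL/hr) given vital signs.

    Mirrors the safety constraint described above: the controller can
    reduce or pause the infusion when vitals suggest over-sedation, but
    it never raises the rate above the clinician-set baseline --
    escalation is left to the human anesthesiologist.
    """
    if spo2 < 90 or resp_rate < 8:    # signs of over-sedation: stop
        return 0.0                    # pause the infusion entirely
    if spo2 < 94 or resp_rate < 10:   # borderline vitals: back off
        return current_rate * 0.5
    # Vitals acceptable: hold the rate, clamped at the baseline.
    # Note the clamp -- even if the patient becomes "light," the
    # system cannot increase the dose on its own.
    return min(current_rate, baseline_rate)

print(adjust_infusion(10.0, 10.0, 98, 14))  # vitals fine: stays at 10.0
print(adjust_infusion(10.0, 10.0, 92, 12))  # borderline: reduced to 5.0
print(adjust_infusion(10.0, 10.0, 88, 12))  # over-sedated: paused, 0.0
```

The asymmetry in the last branch is exactly the limitation the text describes: a one-way safety clamp keeps the machine conservative, at the cost of adaptability when the patient needs more sedation.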
Consequently, faulty training data can lead AI systems to make incorrect judgments, potentially resulting in adverse health outcomes. For instance, an AI system may erroneously flag a patient as high-risk before an operation, prompting invasive monitoring during the procedure that introduces its own risks. All of these concerns contribute to anesthesiologists' hesitation to fully embrace AI technology, compounded by practitioners' concerns about job security and the fear of "black box" AI systems making treatment recommendations. The Backdrop: A recent report from the American Society of Anesthesiologists reveals that almost 40% of anesthesiologists plan on early retirement, while about a quarter have already reduced or plan to reduce their working hours. In 2020, workplace staffing shortages affected 35.1% of US anesthesiologists, a figure that rose to 78.4% in 2022. Amid escalating production pressures from private equity firms, financial struggles in hospitals, and alarming rates of burnout among anesthesiologists, AI emerges as a strategic tool that can reduce the stress anesthesiologists face by helping them provide high-quality care and improve patient safety. Demographic shifts, such as increased life expectancy and the aging "baby boomer" generation, forecast a surge in demand for surgical procedures in the near future. For example, as noted in "Retooling for an Aging America: Building the Health Care Workforce", the authors write, "older patients use two-to-three times as many medical services as younger patients, and the number of people over age 65 will increase by almost 50%, just in the next 10 to 15 years alone". A 2022 Frontiers in Public Health paper predicts a 99.5% increase in Americans aged 50 years and older with at least one chronic disease by 2050.
As surgical volume and complexity rise, anesthesiologists face increasing obstacles in ensuring patient well-being throughout the entire surgical process. AI, through the use of risk calculators, can assist by identifying high-risk patients based on preoperative variables, allowing for optimized resource allocation and patient preparation. A noteworthy example comes from a 2023 Journal of the American Medical Association paper, where researchers employed an automated machine learning model, analyzing a dataset of over a million patient encounters. This model effectively identified patients at high risk of postoperative adverse outcomes, showcasing the potential for AI to enhance risk prediction and resource optimization in surgical settings. Another machine-learning model published in the journal Anesthesiology was able to successfully predict a significant risk factor for adverse perioperative outcomes: post-induction hypotension, defined as low blood pressure in the first 20 minutes after administering anesthetic medication. Moreover, researchers from Japan published a 2021 paper in the Journal of Intensive Care describing the use of a deep-learning AI model capable of classifying intubation difficulty by analyzing face images of patients. Tracheal intubation, a critical step at the beginning of surgical cases, demands precision to avoid complications such as airway damage, bleeding, and prolonged deoxygenation. The AI model exhibited an 80.5% predictive value, providing anesthesiologists with valuable information to prepare advanced techniques and equipment ahead of “difficult airway” situations, significantly improving patient safety. This development aligns with the broader trend of integrating AI into robotic intubation systems, exemplified by machines like the Da Vinci surgical system and the Kepler intubation system, which show promise in automating and enhancing the safety of intubation procedures. 
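The preoperative risk calculators described above boil down to combining patient variables into a probability of an adverse outcome and flagging patients above a threshold for extra preparation. A toy sketch of that idea follows; the features, weights, and threshold are invented for illustration and bear no relation to the published JAMA model's actual inputs or training:

```python
import math

# Toy preoperative risk score: a logistic model over a few hypothetical
# binary flags. Real models are trained on 1M+ encounters with far
# richer data; these weights are made up for illustration only.
WEIGHTS = {"age_over_65": 0.9, "asa_class_3_plus": 1.2,
           "emergency_case": 1.5, "chf_history": 1.1}
BIAS = -3.0  # baseline log-odds for a patient with no risk flags

def adverse_outcome_risk(patient):
    """Map binary preoperative flags to a probability via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_high_risk(patient, threshold=0.5):
    """Flag the patient for extra monitoring/preparation above a risk cutoff."""
    return adverse_outcome_risk(patient) >= threshold

healthy = {"age_over_65": 0, "asa_class_3_plus": 0}
complex_pt = {"age_over_65": 1, "asa_class_3_plus": 1,
              "emergency_case": 1, "chf_history": 1}
print(flag_high_risk(healthy))     # low score: not flagged
print(flag_high_risk(complex_pt))  # multiple risk factors: flagged
```

The clinical value, as the text notes, comes from acting on the flag before the case (resource allocation, patient optimization), while the text's data-quality caveat applies here too: bad training data yields bad weights and spurious flags.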
With further developments in AI, artificial intelligence may one day be integrated into these robotic intubation systems, allowing safe, automated procedures to be performed. As shown by a 2021 paper published in the journal Clinical Anatomy, AI has even augmented ultrasound-guided regional anesthesia by aiding in the identification of anatomical structures on ultrasound. In the study, 97.5% or more of the expert anesthesiologists agreed that the AI assistant would help less experienced practitioners confirm anatomical structures on ultrasound. Another key issue anesthesiologists face is "alarm fatigue": an increase in a health provider's response time, or decrease in response rate, to an electronic alarm from a medical or patient monitoring device due to the excessive frequency of alarms. This is especially concerning for anesthesiologists, who hear many alarms for blood pressure, heart rate, heart rhythm, oxygen saturation, temperature, and more. Even during high-risk cardiac surgeries, 80% of alarms were deemed useless. In an article published in the Health Informatics Journal entitled "Machine learning in anesthesiology: Detecting adverse events in clinical practice," the authors propose AI systems that generate meaningful and reliable alarms to mitigate alarm fatigue. Lastly, closed-loop systems integrated with AI that can deliver medications to induce anesthesia show promising outcomes. AI closed-loop systems may eventually be used to control other factors throughout a surgical case as well – blood pressure, neuromuscular blockade, ventilator management, and pain control.
For example, a 2022 paper published in the Journal of Personalized Medicine described the successful use of a closed-loop system to maintain blood pressure within 10% of a target range for >90% of the case time in patients undergoing abdominal surgery. Another 2020 article in the journal Anesthesiology found that closed-loop systems had a positive impact on delayed neurocognitive recovery and outperformed manual control by anesthesiologists in managing anesthetic medication, fluids, and ventilation variables. Implications: Artificial intelligence holds substantial promise for enhancing patient safety and outcomes in the field of anesthesiology. AI enables anesthesiologists to adopt a proactive approach by identifying high-risk surgical patients and optimizing patient preparation. During a surgical procedure, customized and targeted alarms can help reduce "alarm fatigue," and the integration of robotic AI devices may improve the safety of procedures performed by anesthesiologists, such as tracheal intubation and ultrasound-guided regional anesthesia. These measures collectively improve patient outcomes while avoiding adverse health events, paving the way for the future of anesthesiology. As the demand for surgical procedures rises with an aging population and an increase in chronic diseases, anesthesiologists stand to benefit from AI in navigating the complexities of patient care and in clinical decision-making. AI may even help alleviate the anticipated shortage of anesthesiologists over the next several years. For example, AI may enable anesthesiologists to supervise less credentialed but highly capable clinicians, such as nurse anesthetists, broadening access to care without compromising quality. The integration of AI into anesthesiology holds the potential to improve care and lower costs while advancing the evolution of healthcare practice.
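Conceptually, the closed-loop systems described above are feedback controllers: measure a variable such as mean arterial pressure (MAP), compare it to a target, and titrate an infusion to close the gap. A minimal proportional-control sketch under a toy dose-response model is shown below; every number is hypothetical, and real systems add validated pharmacologic models, safety interlocks, and clinician oversight:

```python
# Minimal proportional feedback loop keeping a simulated MAP near a target.
# Purely illustrative: target, gain, and the dose-response model are
# invented values, not the published system's parameters.

TARGET_MAP = 75.0   # mmHg, hypothetical target
GAIN = 0.05         # proportional gain (infusion units per mmHg of error)

def controller_step(map_measured, infusion_rate):
    """One control cycle: nudge the vasopressor rate toward the MAP target."""
    error = TARGET_MAP - map_measured
    return max(0.0, infusion_rate + GAIN * error)  # no negative infusion

def simulate(map_start, steps=60):
    """Crude plant model: each unit of infusion raises MAP ~2 mmHg above
    the patient's (hypotensive) baseline."""
    rate, map_now = 0.0, map_start
    for _ in range(steps):
        rate = controller_step(map_now, rate)
        map_now = map_start + 2.0 * rate  # toy dose-response relationship
    return map_now

final_map = simulate(60.0)  # hypotensive patient, baseline MAP 60 mmHg
# The loop settles inside the band the study reports (within 10% of target).
print(abs(final_map - TARGET_MAP) / TARGET_MAP < 0.10)
```

The "within 10% of target for >90% of case time" result in the 2022 study is exactly this kind of band-keeping behavior, achieved with far more sophisticated control than a single proportional term.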
    Related Reading:
Artificial Intelligence in Anesthesiology: Current Techniques, Clinical Applications, and Limitations
Artificial intelligence and anesthesia: A narrative review
A Comprehensive Analysis and Review of Artificial Intelligence in Anaesthesia

  • Ventricle Health: Accelerating Evidence-Based Virtual Care Delivery for Heart Failure

    The Driver: Ventricle Health recently announced that it has raised $8M in seed funding in a round led by RA Capital Management alongside Waterline Ventures and other investors. The company currently supports accountable care organizations (ACOs) in the mid-Atlantic, Texas, Ohio, and Florida, with plans to announce more markets soon. According to the company, the proceeds of the financing will be used to accelerate the national delivery of its value-based home care model for heart failure patients in collaboration with value-based care provider groups and payers.
Key Takeaways:
6.2 million adult Americans have heart failure, with prevalence projected to increase by 46% and direct medical costs to reach $53 billion by 2030 (CDC, Journal of Managed Care & Specialty Pharmacy).
50% of Americans with cardiovascular disease do not have access to a cardiologist, with an average wait time for consultation of 26 days (Ventricle Health).
Hospitalization costs are by far the largest costs for heart failure; mean costs per hospitalization for heart failure ranged from $10,737 to $17,830 (Journal of Managed Care & Specialty Pharmacy).
Despite improvements in the treatment and incidence of heart failure, the 1-year mortality rate remains approximately 30%, while 5-year mortality rises to 40% (Circulation).
The Story: The company was founded in 2021 by heart failure cardiologist Dr. Dan Bensimhon and a team of veteran heart failure clinicians. Bensimhon, Chief Medical Officer at Ventricle, is a leading board-certified cardiologist and medical director of the Advanced Heart Failure & Mechanical Circulatory Support Program at Cone Health. He is joined by CEO Sean O'Donnell, who has a background in value-based care delivery and digital solutions. O'Donnell was previously President and COO of Consumer Health Services at the retail-based physician care clinics for Duane Reade and Rite Aid Pharmacies.
According to O'Donnell, "the use of emerging technologies enables a hospital-at-home experience, detecting early signs of disease and implementing evidence-based protocols at a fraction of the cost. The aim of the company is to also build the most proactive, engaging, and impactful provider network for cardiac care in the U.S." The Differentiators: As noted by the company, access to cardiologists for heart failure patients is a particularly crucial issue. For example, it notes that the average wait time to secure a cardiology appointment in the U.S. is 26 days, which can have a significant impact on follow-up care and readmissions. Ventricle attempts to address this issue by following a care model that is "anchored around well-established guideline-directed medical therapy (GDMT) pathways," which, as noted in the journal Drugs, is "the cornerstone of pharmacological therapy for patients with heart failure with reduced ejection fraction (HFrEF) and consists of the four main drug classes…being used in conjunction." As the article points out, "there is an underutilization of GDMT, partially due to lack of awareness of how to safely and effectively initiate and titrate these medications." Ventricle attempts to overcome this by providing patients access to cardiology care appointments from their home in as little as three days. This should help reduce the 30-day readmission rate for CHF, which in one study ran as high as 24.4% for those with reduced ejection fraction and was approximately 23% for all-cause readmission. The company believes its home-based, virtually enabled care model can reduce the overall average annual cost of heart failure care by at least 30-50%. The Big Picture: According to the CDC, heart failure cost the nation an estimated $30.7 billion in 2012. By 2030, heart failure spending is expected to exceed $70 billion.
As noted above, by allowing CHF patients to access clinicians earlier, Ventricle could substantially help reduce costs, particularly those for hospitalizations. For example, according to a 2022 study in the Journal of Managed Care and Specialty Pharmacy, hospitalization costs are by far the largest costs for heart failure. The article pointed out that mean costs per hospitalization for heart failure (HHF) ranged from $10,737 to $17,830 and mean charges per HHF ranged from $50,569 to $50,952, based on an analysis of data between January 2014 and May 2019. Ventricle Health's virtually enabled care model aims to detect early signs of disease and implement evidence-based protocols at a fraction of the cost, thereby reducing the cost of heart failure care by 30-50% according to the company. As noted in a number of studies, "clinical trials have demonstrated that self-management interventions, including face-to-face patient education, telephone case management, and home visits can improve self-care adherence and reduce the risk of HF-related hospitalizations," all of which Ventricle appears to reimagine and deliver virtually. This can not only increase access to care but also has the potential to reduce the utilization of hospital and emergency services, which could result in a dramatic reduction of healthcare costs.
Related Reading:
Cardiac care company Ventricle Health garners $8M and more digital health fundings
Startup founded by Cone Health cardiologist, Ventricle Health, raises $8 million in seed funding

  • mHealth: Challenges Remain to Enable Providers to Address Public Health-The HSB Blog 11/5/23

    Our Take: Mobile Health (mHealth) apps have emerged as a transformative force in the healthcare industry, significantly impacting public health in various ways. These applications leverage the ubiquity of smartphones and the power of digital technology to improve healthcare access, patient engagement, and health outcomes. Addressing privacy concerns and health inequalities, and ensuring regulatory compliance, are essential steps in maximizing their benefits while mitigating potential risks. The ongoing integration of mHealth into healthcare systems holds great promise for the future of public health.
Key Takeaways:
Almost 40% of Americans aged 65 and older still do not own a smartphone, and approximately ⅓ of Americans who have smartphones do not have a high-speed internet connection within their homes (Pew Research Center).
The least dense areas of the United States pay upwards of 37% more for broadband than the densest centers, with the lowest-income households tending not to have a home broadband subscription (Benton Institute for Broadband & Society).
65.6% of Primary Care Health Professional Shortage Areas (HPSAs), which are defined in part by having a provider-to-patient ratio of 1:3500, were located in rural areas (Rural Health Innovation Hub).
For lower-income households (i.e., those whose annual incomes are $50,000 or less), half (49%) live near the precipice of disconnection in that they have lost connectivity due to economic hardship (Benton Institute for Broadband & Society).
The Problem: While Mobile Health (mHealth) apps have made significant strides in improving public health, they also come with several challenges and problems that need to be addressed.
First and foremost is unequal access and the exacerbation of existing disparities, often referred to as the "digital divide". For example, according to the Pew Research Center, across developed economies, "a median of 85% say they own a smartphone, 11% own a mobile phone that is not a smartphone and only 3% do not own a phone at all", but this is not synonymous with broadband access, particularly for the underserved and elderly. Also according to the Pew Research Center, almost 40% of Americans aged 65 and older still do not own a smartphone, and approximately ⅓ of Americans who have smartphones do not have a high-speed internet connection within their homes. Although many will argue that just having a smartphone gives owners access to a broadband hotspot, this argument fails to take into account that broadband access via a hotspot is quickly "throttled down" by cellular providers, and many of those who own smartphones may not have the unlimited data plans necessary to make that a viable option. Moreover, many in rural and underserved areas often pay more for broadband access. For example, according to the Benton Institute for Broadband & Society, the least dense areas of the United States pay upwards of 37% more for broadband than the densest centers, with the lowest-income households tending not to have a home broadband subscription, citing price as the problem. Importantly, this could lead to an exacerbation of racial disparities in rural populations, which are showing patterns of increases in BIPOC populations. In 1990, one in seven people in rural areas identified as people of color or indigenous; by 2010, one in five rural Americans identified this way.
Many of those families also sit at the precipice of what is called "subscription vulnerability." For lower-income households (i.e., those whose annual incomes are $50,000 or less), half (49%) live near the precipice of disconnection in that they have lost connectivity due to economic hardship (during the pandemic), live at or below the poverty line, or say it is very difficult for them to fit broadband service into their household budgets. There is also the problem of low digital literacy and low user engagement among those who do have access. This was particularly evident during the COVID pandemic. For example, an article from WIRED magazine entitled "Telemedicine Access Hardest for Those Who Need it Most" found that "as many as 41% of Medicare recipients don't have an internet-capable computer or smartphone at home, with elderly Black and Latinx people the least likely to have access compared to whites", while another study in JAMA found "approximately 13M elderly adults have trouble accessing telemedicine services, and approximately ½ of those people may not be capable of having a telephone call with a physician due to problems with hearing, communications, dementia, or eyesight, including 71% of elderly Latinx people and 60% of elderly Black people." Moreover, many apps lack the ability to adapt to their users and may be of questionable quality. Most mHealth apps offer a one-size-fits-all approach, failing to adapt to individual user needs, preferences, goals, or limitations. Without personalization, users may not find the app relevant to their specific health concerns, which can lead to disengagement over time.
In addition, as the authors note in "Mobile Health Apps and Health Management Behaviors: Cost-Benefit Modeling Analysis", "It is evident that situational effects create some kind of general perception of risk because they inhibit the effective impact of mobile health apps on lifestyle behaviors, such as weight loss or physical activity [while] some apps may provide inaccurate information or unreliable health advice, potentially putting users' health at risk." Privacy concerns and the slow pace of passing policies and regulations for data protection add to consumers' uneasiness. For example, as we noted in "Health App Regulation Needs A New Direction-The HSB Blog 4/12/22", "while the markets and technology are moving at a rapid pace, policies and efforts around regulation move extremely slowly and have generally lagged behind advancement." The Backdrop: The impact of Mobile Health (mHealth) apps on public health occurs within the context of several overarching societal and technological trends that have shaped the healthcare landscape. Understanding this backdrop is essential for comprehending the significance of mHealth apps in improving public health. One of these has been the proliferation of smartphones and users' ability to capture, store, and transmit large volumes of health data on these devices. As noted in "The Impact of Using mHealth Apps on Improving Public Health Satisfaction during the COVID-19 Pandemic: A Digital Content Value Chain Perspective", "mobile health apps effectively promote information exchange, storage, and delivery, and they improve the ability of patients to monitor and respond to diseases." With billions of people carrying smartphones, these devices have become ubiquitous and readily accessible tools for healthcare management and information.
The maturation of mHealth also facilitates the delivery of remote care and remote patient monitoring (RPM), allowing care delivery for underserved urban communities as well as broad swaths of rural communities. For example, according to the Rural Health Innovation Hub, 65.6% of Primary Care Health Professional Shortage Areas (HPSAs), which are defined in part by having a provider-to-patient ratio of 1:3500, were located in rural areas. Given the lack of providers in these areas, many countries, including the United States, have begun to use mHealth apps on a large scale to provide consultation, monitoring, and care services for patients. These mobile health apps, encompassing telehealth, virtual care, and RPM, allow for two-way data exchange between patients and healthcare personnel to enable remote medical consultation, psychological consultation, and health education, thereby facilitating virtual visits, monitoring, remote diagnostics, and escalation to in-person care when necessary. Given their ubiquity and ability to constantly measure users' health data with relatively inexpensive technology, mHealth has demonstrated an ability to help reduce the cost of healthcare delivery. Not only has this been achieved by an increase in the delivery of basic preventive care, it has also moved the delivery of care from episodic and reactive to continuous and proactive. As noted in the aforementioned "The Impact of Using mHealth Apps on Improving Public Health Satisfaction during the COVID-19 Pandemic: A Digital Content Value Chain Perspective", "the emergence of mHealth apps has changed the supply mode of health services and brought about benefits for both healthcare providers and recipients. On the one hand, doctors use mHealth apps to process patient information and monitor patient health. On the other hand, individuals use mHealth apps to obtain health information for immediate diagnosis."
As a result, these apps can reduce the burden on traditional healthcare systems by enabling remote care and self-management of a number of health conditions. Implications: As noted above, mHealth apps have a number of positive implications for the delivery of healthcare and public health. mHealth apps can help promote healthy lifestyles, track fitness and nutrition, and create an opportunity for early intervention. As noted in the article "Mobile Health Apps and Health Management Behaviors: Cost-Benefit Modeling Analysis", "chronic diseases, but not health crises, often manifest in the form of health management routine. [In situations like this] the use of mobile health apps helps to address the health concerns of individuals who are already aware of their health condition." mHealth can also provide opportunities for continuity of care in public health, particularly for communities that lack transportation or the ability to take time off from jobs to seek treatment. This can be magnified during times of crisis like pandemics or natural disasters, when in-person visits are challenging. As noted in "The Impact of Using mHealth Apps on Improving Public Health Satisfaction during the COVID-19 Pandemic: A Digital Content Value Chain Perspective", "Mobile health apps effectively promote information exchange, storage, and delivery, and they improve the ability of patients to monitor and respond to diseases. They can also be used for training, information sharing, risk assessment, symptom self-management, contact tracking, family monitoring, and decision-making [as they were] during the COVID-19 pandemic." Perhaps most importantly, mHealth can help reduce costs and address workforce shortages associated with physical infrastructure, including travel, time off, and geographic barriers, making healthcare more cost-effective for both patients and providers.
As the authors note in "Mobile health app users found to be more content with public health governance during COVID-19", "Smartphone apps can partly eliminate the shortage of medical resources and improve the quality of medical services for high-risk groups and [those] residing in remote locations."
Related Reading:
The Impact of Using mHealth Apps on Improving Public Health Satisfaction during the COVID-19 Pandemic: A Digital Content Value Chain Perspective
Mobile health app users found to be more content with public health governance during COVID-19
Commercial mHealth Apps and Unjust Value Trade-offs: A Public Health Perspective
Research on the Impact of mHealth Apps on the Primary Healthcare Professionals in Patient Care
Access to Telemedicine Is Hardest for Those Who Need It Most

  • Ilant Health-Comprehensive Weight Loss and Obesity Treatment

    The Driver: Ilant Health recently raised $3M in initial funding backed by a number of angel investors including Nick Loporcaro, President & CEO of Global Medical Response; Brandon Kerns, CFO of CareBridge, Russell Street Ventures, and Main Street Health; Matt Klitus, CFO of Lyra Health; Iyah Romm, founding CEO of Cityblock Health and current Cityblock board member; Dr. Sylvia Romm, Founder of Sounder Health; and David Werry, Co-founder & President at Well.
Key Takeaways:
According to a recent analysis by Truist Securities, the market for weight management employer solutions is projected to be nearly $700M by 2024, with the potential to grow to $6B-$9B over the longer term.
Approximately 42% of the U.S. population has obesity, with more than 200 diseases associated with this condition (Milliman).
A real-world analysis of GLP-1 obesity treatment conducted by two PBMs found that [only] 32% of members on GLP-1 treatment were persistent at one year, and [only] 27% of those stayed on therapy for the following year (Prime Therapeutics & Magellan Rx Management).
19.3% of U.S. youth (aged 2–19 years) were classified as obese, 6.1% had severe obesity, and 16.1% were overweight (NHANES).
The Story: Ilant was founded by Elina Onitskansky, the former senior vice president and head of strategy at Molina Healthcare, after years of her own struggles with her weight and weight loss. As Onitskansky noted on the firm's website, "I [had] tried just about everything out there – from working with nutritionists to diets and meal replacements to daily exercise and personal training to weight loss resorts…I would lose weight only to see it come back and then some." It was only after she referred herself for a bariatric procedure that she had sustained success and decided to found Ilant.
As Onitskansky noted to Fierce Healthcare, prior to that she often felt unheard or judged by doctors who "assumed she had yet to try basic changes like diet or exercise," often facing a refrain of "why didn't I try eating more salads? Walking more? Or…just try harder?" Moreover, as she noted on Ilant's website, even after she had lost the weight, Onitskansky was "ashamed [to tell people she] hadn't been strong enough to 'do it on her own'" because it "was hard to overcome years of shame and stigma". As a result, Ilant was born "out of the desire to use [her] experience as a healthcare executive and as an obesity patient to improve care for others." The Differentiators: What distinguishes Ilant's model is its goal of being a "single front door" for patients to assess and access obesity treatments, accessed through employers and health plans rather than direct to consumer. For patients, Ilant seeks to deliver holistic, individualized, evidence-based, integrated treatment. This involves evaluating treatments via what the company calls Ilant Metabolism Matters to match clients to the right treatment. According to the company, this evidence-based algorithm accounts for the "medical, behavioral, and social determinants of health considerations" of patients and evaluates treatments along the entire treatment spectrum from "intensive behavioral therapy to pharmacotherapy (including all potential medications, not just GLP-1s), to bariatric surgery". Ilant combines these services with access to doctors trained in obesity, mental health professionals, nutritionists, and others to treat patients holistically and help them succeed. For employers and payers, Ilant applies an analytics engine it terms Ilant Rapid Returns, which addresses what it says is the historic undercoding of obesity and of the impact of obesity treatment.
As noted in the aforementioned Fierce Healthcare article, this helps match “individuals to the treatment most likely to drive outcomes and value for them” while taking into account the physical, emotional and social factors that may impact patients. Ilant intends to work with commercial, Medicare and Medicaid insurers but has not announced any partnerships to date. In addition, according to the company, it intends to pursue a value-based care approach, taking both one-sided (shared savings) and two-sided (shared savings and losses) risk on patients. The Big Picture: According to data from the CDC and Milliman, approximately 42% of the U.S. population has obesity, with more than 200 diseases associated with this condition. Moreover, according to data from two recent studies, the average cost of care was 100% higher for patients with obesity than for those without, with the health care costs related to obesity accounting for 21% of total national healthcare spending in the United States. This problem is of particular concern when it comes to youth and adolescents. Data from the National Health and Nutrition Examination Survey (NHANES) indicated that 19.3% of U.S. youth (aged 2–19 years) were classified as obese, 6.1% had severe obesity, and 16.1% were overweight. While the recent publicity and study data around the glucagon-like peptide-1 (GLP-1) drugs for diabetes have raised hope that they could be prescribed as a solution for weight loss, these drugs are not a panacea for those struggling with weight loss issues. For example, as noted in Milliman’s report “Payer Strategies for GLP-1’s for Weight Loss”, these drugs “must be taken consistently and long-term to achieve and maintain weight loss benefits and patients who discontinue use after a few initial doses or are inconsistent with their dosing will likely not see any material health benefits…”. 
The study went on to note “a recent real-world analysis of GLP-1 obesity treatment conducted by two pharmacy benefit managers (PBMs) found that [only] 32% of members on treatment were persistent at one year, and [only] 27% of those stayed on therapy for an additional year.” Given data such as this, it does appear that while GLP-1s may be appropriate for some, due to their side effect profiles and persistency issues, they are unlikely to be appropriate or effective for many. As a result, we believe that there remains a large and robust market for solutions like Ilant Health that create a continuum of treatment options and provide broad-based support for patients. According to a recent analysis by Truist Securities, the market for weight management employer solutions is projected to be nearly $700M by 2024 with the potential to grow to $6B-$9B over the longer term. In addition, while employers and payers will undoubtedly have to cover GLP-1 drugs for certain patients, we do expect them to require patients to pursue other treatment protocols like those offered by Ilant and competitors first, approving GLP-1 usage only as a last resort. While weight loss management is a crowded field, with competitors ranging from publicly traded WW, to Vida, Noom and Wondr Health, we do believe that a substantial market opportunity remains for clinically proven, evidence-based weight management companies. Obesity treatment startup Ilant Health launches with $3M, Value-based obesity treatment provider Ilant Health launches out of stealth
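The one-sided and two-sided risk arrangements described above can be illustrated with a simple settlement sketch. The benchmark, share rate, and dollar amounts below are illustrative assumptions, not Ilant's actual contract terms.

```python
def settle_value_based_contract(benchmark_cost, actual_cost,
                                share_rate=0.5, two_sided=False):
    """Illustrative value-based care settlement (hypothetical terms).

    benchmark_cost: payer's expected spend for the attributed population
    actual_cost: observed spend under the program
    share_rate: fraction of savings (or losses) shared with the provider
    two_sided: if True, the provider also repays a share of any overrun
    """
    savings = benchmark_cost - actual_cost
    if savings >= 0:
        # One-sided and two-sided deals both pay out a share of savings.
        return savings * share_rate
    # Losses are repaid only under a two-sided (shared savings and losses) deal.
    return savings * share_rate if two_sided else 0.0

# A provider that beats a $1.0M benchmark by $100K earns $50K at a 50% share;
# under two-sided risk, overshooting by $100K instead costs $50K.
print(settle_value_based_contract(1_000_000, 900_000))                    # 50000.0
print(settle_value_based_contract(1_000_000, 1_100_000, two_sided=True))  # -50000.0
```

The asymmetry between the two branches is the whole difference between the one-sided and two-sided models the company describes.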

  • The Pros and Cons of Deploying AI to Confront Physician Burnout-The HSB Blog 10/13/23

    Our Take: Artificial intelligence (AI) has the potential to alleviate physician burnout significantly by reducing the amount of time spent on bureaucratic tasks like documentation and reviewing old medical records. AI that utilizes large language models (LLMs), speech recognition, and natural language processing (NLP) can help transcribe conversations between physicians and patients into formatted clinical notes and prepare clinical summaries of a patient’s medical history. This is especially crucial as physicians spend increasing amounts of time utilizing the electronic health record during patient visits and after-hours for documentation purposes. In light of the Association of American Medical Colleges’ projected shortage of 124,000 physicians by 2034 and heightened levels of burnout post-pandemic, leveraging AI to minimize bureaucratic burdens is an essential next step. By decreasing bureaucratic burdens, AI can allow physicians to dedicate more time to addressing patient concerns, leading to improved patient satisfaction scores and health outcomes. Key Takeaways: Physician burnout costs the U.S. approximately $4.6B/yr. 
due to reduced hours, physician turnover, and expenses of finding and hiring replacements (Harvard Business School) 60% of physicians agree that bureaucratic tasks, including note writing, are the top contributor to physician burnout (Medscape) Physicians spend almost 50% of their time on the electronic health record (EHR) and desk work, with 1-2 hours of after-hours work each night dedicated to EHR tasks (Annals of Internal Medicine) AI utilizing voice-enabled technology saved clinicians 3.3 hours per week and reduced the amount of time physicians spent reviewing old notes by 60% by producing a clinical summary (AAFP) The Problem: Artificial intelligence utilizing LLMs can play a significant role in reducing physician burnout, especially by assisting with the burden of clinical documentation. The advent of new tools from vendors such as Augmedix, Regard, and Nuance allows healthcare organizations to significantly reduce the administrative burden on physicians. For instance, Nuance’s Dragon Ambient eXperience (DAX) software does this by recording the physician-patient encounter and transcribing it into a formatted clinical note through the utilization of speech recognition and NLP. However, several challenges exist that need to be addressed before further integrating this new technology into the workday of physicians. One issue is privacy. These novel tools require access to a patient’s protected medical record and will also consolidate new medical information during the patient’s visit. Therefore, there is a potential risk to patient privacy rights, and mistrust in these systems may hinder implementation. Moreover, patients may be wary of sharing information, fearing legal repercussions due to recorded data. According to a global survey by UIPath, only 44% of respondents from the baby boomer generation hold favorable views on AI in the workplace. 
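The note-structuring step that ambient documentation tools perform can be sketched in miniature. Commercial systems like DAX use speech recognition plus large language models; the keyword-based toy below (all names and keywords are hypothetical) only illustrates the shape of routing transcribed utterances into sections of a clinical note.

```python
# Hypothetical sketch of the structuring step in ambient documentation.
# Real products are far more sophisticated; this rule-based toy just shows
# how transcript lines might be sorted into note sections.
SECTION_KEYWORDS = {
    "Subjective": ["pain", "feel", "since", "complain"],
    "Plan": ["prescribe", "follow up", "order", "refer"],
}

def draft_note(transcript_lines):
    """Route each transcribed utterance into the first matching section."""
    note = {"Subjective": [], "Plan": [], "Other": []}
    for line in transcript_lines:
        lowered = line.lower()
        for section, keywords in SECTION_KEYWORDS.items():
            if any(keyword in lowered for keyword in keywords):
                note[section].append(line)
                break
        else:
            # No keyword matched; park the line for clinician review.
            note["Other"].append(line)
    return note

transcript = [
    "I've had chest pain since Tuesday.",
    "We'll order an ECG and follow up next week.",
]
print(draft_note(transcript))
```

In practice the draft would then be reviewed and signed by the physician, which is also where the hallucination and omission concerns discussed here come into play.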
Physicians, too, may be wary of this new technology as they fear for their job security and that recorded audio may be used against them in malpractice cases. For example, a recent survey of 1,500 physicians by Medscape found that only 19% of physicians would be comfortable using voice technology during a patient consultation. Another pressing issue is the possibility of error and misinformation. Language generation models can produce inaccurate information, which is particularly concerning when it comes to medical information. “Hallucination” is the term used to describe when an NLP model conjures false information. This phenomenon is more pronounced when multiple languages are used during a clinical encounter or when the technology infers information that the patient did not explicitly verbalize. AI can also omit facts from the patient visit. Dr. Shravani Durbhakula, a pain physician and anesthesiologist at the Johns Hopkins School of Medicine, expressed her reservations, stating, “The major concerns I would have here is I’m not sure the computer would be smart enough to know what is important [enough] to pull out into the note.” She added that the world-class hospital does not use [ambient intelligence tools] to automate clinical notes: “You could miss critical information.” Lastly, artificial intelligence trained on large datasets of text may inadvertently reflect biases present in the training data, perpetuating medical bias. When it comes to note writing, this can be seen in the form of emphasizing certain diagnoses or symptoms for different patient demographics. For example, a New England Journal of Medicine study highlighted this bias when a generative AI model, GPT-4, ranked panic and anxiety disorder higher on its list of potential diagnoses for female patients as compared to male patients. 
Furthermore, when GPT-4 was asked to generate clinical vignettes of sarcoidosis, the model described a Black woman 98% of the time, reflecting a significant bias in its output. The Backdrop: Physician burnout places a huge burden on the healthcare system. It can lead to increased medical errors, lower quality of care, worse patient outcomes, and higher attrition rates. For physicians themselves, it can lead to increased rates of depression, substance abuse, suicide, and overall work dissatisfaction. Nearly two-thirds of doctors experience symptoms of burnout following the pandemic, according to results published in the Mayo Clinic Proceedings. A major contributor to physician burnout is the increased administrative burden placed on physicians, namely in the form of documentation required for electronic health record systems (EHRs). A perspective piece published in The New England Journal of Medicine found that for every hour spent on patient interaction, physicians spend an extra 1-2 hours completing notes, ordering labs, prescribing medications, and reviewing results, all without extra compensation. In another paper published in Annals of Internal Medicine, the authors discovered that physicians devote almost 50% of their time to the EHR and desk work, allocating an extra 1-2 hours nightly to EHR tasks. 60% of physicians agree that bureaucratic tasks, including note writing, are the top contributor to physician burnout, per a report published in Medscape. AI presents a promising solution to significantly reduce the time spent on documentation. An American Academy of Family Physicians (AAFP) report found that AI leveraging voice-enabled technology saved clinicians 3.3 hours per week, thereby helping to reduce burnout. Furthermore, the AI was able to reduce the amount of time physicians spent reviewing old notes by 60% by creating clinical summaries. 
For example, in one case study, Regard’s CEO found that their AI tool reduced measures of burnout by 50% and reduced documentation time by 25%. Augmedix’s website similarly boasts a 40% improvement in work-life satisfaction and a 3-hour reduction in documentation time per workday. Despite the burden it imposes, documentation is increasingly important in the United States healthcare landscape, where the government ties reimbursement to the quality of the medical record. Without proper documentation, physicians do not get paid for the services that they provide patients. The AAFP report found that AI integration resulted in a 25% increase in diagnoses sent to insurance companies that were previously unrecorded in the EHR. Beyond the financial incentive of proper documentation, it also serves to facilitate communication with other healthcare providers, reduces risk management exposure, and captures value-based care metrics. Conversely, inadequate documentation can lead to adverse treatment decisions, expensive diagnostic studies, repeated studies, unclear communication, inappropriate billing, and poor patient care. While proper documentation is necessary, physicians face significant challenges due to the physician shortage in the United States and the high volume of patients. AI stands as a potential tool to mitigate the aforementioned shortfall of physicians by alleviating physician burnout and attrition. A 2021 JAMA Network Open study found that an AI tool extracting relevant patient health data and presenting it alongside the patient record reduced EHR use time by 18%, making it a promising tool for reducing burnout. Nuance’s DAX software also showcases promising outcomes, claiming to reduce documentation time by 50% and reduce feelings of burnout and fatigue by 70%. 
Implications: Physician burnout is a pressing issue that needs to be addressed, especially with nearly two-thirds of doctors experiencing symptoms of burnout, according to the New York Times, and the impending physician shortage. In addition, physician burnout has significant economic consequences. For example, a study by Harvard Business School found that the economic toll of physician burnout is staggering, amounting to approximately $4.6 billion annually in the United States alone. This financial burden arises from reduced physician work hours, physician turnover, and expenses associated with finding and hiring replacements. Artificial intelligence can play a significant role in reducing physician burnout by reducing the amount of time physicians spend on documentation, the top contributor to physician burnout. This reduction in burnout can lead to a noticeable improvement in the quality of patient care, enabling physicians to dedicate more time to patients without distraction and consequently improving healthcare outcomes. Furthermore, minimizing documentation-related work during after-hours, often termed “pajama time,” can help mitigate medical errors, a significant concern when physicians' recall and alertness may be compromised. As AI integration has been found to increase documentation of previously unrecorded diagnoses, physician reimbursement may also become more accurate. While addressing concerns of privacy, error, misinformation, and bias is essential, AI services focused on enhancing the documentation experience are continuously evolving and are poised to play a pivotal role in alleviating physician burnout. As these AI-driven technologies progress, they are bound to enhance the healthcare landscape, ultimately benefiting both healthcare providers and patients. 
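The economics cited above (turnover plus reduced hours driving roughly $4.6B per year nationally) can be reproduced in spirit with a back-of-envelope model. Every input below is an illustrative assumption, not a figure from the Harvard Business School study.

```python
def burnout_cost_estimate(n_physicians, burnout_rate, excess_turnover_rate,
                          replacement_cost, hours_cut_fraction, annual_revenue):
    """Back-of-envelope annual cost of burnout for a physician workforce.

    All parameters are illustrative assumptions: burnout_rate is the share of
    physicians burned out, excess_turnover_rate the extra attrition attributable
    to burnout, replacement_cost the cost to recruit and onboard one
    replacement, hours_cut_fraction the average reduction in clinical hours
    among burned-out physicians, and annual_revenue the revenue a full-time
    physician generates.
    """
    burned_out = n_physicians * burnout_rate
    turnover_cost = burned_out * excess_turnover_rate * replacement_cost
    reduced_hours_cost = burned_out * hours_cut_fraction * annual_revenue
    return turnover_cost + reduced_hours_cost

# Illustrative inputs: a 1,000-physician system, 50% burnout, 5% extra
# turnover, $500K per replacement, 2% average hours reduction, and $800K
# annual revenue per physician.
print(f"${burnout_cost_estimate(1000, 0.5, 0.05, 500_000, 0.02, 800_000):,.0f}")
```

Note that the model attributes cost to the same two channels the study names, turnover and reduced hours, which is why even modest per-physician assumptions compound into large system-level totals.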
Related Reading: Doctors turn to imperfect AI to spend more quality time with patients AI alleviates burnout, reduces documentation time by 72% in primary care Artificial Intelligence And Its Potential To Combat Physician Burnout Development and Validation of an Artificial Intelligence System to Optimize Clinician Review of Patient Records

  • CMR Surgical: Advancing Flexibility and Precision in Robotic Surgery

    The Driver: CMR Surgical recently raised $165M. The funding round was led by Softbank and Tencent and included all of its existing investors including Ally Bridge Group, Cambridge Innovation Capital, Escala Capital, LGT, RPMI Railpen, and Watrium. The funding brings CMR Surgical’s total funds raised to $1.1B per Crunchbase, and the company’s valuation remains the same as at the time of its Series D in 2021, when it was $3B according to Sifted. The company stated it plans to use the funds to drive continued product innovation, including new technological developments, and to support the further commercialization of the system in key existing, and new, geographies. Key Takeaways: From January 2012 through June 2018, the use of robotic-assisted surgery for all general surgery procedures increased from 1.8% to 15.1%, equaling an 8.4-fold change (JAMA) Cost savings from robotic surgery generally appear to be a function of operating time and reduction in complications with one study showing approximately a 30-minute reduction in time and a 10% reduction in complications to achieve savings (BMJ Surgery, Interventions, and Health Technologies). The market for soft-tissue robotic-assisted minimal access surgery is projected to exceed $7B per year (CMR Surgical) The typical costs for a robotic-assisted surgical machine range anywhere from $1.5M to $2.0M USD (Cureus) The Story: CMR was founded in 2014 with the goal of giving as many people in as many places in the world access to minimal-access surgery (MAS). As Mark Slack, Chief Medical Officer, noted in 2022, “our goal is to make robotic-assisted surgery more accessible globally, offering a solution with flexible and novel financing models that can work for both public and private contracts and for low- and middle-income countries.” 
The company sells robotic-assisted minimal access surgery systems to hospitals for hernia repair, colectomies (partial or full removal of a colon), hysterectomies, sacrocolpopexies (repair of weakness or damage in pelvic organs, often using surgical mesh) and lobectomies (removal of a lobe of a lung). As noted by Fierce Healthcare back in 2017, CMR is combining the design and economics of its devices to broaden the use of robotic-assisted surgery in hospitals. “CMR wants the device …to be economically viable for more hospitals. As it stands, hospitals make big upfront investments to acquire robots. This limits uptake…CMR has designed Versius to cost less than other systems. And it is pairing this cost-conscious design with a business model that could make the economics more favorable still for hospitals.” As a result, Versius is being used in routine clinical practice to deliver high-quality surgical care to patients around the world. The Differentiators: As noted by the company, what distinguishes CMR Surgical’s Versius robotic surgical assistant is its patented “V-wrist technology, which allows [its] small, fully wristed instruments to have seven degrees of freedom, which can be rotated 360 degrees in both directions by the arm. This technology, which biomimics the human arm, has helped [CMR Surgical] make the units so much smaller than other systems.” CMR Surgical argues that this gives surgeons increased precision, accuracy and proficiency, allowing them to reach hard-to-reach areas when necessary. This in turn facilitates the small “footprint and modular design” of CMR’s Versius robotic surgical assistant, where each robotic “arm” is independent of the other, allowing a surgeon to place a singular arm or “port” where necessary to best suit the needs of the patient for a given procedure. 
This is in contrast to market leader Intuitive Surgical’s DaVinci Robot, where all arms emanate from a single pod above the patient, giving rise to a so-called Octopus configuration of robot arms. CMR Surgical units also allow surgeons to either sit or stand at the surgical console, or even change positions during operations, thereby causing less physical strain on the body during lengthy procedures. This can be particularly important given current workforce shortages and issues around clinician satisfaction. The Big Picture: As noted by the Mayo Clinic, “the primary benefit of robotic surgery for patients is faster recovery” primarily due to smaller incisions and less blood loss during procedures. This “allows patients to return to daily activities sooner...and have fewer surgical complications.” In addition, robotically assisted surgery can reduce opioid use and help reduce the overall cost of and length of hospitalizations. For example, as noted by a 2021 study in BMJ Surgery, Interventions and Health Technologies, cost savings from robotic surgery generally appear to be a function of operating time and reduction in complications, with one study showing approximately a 30-minute reduction in time and a 10% reduction in complications to achieve savings. However, it should be noted that this topic has been the subject of considerable debate (please see “Robotic Surgery: A Comprehensive Review of the Literature and Current Trends”, Cureus, July 2023 for a review of current trends, applications and issues). We do believe that over time, as lower-cost robotically assisted surgery systems like CMR Surgical’s Versius come to market, costs will decrease, helping to reduce length of stay and system costs, which will be crucial for hospitals that are under continuous margin pressure. In addition, as adoption of technologies such as robotics, augmented reality (AR), and virtual reality (VR) increases, the training and proficiency of surgeons should increase as well. 
For example, according to a recent article in Semiconductor Engineering, having a recording of the procedure will enable “future analysis to improve the process and for educational purposes. [As such] there is great hope that as time goes by robotic-assisted surgery will increase accuracy, efficiency, and safety, all while potentially reducing healthcare costs.” CMR Surgical raises $165M for robotic-aided minimal access surgery, CMR Surgical raises $165m from existing investors SoftBank and Tencent
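The BMJ savings finding cited above, roughly a 30-minute reduction in operating time plus a 10-percentage-point drop in complications to reach break-even, can be sketched as a simple per-case model. The dollar figures below are illustrative assumptions, not values from the study.

```python
def robotic_case_savings(minutes_saved, or_cost_per_minute,
                         complication_rate_drop, complication_cost):
    """Expected per-case savings from faster operating time and fewer
    complications. All dollar inputs are illustrative assumptions."""
    time_savings = minutes_saved * or_cost_per_minute
    # Expected value of avoided complications: rate reduction times the
    # average cost of treating one complication.
    complication_savings = complication_rate_drop * complication_cost
    return time_savings + complication_savings

# Illustrative inputs: 30 minutes saved at $40 per OR-minute, plus a
# 10-percentage-point drop in complication rate against a $15K average
# cost per complication.
print(robotic_case_savings(30, 40, 0.10, 15_000))
```

Against the $1.5M-$2.0M machine costs cited earlier, a per-case model like this shows why savings hinge on case volume as well as on the per-case time and complication effects.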

  • AI in Radiology: Aiding Workflows and Accuracy as Workforce Pressures Mount-The HSB Blog 9/30/23

    Our Take: Artificial intelligence has numerous applications in radiology and has been rapidly evolving to help improve care, reduce costs, and reduce the burden on radiologists. AI algorithms are being developed to assist radiologists in the analysis and interpretation of medical images and can help identify abnormalities, quantify tumor sizes, and highlight potentially relevant areas for further review. AI also can help automate time-consuming tasks in radiology, such as image segmentation and feature extraction, which can significantly reduce the workload for radiologists, allowing them to focus more on complex cases and patient care. This is particularly important as there is already a worldwide shortage of radiologists, which is projected to worsen as the population ages. For instance, in the U.S., the growth of the Medicare population has significantly outpaced the number of radiologists entering the field in recent years. As noted by one study presented at the Radiological Society of North America (RSNA), “the growth of the Medicare population outpaced the diagnostic radiology (DR) workforce by about 5% from 2012 to 2019” and there are no signs of this imbalance improving given that “between 2010 and 2020, the number of DR trainees entering the workforce increased just 2.5% compared to a 34% increase in the number of adults over 65.” As such, AI has the potential to revolutionize radiology by improving the speed and accuracy of image analysis and the quality of care. However, its adoption should be carefully managed to ensure patient safety and the continued involvement of radiologists in the decision-making process. 
Key Takeaways: Between 2010 and 2020, the number of diagnostic radiology trainees entering the workforce increased by just 2.5% compared to a 34% increase in the number of adults over 65 (RSNA) In one study comparing AI-CAD and traditional CAD software, the AI system outperformed by decreasing the false-positive marks per image (FPPI) by a significant 69% (Diagnostics) Over 85% of outpatient facilities and hospitals are facing staffing challenges, while they’re anticipating a 10% uptick in demand for staffing across MRI, nuclear medicine, ultrasound, radiologic and cardiovascular technologists (U.S. DOL) In one study from the Netherlands of over 40K women with extremely dense breast tissue, scanning with commercially available AI software led to significantly fewer interval cancers than in the control group (Pediatric Radiology) The Problem: Radiology is a true early adopter of AI in clinical practice. There were 520 FDA-cleared AI algorithms as of January 2023, over three-quarters of which were for radiology. Nevertheless, several challenges need to be addressed to deepen AI’s integration into healthcare. For example, while AI could eventually eliminate the need for additional readings or verifications by other radiologists, these algorithms need to be validated and tested in clinical practice before organizations will actually put them into practice and trust them. One issue is the dual problems of data quantity and data quality. AI algorithms require large volumes of high-quality data for training and validation. However, as pointed out in a summary of an RSNA-MICCAI Panel entitled “Leveraging the Full Potential of AI—Radiologists and Data Scientists Working Together” older images may have certain idiosyncrasies. 
For example, while "there was no good reason for it, a small percentage of the cases had text burned into the images, information such as dates and computed radiography cassette numbers and other researchers…[were] still running into issues because they’re going back to older data that may have burned-in text." Also, as with any AI data set, developers of algorithms need to pay attention to bias in data training sets and in model output. Those building AI algorithms need to ensure they obtain access to diverse and representative datasets, which can be challenging, as data may be fragmented across different healthcare systems and data privacy and security must be ensured. In addition, as noted in “Legal considerations for artificial intelligence in radiology and cardiology”, “there is not a good regulatory framework for AI in the U.S. …there is no guidance on how to deploy the technology safely and there are no clear protections from lawsuits”. While the FDA released the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan” in 2021, it acts more as a framework based on the total product lifecycle; it does not provide guidance on who should be responsible for product or software malfunctions that could impact patient care, resulting in misdiagnosis or even death. Moreover, as highlighted in the “Legal considerations for AI…” article, the technology team needs to understand AI’s clinical impact and exactly where and how the AI can impact liability. For example, it is important to map out “all the places where error can occur” and all the “points of potential failure” including installation, server maintenance and procedures to ensure the AI is running correctly. In addition, the article stresses the need for vendors and enterprises implementing AI solutions to have validated methods to monitor and test the algorithms. 
Moreover, given the inconsistency of state laws and jury interpretations, Brent Savoie, M.D., J.D., Section Chief of Cardiothoracic Imaging for Vanderbilt University Medical Center, cautions, “if there are not specific regulatory protections for an AI vendor or healthcare groups using the AI, they may think twice about implementing the AI or doing business in that location.” Another issue will be ensuring the pace of regulation keeps up with the pace of technological change. Clear guidelines and standards for AI in healthcare need to be established to ensure patient safety while promoting innovation. As pointed out in “What's next for AI regulations in medical imaging” by the imaging platform Intelerad, “one of the main challenges is that the FDA's traditional regulatory framework and review processes are not designed to keep pace with this speed of innovation, as AI-enabled medical applications are evolving rapidly, sometimes in unanticipated ways.” As such, models need to be validated for accuracy, reproducibility, and applicability to the clinical problem they are trying to solve, often in near-real time, all within the context of relevant data privacy and security laws. The Backdrop: Radiology generates vast amounts of data, and using AI has the potential to reduce read times, improve accuracy, decrease workforce burdens and even pinpoint new or earlier treatments. For example, in an article entitled “How does artificial intelligence in radiology improve efficiency and health outcomes?” the authors point out that, “AI could contribute to this in clinical, but also non-clinical, ways…even before a patient enters the radiology department, AI software might aid the scheduling of imaging appointments and predict no-shows for nudging or more efficient scheduling”. 
In addition, the article points out that “The workflow might also be optimized by changing the diagnostic process with AI, [for example in mammography screenings] studies have been performed to simulate an alternative workflow in which an AI risk score determines the number of radiology reads (none, single or double), reducing the total amount of reading time.” In addition to reducing the sheer number of images to review, AI and other improvements like computer-aided detection can help decrease the time radiologists spend reading scans. However, as the authors highlight, “besides the quality of the AI system, workflow integration is crucial for making this kind of software a success,” but once that is achieved there can be dramatic improvements in efficiency. For example, “the automated quantification of nodules, brain volumes or other tissues…might mitigate some of the tedious manual work that is part of a radiologist’s job, along with the large interrater variability inherent to these tasks.” Similarly, when AI is combined with computer-aided detection (CAD) systems, radiology workflows can be dramatically streamlined. For example, as the authors highlight in “Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging”, “AI-CAD systems’ merit lies in their substantial reduction of false positives, enhancing dependability in clinical settings. [In one study] comparing AI-CAD and traditional CAD software…the AI system outperformed by decreasing the false-positive marks per image (FPPI) by a significant 69%. [The article added] it specifically excelled in identifying microcalcifications and masses, reducing false positives by 83% and 56% respectively”. Clearly, properly trained and implemented AI radiological systems can lead to meaningful improvements in diagnostic accuracy. AI in radiology can combine these advancements in early detection to help develop precision treatments. 
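The FPPI metric behind the 69% figure cited above is straightforward to compute. The mark and image counts below are made-up illustrative numbers chosen to reproduce a reduction of that magnitude, not the study's actual data.

```python
def false_positive_marks_per_image(false_positive_marks, n_images):
    """False-positive marks per image (FPPI), a standard CAD metric."""
    return false_positive_marks / n_images

def percent_reduction(baseline, improved):
    """Relative reduction of `improved` versus `baseline`, in percent."""
    return 100.0 * (baseline - improved) / baseline

# Hypothetical counts: traditional CAD flags 1,300 false-positive marks
# across 1,000 images; an AI-CAD system flags 403 across the same set.
fppi_cad = false_positive_marks_per_image(1300, 1000)  # 1.3 marks/image
fppi_ai = false_positive_marks_per_image(403, 1000)    # 0.403 marks/image
print(f"{percent_reduction(fppi_cad, fppi_ai):.0f}% fewer FP marks per image")
```

Because FPPI normalizes by image count, it lets studies with different dataset sizes report false-positive burden on a comparable scale.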
As demonstrated in “How does artificial intelligence in radiology improve efficiency and health outcomes?”, AI can analyze subtle patterns in medical images that might be missed by human observers. Utilizing the case of women with dense breast tissues, the article noted that screening can be personalized for this group, which is known to have a higher risk of breast cancer. The authors note that “a study from the Netherlands involving more than 40,000 women with extremely dense breast tissue [scanned using commercially available AI software] resulted in significantly fewer interval cancers” than in the control group. These factors collectively create a fertile ground for the development and adoption of AI in radiology, with the potential to significantly improve patient care, enhance diagnostic accuracy, and optimize healthcare delivery. However, it's important to address the challenges and ethical considerations associated with AI in radiology to ensure responsible and safe implementation. Implications: As noted above, AI has significant potential to help increase efficiency and improve workflows all the way through the radiology process. As noted in one study, AI can have an impact “beginning from the time of order entry, scan acquisition [all the way through to] applications supporting image interpretation… and result communication.” This is extremely important as the volumes of imaging scans have increased dramatically in recent years, with research indicating that due to “an aging population and a greater reliance on imaging in the United States and Canada, there has been significantly increased computed tomography (230%), magnetic resonance imaging (304%), and ultrasound (164%) imaging use within the last 2 decades.” Importantly, this is occurring against the backdrop of fewer radiologists being available to read scans. As Radiology Business stated in an article entitled “10 trends to watch in diagnostic imaging,” according to the U.S. 
Department of Labor, “over 85% of outpatient facilities and hospitals are facing staffing challenges, while they’re anticipating a 10% uptick in demand for staffing across MRI, nuclear medicine, ultrasound, radiologic and cardiovascular technologists.” In addition, while it will be crucial to continue to monitor, validate and audit any AI-based or assisted radiology system for bias, ethical issues and data security, AI algorithms can learn and adapt over time, potentially improving their performance and application for prevention, diagnosis and treatment of disease. Perhaps most importantly, AI in radiology can empower timely and accurate diagnoses which, when coupled with personalized treatment plans, can lead to improved patient outcomes and quality of life. This is especially significant in conditions with high morbidity and mortality rates such as cancer. Related Reading: Artificial Intelligence in Radiology: Overview of Application Types, Design, and Challenges An Artificial Intelligence Training Workshop for Diagnostic Radiology Residents How does artificial intelligence in radiology improve efficiency and health outcomes? Trends in the adoption and integration of AI into radiology workflows 10 trends to watch in diagnostic imaging

  • Better Life Partners: Improving Access to High-Quality OUD Care

The Driver: Better Life Partners recently raised $26.5M in a Series B funding round led by aMoon and F-Prime Capital with participation from .406 Ventures. As part of the funding, Dr. Yair Schindel, the co-founder and managing partner of aMoon, will join Better Life’s board of directors. aMoon is Israel's largest healthtech venture capital fund, whose goal is “to partner with exceptional entrepreneurs who harness groundbreaking science and technology to transform healthcare.” The funding brings Better Life’s total funds raised to $38M and will be used to develop and scale its offering and expand its population management services to new and existing markets. Key Takeaways: Non-Latinx Black men and women were approximately 50% less likely to receive SUD treatment than non-Latinx White men and women (44% and 51%, respectively) (Public Health Reports) In 2021, 94% of people aged 12 or older with a substance use disorder did not receive any treatment (SAMHSA) 2% of youths in the United States between the ages of 12-17 have an opioid use disorder (OUD), while almost 4% of adults have an OUD (NCDAS) Drug rehabilitation costs an average of over $13K per person, with the cheapest inpatient rehabilitation programs costing approximately $6K per month, while an outpatient rehabilitation program costs about $6K for three months of treatment (NCDAS) The Story: According to the company, Better Life Partners was founded in 2018 by Adam Groff, MD and Steven Kelly to help those with opioid use disorder (OUD) achieve lasting and meaningful recoveries. The company understood that while a number of treatment options are available for those suffering from OUD, many lack access to high-quality health care, particularly care that is evidence-based and delivered locally. The company partners with local organizations to provide harm reduction and integrated medical, behavioral, and social care.
The company views itself as the “multispecialty practice of the future”. As noted in the press release about the fundraising, “the company provides on-site (in-person) and virtual care in the community forged with a trauma-informed, harm reduction approach, while also supporting population level outcomes.” The company currently offers medication assisted treatment (MAT), therapy, and coaching, as well as care access and coordination. It currently operates in the northeastern U.S. in the states of Maine, Massachusetts, New Hampshire, and Vermont. The Differentiators: Better Life views its approach as “hyper-local” in that it works hand-in-hand with mission-driven community organizations, treatment providers, and public health organizations to bring better care to the people it serves. As noted in a recent article in FinSMEs, Better Life “provides care in a community-embedded and whole-health approach” that works with alternative payment models. According to the company, “these partnerships are intended to help connect patients to a broader spectrum of services including harm-reduction and physical health care.” As noted by the Boston Business Journal, Better Life Partners has partnerships with a broad array of community organizations including “recovery centers, syringe exchanges, food banks, shelters and homelessness resources, faith-based charities and churches, community development programs, and women’s and children’s support.” The company currently accepts Medicaid, Medicare, and some commercial insurance. The Big Picture: As widely noted, OUD has become a nationwide problem dating back nearly 20 years and consisting of two phases. As we noted in our blog post “Pain Management, Lessons from Pear Therapeutics & a Path Forward-The HSB Blog 7/14/23“, the first phase (2000-2010) began when drug overdose mortality rates soared among adults aged 25-54 who became addicted to the prescription opioid painkillers that drove the epidemic.
By contrast, the second phase (which has run since the 2010s) consists of opioid drug reformulation and declining prescription rates, but still high rates of addiction. As a result, the need for treatment options that are both flexible and personalized is dramatic and cannot be overstated. For example, as noted by the National Center on Drug Abuse Statistics, approximately 2% of youths in the United States between the ages of 12-17 have an OUD, while almost 4% of adults have an OUD. Importantly, reaching people who suffer from OUD where they are is often one of the biggest barriers to care and is often addressed by community services. According to the Kaiser Family Foundation, these organizations often “remove affordability barriers to accessing needed treatment services, particularly for people with OUD who are more likely to have low incomes compared to the general population and are disproportionately covered by Medicaid or are uninsured.” In addition, having a strong and supportive social and community environment is essential to remaining drug-free. For example, SAMHSA’s recovery framework is based on the idea that “the processes of personal change (e.g. wellness, purpose, self-esteem, hope, self-efficacy, financial stability) and social reintegration (e.g. social support, community, and having a stable and safe home) are instrumental in maintaining abstinence.” Related Reading: Substance-use treatment startup Better Life Partners completes $26.5M Series B fundraise, Better Life Partners Lands $26.5M for Virtual SUD, Mental Health Platform

  • Navigating the Ethical Landmines of AI in Healthcare-The HSB Blog 8/25/23

Our Take: Ethical concerns over the use of AI in healthcare are intricate and nuanced. While AI-based algorithms have the promise to deliver more personalized, effective and efficient healthcare delivery, they also hold the potential to exacerbate biases and disparities already present in the system, posing risks to data privacy and security. While protections will be imperfect and an iterative process as the use of AI, particularly generative AI, evolves in healthcare, patients must be kept informed about the use of AI-based systems and technologies in their care, and given clear information in a fashion that ensures informed consent. As AI continues to progress, maintaining ethical standards will require vital collaboration among professionals, developers, policymakers, and ethicists, with ongoing updates to ethical guidelines to prioritize patient and societal welfare. Key Takeaways: As of January 2023, there were 520 FDA-cleared AI algorithms, approximately 396 of which were for radiology and 58 of which were for cardiology (Radiology Business) One study found that a widely used model of health risk reduced the number of Black patients identified for extra care by more than half due to racial bias (Science) The first AI models for medical use were approved by the FDA in 1995, with only 50 approved over the first 18 years, while almost 200 were approved in 2023 alone (Encord) AI applications have the potential to cut annual U.S. healthcare costs by $150 billion by 2026 as AI is used more for drug discovery and development and for improving medical research (Accenture) The Problem: Ethical issues around the use of AI in healthcare encompass a broad range of complex problems and dilemmas including privacy and data security, bias, fairness, explainability, transparency and job displacement. One of the most problematic and widely debated issues around the use of AI in healthcare relates to bias and fairness.
Since AI algorithms are developed by human beings, they can inherit biases from the humans who write the code that creates those algorithms and select the data sets that the models will be trained on. In fact, the problem often starts with the data sets on which the models are trained, which are often limited in their societal representation. As noted in “Can AI Ever Overcome Built-In Human Biases?”, “AI systems absorb implicit biases from datasets that reflect existing societal inequities. And algorithms programmed to maximize accuracy propagate these biases rather than challenge them.” For example, as noted in the above referenced article, two of the most common biases relate to race and gender. Facial recognition systems trained mostly on light-skinned faces will inevitably struggle with dark-skinned faces, and one AI recruiting tool was found to penalize resumes containing the word “women’s” and downrank graduates of two all-women's colleges. As a result, models based on this data can lead to unequal or discriminatory treatment, undermining fairness in healthcare and perpetuating existing healthcare disparities. In addition, ethical issues around AI in healthcare arise from concerns related to data privacy and security. The use of AI in healthcare often involves the processing and analysis of vast amounts of sensitive patient data, which are then applied to applications such as predictive analytics and precision medicine, among other things. Given the ever-increasing digitization of healthcare data and the sheer number of data points available on patients through tools such as sensors, remote patient monitoring and other wearable devices, data will be increasingly at risk. As a result, as noted in “Enabling collaborative governance of medical AI”, “medical AI’s complexity, opacity, and rapid scalability to hundreds of millions of patients through commonplace EHRs demand centralized governance.
Already, there are well documented case studies of commonly used medical AI systems potentially causing harm to millions or unnecessarily burdening clinicians, including…Epic’s sepsis model at Michigan Medicine and elsewhere.” Protecting the privacy and security of this data, and ensuring it is not misused or breached, will likely remain a significant ethical challenge in healthcare. There are also significant concerns relating to transparency and explainability. In layman's terms, many AI algorithms, such as deep learning models, operate as "black boxes," where it is difficult, if not impossible, to determine what a decision or recommendation was based on. This creates issues specific to healthcare, where clinicians need to be able to explain the clinical basis for their recommendations and want to be able to evaluate any recommendations against existing or evolving treatment protocols. As pointed out in a recent article in JAMA Health Forum, “Patients expressed significant concerns about…the potential for artificial intelligence to misdiagnose and to reduce time with clinicians.” The article went on to highlight that “racial and ethnic minority individuals [expressed] greater concern than White people.” Clearly, lack of transparency can raise concerns about accountability and trust in these types of models. One final concern surrounding the use of AI in healthcare which should not be minimized revolves around the potential for job displacement among clinicians and other staff in healthcare. This has become an even greater concern more recently with the evolution of generative AI and will be true for both so-called “lower-risk” non-clinical applications and eventually even more clinical applications. As AI systems become more capable of handling various tasks, there is the potential that certain roles traditionally performed by humans, such as initial triage or diagnostics, could be automated.
Hence, as noted in “Enabling collaborative governance of medical AI”, “front-line clinicians must be made aware of medical AI’s indications for use and understand how and how not to use it.” As outlined in a recent article in the Lancet entitled “AI in medicine: creating a safe and equitable future”, “[AI] could change practice for the better as an aid—not a replacement—for doctors. But doctors cannot ignore AI. Medical educators must prepare health-care workers for a digitally augmented future.” The Backdrop: The landscape of the healthcare ecosystem is being significantly reshaped by the rapid advancements in artificial intelligence (AI) and machine learning, especially the rapid developments around generative AI. These technological strides have ushered in an era where AI can more rapidly and easily be applied to accelerate the digital transformation in healthcare. The capabilities of AI systems, especially in the domains of high-volume data analysis, predictive analytics, genomics and, increasingly, diagnosis and treatment recommendations, seem to be growing by the day. For example, as noted in the Lancet article “AI in medicine: creating a safe and equitable future”, “The Lancet Oncology recently published one of the first randomized controlled trials of AI-supported mammography, demonstrating a similar cancer detection rate and nearly halved screen-reading workload compared with unassisted reading. AI has [also] driven progress in infectious diseases and molecular medicine and has enhanced field-deployable diagnostic tools.” AI's ability to process vast datasets, recognize complex patterns, and provide insights that were previously unattainable with human or machine-assisted comprehension has garnered substantial attention within the healthcare community.
Consequently, despite several earlier periods of hyperbole, it appears that at least augmented intelligence (or AI light) is here to stay and will remain a pivotal force driving innovation and the practice of medicine. In addition, with the ongoing digitization of healthcare data, healthcare organizations now have access to an ever-increasing amount of patient and clinical research data. As noted in “Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare”, “the integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education”. As the article goes on to note, the convergence of health data and AI technologies has the potential not only to enhance the efficiency and precision of healthcare delivery but also to usher in a new era of data-driven and patient-centric healthcare solutions. The dramatic increase in the use of data and AI in healthcare, however, cannot and should not occur in a vacuum; there need to be standards and guardrails in place to safeguard their application. Fortunately, numerous organizations and esteemed ethicists are actively engaged in the formulation and development of comprehensive guidelines and ethical frameworks. These initiatives represent a proactive response to the dynamic landscape of healthcare AI, aiming not only to regulate its application but to provide a principled and responsible framework for its deployment.
As highlighted in a recent article in Nature entitled “Enabling collaborative governance of medical AI”, these frameworks proactively address potential challenges as AI evolves, guiding the ethical balance between innovation and responsibility. They must promote ongoing dialogue and collaboration among healthcare professionals, AI developers, policymakers, and patient advocates to align AI in healthcare with ethical principles and the best interests of all: “Policymakers must invest in human and technical infrastructure to facilitate that governance. Infrastructure might include technical investments (IT systems and processes for robust, low-cost medical AI evaluation), procedural developments (best practices for pre-implementation evaluation, care pathway integration and post-integration monitoring) or human training (training grants for clinical AI specialists).” Implications: Ethical issues and tensions in the application of AI in healthcare are far-reaching and have significant consequences for numerous stakeholders, including patients, healthcare providers, policymakers and society as a whole. Ethical concerns in AI can erode patient trust in healthcare systems. Increasing trust and confidence in models derived and used ethically while avoiding bias is crucial in the development and deployment of artificial intelligence and machine learning systems. The steps and strategies to achieve this include transparency and explainability, diverse and inclusive teams, and bias detection and mitigation. In terms of transparency and explainability, solutions include making your model's decision-making process as transparent as possible by documenting data sources, preprocessing steps, model architecture, training sets and, where possible, hyperparameters.
Utilize explainable AI techniques, such as LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations), to provide insights into how the model arrived at its predictions. In addition, to address and reduce the potential for bias, AI models should be developed by diverse and inclusive teams of varying backgrounds wherever possible. Research has consistently shown that diverse teams perform better (both in terms of productivity and quality of final product), and diverse perspectives can help address bias more effectively. Consciously and unconsciously, diversity encourages ethical discussions and raises awareness of potential biases that could aggravate disparities. AI teams should also proactively implement bias detection tools to identify potential bias in data and model outputs. Use of bias mitigation techniques, such as re-sampling, re-weighting, and adversarial training, to reduce bias in both the training data and the model's predictions should be standard. AI’s use in healthcare must also be accompanied by structured and frequent audits, both post-training and post-deployment. In addition, audits should routinely assess the controls and procedures that have been developed and ensure they are being followed. All of the above are crucial to avoid the ethical lapses that can impact patient outcomes and lead to misdiagnosis, suboptimal treatments, and even potential harm to patients. Increasingly, ethical breaches can result in legal and regulatory actions against healthcare organizations and developers of AI tools. Inadvertent data breaches or non-compliance with data protection regulations like HIPAA, GDPR and state privacy laws can lead to fines and legal liabilities.
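To illustrate the idea behind SHAP without any machine learning libraries, the sketch below computes exact Shapley attributions for a hypothetical, hand-written "risk score" function. The model, features, and baseline values are all illustrative assumptions, not from any real clinical system; the point is the technique: average each feature's marginal contribution over every ordering, and the attributions sum exactly to the prediction minus the baseline prediction.

```python
from itertools import permutations

# Toy "risk model": a hypothetical, hand-written scoring function standing in
# for a trained clinical model. Features and weights are illustrative only.
def risk_model(age, bmi, smoker):
    return 0.02 * age + 0.03 * bmi + (0.5 if smoker else 0.0)

FEATURES = ["age", "bmi", "smoker"]
BASELINE = {"age": 50, "bmi": 25, "smoker": False}  # reference patient

def predict(values):
    return risk_model(values["age"], values["bmi"], values["smoker"])

def shapley_values(patient):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings, filling 'absent' features with baseline values."""
    contrib = {f: 0.0 for f in FEATURES}
    orderings = list(permutations(FEATURES))
    for order in orderings:
        current = dict(BASELINE)
        prev = predict(current)
        for f in order:
            current[f] = patient[f]   # reveal this feature's actual value
            now = predict(current)
            contrib[f] += now - prev  # marginal contribution in this ordering
            prev = now
    return {f: contrib[f] / len(orderings) for f in FEATURES}

patient = {"age": 70, "bmi": 32, "smoker": True}
phi = shapley_values(patient)
# Efficiency property: attributions sum to prediction minus baseline prediction.
total = sum(phi.values())
print(phi, round(total, 4))
```

Because the toy model is additive, each feature's attribution here is simply its weighted deviation from the baseline; real SHAP implementations approximate this same quantity efficiently for non-additive models.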
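A first-pass bias detection check of the kind described above need not be elaborate. The sketch below, using an invented toy audit set (the group labels and model flags are hypothetical), computes per-group selection rates and the disparate-impact ratio, a common screening heuristic in which a ratio below roughly 0.8 (the "80% rule") is treated as a flag for further review.

```python
from collections import Counter

# Toy audit data: (group, model_flagged_for_extra_care) pairs. Values are
# hypothetical; a real audit would use actual model outputs on real cohorts.
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rates(records):
    """Per-group rate at which the model flags individuals for extra care."""
    totals, selected = Counter(), Counter()
    for group, flagged in records:
        totals[group] += 1
        if flagged:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(records)
# Disparate-impact ratio: lowest group rate divided by highest group rate.
# Under the "80% rule" heuristic, a ratio below 0.8 warrants further review.
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
```

In this toy data group A is flagged at 75% and group B at 25%, giving a ratio of about 0.33, well below the 0.8 threshold; a production audit would pair a metric like this with the monitoring and documented procedures discussed above.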
Addressing the ethical issues in the development and deployment of AI in healthcare is critical for realizing the full potential of AI while ensuring that patients and society as a whole realize the full benefit of these technological advancements. Related Reading: Can AI Ever Overcome Built-In Human Biases? Enabling collaborative governance of medical AI Awareness of Racial and Ethnic Bias and Potential Solutions to Address Bias With Use of Health Care Algorithms AI in medicine: creating a safe and equitable future Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare

  • Zipline: Conquering Last Mile Delivery in Health Care

The Driver: Zipline recently announced that it is partnering with nutrition and supplements retailer GNC to start drone delivery of online orders in select markets, beginning with Salt Lake City, Utah, this summer. Zipline raised $330M in a Series F funding round led by Reinvent Capital and Baillie Gifford in April of 2023, bringing its total funding to $821M according to Crunchbase. Zipline is a California-based automated logistics company that designs, manufactures, and operates drones to deliver vital medical products. The funds will be used to design and manufacture drones to help those in need. Key Takeaways: In a study comparing drones and road paving for blood delivery in Rwanda, facilities were able to reduce inventory…without impacting service levels, with a 40% reduction in the number of blood products destroyed or damaged (U. of Pennsylvania) Drones can reduce energy consumption by 94% and 31% and GHG emissions by 84% and 29% per package delivered by replacing diesel trucks and electric vans, respectively (Carnegie Mellon) In May 2023, Associated Couriers announced they will enter into a partnership with Zipline to deliver specialty prescriptions and medications to long-term care facilities across Long Island (Associated Couriers) Medium and heavy trucks in the United States are responsible for 37% of transportation-related greenhouse gas (GHG) emissions (Carnegie Mellon) The Story: Zipline was founded by CEO Keller Rinaudo and according to its website “Zipline is on a mission to build the world’s first logistics system that serves all people equally…to [transform] access to healthcare, consumer products, and food.” The company originally started delivering blood and medical products in Rwanda in 2016 and has since expanded to food, retail, agriculture products, and animal health products and now has operations in the US, Rwanda, Ghana, Nigeria, Cote d'Ivoire, Kenya, and Japan. According to an article in Axios, Zipline began operations in the U.S.
by delivering PPE during COVID in partnership with North Carolina-based hospital system Novant Health. At that time drone delivery was operated under a waiver from the FAA. Zipline’s CEO believes the company’s drone technology is more ecologically efficient and friendly than current delivery technologies like internal combustion engines. For example, he notes that businesses tend to utilize “the same 3,000-pound gas combustion vehicles driven by humans to make billions of deliveries that usually weigh less than five pounds. This is slow, it’s expensive, and it’s terrible for the planet. We actually think it’s inevitable that this is going to shift towards systems that are quiet, less obtrusive, and actually good for the environment.” The Differentiators: Zipline creates and deploys different autonomous drones to deliver goods to difficult-to-reach places in an eco-friendly way. With operations in seven countries, Zipline has covered over 45 million autonomous miles to help increase access to healthcare for people around the world. Zipline delivers consumer products, food, and other goods. The company has two delivery platforms: one for long-range delivery and one for precise home delivery. As noted in a press release, when using the company’s Platform 2 approach, the Zip arrives at its destination, hovers safely and quietly at altitude, while its fully autonomous delivery droid maneuvers down a tether, steers to the correct location, and gently drops off its package to areas as small as a patio table or the front steps of a home. According to the company, it has completed deliveries to thousands of homes, businesses, and hospitals across the US, Rwanda, Ghana, Nigeria, Kenya, and Japan. The company states that its efforts have been reported to have saved lives, lowered costs, increased convenience and reduced harmful emissions compared with traditional delivery methods.
In May of 2023, Associated Couriers announced they will enter into a partnership with Zipline to deliver specialty prescriptions and medications to long-term care facilities across Long Island. The Big Picture: Zipline’s drone delivery can help get goods to hard-to-reach locations more quickly and efficiently. As noted, Zipline’s drone flights first began in 2016 to help with the national blood delivery network in Rwanda. The speed and flexibility of Zipline’s delivery system can help save lives. For example, in a study comparing drone delivery with paving roads, researchers from the University of Pennsylvania found an 88% reduction in in-hospital maternal deaths from postpartum hemorrhage in Rwanda. The authors noted that as a result of Zipline’s logistics and delivery system, they found that “transfusing facilities [were able to] substantially decrease their on-hand inventory and wastage, but do not find any change in the management of blood inventory after paving roads.” Interestingly, while the authors were looking at critical supplies like blood, it appears that some of their findings could be extrapolated to show other economic benefits. For instance, the authors noted “that facilities were able to reduce their on-hand inventory…without impacting service levels, [they found] a 40% reduction in the number of blood products destroyed or damaged [and did] not find statistically significant evidence of a change in the number of blood units used.” In addition to aiding in the delivery of supplies, Zipline can have a big impact on the environment and climate change. Drones reduce inefficiencies and waste, especially carbon emissions. The United States transport sector heavily relies on petroleum, especially in the use of medium and heavy trucks. As noted in an article entitled “Drone flight data reveal energy and greenhouse gas (GHG) emissions savings for very small package delivery,” the U.S.
transportation sector’s medium and heavy trucks contribute 37% of transportation-related greenhouse gas emissions, a significant contributor to climate change. Light-duty vehicles contribute even more, accounting for 57% of transportation greenhouse gas (GHG) emissions and 64% of transportation energy use. Transportation can also be a major source of nitrogen oxides (NOx) and other air pollutants, which can have adverse effects on human health and the environment. The authors found that “drones can reduce the energy consumption by 94% and 31% and GHG emissions by 84% and 29% per package delivered by replacing diesel trucks and electric vans, respectively.” Utilizing drones will help maximize energy productivity, reducing the amount of greenhouse gases in the atmosphere as well as the energy and climate impacts of package delivery. GNC partners with instant logistics provider Zipline for drone delivery service, South San Francisco drone delivery startup Zipline raises $330M; valuation jumps past $4B

  • Implementing SDOH Screening Requires Strategic Planning & Road to Impact-The HSB Blog 8/11/23

Our Take: The integration of digital health tools and provider screening for Social Determinants of Health (SDOH) issues can greatly enhance patient care and our understanding of barriers to care. Digital health tools enable real-time data collection and also help identify SDOH-related issues promptly for intervention. However, implementing SDOH screening requires taking careful privacy measures as well as collaboration with community organizations. This approach fosters patient-centered, equitable healthcare. Key Takeaways: While initially 33% of staff and 58% of clinicians surveyed in one study felt that the clinic was “too busy” to deal with SDOH, by the end of the study those numbers had declined to 10% and 21% respectively (FPM) Despite the fact that almost 90% of hospitals and systems surveyed reported screening patients for social needs, only 30% reported having a formal relationship with community-based providers for their target population (Deloitte) Although parents can see a throughline between child health and some SDOH, they are reticent to discuss some of those topics (Public Agenda & United Hospital Fund) Of the 49 provider-based SDOH programs that disclosed funding in one survey, hospitals and health systems committed approximately $2.5 billion, with a median investment of $2M per program and a mean of $31.5M (Health Affairs) The Problem: While the integration of digital health tools and the incorporation of Social Determinants of Health (SDOH) into hospital screening processes can offer significant benefits, there are challenges to overcome in order to fully realize the value of this approach. While the benefits of screening for SDOH have been proven and a number of toolkits for screening for SDOH exist, including those from the American Academy of Family Physicians, American Academy of Pediatrics, and the National Association of Community Health Centers, clinicians can often find the task overwhelming.
For example, as noted in “The Feasibility of Screening for Social Determinants of Health: Seven Lessons Learned”, “In the authors' pilot study, 58 percent of clinicians began the project thinking they were too busy for social determinants of health (SDOH) screening.” In addition, integrating SDOH data from digital tools into existing electronic health records (EHR) and workflows can be technically challenging, as hospitals often use various systems that may not seamlessly communicate with each other, leading to additional data integration and interoperability issues. There is also the challenge of resource allocation, as implementing SDOH screening requires additional resources. This includes personnel to manage data collection, analysis, and interventions, as well as professionals and staff who are trained in how to interpret and utilize SDOH data effectively. The Backdrop: While for a number of years there has been a rising recognition of SDOH's impact on health, it is only in the last several years that the focus has shifted to determining the most efficient and effective way to provide these resources. In addition, once providers have decided what and how to measure these impacts, it is important to determine how the results of such surveys will be handled.
For example, as noted in “Considerations for Social Determinants of Health Screening Design”, not only do they “need to consider the tools they’ll use to deploy the screening, which determinants to look at during screening, and how providers will talk about SDOH with patients to ensure it’s a respectful interaction”, they will also need to make sure they have thought through which SDOH issues may “have an immediate and tangible solution to fix”; otherwise, “it can be frustrating for both patient and provider—and it can damage patient trust—for a social need to arise and [then for patients to] hear there is no way to fix it.” As a result of this situation, providers have more recently teamed up with corporations, community organizations and others (including health insurance companies) to not only screen for SDOH but also to invest their own funds more directly in addressing the SDOH needs of patients. For example, a 2020 article in Health Affairs found that of the 49 provider-based SDOH programs that disclosed funding, “the total funds committed specifically from health systems or hospitals were approximately $2.5 billion, with a median investment per program of $2 million and a mean of $31.5 million”. In addition, the authors also noted the dominant choice among organizations that chose to address a single SDOH was housing. The authors noted that "housing-related programs included strategies such as the direct building of affordable housing, often with a fraction set aside for homeless patients or those with high use of health care; funding for health system employees to purchase local homes to revitalize neighborhoods; and eviction prevention and housing stabilization programs."
While the article went on to point out that “these investments still represent [only a] small fraction of overall spending by health systems, which currently are much more likely to be developing screening and referral programs”, it does indicate that providers should consider the potential for significant and ongoing financial investments that might accompany any screening initiatives. Implications: Integrating screening for SDOH to improve patient care can be a substantial undertaking and can require a significant commitment of both human and financial resources. However, digital health tools can allow hospitals to gather comprehensive SDOH data, leading to more personalized and patient-centered care plans. This holistic approach can help providers address patients' unique circumstances and needs, which if handled correctly can improve overall satisfaction and engagement. Real-time data collection and analysis enable hospitals to identify SDOH-related barriers promptly. Early intervention and preventive measures can reduce the progression of health disparities and complications, particularly in children, ultimately leading to better health outcomes and improved quality of life. Moreover, it is important to train and communicate with stakeholders on the impacts on workflow and to ensure their concerns are heard. While, as noted, 33 percent of staff and 58 percent of clinicians in one study initially felt that the clinic was “too busy” to deal with patients' social needs, by the end of the study only 10 percent of staff and 21 percent of clinicians felt that way. When the authors investigated the large drop in opposition to screening, they found simply that “in the end, the work was not overwhelming, as some had feared it would be.” Similarly, providers should make sure they are thoughtfully and adequately communicating the goals and purposes of such SDOH screening tools with patients.
For, as noted in “Considerations for Social Determinants of Health Screening Design,” SDOH screening can be challenging because “patients aren’t always comfortable discussing often sensitive personal information that does not directly pertain to their health (for example: It could be difficult for patients to admit they are housing insecure).” Researchers from Public Agenda and United Hospital Fund also reported that “although parents can see a throughline between child health and some SDOH, they are reticent to discuss some of those topics. Particularly, parents or guardians were worried about discussing their own mental health, legal issues, or domestic problems, especially if they did not have an established rapport with the pediatrician.” As hospitals become more deeply involved in their communities by collaborating with local organizations and public health agencies, this engagement can not only contribute to community health improvement and foster trust, it can also bring meaningful financial benefits. For example, the Health Affairs article referenced above also noted that, “although a recent study found no association between overall community benefit spending and readmission rates, hospitals in the top quintile of spending that was directed toward the community had significantly lower readmission rates than those in the bottom quintile.” Related Reading: The Feasibility of Screening for Social Determinants of Health: Seven Lessons Learned; Quantifying Health Systems’ Investment In Social Determinants Of Health, By Sector, 2017–19; Most providers don't screen for social determinants of health; Considerations for Social Determinants of Health Screening Design

  • Herself Health: Targeting the Health Needs of Women 65+

The Driver: Herself Health recently raised a $26 million Series A funding round led by investor Michael Cline of Accretive, with participation from Juxtapose. The round brings Herself Health’s total funding to $33M, according to Crunchbase. Herself Health is a startup that offers primary care for women aged 65 and older. The funds will be used for clinic expansion, virtual care expansion, increased in-person care and community engagement offerings, and to attract and retain new talent. Key Takeaways: When there is no clear explanation for certain symptoms in women over 50 years, menopause is frequently used as an overruling container diagnosis (NCBI) As women age, they are twice as likely to be diagnosed with Alzheimer’s disease and are more likely than men to experience strokes that are associated with worse outcomes (CDC) Over one-quarter of women ages 65 to 74 and over half of women ages 85 and older live alone (Commonwealth Fund) Only 20 percent of ob/gyn residencies offer training on menopause, and 80 percent of medical residents report feeling "barely comfortable" discussing or treating menopause (Commonwealth Fund) The Story: Herself Health was founded in 2022 by CEO Kristen Helton, the former head of Amazon’s Amazon Care healthcare subsidiary, together with investment firm Juxtapose. While their initial goal was to find a way to assist older adults, after surveying more than 700 women aged 65 and over, Helton and Juxtapose made some surprising findings. For example, they found that women of this age were almost a third more likely to be misdiagnosed than men, nearly half as likely to get a proper diagnosis of heart disease, and almost a third more likely to be misdiagnosed after a stroke. In addition, the data indicated that women of this age were more likely to suffer from osteoporosis and arthritis and to be misdiagnosed when it comes to other conditions.
Based on those findings, Helton decided to focus on creating services to help older women with their specific health goals by understanding their ambitions, needs, and challenges. In her words, "women 65+ face unique health and social challenges as they age, and for far too long, their concerns, needs, and desires have been ignored. That's why we are designing Herself Health to be the value-based solution to improve outcomes and help women find joy, purpose, and better quality of life. Our fundamental goal is to elevate the patient experience and provide meaningful in-person and virtual support that provides women 65+ with a primary care experience designed specifically for them." She emphasized that the company’s goal is to address the unique social and medical challenges women face as they age. The Differentiators: Herself Health attempts to distinguish itself by prioritizing the holistic aspects of health and well-being, including mental health, mobility, social, and behavioral health. The company plans to target health concerns that are more prevalent in older women, such as Alzheimer's, osteoporosis, arthritis, diabetes, thyroid health, and weight management. As noted in Fierce Healthcare, Herself Health uses health coaches to help connect a patient to her care team. As they point out, “this allows for patient education, follow-ups and assessing gaps in care.” The company also coordinates with any specialists its clients currently see who are covered by their insurance. During a visit, clients discuss their personalized goals, have an exam, and work with trained clinicians to find the care approaches that suit them. According to the company, Herself Health connects life goals with health goals to help its patients get more life out of life. The company highlights focused care, genuine relationships, a whole-person approach, and unique goals for each patient.
According to the company, they "offer everything she'd expect from her primary care practice, with a special focus on conditions that commonly affect women 65+" and are on a mission to create change. In addition, clients have the option to set up an individual patient portal, with the ability to message their care team throughout the day and to schedule any necessary follow-up appointments with doctors or specialists. The Big Picture: Significant gaps and structural barriers inhibit the current primary healthcare system from meeting the needs of older women. Women must receive and have access to comprehensive, high-quality primary health care that is tailored to their needs at all ages and stages of life. This includes receiving sex-specific, sex-aware, and gender-sensitive care. According to the Commonwealth Fund, the United States primary health care system does not effectively meet women's needs as they age and transition through stages of life. For example, it was reported that "health status indicators show that women in the U.S. have worse outcomes than women in other high-income countries. For example, the U.S. maternal mortality rate is higher than the rate in any other high-income country and continues to rise." Furthermore, the effects of the gap in quality healthcare for aging women are amplified in women of color, such as in the African American community. Moreover, women's symptoms are often reported to be downplayed, and it reportedly takes several years longer to establish a comparable diagnosis in women than in men. A recent peer-reviewed article stated that "frequently used undetermined diagnoses such as fibromyalgia, chronic fatigue syndrome, and psychosocial distress are typically more often present in women. In addition, as it often happens in clinical practice when there is no clear explanation for certain symptoms in women over 50 years, menopause is frequently used as an overruling container diagnosis."
According to the Commonwealth Fund, this is further complicated by the fact that "only 20 percent of ob/gyn residencies offer training on menopause, and 80 percent of medical residents report feeling 'barely comfortable' discussing or treating menopause." When combined with the fact that "over one-quarter of women ages 65 to 74 and over half of the women ages 85 and older live alone," this not only contributes to poorer health and overall well-being but also results in women experiencing more unhealthy years of aging than men. Related Reading: Herself Health will use $26M to redefine primary care for women 65-plus; Herself Health, providing primary care to women 65 and over, raises $26M
