
  • CMR Surgical: Advancing Flexibility and Precision in Robotic Surgery

    The Driver: CMR Surgical recently raised $165M. The funding round was led by SoftBank and Tencent and included all of its existing investors, including Ally Bridge Group, Cambridge Innovation Capital, Escala Capital, LGT, RPMI Railpen, and Watrium. The funding brings CMR Surgical’s total funds raised to $1.1B per Crunchbase, and the company’s valuation remains the same as at the time of its Series D in 2021, when it was $3B according to Sifted. The company stated it plans to use the funds to drive continued product innovation, including new technological developments, and to support the further commercialization of the system in key existing, and new, geographies.

Key Takeaways:

• From January 2012 through June 2018, the use of robotic-assisted surgery for all general surgery procedures increased from 1.8% to 15.1%, an 8.4-fold change (JAMA)
• Cost savings from robotic surgery generally appear to be a function of operating time and reduction in complications, with one study showing that approximately a 30-minute reduction in time and a 10% reduction in complications were needed to achieve savings (BMJ Surgery, Interventions, and Health Technologies)
• The market for soft-tissue robotic-assisted minimal access surgery is projected to exceed $7B per year (CMR Surgical)
• The typical costs for a robotic-assisted surgical machine range anywhere from $1.5M to $2.0M USD (Cureus)

The Story: CMR was founded in 2014 with the goal of giving as many people in as many places in the world access to minimal-access surgery (MAS). As Mark Slack, Chief Medical Officer, noted in 2022, “our goal is to make robotic-assisted surgery more accessible globally, offering a solution with flexible and novel financing models that can work for both public and private contracts and for low- and middle-income countries.”
The company sells robotic-assisted minimal access surgery systems to hospitals for hernia repair, colectomies (partial or full removal of a colon), hysterectomies, sacrocolpopexies (repair of weakness or damage in pelvic organs, often using surgical mesh), and lobectomies (removal of a lobe of a lung). As noted by Fierce Health back in 2017, CMR is combining the design and economics of its devices to broaden the use of robotic-assisted surgery in hospitals. “CMR wants the device …to be economically viable for more hospitals. As it stands, hospitals make big upfront investments to acquire robots. This limits uptake…CMR has designed Versius to cost less than other systems. And it is pairing this cost-conscious design with a business model that could make the economics more favorable still for hospitals.” As a result, Versius is being used in routine clinical practice to deliver high-quality surgical care to patients around the world.

The Differentiators: As noted by the company, what distinguishes CMR Surgical’s Versius robotic surgical assistant is its patented “V-wrist technology, which allows [its] small, fully wristed instruments to have seven degrees of freedom which can be rotated 360 degrees in both directions by the arm. This technology, which biomimics the human arm, has helped [CMR Surgical] make the units so much smaller than other systems.” CMR Surgical argues that this gives surgeons increased precision, accuracy, and proficiency, allowing them to reach hard-to-reach areas when necessary. This in turn facilitates the small “footprint and modular design” of CMR’s Versius robotic surgical assistant, where each robotic “arm” is independent of the others, allowing a surgeon to place a single arm or “port” where necessary to best suit the needs of the patient for a given procedure.
This is in contrast to the market leader, Intuitive Surgical’s da Vinci robot, where all arms emanate from a single pod above the patient, giving rise to a so-called octopus configuration of robot arms. CMR Surgical units also allow surgeons to either sit or stand at the surgical console, or even change positions during operations, thereby causing less physical strain on the body during lengthy procedures. This can be particularly important given current workforce shortages and issues around clinician satisfaction.

The Big Picture: As noted by the Mayo Clinic, “the primary benefit of robotic surgery for patients is faster recovery”, primarily due to smaller incisions and less blood loss during procedures. This “allows patients to return to daily activities sooner...and have fewer surgical complications.” In addition, robotically assisted surgery can reduce opioid use and help reduce the overall cost and length of hospitalizations. For example, as noted by a 2021 study in BMJ Surgery, Interventions and Health Technologies, cost savings from robotic surgery generally appear to be a function of operating time and reduction in complications, with one study showing that approximately a 30-minute reduction in time and a 10% reduction in complications were needed to achieve savings. However, it should be noted that this topic has been the subject of considerable debate (please see “Robotic Surgery: A Comprehensive Review of the Literature and Current Trends”, Cureus, July 2023, for a review of current trends, applications, and issues). We do believe that over time, as lower-cost robotically assisted surgery systems like CMR Surgical’s Versius come to market, costs will decrease, helping to reduce length of stay and system costs, which will be crucial for hospitals that are under continuous margin pressure. In addition, as adoption of technologies such as robotics, augmented reality (AR), and virtual reality (VR) increases, the training and proficiency of surgeons should increase as well.
For example, according to a recent article in Semiconductor Engineering, having a recording of the procedure will enable “future analysis to improve the process and for educational purposes. [As such] there is great hope that as time goes by robotic-assisted surgery will increase accuracy, efficiency, and safety, all while potentially reducing healthcare costs.”

Related Reading: CMR Surgical raises $165M for robotic-aided minimal access surgery; CMR Surgical raises $165M from existing investors SoftBank and Tencent

  • AI in Radiology: Aiding Workflows and Accuracy as Workforce Pressures Mount-The HSB Blog 9/30/23

    Our Take: Artificial intelligence has numerous applications in radiology and has been rapidly evolving to help improve care, reduce costs, and reduce the burden on radiologists. AI algorithms are being developed to assist radiologists in the analysis and interpretation of medical images and can help identify abnormalities, quantify tumor sizes, and highlight potentially relevant areas for further review. AI also can help automate time-consuming tasks in radiology, such as image segmentation and feature extraction which can significantly reduce the workload for radiologists, allowing them to focus more on complex cases and patient care. This is particularly important as there is already a worldwide shortage of radiologists, which is projected to worsen as the population ages. For instance, in the U.S., the growth of the Medicare population has significantly outpaced the number of radiologists entering the field in recent years. As noted by one study presented at the Radiological Society of North America (RSNA), “the growth of the Medicare population outpaced the diagnostic radiology (DR) workforce by about 5% from 2012 to 2019” and there are no signs of this imbalance improving given that “between 2010 and 2020, the number of DR trainees entering the workforce increased just 2.5% compared to a 34% increase in the number of adults over 65.” As such, AI has the potential to revolutionize radiology by improving the speed and accuracy of image analysis and improving the quality of care. However, its adoption should be carefully managed to ensure patient safety and the continued involvement of radiologists in the decision-making process. 
Key Takeaways:

• Between 2010 and 2020, the number of diagnostic radiology trainees entering the workforce increased by just 2.5% compared to a 34% increase in the number of adults over 65 (RSNA)
• In one study comparing AI-CAD and traditional CAD software, the AI system outperformed by decreasing the false-positive marks per image (FPPI) by a significant 69% (Diagnostics)
• Over 85% of outpatient facilities and hospitals are facing staffing challenges, while they’re anticipating a 10% uptick in demand for staffing across MRI, nuclear medicine, ultrasound, radiologic and cardiovascular technologists (U.S. DOL)
• In one study from the Netherlands of over 40K women with extremely dense breast tissue, scanning using commercially available AI software led to significantly fewer interval cancers than in the control group (Pediatric Radiology)

The Problem: Radiology is a true early adopter of AI in clinical practice. There were 520 FDA-cleared AI algorithms as of January 2023, over three-quarters of which were for radiology. Nevertheless, several challenges need to be addressed to broaden AI’s integration into healthcare even further. For example, while AI could eventually eliminate the need for additional readings or verifications by other radiologists, these algorithms need to be validated and tested in clinical practice before organizations will actually adopt and trust them. One issue is the dual problem of data quantity and data quality. AI algorithms require large volumes of high-quality data for training and validation. However, as pointed out in a summary of an RSNA-MICCAI Panel entitled “Leveraging the Full Potential of AI—Radiologists and Data Scientists Working Together”, older images may have certain idiosyncrasies.
For example, while “there was no good reason for it, a small percentage of the cases had text burned into the images, information such as dates, and computed radiography cassette numbers and other researchers…[were] still running into issues because they’re going back to older data that may have burned-in text.” Also, as with any AI data set, developers of algorithms need to pay attention to bias in data training sets and in model output. Those building AI algorithms need to ensure they obtain access to diverse and representative datasets, which can be challenging, as data may be fragmented across different healthcare systems and data privacy and security must be ensured. In addition, as noted in “Legal considerations for artificial intelligence in radiology and cardiology”, “there is not a good regulatory framework for AI in the U.S. …there is no guidance on how to deploy the technology safely and there are no clear protections from lawsuits”. While the FDA released the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan” in 2021, this acts more as a framework based on the total product lifecycle; it does not provide guidance on who should be responsible for product or software malfunctions that could impact patient care, resulting in misdiagnosis or even death. Moreover, as highlighted in the “Legal considerations for AI…” article, the technology team needs to understand AI’s clinical impact and exactly where and how the AI can impact liability. For example, it is important to map out “all the places where error can occur” and all the “points of potential failure”, including installation, server maintenance, and procedures to ensure the AI is running correctly. In addition, the article stresses the need for vendors and enterprises implementing AI solutions to have validated methods to monitor and test the algorithms.
Moreover, given the inconsistency of state laws and jury interpretations, Brent Savoie, M.D., J.D., Section Chief of Cardiothoracic Imaging for Vanderbilt University Medical Center, notes, “if there are not specific regulatory protections for an AI vendor or healthcare groups using the AI, they may think twice about implementing the AI or doing business in that location.” Another issue will be ensuring the pace of regulation keeps up with the pace of technological change. Clear guidelines and standards for AI in healthcare need to be established to ensure patient safety while promoting innovation. As pointed out in “What's next for AI regulations in medical imaging” by the imaging platform Interlad, “one of the main challenges is that the FDA's traditional regulatory framework and review processes are not designed to keep pace with this speed of innovation, as AI-enabled medical applications are evolving rapidly, sometimes in unanticipated ways.” As such, models need to be validated for accuracy, reproducibility, and applicability to the clinical problem they are trying to solve, often in near-real time, all within the context of relevant data privacy and security laws.

The Backdrop: Radiology generates vast amounts of data, and using AI has the potential to reduce read times, improve accuracy, decrease workforce burdens, and even pinpoint new or earlier treatments. For example, in an article entitled “How does artificial intelligence in radiology improve efficiency and health outcomes?” the authors point out that “AI could contribute to this in clinical, but also non-clinical, ways…even before a patient enters the radiology department, AI software might aid the scheduling of imaging appointments and predict no-shows for nudging or more efficient scheduling”.
In addition, the article points out that “The workflow might also be optimized by changing the diagnostic process with AI, [for example in mammography screenings] studies have been performed to simulate an alternative workflow in which an AI risk score determines the number of radiology reads (none, single or double), reducing the total amount of reading time.” In addition to reducing the sheer number of images to review, AI and other improvements like computer-aided detection can help decrease the time radiologists spend reading scans. However, as the authors highlight, “besides the quality of the AI system, workflow integration is crucial for making this kind of software a success,” but once that is achieved there can be dramatic improvements in efficiency. For example, “the automated quantification of nodules, brain volumes or other tissues…might mitigate some of the tedious manual work that is part of a radiologist’s job, along with the large interrater variability inherent to these tasks.” Similarly, when AI is combined with computer-aided detection (CAD) systems, workflows can be dramatically streamlined. For example, as the authors highlight in “Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging”, “AI-CAD systems’ merit lies in their substantial reduction of false positives, enhancing dependability in clinical settings. [In one study] comparing AI-CAD and traditional CAD software…the AI system outperformed by decreasing the false-positive marks per image (FPPI) by a significant 69%. [The article added] it specifically excelled in identifying microcalcifications and masses, reducing false positives by 83% and 56% respectively”. Clearly, properly trained and implemented AI radiological systems can lead to meaningful improvements in diagnostic accuracy, and AI in radiology can combine these advancements in early detection to help develop precision treatments.
For example, as demonstrated in “How does artificial intelligence in radiology improve efficiency and health outcomes?”, AI can analyze subtle patterns in medical images that might be missed by human observers. Using the case of women with dense breast tissue, the article noted that screening can be personalized for this group, which is known to have a higher risk of breast cancer. The authors note that “a study from the Netherlands involving more than 40,000 women with extremely dense breast tissue [who were scanned using commercially available AI software] resulted in significantly fewer interval cancers” than in the control group. These factors collectively create fertile ground for the development and adoption of AI in radiology, with the potential to significantly improve patient care, enhance diagnostic accuracy, and optimize healthcare delivery. However, it's important to address the challenges and ethical considerations associated with AI in radiology to ensure responsible and safe implementation.

Implications: As noted above, AI has significant potential to help increase efficiency and improve workflows throughout the radiology process. As noted in one study, AI can have an impact “beginning from the time of order entry, scan acquisition [all the way through to] applications supporting image interpretation…and result communication”. This is extremely important as the volume of imaging scans has increased dramatically in recent years, with research indicating that due to “an aging population and a greater reliance on imaging in the United States and Canada, there has been significantly increased computed tomography (230%), magnetic resonance imaging (304%), and ultrasound (164%) imaging use within the last 2 decades”. Importantly, this is occurring against the backdrop of fewer radiologists being available to read scans. As Radiology Business stated in an article entitled “10 trends to watch in diagnostic imaging”, according to the U.S.
Department of Labor, “over 85% of outpatient facilities and hospitals are facing staffing challenges, while they’re anticipating a 10% uptick in demand for staffing across MRI, nuclear medicine, ultrasound, radiologic and cardiovascular technologists.” In addition, while it will be crucial to continue to monitor, validate, and audit any AI-based or AI-assisted radiology system for bias, ethical issues, and data security, AI algorithms can learn and adapt over time, potentially improving their performance and application for the prevention, diagnosis, and treatment of disease. Perhaps most importantly, AI in radiology can empower timely and accurate diagnoses which, when coupled with personalized treatment plans, can lead to improved patient outcomes and quality of life. This is especially significant in conditions with high morbidity and mortality rates, such as cancer.

Related Reading: Artificial Intelligence in Radiology: Overview of Application Types, Design, and Challenges; An Artificial Intelligence Training Workshop for Diagnostic Radiology Residents; How does artificial intelligence in radiology improve efficiency and health outcomes?; Trends in the adoption and integration of AI into radiology workflows; 10 trends to watch in diagnostic imaging

  • Pain Management, Lessons from Pear Therapeutics & a Path Forward-The HSB Blog 7/14/23

    Our Take: The integration of digital health technologies holds great promise in revolutionizing nonopioid pain management, offering both innovative solutions for pain relief and improved patient care. While opioids have historically been used for relief of chronic and severe acute pain, for approximately the last 20 years the U.S. has been experiencing an opioid epidemic, creating an urgent need for effective nonopioid alternatives for pain management. Opioids are associated with various adverse effects, ranging from respiratory depression to dependence/addiction, overdose, and death. By providing personalized monitoring, nonpharmacological interventions, educational resources, and remote access to care, digital health can enhance patient outcomes, reduce opioid reliance, and contribute to a safer and more effective approach to pain management.

Key Takeaways:

• Of U.S. healthcare spending, spending on musculoskeletal disorders cost an estimated $380 billion, and the prevalence of low back pain, neck pain, and other joint-related conditions grew over 60% on average between 1996 and 2016 (Harvard Business Review)
• People who live with chronic pain are four times more likely to suffer from depression or anxiety (World Health Organization)
• Over 80 million Americans report chronic non-cancer pain, defined as non-malignant pain that lasts longer than three months and is not associated with end of life (BMC Health Services Research)
• Access to the diagnosis, treatment, and management of pain is highly unequal globally, with almost half the world's population unable to access essential health services (BMC Medicine)

The Problem: Nonopioid pain management refers to a range of strategies and treatments aimed at relieving pain without the use of opioid medications.
These approaches include nonpharmacological interventions such as digital therapeutics, or DTx (typically apps prescribed by a physician and used by a patient on their digital device), the use of augmented reality and virtual reality technologies to provide immersive experiences, and cognitive-behavioral therapy, among others. For example, one method highlighted in “Pain Control Technology Embraces the Challenge to Reduce or Eliminate Opioids” describes “COOLIEF* Cooled Radiofrequency (RF) a non-opioid, minimally invasive, non-surgical outpatient procedure providing long-lasting relief for chronic pain patients.” However, while nonopioid pain management has gained recognition as a safer and more sustainable option for addressing chronic and acute pain, it has struggled to gain payer acceptance, with many payers still viewing treatments as investigational despite several drugs receiving FDA approval. The continuous monitoring enabled by digital substance use disorder (SUD) tools empowers personalized and remote tracking of patients' pain levels, timely communication with care teams, adjustments to pain management plans, and even interventions if necessary. As highlighted in “Pain Control Technology Embraces the Challenge to Reduce or Eliminate Opioids”, the industry has wholeheartedly embraced the challenge put to it by the FDA, which was “intended to spur the development of medical devices, diagnostic tests, and digital health technologies (mobile health applications included) to aid in the fight against the opioid crisis and achieve the prevention and treatment of opioid use disorder (OUD).” Nonopioid pain management strategies offer a safer alternative to opioid-based treatments, but their effective implementation and widespread adoption can be hindered by challenges specific to digital health that providers must overcome when integrating these technologies into nonopioid pain management approaches.
First and foremost, digital health tools and platforms may not be easily accessible or affordable for all patients. Factors such as the cost of devices, internet connectivity, and technical literacy can create barriers, particularly for individuals from low-income backgrounds or who reside in underserved areas. For example, in “Telehealth for management of chronic non-cancer pain and opioid use disorder in safety net primary care”, the authors point out that “research indicates that among patients with opioid use disorder, use of telehealth can increase patient engagement over time, yet telehealth’s role in expanding access to care, improving medication recall, and increasing patients’ level of attendance is not well understood, especially in urban settings.” Undoubtedly, accessibility issues like this hamper the equitable implementation of digital health solutions for nonopioid pain management. In addition, while these products need regulatory approval and patient acceptance, those are not always enough, as they need to have payers on board as well. For example, many in the industry point to Pear Therapeutics and its subsequent bankruptcy as both an example of a trailblazer in digital treatments for SUD and an example of what not to do. As noted in “Pear Therapeutics: What Happened and What Does This Mean for DTx?”, “Pear failed to secure widespread reimbursement for its solutions in the U.S. despite FDA clearance” (more on Pear Therapeutics in “The Backdrop”). Moreover, seamlessly integrating digital health tools with existing healthcare systems and electronic health records (EHRs) is essential for effective nonopioid pain management.
As noted in “The Rise and Fall of Pear Therapeutics”, “the company relied heavily on third-party pharmacies, further complicating their operations.” As we have noted before, interoperability challenges between different platforms, a lack of standardized data formats, and limited integration capabilities in healthcare are decided impediments to success. In the case of digital tools for pain management, where the speed and accuracy of patient information is of paramount importance, this can impede the efficient exchange of information between and among clinicians and hinder the comprehensive management of a patient's pain.

The Backdrop: As described in “The Opioid Epidemic: A Geography in Two Phases”, “the United States has been experiencing a drug overdose mortality epidemic marked by the introduction and spread of opioids across rural and urban communities over the past 20 years” that can be defined by two phases. In the first phase, beginning around 2000 and ending in the early 2010s, drug overdose mortality rates soared among middle-aged adults aged 25-54 who became addicted to the prescription opioid painkillers that drove the epidemic. In the second phase, which the authors refer to as “The Illicit Phase of the Epidemic”, since the early 2010s, opioid drug reformulation and declining prescription rates have resulted in ebbing mortality from prescription opioids. At the same time, illicit opioids such as heroin and, increasingly, fentanyl and related synthetic opioids rapidly entered the scene, causing a growing share of drug overdose deaths. Currently, approximately 4.8M adults in the U.S. have a current or past opioid use disorder diagnosis, and over 80M report chronic non-cancer pain, defined as non-malignant pain that lasts longer than three months and is not associated with end of life, according to “Telehealth for management of chronic non-cancer pain and opioid use disorder in safety net primary care”.
The widespread use of opioids for pain management has led to a significant increase in addiction, overdose, and mortality rates. This crisis has spurred a critical need for alternative approaches to pain management that reduce reliance on opioids. However, as noted above, engaging patients in their own pain management and ensuring compliance with nonopioid treatment plans can be challenging. In “Digital health in pain assessment, diagnosis, and management: Overview and perspectives”, the authors note that digital apps such as Pain Check can be helpful because of the subjectivity of pain assessment and the difficulty experienced when people cannot self-report pain as a result of being non-verbal or cognitively impaired. As a result, while these digital health technologies offer opportunities for patient education, remote monitoring, and personalized interventions, ensuring patient motivation, adherence, and understanding of these tools becomes crucial for successful implementation.

Implications: Digital health tools provide alternative strategies and interventions for pain relief, reducing the reliance on opioid medications. By offering non-pharmacological options such as virtual reality, augmented reality, mindfulness exercises, and physical therapy guidance, digital health can contribute to a decrease in opioid prescriptions and mitigate the risks associated with opioid use. These technologies, particularly telehealth services, enable patients to access specialized pain management care irrespective of their geographic location. This is particularly beneficial for individuals in rural or underserved areas who may have limited access to pain specialists, enabling earlier interventions, better pain management, and reduced healthcare disparities.
Despite the potential benefits, the implementation of digital health in nonopioid pain management faces challenges including accessibility, regulatory and reimbursement approval, data privacy and security, and interoperability. Digital health platforms generate vast amounts of data related to pain management, treatment outcomes, and patient experiences. Aggregating and analyzing this data can provide valuable insights into the effectiveness of nonopioid interventions, patient preferences, and trends in pain management, but this data must be in a form that can be harnessed and analyzed by multiple systems. Nevertheless, incorporating digital health solutions into nonopioid pain management offers the potential for improved patient outcomes, reduced reliance on opioids, increased access to personalized care, and real-time monitoring, to name a few benefits. In addition, there are other complementary therapies and digital treatments for SUD.

Related Reading: Telehealth for management of chronic non-cancer pain and opioid use disorder in safety net primary care; Digital Therapeutics: Opportunities And Challenges In Digital Health; The Future of Digital Care; Pear Therapeutics: What Happened and What does this mean for DTx?; Pain Control Technology Embraces the Challenge to Reduce or Eliminate Opioids

  • Better Life Partners: Improving Access to High-Quality OUD Care

    The Driver: Better Life Partners recently raised $26.5M in a Series B funding round led by aMoon and F-Prime Capital with participation from .406 Ventures. As part of the funding, Dr. Yair Schindel, the co-founder and managing partner of aMoon, will join Better Life’s board of directors. aMoon is Israel's largest healthtech venture capital fund, whose goal is “to partner with exceptional entrepreneurs who harness groundbreaking science and technology to transform healthcare.” The funding brings Better Life’s total funds raised to $38M and will be used to develop and scale its offering and expand its population management services to new and existing markets.

Key Takeaways:

• Non-Latinx Black men and women had approximately a 50% lower chance of receiving SUD treatment than non-Latinx White men and women (44% and 51% lower, respectively) (Public Health Reports)
• In 2021, 94% of people aged 12 or older with a substance use disorder did not receive any treatment (SAMHSA)
• 2% of youths in the United States between the ages of 12-17 have an opioid use disorder (OUD), while almost 4% of adults have an OUD (NCDAS)
• Drug rehabilitation costs an average of over $13K per person, with the cheapest inpatient rehabilitation programs costing approximately $6K per month, while an outpatient rehabilitation program costs about $6K for three months of treatment (NCDAS)

The Story: According to the company, Better Life Partners was founded in 2018 by Adam Groff, MD, and Steven Kelly to help those with opioid use disorder (OUD) achieve lasting and meaningful recoveries. The company understood that while there are a number of treatment options available, many of those suffering from OUD lack access to high-quality health care, particularly care that is evidence-based and highly localized. The company partners with local organizations to provide harm reduction and integrated medical, behavioral, and social care.
The company views itself as the “multispecialty practice of the future”. As noted in the press release about the fundraising, “the company provides on-site (in-person) and virtual care in the community forged with a trauma-informed, harm reduction approach, while also supporting population level outcomes.” The company currently offers medication-assisted treatment (MAT), therapy, and coaching, as well as care access and coordination. It currently operates in the northeastern U.S. in the states of Maine, Massachusetts, New Hampshire, and Vermont.

The Differentiators: Better Life views its approach as “hyper-local” in that it works hand-in-hand with mission-driven community organizations, treatment providers, and public health organizations to bring better care to the people it serves. As noted in a recent article in FinSMEs, Better Life “provides care in a community-embedded and whole-health approach” that works with alternative payment models. According to the company, “these partnerships are intended to help connect patients to a broader spectrum of services including harm-reduction and physical health care.” As noted by the Boston Business Journal, Better Life Partners has partnerships with a broad array of community organizations including “recovery centers, syringe exchanges, food banks, shelters and homelessness resources, faith-based charities and churches, community development programs, and women’s and children’s support.” The company currently accepts Medicaid, Medicare, and some commercial insurance.

The Big Picture: As widely noted, OUD has become a nationwide problem dating back nearly 20 years and consisting of two phases. As we noted in our blog post “Pain Management, Lessons from Pear Therapeutics & a Path Forward-The HSB Blog 7/14/23”, the first phase (2000-2010) began when drug overdose mortality rates soared among middle-aged adults aged 25-54 who became addicted to the prescription opioid painkillers that drove the epidemic.
By contrast, the second phase (which has run since the 2010s) consists of opioid drug reformulation and declining prescription rates but still high rates of addiction. As a result, the need for treatment options that are both flexible and personalized is dramatic and cannot be overstated. For example, as noted by the National Center on Drug Abuse Statistics, approximately 2% of youths in the United States between the ages of 12-17 have an OUD, while almost 4% of adults have an OUD. Importantly, reaching people who suffer from OUD where they are is often one of the biggest barriers to care and is often addressed by community services. According to the Kaiser Family Foundation, these organizations often “remove affordability barriers to accessing needed treatment services, particularly for people with OUD who are more likely to have low incomes compared to the general population and are disproportionately covered by Medicaid or are uninsured.” In addition, a strong and supportive social and community environment is essential to remaining drug-free. For example, SAMHSA’s recovery framework is based on the idea that “the processes of personal change (e.g. wellness, purpose, self-esteem, hope, self-efficacy, financial stability) and social reintegration (e.g. social support, community, and having a stable and safe home) are instrumental in maintaining abstinence.” Related Reading: Substance-use treatment startup Better Life Partners completes $26.5M Series B fundraise; Better Life Partners Lands $26.5M for Virtual SUD, Mental Health Platform

  • Digital Behavioral Health Tools Can Address Treatment Shortages & Accessibility-The HSB Blog 6/24/23

Our Take: The integration of digital health solutions in behavioral health has the potential to reshape the care landscape by increasing accessibility, providing personalized treatment options, and facilitating early intervention and prevention. These technologies can not only address the shortage of treatment options but also empower individuals with behavioral health issues to actively manage their symptoms and improve their overall well-being. Digital health is a complement to traditional care rather than a replacement. Key Takeaways: One study found “no statistically significant association between the modality of care (telehealth treatment group versus in-person comparison group) and the one-month change scores” on standard assessments of depression or anxiety (BMC Psychiatry) As of March 2023, 160 million Americans live in areas with mental health professional shortages, [and] over 8,000 more professionals [are] needed to ensure an adequate supply (Commonwealth Fund) An estimated 21M adults, or approximately 8.4% of U.S. adults, had at least one major depressive episode (NIMH) The percentage of need for behavioral services that is actually met nationwide is less than 30% (KFF) The Problem: For years there has been a shortage of providers and treatment options that address behavioral health needs. For example, according to a recent report from the Commonwealth Fund, “as of March 2023, 160 million Americans live in areas with mental health professional shortages, [and] over 8,000 more professionals [are] needed to ensure an adequate supply.” Moreover, data from the Kaiser Family Foundation estimates that the percentage of need for behavioral services that is actually met nationwide is less than 30% (27.7%) as of September 2022. 
While historically it has been difficult to coordinate care between digital interventions and healthcare providers, the broader acceptance of telehealth during the pandemic has made digital delivery of behavioral care more widely accepted, and it is now seen as a way to address this care gap. However, as behavioral care moved toward digital delivery, it became increasingly clear that ensuring effective communication and data sharing were essential to maximizing its benefits. In addition, as we pointed out in “Integrating Telemental Health Into Primary Care Aids Diagnosis and Treatment-The HSB Blog 3/7/22,” telebehavioral health “may allow for more accessible and affordable care with equal or better outcomes than in-person care, especially for diagnosis and treatment.” This is especially true in the most acute shortage areas, which tend to be rural. For example, as noted in Digital health technologies and major depressive disorder, “telemedicine or care coordination platforms can help provide remote care to rural areas or hard-to-reach communities, thereby enhancing patient-provider collaboration.” Moreover, although digital health has the potential to increase access to treatment for behavioral health conditions, disparities in technology access and digital literacy can perpetuate inequities. As noted in Facts & Figures: Mental Health in Rural America, “rural residents report difficulty accessing healthcare services and an absence of anonymity when seeking care in the South.” The article goes on to note that “a common sentiment among Southerners is that the prevailing stigma and conservative belief system in rural communities can hinder the search for health care.” These disparities can be particularly acute for low-income individuals, rural populations, older adults, and marginalized communities. As a result, these populations may face even greater barriers in accessing and effectively utilizing digital health tools. 
Finally, given that digital telebehavioral health solutions collect what some would say is patients’ most sensitive personal health data, their use and integration raise concerns about data privacy and security. The Backdrop: Based on data from the Centers for Disease Control and Prevention (CDC), approximately 50% of Americans will be diagnosed with a mental illness at some point in their life, and 1 in 25 Americans are currently living with a mental illness. For example, the National Institute of Mental Health (NIMH) has found that “an estimated 21M adults or approximately 8.4% of U.S. adults had at least one major depressive episode,” defined as “a period of at least two weeks when a person experienced a depressed mood or loss of interest or pleasure in daily activities, including problems with sleep, eating, energy, concentration, or self-worth.” However, the widespread adoption of smartphones, high-speed internet, and digital connectivity has created opportunities for reaching individuals like those suffering from depression remotely and has provided a means to bridge geographical barriers. For example, as noted in “Digital health tools for the passive monitoring of depression: a systematic review of methods,” “with the global trend toward increased smartphone ownership (44.9% worldwide, 83.3% in the UK) and wearable device usage…this new science of “remote sensing”, sometimes referred to as digital phenotyping or personal sensing, presents a realistic avenue for the management and treatment of depression” as well as other behavioral health disorders. Furthermore, the emergence of data analytics, artificial intelligence, and machine learning has the potential to enable new modalities of personalized behavioral healthcare. By leveraging patient data, algorithms can help tailor treatment plans, predict risk factors, and optimize interventions for individuals with behavioral health conditions. 
As highlighted in a recent article in Scientific American entitled “AI Chatbots Could Help Provide Therapy, but Caution Is Needed,” “as an assistant for human providers…LLM chatbots could greatly improve mental health services, particularly among marginalized, severely ill people.” This is particularly true in terms of helping with the administrative burden, where “programs such as ChatGPT could easily summarize patients’ sessions, write necessary reports, and allow therapists and psychiatrists to spend more time treating people.” Implications: Digital health interventions can help address barriers to accessing mental healthcare, especially for individuals in underserved areas or with limited mobility. Remote platforms, telemedicine, and mobile applications provide convenient and accessible avenues for individuals to seek support and treatment for depression. For example, in an article entitled “Comparison of in-person vs. telebehavioral health outcomes from rural populations across America,” the study’s authors found “There was no statistically significant association between the modality of care (telehealth treatment group versus in-person comparison group) and the one-month change scores for either PHQ-9 (a standard assessment for depression) or GAD-7 (a standard assessment for anxiety),” leading them to conclude there were “no clinical or statistical differences in improvements in depression or anxiety symptoms as measured by the PHQ-9 and GAD-7 between patients treated via telehealth or in-person.” In addition, digital health tools can enable efficient screening and early detection of behavioral health symptoms. Automated assessments, digital questionnaires, and mood-tracking apps can help identify individuals at risk, allowing for timely intervention and preventive measures to reduce the severity and duration of episodes in need of treatment. 
Digital health solutions can also help extend the reach of facilities and clinicians through such technologies as remote monitoring, while wearable devices and mobile applications can track mood patterns, sleep quality, activity levels, and other relevant data. This information can support self-management strategies and help healthcare providers monitor progress, make informed treatment adjustments, and provide timely interventions as needed. However, a note of caution is also warranted, as the use of digital health in mental health care raises important ethical and regulatory considerations. Protecting patient confidentiality, ensuring data encryption, and implementing robust security measures are essential to build trust and maintain the integrity of digital health platforms for depression. While these technologies and generative AI hold great promise in alleviating the shortage of practitioners, as noted in “AI Chatbots Could Help Provide Therapy, but Caution Is Needed,” an AI chatbot called Tessa, which was not based on generative AI but “gave scripted advice to users, would sometimes give weight-loss tips, which can be triggering to people with eating disorders.” Although this is likely to improve as the technology improves, understanding and addressing nuances in treatment could be a key to the effectiveness of the technology. In addition, safeguarding patient privacy, ensuring data security, and maintaining ethical standards in the use of AI algorithms and predictive analytics are critical to protecting the rights and well-being of individuals with behavioral health issues. Related Reading: Comparison of in-person vs. telebehavioral health outcomes from rural populations across America; Understanding the U.S. Behavioral Health Workforce Shortage; Mental Health Care Health Professional Shortage Areas (HPSAs); 2020 National Survey on Drug Use and Health (NSDUH)

  • What Clinicians and Administrators Need to Know When Implementing AI-The HSB Blog 3/9/23

Our Take: There are several basic issues and challenges in deploying AI that all clinicians and administrators should be aware of and inquire about to ensure that they are being properly considered when AI is implemented in their organization. Applications of artificial intelligence in healthcare hold great promise to increase both the scale of medical discoveries and the efficiency of healthcare infrastructure. As such, healthcare-related AI research and investment have exploded over the last several years. For example, according to the State of AI Report 2020, academic publications in biology around AI technologies such as deep learning, natural language processing (NLP), and computer vision have grown over 50% a year since 2017. In addition, 99% of healthcare institutions surveyed by CB Insights are either currently deploying (38%) or planning to deploy AI (61%) in the near future. However, as witnessed by recent errors discovered surrounding the application of an AI-based sepsis model, while AI can improve the quality of care, improve access, and reduce costs, models must be implemented correctly or they will be of questionable value or even dangerous. 
Key Takeaways: According to Forrester's "The Cloud, Data, and AI Imperative for Healthcare" Report, the 3 greatest challenges to implementing AI are: 1) integrating insights into existing clinical workflows; 2) consolidating fragmented data; and 3) achieving clinically reliable clean data Researchers working to uncover insights into prescribing patterns for certain antipsychotic medications found that approximately 27% of prescriptions were missing dosages Even after doing work to standardize and label patient data, in at least one broad study almost 10% of items in the data repository didn’t have proper identifiers Academic publications in biology around AI technologies such as deep learning, natural language processing (NLP), and computer vision have grown over 50% a year since 2017 The Problem: While it is commonly accepted that computers can outperform humans in terms of computational speed, many would argue that in its current state artificial intelligence is really “augmented intelligence,” defined by the IEEE as “a subsection of AI machine learning developed to enhance human intelligence rather than operate independently of or outright replace it.” Current AI models are still highly dependent upon the quantity and quality of data available for them to be trained on, the inherent assumptions underlying the models, and the human biases (intentional and unintentional) of those developing the models, along with a number of other factors. As noted in a recent review of “I, Warbot,” a book about computational warfare by King’s College AI lecturer Kenneth Payne, “these gizmos exhibit ‘exploratory creativity’, essentially a brute-force calculation of probabilities. 
That is fundamentally different from ‘transformational creativity’, which entails the ability to consider a problem in a wholly new way and requires playfulness, imagination and a sense of meaning.” As such, those creating AI models for healthcare need to ensure they set the guardrails for their use and audit their models both pre- and post-development to ensure they conform to existing laws and best practices. The Backdrop: When implementing an AI project, there are a number of steps and considerations that should be taken into account to ensure its success. While it is important to identify the best use case and project type for any kind of project, it is even more important here given the cost of the technical talent involved, the level of computational infrastructure typically needed (if done internally), and the potential to influence leadership attitudes toward the use and viability of AI as an organizational tool. As noted above, one of the most important keys to implementing an AI project is the quantity and quality of data resources available to the firm. Data should be looked at with respect to both quality (to ensure that it is free of missing, incoherent, unreliable, or incorrect values) and quantity. In terms of data quality, as noted in “Artificial Intelligence: A Non-Technical Introduction,” data can be: 1) noisy (data sets with conflicting data), 2) dirty (data sets with inconsistent and erroneous data), 3) sparse (data sets with missing or no values at all), or 4) inadequate (data sets that contain inadequate or biased data). As noted in the article “Extracting and Utilizing Electronic Health Data from Epic for Research,” “to provide the cleanest and most robust datasets for statistical analysis, numerous statistical techniques including similarity calculations and fuzzy matching are used to clean, parse, map, and validate the raw EHR data,” EHR data being generally the largest source of healthcare data for AI research. 
When looking to implement AI, it is important to consider and understand the levels of data loss and the ability to correct for it. For example, researchers looking to apply AI to uncover insights into prescribing patterns for second-generation antipsychotic medications (SGAs) found that approximately 27% of the prescriptions in their data set were missing dosages, and even after undertaking a 3-step correction procedure, 1% were still missing dosages. While this may be deemed an acceptable number, it is important to be aware of the data loss in order to properly evaluate whether it is within tolerable limits. In terms of inadequate data, ensuring that data is free of bias is extremely important. While we have all recently been made keenly aware of the impact of racial and ethnic bias on models (ex: facial recognition models trained only on Caucasians), there are a number of other biases that models should be evaluated for. According to “7 Types of Data Bias in Machine Learning,” these include: 1) sample bias (not representing the desired population accurately), 2) exclusion bias (the intentional or unintentional exclusion of certain variables from data prior to processing), 3) measurement bias (ex: due to poorly chosen measurements that create systematic distortions of data, like poorly phrased surveys), 4) recall bias (when similar data is inconsistently labeled), 5) observer bias (when the labelers of data let their personal views influence data classification/annotation), 6) racial bias (when data samples skew in favor of or against certain ethnic or demographic groups), and 7) association bias (when a machine learning model reinforces a bias present in the data). In addition to data quality, data quantity is just as imperative. For example, in order to properly train machine learning models, you need a sufficiently large number of observations to create an accurate predictor of the parameters you’re trying to forecast. 
While the precise number of observations needed will vary based on the complexity of the data you’re using, the complexity of the model you want to build, and the amount of “statistical noise” generated by the data itself, an article in the Journal of Machine Learning Research suggested that at least 100,000 observations are needed to train a regression or classification model. Moreover, it is important to note that numerous data points are simply not captured or sufficiently documented in healthcare. For example, as noted in the above-referenced article on extracting and utilizing Epic EHR data, based on research at the Cleveland Clinic in 2018, even after doing significant work to standardize and label patient data, “approximately 9% [1,000 out of 32,000 data points per patient] of columns in the data repository” were not using the assigned identifiers. While it is likely that methods have improved since this research was performed, given the size and resources that an institution like the Cleveland Clinic could bring to bear on the problem, it indicates the scale of the problem. Once the model has been developed, there should be a process in place to ensure that the model is transparent and explainable by creating a mechanism that allows non-technologists to understand and assess the factors the model used and the parameters it relied most heavily upon in coming to its conclusions. For example, as noted in the State of AI Report 2020, “AI research is less open than you think, only 15% of papers publish their [algorithmic] code” used to weight and create models. In addition, there should be a system of controls, policies, and audits in place that provides feedback on potential errors in the application of the model as well as disparate impact or bias in its conclusions. 
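Data-loss checks like the missing-dosage example above can be automated before any model training begins. The sketch below is a minimal, illustrative Python version of such an audit; the record fields and the 10% tolerance threshold are hypothetical assumptions for demonstration, not values taken from the studies cited.

```python
# A minimal sketch (illustrative only) of the kind of data-loss audit described
# above: before training, measure the share of missing values in each field of
# a clinical data set and flag any field that exceeds a pre-set tolerance.
# The field names and the 10% tolerance are hypothetical, not from any cited study.

def audit_missing_rates(records, tolerance=0.10):
    """Return ({field: missing_rate}, [fields over tolerance])."""
    fields = set()
    for rec in records:
        fields.update(rec)
    rates = {}
    for field in sorted(fields):
        missing = sum(1 for rec in records if rec.get(field) in (None, ""))
        rates[field] = missing / len(records)
    flagged = [f for f, r in rates.items() if r > tolerance]
    return rates, flagged

# Toy example: 1 of 4 prescription records is missing a dosage (25% > 10%),
# so the "dosage" field would be flagged for review before modeling.
prescriptions = [
    {"drug": "A", "dosage": "10mg"},
    {"drug": "B", "dosage": None},
    {"drug": "A", "dosage": "5mg"},
    {"drug": "C", "dosage": "20mg"},
]
rates, flagged = audit_missing_rates(prescriptions)
```

In practice the tolerance would be set per field by the clinical and data science team, since (as the article notes) what counts as "acceptable" data loss depends on the question being asked.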
Implications: As noted in “Artificial Intelligence Basics: A Non-Technical Introduction,” it’s important to have realistic expectations for what can be accomplished by an AI project and how to plan for it. In the book, the author Tom Taulli references Andrew Ng, the former Head of Google Brain, who suggests the following parameters: an AI project should take between 6-12 months to complete, have an industry-specific focus, notably help the company (though it doesn’t have to be transformative), and rely on high-quality data points. In our opinion, it is particularly important to form collaborative, cross-platform teams of data scientists, physicians, and other front-line clinicians (particularly those closest to patients, like nurses) to get as broad input on the problem as possible. While AI holds great promise, proponents will have to prove themselves by running targeted pilots and should be careful not to overreach at the risk of poisoning the well of opportunity. As so astutely pointed out in “5 Steps for Planning a Healthcare Artificial Intelligence Project”: “artificial intelligence isn’t something that can be passively infused into an organization like a teabag into a cup of hot water. AI must be deployed carefully, piece by piece, in a measured and measurable way.” Data scientists need to ensure that the models they create produce relevant output that provides context and the ability for clinicians to have a meaningful impact on the results, not just generate additional alerts that will go unheeded. For example, as Rob Bart, Chief Medical Information Officer at UPMC, noted in a recent presentation at HIMSS, data should provide “personalized health information, personalized data” and should have “situational awareness in order to turn data into better consumable information for clinical decision making” in healthcare. 
Along those lines, it is important to take a realistic assessment of “where your organization lies on the maturity curve”: how good is your data, and how deep is your bench of data scientists and clinicians available to inventory, clean, and prepare that data and work on an AI project? AI talent is highly compensated and in heavy demand. Do you have the resources necessary to build and sustain a team internally, or will you need to hire external consultants? How will you select and manage those consultants? All of these are questions that need to be carefully considered and answered before undertaking the project. In addition, healthcare providers need to consider the special relationship between clinician and patient and the need to preserve trust, transparency, and privacy. While AI holds a tremendous allure for healthcare, and the potential to overcome, and in fact make up for, healthcare’s underinvestment in information technology relative to other industries, all of this needs to be done with a well-thought-out, coherent, and justified strategy as its foundation. Related Readings: Artificial Intelligence Basics: A Non-Technical Introduction, Tom Taulli (publisher’s site); Artificial Intelligence (AI): Healthcare’s New Nervous System; An Interdisciplinary Approach to Reducing Errors in Extracted Electronic Health Record Data for Research; 5 Steps for Planning a Healthcare Artificial Intelligence Project

  • Navigating the Ethical Landmines of AI in Healthcare-The HSB Blog 8/25/23

Our Take: Ethical concerns over the use of AI in healthcare are intricate and nuanced. While AI-based algorithms hold the promise of delivering more personalized, effective, and efficient healthcare, they also have the potential to exacerbate biases and disparities already present in the system, posing risks to data privacy and security. While protections will be imperfect and an iterative process as the use of AI, particularly generative AI, evolves in healthcare, patients must be kept informed about the use of AI-based systems and technologies in their care, and given clear information in a fashion that ensures informed consent. As AI continues to progress, maintaining ethical standards will require vital collaboration among professionals, developers, policymakers, and ethicists, with ongoing updates to ethical guidelines to prioritize patient and societal welfare. Key Takeaways: As of January 2023, there were 520 FDA-cleared AI algorithms, approximately 396 of which were for radiology and 58 of which were for cardiology (Radiology Business) One study found that a widely used model of health risk reduced the number of Black patients identified for extra care by more than half due to racial bias (Science) The first AI models for medical use were approved by the FDA in 1995, with only 50 approved over the first 18 years, while almost 200 were approved in 2023 alone (Encord) AI applications have the potential to cut annual U.S. healthcare costs by $150 billion by 2026 as AI is used more for drug discovery and development and for improving medical research (Accenture) The Problem: Ethical issues around the use of AI in healthcare encompass a broad range of complex problems and dilemmas, including privacy and data security, bias, fairness, explainability, transparency, and job displacement. One of the most problematic and widely debated issues around the use of AI in healthcare relates to bias and fairness. 
Since AI algorithms are developed by human beings, they can inherit biases from the humans who write the code that creates those algorithms and who select the data sets the models will be trained on. In fact, the problem often starts with the data sets themselves, which are often limited in their societal representation. As noted in “Can AI Ever Overcome Built-In Human Biases?”, “AI systems absorb implicit biases from datasets that reflect existing societal inequities. And algorithms programmed to maximize accuracy propagate these biases rather than challenge them.” For example, as noted in the above-referenced article, two of the most common biases relate to race and gender. Facial recognition systems trained mostly on light-skinned faces will inevitably struggle with dark-skinned faces, and an AI recruiting tool was found to penalize resumes containing the word “women’s” and downrank graduates of two all-women’s colleges. As a result, models based on this data can lead to unequal or discriminatory treatment, undermining fairness in healthcare and perpetuating existing healthcare disparities. In addition, ethical issues around AI in healthcare arise from concerns related to data privacy and security. The use of AI in healthcare often involves the processing and analysis of vast amounts of sensitive patient data, which are then applied to things such as predictive analytics and precision medicine, among other things. Given the ever-increasing digitization of healthcare data and the sheer number of data points available on patients through tools such as sensors, remote patient monitoring, and other wearable devices, data will increasingly be at risk. As a result, as noted in “Enabling collaborative governance of medical AI,” “medical AI’s complexity, opacity, and rapid scalability to hundreds of millions of patients through commonplace EHRs demand centralized governance. 
Already, there are well-documented case studies of commonly used medical AI systems potentially causing harm to millions or unnecessarily burdening clinicians, including…Epic’s sepsis model at Michigan Medicine and elsewhere.” Protecting the privacy and security of this data, and ensuring it is not misused or breached, will likely remain a significant ethical challenge in healthcare. There are also significant concerns relating to transparency and explainability. In layman’s terms, many AI algorithms, such as deep learning models, operate as “black boxes,” where it is difficult if not impossible to determine what a decision or recommendation was based on. This creates issues specific to healthcare, where clinicians need to be able to explain the clinical basis for their recommendations and want to be able to evaluate any recommendations against existing or evolving treatment protocols. As pointed out in a recent article in JAMA Health Forum, “Patients expressed significant concerns about…the potential for artificial intelligence to misdiagnose and to reduce time with clinicians.” The article went on to highlight that “racial and ethnic minority individuals [expressed] greater concern than White people.” Clearly, lack of transparency can raise concerns about accountability and trust in these types of models. One final concern surrounding the use of AI in healthcare, which should not be minimized, revolves around the potential for job displacement among clinicians and other staff in healthcare. This has become an even greater concern recently with the evolution of generative AI and will be true both for so-called “lower-risk” non-clinical applications and eventually for more clinical applications. As AI systems become more capable of handling various tasks, there is the potential that certain roles traditionally performed by humans, such as initial triage or diagnostics, could be automated. 
Hence, as noted in “Enabling collaborative governance of medical AI,” “front-line clinicians must be made aware of medical AI’s indications for use and understand how and how not to use it.” As outlined in a recent article in the Lancet entitled “AI in medicine: creating a safe and equitable future,” “[AI] could change practice for the better as an aid—not a replacement—for doctors. But doctors cannot ignore AI. Medical educators must prepare health-care workers for a digitally augmented future.” The Backdrop: The landscape of the healthcare ecosystem is being significantly reshaped by rapid advancements in artificial intelligence (AI) and machine learning, especially the developments around generative AI. These technological strides have ushered in an era where AI can more rapidly and easily be applied to accelerate the digital transformation of healthcare. The capabilities of AI systems, especially in the domains of high-volume data analysis, predictive analytics, genomics, and increasingly diagnosis and treatment recommendations, seem to be growing by the day. For example, as noted in the Lancet article “AI in medicine: creating a safe and equitable future,” “The Lancet Oncology recently published one of the first randomized controlled trials of AI-supported mammography, demonstrating a similar cancer detection rate and nearly halved screen-reading workload compared with unassisted reading. AI has [also] driven progress in infectious diseases and molecular medicine and has enhanced field-deployable diagnostic tools.” AI’s ability to process vast datasets, recognize complex patterns, and provide insights that were previously unattainable with human or machine-assisted comprehension has garnered substantial attention within the healthcare community. 
Consequently, despite several earlier periods of hyperbole, it appears that at least augmented intelligence (or AI light) is here to stay and will remain a pivotal force driving innovation and the practice of medicine. In addition, with the ongoing digitization of healthcare data, healthcare organizations now have access to an ever-increasing amount of patient and clinical research data. As noted in “Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare,” “the integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI’s ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education.” As the article goes on to note, the convergence of health data and AI technologies has the potential not only to enhance the efficiency and precision of healthcare delivery but also to usher in a new era of data-driven and patient-centric healthcare solutions. The dramatic increase in the use of data and AI in healthcare, however, cannot and should not occur in a vacuum; there need to be standards and guardrails in place to safeguard their application. Fortunately, numerous organizations and esteemed ethicists are actively engaged in the formulation and development of comprehensive guidelines and ethical frameworks. These initiatives represent a proactive response to the dynamic landscape of healthcare AI, aiming not only to regulate its application but to provide a principled and responsible framework for its deployment. 
As highlighted in a recent article in Nature entitled “Enabling collaborative governance of medical AI,” these frameworks proactively address potential challenges as AI evolves, guiding the ethical balance between innovation and responsibility. They must promote ongoing dialogue and collaboration among healthcare professionals, AI developers, policymakers, and patient advocates to align AI in healthcare with ethical principles and the best interests of all. As the authors note, “Policymakers must invest in human and technical infrastructure to facilitate that governance. Infrastructure might include technical investments (IT systems and processes for robust, low-cost medical AI evaluation), procedural developments (best practices for pre-implementation evaluation, care pathway integration and post-integration monitoring) or human training (training grants for clinical AI specialists).” Implications: Ethical issues and tensions in the application of AI in healthcare are far-reaching and have significant consequences for numerous stakeholders, including patients, healthcare providers, policymakers, and society as a whole. Ethical concerns in AI can erode patient trust in healthcare systems. Increasing trust and confidence in models derived and used ethically, while avoiding bias, is crucial in the development and deployment of artificial intelligence and machine learning systems. The steps and strategies to achieve this include transparency and explainability, diverse and inclusive teams, and bias detection and mitigation. In terms of transparency and explainability, solutions include making your model's decision-making process as transparent as possible by documenting data sources, preprocessing steps, model architecture, training sets, and, where possible, hyperparameters. 
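As a minimal illustration of that documentation step, a team might capture model provenance in a structured, versionable artifact kept alongside the model. This is only a sketch loosely inspired by the "model card" idea; every field name and value below is invented for illustration and is not drawn from any standard or from the articles cited here.

```python
import json

# Toy "model card"-style record documenting a model's provenance.
# All field names and values are illustrative placeholders.
model_card = {
    "model": "readmission-risk-v1",
    "architecture": "gradient-boosted trees",
    "data_sources": ["de-identified EHR encounters, 2018-2022"],
    "preprocessing": [
        "drop records with >20% missing fields",
        "normalize lab values to reference ranges",
    ],
    "training_set": {"n_patients": 120_000, "split": "80/10/10"},
    "hyperparameters": {"n_trees": 500, "max_depth": 6},
    "known_limitations": ["rural patients under-represented"],
}

# Serializing the card lets it be versioned, diffed, and audited
# together with the model it describes.
card_json = json.dumps(model_card, indent=2)
print(card_json)
```

Keeping such a record under version control means reviewers and auditors can trace exactly which data and settings produced a given deployed model.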
Utilize explainable AI techniques, such as LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations), to provide insights into how the model arrived at its predictions. In addition, to address and reduce the potential for bias, AI models should be developed by diverse and inclusive teams of varying backgrounds wherever possible. Research has consistently shown that diverse teams perform better (both in terms of productivity and quality of the final product), and diverse perspectives can help address bias more effectively. Consciously and unconsciously, diversity encourages ethical discussions and raises awareness of potential biases that could aggravate disparities. AI teams should also proactively implement bias detection tools to identify potential bias in data and model outputs. Use of bias mitigation techniques, such as re-sampling, re-weighting, and adversarial training, to reduce bias in both the training data and the model's predictions should be standard. AI’s use in healthcare must also be accompanied by structured and frequent audits, both post-training and post-deployment; in addition, audits should routinely assess the controls and procedures that have been developed and ensure they are being followed. All of the above are crucial to avoid the ethical lapses that can impact patient outcomes, lead to misdiagnosis and suboptimal treatments, and even potentially harm patients. Increasingly, ethical breaches can result in legal and regulatory actions against healthcare organizations and developers of AI tools. Inadvertent data breaches or non-compliance with data protection regulations like HIPAA, GDPR, and state privacy laws can lead to fines and legal liabilities. 
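To make the explainability idea concrete, the toy sketch below shows the additive decomposition that SHAP formalizes. For a plain linear model (a special case where SHAP values have a simple closed form), each feature's contribution to a prediction is weight * (feature value - background average), and the contributions sum exactly to the gap between the average prediction and this patient's prediction. The risk model, weights, and patient values are all invented for illustration.

```python
# Toy illustration of the additive idea behind SHAP explanations.
# For a linear model, each feature's contribution to a prediction is
# weight * (feature value - average feature value in the background data).

def linear_shap(weights, x, background_means):
    """Per-feature contributions for a linear model prediction."""
    return [w * (xi - mu) for w, xi, mu in zip(weights, x, background_means)]

# Hypothetical risk model: weights for age, blood pressure, BMI.
weights = [0.03, 0.02, 0.05]
bias = -2.0
background = [50.0, 120.0, 25.0]   # average patient in the training data

patient = [65.0, 140.0, 30.0]
contribs = linear_shap(weights, patient, background)

base_value = bias + sum(w * mu for w, mu in zip(weights, background))
prediction = bias + sum(w * xi for w, xi in zip(weights, patient))

# The contributions exactly account for the gap between the average
# prediction (base value) and this patient's prediction.
assert abs(base_value + sum(contribs) - prediction) < 1e-9
for name, c in zip(["age", "bp", "bmi"], contribs):
    print(f"{name}: {c:+.2f}")
```

Real tools such as the `shap` library generalize this decomposition to non-linear models by averaging over feature coalitions, but the additive accounting property checked by the assertion is the same.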
Addressing the ethical issues in the development and deployment of AI in healthcare is critical for realizing the full potential of AI while ensuring that patients and society as a whole realize the full benefit of these technological advancements. Related Reading: Can AI Ever Overcome Built-In Human Biases? Enabling collaborative governance of medical AI Awareness of Racial and Ethnic Bias and Potential Solutions to Address Bias With Use of Health Care Algorithms AI in medicine: creating a safe and equitable future Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare

  • Zipline: Conquering Last Mile Delivery in Health Care

    The Driver: Zipline recently announced that it is partnering with nutrition and supplements retailer GNC to start drone delivery of online orders in select markets, beginning with Salt Lake City, Utah, this summer. Zipline raised $330M in a Series F funding round led by Reinvent Capital and Baillie Gifford in April of 2023, bringing its total funding to $821M according to Crunchbase. Zipline is a California-based automated logistics company that designs, manufactures, and operates drones to deliver vital medical products. The funds will be used to design and manufacture drones to help those in need. Key Takeaways: In a study comparing drones and road paving for blood delivery in Rwanda, facilities were able to reduce inventory …without impacting service levels and saw a 40% reduction in the number of blood products destroyed or damaged (U. of Pennsylvania) Drones can reduce energy consumption by 94% and 31% and GHG emissions by 84% and 29% per package delivered by replacing diesel trucks and electric vans, respectively (Carnegie Mellon) In May 2023, Associated Couriers announced they will enter into a partnership with Zipline to deliver specialty prescriptions and medications to long-term care facilities across Long Island (Associated Couriers) Medium and heavy trucks in the United States are responsible for 37% of transportation-related greenhouse gas (GHG) emissions (Carnegie Mellon) The Story: Zipline was founded by CEO Keller Rinaudo and according to its website “Zipline is on a mission to build the world’s first logistics system that serves all people equally…to [transform] access to healthcare, consumer products, and food.” The company originally started delivering blood and medical products in Rwanda in 2016 and has since expanded to food, retail, agriculture products, and animal health products, and now has operations in the US, Rwanda, Ghana, Nigeria, Cote d'Ivoire, Kenya, and Japan. According to an article in Axios, Zipline began operations in the U.S. 
by delivering PPE during COVID in partnership with North Carolina-based hospital system Novant Health. At that time drone delivery was operated under a waiver from the FAA. CEO Clifton believes the company’s drone technology is more ecologically efficient and friendly than current delivery technologies like internal combustion engines. For example, Clifton notes that businesses tend to utilize “the same 3,000-pound gas combustion vehicles driven by humans to make billions of deliveries that usually weigh less than five pounds. This is slow, it’s expensive, and it’s terrible for the planet. We actually think it’s inevitable that this is going to shift towards systems that are quiet, less obtrusive, and actually good for the environment.” The Differentiators: Zipline creates and deploys different autonomous drones to help deliver goods to difficult-to-reach places in an eco-friendly way. With operations in seven countries, Zipline has covered over 45 million autonomous miles to help increase access to healthcare for people around the world. Zipline delivers consumer products, food, and other goods. The company has two delivery platforms: one for long-range delivery and one for precise home delivery. As noted in a press release, when using the company’s Platform 2 approach, when the Zip arrives at its destination, it hovers safely and quietly at altitude while its fully autonomous delivery droid maneuvers down a tether, steers to the correct location, and gently drops off its package to areas as small as a patio table or the front steps of a home. According to the company, it has completed deliveries to thousands of homes, businesses, and hospitals across the US, Rwanda, Ghana, Nigeria, Kenya, and Japan. The company states that its efforts have been reported to have saved lives, lowered costs, increased convenience, and reduced harmful emissions compared with traditional delivery methods. 
In May of 2023, Associated Couriers announced they will enter into a partnership with Zipline to deliver specialty prescriptions and medications to long-term care facilities across Long Island. The Big Picture: Zipline’s drone delivery can help get goods to hard-to-reach locations more quickly and efficiently. As noted, Zipline’s drone flights first began in 2016 to help with the national blood delivery network in Rwanda. The speed and flexibility of Zipline’s delivery system can help save lives. For example, in a study comparing drone delivery with paving roads, researchers from the University of Pennsylvania found an 88% reduction in in-hospital maternal deaths from postpartum hemorrhage in Rwanda. The authors noted that as a result of Zipline’s logistics and delivery system, they found that “transfusing facilities [were able to] substantially decrease their on-hand inventory and wastage, but do not find any change in the management of blood inventory after paving roads.” Interestingly, while the authors were looking at critical supplies like the blood supply, it appears that some of their findings could be extrapolated to show other economic benefits. For instance, the authors noted “that facilities were able to reduce their on-hand inventory …without impacting service levels, [they found] a 40% reduction in the number of blood products destroyed or damaged [and did] not find statistically significant evidence of a change in the number of blood units used.” In addition to aiding in the delivery of supplies, Zipline can have a big impact on the environment and climate change. Drones reduce inefficiencies and waste, especially carbon emissions. The United States transport sector relies heavily on petroleum, especially in the use of medium and heavy trucks. As noted in an article entitled “Drone flight data reveal energy and greenhouse gas (GHG) emissions savings for very small package delivery,” the U.S. 
transportation sector's medium and heavy trucks are responsible for 37% of transportation-related greenhouse gas emissions, a significant contributor to climate change. Light-duty vehicles also contribute to the problem, accounting for 57% of transportation greenhouse gas (GHG) emissions and 64% of transportation energy use. Transportation can also be a major source of nitrogen oxides (NOx) and other air pollutants, which can have adverse effects on human health and the environment. The authors found that “drones can reduce the energy consumption by 94% and 31% and GHG emissions by 84% and 29% per package delivered by replacing diesel trucks and electric vans, respectively.” Utilizing drones will help maximize energy productivity, reducing the amount of greenhouse gases in the atmosphere as well as the energy and climate impacts of package delivery. GNC partners with instant logistics provider Zipline for drone delivery service, South San Francisco drone delivery startup Zipline raises $330M; valuation jumps past $4B

  • Implementing SDOH Screening Requires Strategic Planning & Road to Impact-The HSB Blog 8/11/23

    Our Take: The integration of digital health tools and provider screening for Social Determinants of Health (SDOH) issues can greatly enhance patient care and our understanding of barriers to care. Digital health tools enable real-time data collection and also help identify SDOH-related issues promptly for intervention. However, implementing SDOH screening requires taking careful privacy measures as well as collaboration with community organizations. This approach fosters patient-centered, equitable healthcare. Key Takeaways: While initially 33% of staff and 58% of clinicians surveyed in one study felt that the clinic was “too busy” to deal with SDOH, by the end of the study those numbers had declined to 10% and 21% respectively (FPM) Despite the fact that almost 90% of hospitals and systems surveyed reported screening patients for social needs, only 30% reported having a formal relationship with community-based providers for their target population (Deloitte) Although parents can see a throughline between child health and some SDOH, they are reticent to discuss some of those topics (Public Agenda & United Hospital Fund) Of the 49 provider-based SDOH programs that disclosed funding in one survey, hospitals and health systems committed approximately $2.5 billion, with a median investment of $2M per program and a mean of $31.5M (Health Affairs) The Problem: While the integration of digital health tools and the incorporation of Social Determinants of Health (SDOH) into hospital screening processes can offer significant benefits, the approach is not without challenges that must be overcome to fully realize its value. While the benefits of screening for SDOH have been proven and a number of toolkits for screening for SDOH exist, including those from the American Academy of Family Physicians, American Academy of Pediatrics, and the National Association of Community Health Centers, clinicians can often find the task overwhelming. 
For example, as noted in “The Feasibility of Screening for Social Determinants of Health: Seven Lessons Learned,” “In the authors' pilot study, 58 percent of clinicians began the project thinking they were too busy for social determinants of health (SDOH) screening.” In addition, integrating SDOH data from digital tools into existing electronic health records (EHRs) and workflows can be technically challenging, as hospitals often use various systems that may not communicate seamlessly with each other, leading to additional data integration and interoperability issues. There is also the challenge of resource allocation and workload, as implementing SDOH screening requires additional resources: personnel to manage data collection, analysis, and interventions, as well as professionals and staff trained to interpret and utilize SDOH data effectively. The Backdrop: While recognition of SDOH's impact on health has been rising for a number of years, it is only in the last several years that the focus has turned to measuring and managing the most efficient and effective way to provide these resources. In addition, once providers have decided what and how to measure these impacts, it is important to determine how the results of such surveys will be handled. 
For example, as noted in “Considerations for Social Determinants of Health Screening Design,” not only do they “need to consider the tools they’ll use to deploy the screening, which determinants to look at during screening, and how providers will talk about SDOH with patients to ensure it’s a respectful interaction,” they will also need to make sure they have thought through which SDOH issues may “have an immediate and tangible solution to fix”; otherwise, “it can be frustrating for both patient and provider—and it can damage patient trust—for a social need to arise and [then for patients to] hear there is no way to fix it.” As a result, providers have more recently teamed up with corporations, community organizations, and others (including health insurance companies) to not only screen for SDOH but also to invest their own funds more directly in addressing the SDOH needs of patients. For example, a 2020 article in Health Affairs found that of the 49 provider-based SDOH programs that disclosed funding, “the total funds committed specifically from health systems or hospitals were approximately $2.5 billion, with a median investment per program of $2 million and a mean of $31.5 million”. In addition, the authors also noted that the dominant choice among organizations that chose to address a single SDOH was housing. The authors noted that "housing-related programs included strategies such as the direct building of affordable housing, often with a fraction set aside for homeless patients or those with high use of health care; funding for health system employees to purchase local homes to revitalize neighborhoods; and eviction prevention and housing stabilization programs." 
While the article went on to point out that “these investments still represent [only a] small fraction of overall spending by health systems, which currently are much more likely to be developing screening and referral programs”, it does indicate that providers should consider the potential for significant and ongoing financial investments that might accompany any screening initiatives. Implications: Integrating screening for SDOH to improve patient care can be a substantial undertaking and can require a significant commitment of both human and financial resources. However, digital health tools can allow hospitals to gather comprehensive SDOH data, leading to more personalized and patient-centered care plans. This holistic approach can help providers address patients' unique circumstances and needs, which if handled correctly can improve overall satisfaction and engagement. Real-time data collection and analysis enable hospitals to identify SDOH-related barriers promptly. Early intervention and preventive measures can reduce the progression of health disparities and complications, particularly in children, ultimately leading to better health outcomes and improved quality of life. Moreover, it is important to train and communicate with stakeholders on the impacts on workflow and to ensure their concerns are heard. As noted, while initially 33 percent of staff and 58 percent of clinicians in one study felt that the clinic was “too busy” to deal with patients' social needs, by the end of the study only 10 percent of staff and 21 percent of clinicians felt that way. When they investigated the large drop in opposition to screening, the authors found simply that “in the end, the work was not overwhelming, as some had feared it would be.” Similarly, providers should make sure they are thoughtfully and adequately communicating the goals and purposes of such SDOH screening tools with patients. 
For as noted in “Considerations for Social Determinants of Health Screening Design,” SDOH screening can be challenging because “patients aren’t always comfortable discussing often sensitive personal information that does not directly pertain to their health (for example: it could be difficult for patients to admit they are housing insecure).” Researchers from Public Agenda and United Hospital Fund also reported that “although parents can see a throughline between child health and some SDOH, they are reticent to discuss some of those topics. Particularly, parents or guardians were worried about discussing their own mental health, legal issues, or domestic problems, especially if they did not have an established rapport with the pediatrician.” As hospitals become more deeply involved in their communities by collaborating with local organizations and public health agencies, not only can this engagement contribute to community health improvement and foster trust, it can also have meaningful financial benefits. For example, the Health Affairs article referenced above also noted that “although a recent study found no association between overall community benefit spending and readmission rates, hospitals in the top quintile of spending that was directed toward the community had significantly lower readmission rates than those in the bottom quintile.” Related Reading: The Feasibility of Screening for Social Determinants of Health: Seven Lessons Learned Quantifying Health Systems’ Investment In Social Determinants Of Health, By Sector, 2017–19 Most providers don't screen for social determinants of health Considerations for Social Determinants of Health Screening Design

  • Herself Health: Targeting the Health Needs of Women 65+

    The Driver: Herself Health recently raised a $26 million Series A funding round led by investor Michael Cline of Accretive with participation from Juxtapose. The funding brings Herself Health’s total funding to $33M according to Crunchbase. Herself Health is a startup that offers primary care for women aged 65 and older. The funds will be used for clinic expansion, virtual care expansion, increased in-person care and community engagement offerings, and to attract and retain new talent. Key Takeaways: When there is no clear explanation for certain symptoms in women over 50 years, menopause is frequently used as an overruling container diagnosis (NCBI). As women age, they are twice as likely to be diagnosed with Alzheimer’s disease and are more likely than men to experience strokes that are associated with worse outcomes (CDC) Over one-quarter of women ages 65 to 74 and over half of women ages 85 and older live alone (Commonwealth Fund) Only 20 percent of ob/gyn residencies offer training on menopause, and 80 percent of medical residents report feeling "barely comfortable" discussing or treating menopause (Commonwealth Fund) The Story: Herself Health was founded in 2022 by CEO Kristen Helton, the former head of Amazon Care, Amazon’s healthcare subsidiary, together with investment firm Juxtapose. While their initial goal was to find a way to assist older adults, after conducting surveys of more than 700 women aged 65 and over, Helton and Juxtapose made some surprising findings. For example, they found that women of this age were almost a third more likely to be misdiagnosed than men, nearly half as likely to get a proper diagnosis of heart disease, and almost a third more likely to be misdiagnosed after a stroke. In addition, the data indicated that women of this age were more likely to suffer from osteoporosis and arthritis and to be misdiagnosed when it comes to other conditions. 
Based on those findings, Helton decided to focus on creating services to help older women with their specific health goals by understanding their ambitions, needs, and challenges. In her words, "women 65+ face unique health and social challenges as they age, and for far too long, their concerns, needs, and desires have been ignored. That's why we are designing Herself Health to be the value-based solution to improve outcomes and help women find joy, purpose, and better quality of life. Our fundamental goal is to elevate the patient experience and provide meaningful in-person and virtual support that provides women 65+ with a primary care experience designed specifically for them." She emphasized that the company’s goal is to address the unique social and medical challenges women face as they age. The Differentiators: Herself Health attempts to distinguish itself by prioritizing the holistic aspects of health and well-being, including mental health, mobility, social, and behavioral health. The company plans to target health concerns that are more prevalent in older women, such as Alzheimer's, osteoporosis, arthritis, diabetes, thyroid health, and weight management. As noted in Fierce Health, Herself Health uses health coaches to help connect a patient to her care team. As they point out, “this allows for patient education, follow-ups and assessing gaps in care”. They also coordinate with any specialists their patients currently see who are covered by their insurance. During visits, patients discuss their personalized goals, have an exam, and work with trained clinicians to find the care approach that suits them. According to the company, Herself Health connects life goals with health goals to help its patients get more life out of life. The company highlights focused care, genuine relationships, a whole-person approach, and unique goals for each patient. 
According to the company, they "offer everything she'd expect from her primary care practice, with a special focus on conditions that commonly affect women 65+" and are on a mission to create change. In addition, patients have the option to set up an individual patient portal, with the ability to message their care team throughout the day and to schedule any necessary follow-up appointments with doctors or specialists. The Big Picture: Significant gaps and structural barriers inhibit the current primary healthcare system from meeting the needs of older women. Women must receive and have access to comprehensive, high-quality primary health care that is tailored to their needs at all ages and stages of life. This includes receiving sex-specific, sex-aware, and gender-sensitive care. According to the Commonwealth Fund, the United States primary health care system does not effectively meet women's needs as they age and transition through stages of life. For example, it was reported that "Health status indicators show that women in the U.S. have worse outcomes than women in other high-income countries. For example, the U.S. maternal mortality rate is higher than the rate in any other high-income country and continues to rise." Furthermore, the effects of the gap in quality healthcare for aging women are amplified among women of color, such as in the African American community. Moreover, diagnoses in women are often reported to be downplayed, with a comparable diagnosis reportedly taking several years longer to establish in women than in men. A recent peer-reviewed article stated that "frequently used undetermined diagnoses such as fibromyalgia, chronic fatigue syndrome, and psychosocial distress are typically more often present in women. In addition, as it often happens in clinical practice when there is no clear explanation for certain symptoms in women over 50 years, menopause is frequently used as an overruling container diagnosis." 
According to the Commonwealth Fund, this is further complicated by the fact that "only 20 percent of ob/gyn residencies offer training on menopause, and 80 percent of medical residents report feeling "barely comfortable" discussing or treating menopause". When combined with the fact that "over one-quarter of women ages 65 to 74 and over half of the women ages 85 and older live alone", this not only contributes to poorer health and overall well-being but results in women experiencing more unhealthy years of aging than men. Herself Health will use $26M to redefine primary care for women 65-plus, Herself Health, providing primary care to women 65 and over, raises $26M

  • Digital Health & AI in Oncology, Delivering Improved & More Personalized Care-The HSB Blog 7/28/23

    Our Take: Digital health technologies in oncology have emerged as a promising and transformative force in cancer care. By leveraging the power of digital and communication technologies, these innovative tools are reshaping the landscape of cancer diagnosis, treatment, monitoring, and patient support. Artificial Intelligence and data analytics have emerged as essential companions in the fight against cancer. Telemedicine and remote patient monitoring have broken barriers in delivering quality care, especially for patients residing in remote areas. Digital health technologies in oncology will play a critical role in revolutionizing cancer care and research. Embracing innovation and technology will lead to improved treatments and better patient outcomes worldwide. Key Takeaways: The single most effective lever in cancer treatment is early detection with the five-year survival rates for the top five cancers being anywhere from 4 to 13 times higher at Stage 1 versus Stage 4 (World Economic Forum) US health-care spending for medical services and prescription drugs related to Cancer is projected to reach US$246 billion by 2030 (Jrnl. National Cancer Institute) In 2019, there were approximately 23.6 million new cancer cases and 10 million cancer deaths globally, representing a 26% increase in new cases and a 21% increase in fatalities vs. 2010 (World Economic Forum). The number of clinical trials employing a digital health device as part of the intervention has grown from 8 in 2000 to over 1100 in 2018—a 34.8% CAGR (Jrnl. National Cancer Institute) The Problem: Implementing digital health solutions in oncology requires acceptance and training among healthcare providers. For digital health solutions to be effective, they need to be seamlessly integrated into existing clinical workflows. Physicians may be reluctant to fully embrace new technologies, and a lack of training can lead to underutilization or misuse of digital health tools. 
Disrupting or burdening healthcare providers' routines may hinder the adoption and acceptance of these technologies, particularly given the challenges of workforce demand and burnout. In addition, digital tools, such as AI-driven diagnostics or decision-support systems, rely heavily on data accuracy and algorithm reproducibility. While technology can help reduce errors, it can also introduce new types of errors. In the context of oncology, incorrect or incomplete data entry, system malfunctions, and user errors can lead to misdiagnosis and adverse patient outcomes. Inaccurate predictions or misinterpretations of diagnostic information could lead to serious consequences for patients, including delayed or inappropriate treatments. Like other AI and ML models, AI and ML algorithms used in oncology may suffer from bias if the training data is not diverse and representative. Moreover, any breach or mishandling of patient information could have severe consequences, leading to potential legal and ethical issues. As digital health technologies collect and process sensitive patient data, ensuring data privacy and security becomes paramount. By addressing these obstacles, the oncology community can maximize the benefits of these innovative tools and advance the field of cancer treatment. The Backdrop: Digital health technologies in oncology have emerged against the backdrop of a rapidly evolving healthcare landscape, characterized by increasing cancer incidence rates, growing complexity in cancer treatments, and the need for personalized, patient-centric care. The demand for accessible and convenient healthcare services has grown, especially in specialties like oncology, where patients often face a shortage of providers, travel burdens, and frequent follow-ups. 
As noted in “Digital health for optimal supportive care in oncology: benefits, limits, and future perspectives,” “The terms digital health, telehealth, and eHealth are interchangeable and are defined as the provision of healthcare services supported by telecommunications or digital technology to improve or support healthcare services.” Perhaps most importantly, the authors note, “eHealth solutions can be part of each step of the healthcare process (i.e., prevention, diagnosis, decision-making, treatment/intervention, and follow-up).” Along those lines, the digital transformation of healthcare has fostered a broader degree of collaboration between oncologists, researchers, technology companies, and patient advocates. As noted in “Maximizing the Value of AI in Cancer Care,” “the use of AI and ML to collect and analyze real-time patient experience data in oncology is revolutionizing how we approach cancer care, guiding treatment decisions and ultimately improving patient outcomes.” This not only accelerates already existing multidisciplinary approaches to care but also facilitates a greater exchange of knowledge and progress in care protocols. Not surprisingly, the exponential growth of computing power, data storage capacity, and analytics, including predictive analytics, has paved the way for sophisticated digital health tools and AI algorithms applicable to oncology. These tools can process vast amounts of patient data, including genomic profiles, imaging data, and clinical records, and apply them to improve treatment protocols. 
For example, in “Is AI-enabled radiomics the next frontier in oncology?”, the authors examine radiomics, which “uses AI-driven analytics to extract meaningful data from traditional imaging modalities such as CT, MRI or PET scans.” The technology “then curates, annotates and analyzes that quantitative data to deliver a wealth of information that cannot be observed visually in an image.” The convergence of technological advancements, the need for personalized medicine, and the emphasis on patient-centered care have set the stage for digital health technologies to dramatically transform cancer care. As these technologies continue to evolve and overcome challenges, they hold the promise of transforming oncology, improving treatment outcomes, and positively impacting the lives of cancer patients worldwide. Implications: Telemedicine and remote monitoring technologies enable continuous tracking of patients' health status, treatment response, and side effects. This allows healthcare providers to intervene promptly when necessary, ensuring better management of patients' well-being. Digital health technologies prioritize patient needs and preferences, allowing for more personalized care and treatment plans. Patients can be given higher-quality information and have greater involvement in their treatment decisions, helping to reduce patient anxiety during treatment and thereby improving satisfaction and overall well-being. Digital health tools, including AI-powered imaging analysis and risk assessment algorithms, aid in early cancer detection. Timely identification of cancer can lead to earlier interventions and potentially better treatment outcomes. As highlighted in “Digital Health Applications in Oncology: An Opportunity to Seize,” these types of prediction tools “could be used to influence clinician decision making along the cancer spectrum, such as after chemotherapy, after colorectal cancer surgery, or in discharge planning. 
Such ML-based predictive algorithms may be used to “nudge” clinicians toward value-based care streams for high-risk patients or to default patients into population health management programs to improve advance care planning and/or reduce unplanned utilization.” The rapid evolution of digital health technologies fosters a culture of continuous innovation in oncology. As new tools and applications are developed and integrated into clinical practice, the field advances further, leading to ongoing improvements in cancer care. Interestingly, these include helping eliminate waste and overtreatment near the end of life. For example, in “Delivering Affordable Cancer Care in the 21st Century: Workshop Summary” the authors noted that “unrealistic expectations and misaligned financial incentives are contributing to the overuse and misuse of interventions in cancer care”. The paper highlighted that “overuse is particularly problematic in individuals with advanced cancer, noting a high rate of treatment with chemotherapy close to the end of life, more time spent in the emergency room and the hospital, and less time in hospice care.” Clearly this is one area where more objective data and analytics could be used to help explain and justify less invasive and costly end-of-life treatments. The opportunities for digital health technologies in oncology are expansive, offering new avenues for personalized, data-driven, and patient-centric cancer care. By leveraging these technologies effectively, healthcare providers can make significant strides in improving cancer outcomes and patient satisfaction. Related Reading: Digital Health Applications in Oncology: An Opportunity to Seize Is AI-enabled radiomics the next frontier in oncology? 6 experts reveal the technologies set to revolutionize cancer care Maximizing the Value of AI in Cancer Care The Tech Revolutionizing Cancer Research and Care

  • Author Health: Addressing the Unmet Mental Health Needs of Seniors in MA

    The Driver: Author Health recently raised $115 million in a Series A funding round from General Atlantic and Flare Capital Partners. Author Health is a hybrid care platform focused on providing in-office and virtual mental health services to seniors enrolled in Medicare Advantage plans. The funds will be used to help Author expand geographically and increase its partnerships with both payers and providers. Key Takeaways: Over a quarter of Medicare beneficiaries reported skipping or putting off needed mental health care because of costs, compared to fewer than 10% of older adults in the U.K., France, Germany or Sweden (Commonwealth Fund) 45% of all opioid-related deaths involve individuals 45 years or older, and since 1999, drug overdose death rates have increased the most (by 6-fold) among those aged 55 to 64 years (JAMA) Approximately 1.7 million Medicare beneficiaries were estimated to have past-year substance use disorder (National Library of Medicine) While older adults comprise just 12% of the population, they make up approximately 18% of suicides (National Council on Aging) The Story: Author Health was founded by Gary Gottlieb, the former CEO of Partners HealthCare, and Katherine Hobbs Knutson, previously CEO of Optum Behavioral Health. Both Gottlieb and Knutson are psychiatrists and lamented the ways that the system has not worked, particularly for Medicare patients in need of care for more serious mental health conditions. For example, as Knutson noted at the Behavioral Health Business Invest conference last year while at Optum, she “saw the power of value-based care in addressing the interplay of physical and behavioral health [and noted] the pain point for insurers trying to find innovations and services on the market that help them address diverse populations.” She highlighted that these solutions often misallocated conditions and resources, with severely ill patients in need of psychiatrists often unable to see them for care. 
Similarly, as Gottlieb noted in an interview with the Boston Business Journal, “Since my earliest days as a geriatric psychiatrist and later as an executive, I’ve been active in developing and supporting programs that are focused on the care of the whole person, her/his/their family, and the system around them and that work to integrate behavioral health and primary care, for older patients in particular.” In the words of the company, “Author Health was born out of the necessity to fundamentally shift how our health system prioritizes behavioral health and preventive services. By aligning Medicare Advantage health plans and clinicians to improve health, and using a combination of technology and community relationship-building to open access for hard-to-reach populations, Author Health is primed to meet the growing demand for psychiatric care on a national scale.” Currently, services are covered only by Humana Medicare Advantage, but the company is working to add more insurance partners in the near future. The Differentiators: Author Health separates itself from other comparable digital health companies by focusing on serious conditions instead of mild ones, as well as by deeply integrating its services with primary care. According to the company, patients “with serious mental illness and substance use disorders often suffer from co-occurring medical conditions and health problems like diabetes, high blood pressure, and cardiovascular disease” and are in need of integrated care. As such, the company offers a team-based approach to psychiatric care, bringing together specialized physicians, nurses, therapists, and community health workers to deliver a mix of virtual and in-person care for individuals who are often disconnected from the traditional healthcare system. 
Through this model, patients and their caregivers are given an avenue to reconnect with healthcare providers and receive personalized and comprehensive treatment within their communities, and outside of hospitals and institutions. The severe mental health conditions available for treatment and support include depression, anxiety, schizophrenia, psychosis, Alzheimer's disease, dementia, and substance use disorders, among others. The company believes “This care delivery model leads to improved quality of life for patients and their caregivers, as well as a reduction in medical emergency and inpatient hospital care, which in turn creates savings for health plans focused on value-based care.” The Big Picture: There is currently a lack of coverage for serious mental health needs for older adults, partly as a result of Medicare coverage gaps. According to the Commonwealth Fund, Medicare is reported to not cover assertive community treatment, peer support services, or psychiatric rehabilitation. This is a significant problem, especially considering the large number of Medicare beneficiaries with co-occurring substance use disorders and mental health conditions. In fact, research from the National Library of Medicine stated that “approximately 1.7 million Medicare beneficiaries were estimated to have past-year substance use disorder,” and a large portion have co-occurring mental health conditions. Moreover, “45% of all opioid-related deaths involve individuals 45 years or older, and since 1999, drug overdose death rates have increased the most (by 6-fold) among those aged 55 to 64 years” compared to other age groups within the United States. In addition, as noted in “Patterns in Geographic Distribution of Substance Use Disorder Treatment Facilities in the US and Accepted Forms of Payment From 2010 to 2021”, one reason that Medicare beneficiaries have difficulty accessing services for SUDs is low acceptance of Medicare in SUD treatment facilities. 
Despite the high prevalence of mental illness and substance use disorders in the Medicare enrollee population, coverage gaps remain and need to be addressed by solutions like Author Health. Related Reading: Author Health Nabbed $115M To Serve Medicare Patients. It’s Not The Only One Author Health Raises $115M for Senior-Focused Mental Health
