  • Digital Behavioral Health Tools Can Address Treatment Shortages & Accessibility-The HSB Blog 6/24/23

    Our Take: The integration of digital health solutions in behavioral health has the potential to reshape the care landscape by increasing accessibility, providing personalized treatment options, and facilitating early intervention and prevention. These technologies can not only address the shortage of treatment options but also empower individuals with behavioral health issues to actively manage their symptoms and improve their overall well-being. Digital health is a complement to traditional care rather than a replacement.
    Key Takeaways:
    • One study found “no statistically significant association between the modality of care (telehealth treatment group versus in-person comparison group) and the one-month change scores” on standard assessments of depression or anxiety (BMC Psychiatry)
    • As of March 2023, 160 million Americans live in areas with mental health professional shortages, [and] over 8,000 more professionals [are] needed to ensure an adequate supply (Commonwealth Fund)
    • An estimated 21M adults, or approximately 8.4% of U.S. adults, had at least one major depressive episode (NIMH)
    • The percentage of need for behavioral services that is actually met nationwide is less than 30% (KFF)
    The Problem: For years there has been a shortage of providers and treatment options that address behavioral health needs. For example, according to a recent report from the Commonwealth Fund, “as of March 2023, 160 million Americans live in areas with mental health professional shortages, [and] over 8,000 more professionals [are] needed to ensure an adequate supply.” Moreover, data from the Kaiser Family Foundation estimates that the percentage of need for behavioral services that is actually met nationwide is less than 30% (27.7%) as of September 2022. While historically it has been difficult to coordinate care between digital interventions and healthcare providers, the broader acceptance of telehealth during the pandemic has made digital delivery of behavioral care more widely accepted, and it is now seen as a way to address this care gap. However, as behavioral care moved toward digital delivery, it became increasingly clear that ensuring effective communication and data sharing was essential to maximizing its benefits. In addition, as we pointed out in “Integrating Telemental Health Into Primary Care Aids Diagnosis and Treatment-The HSB Blog 3/7/22”, telebehavioral health “may allow for more accessible and affordable care with equal or better outcomes than in-person care, especially for diagnosis and treatment.” This is especially true in the most acute shortage areas, which tend to be rural. For example, as noted in “Digital health technologies and major depressive disorder,” “telemedicine or care coordination platforms can help provide remote care to rural areas or hard-to-reach communities, thereby enhancing patient-provider collaboration.” Moreover, although digital health has the potential to increase access to treatment for behavioral health conditions, disparities in technology access and digital literacy can perpetuate inequities. As noted in “Facts & Figures: Mental Health in Rural America,” “rural residents report difficulty accessing healthcare services and an absence of anonymity when seeking care in the South.” The article goes on to note that “a common sentiment among Southerners is that the prevailing stigma and conservative belief system in rural communities can hinder the search for health care.” These disparities can be particularly acute for low-income individuals, rural populations, older adults, and marginalized communities. As a result, these populations may face even greater barriers in accessing and effectively utilizing digital health tools. Finally, given that digital telebehavioral health solutions collect what some would say is patients’ most sensitive personal health data, their use and integration raise concerns about data privacy and security.
    The Backdrop: Based on data from the Centers for Disease Control and Prevention (CDC), approximately 50% of Americans will be diagnosed with a mental illness at some point in their life, and 1 in 25 Americans are currently living with a mental illness. For example, the National Institute of Mental Health (NIMH) has found that an estimated 21M adults, or approximately 8.4% of U.S. adults, had at least one major depressive episode (defined as “a period of at least two weeks when a person experienced a depressed mood or loss of interest or pleasure in daily activities, including problems with sleep, eating, energy, concentration, or self-worth”). However, the widespread adoption of smartphones, high-speed internet, and digital connectivity has created opportunities to reach individuals such as those suffering from depression remotely and to bridge geographical barriers. For example, as noted in “Digital health tools for the passive monitoring of depression: a systematic review of methods,” “with the global trend toward increased smartphone ownership (44.9% worldwide, 83.3% in the UK) and wearable device usage…this new science of ‘remote sensing’, sometimes referred to as digital phenotyping or personal sensing, presents a realistic avenue for the management and treatment of depression” as well as other behavioral health disorders. Furthermore, the emergence of data analytics, artificial intelligence, and machine learning has the potential to enable a new modality of personalized behavioral healthcare. By leveraging patient data, algorithms can help tailor treatment plans, predict risk factors, and optimize interventions for individuals with behavioral health conditions. As highlighted in a recent Scientific American article entitled “AI Chatbots Could Help Provide Therapy, but Caution Is Needed,” “as an assistant for human providers…LLM chatbots could greatly improve mental health services, particularly among marginalized, severely ill people.” This is particularly true in terms of helping with the administrative burden, where “programs such as ChatGPT could easily summarize patients’ sessions, write necessary reports, and allow therapists and psychiatrists to spend more time treating people.”
    Implications: Digital health interventions can help address barriers to accessing mental healthcare, especially for individuals in underserved areas or with limited mobility. Remote platforms, telemedicine, and mobile applications provide convenient and accessible avenues for individuals to seek support and treatment for depression. For example, in an article entitled “Comparison of in-person vs. telebehavioral health outcomes from rural populations across America,” the study’s authors found “there was no statistically significant association between the modality of care (telehealth treatment group versus in-person comparison group) and the one-month change scores for either PHQ-9 (a standard assessment for depression) or GAD-7 (a standard assessment for anxiety),” leading them to conclude there were “no clinical or statistical differences in improvements in depression or anxiety symptoms as measured by the PHQ-9 and GAD-7 between patients treated via telehealth or in-person.” In addition, digital health tools can enable efficient screening and early detection of behavioral health symptoms. Automated assessments, digital questionnaires, and mood-tracking apps can help identify individuals at risk, allowing for timely intervention and preventive measures to reduce the severity and duration of episodes in need of treatment. Digital health solutions can also help extend the reach of facilities and clinicians through technologies such as remote monitoring, while wearable devices and mobile applications can track mood patterns, sleep quality, activity levels, and other relevant data. This information can support self-management strategies and help healthcare providers monitor progress, make informed treatment adjustments, and provide timely interventions as needed. However, a note of caution is warranted, as the use of digital health in mental health care raises important ethical and regulatory considerations. Protecting patient confidentiality, ensuring data encryption, and implementing robust security measures are essential to build trust and maintain the integrity of digital health platforms for depression. While technologies like generative AI hold great promise in alleviating the shortage of practitioners, as noted in “AI Chatbots Could Help Provide Therapy, but Caution Is Needed,” an AI chatbot called Tessa, which was not based on generative AI but “…gave scripted advice to users, would sometimes give weight-loss tips, which can be triggering to people with eating disorders.” Although this is likely to improve as the technology improves, understanding and addressing nuances in treatment could be key to the effectiveness of the technology. In addition, safeguarding patient privacy, ensuring data security, and maintaining ethical standards in the use of AI algorithms and predictive analytics are critical to protecting the rights and well-being of individuals with behavioral health issues.
    Related Reading:
    • Comparison of in-person vs. telebehavioral health outcomes from rural populations across America
    • Understanding the U.S. Behavioral Health Workforce Shortage
    • Mental Health Care Health Professional Shortage Areas (HPSAs)
    • 2020 National Survey on Drug Use and Health (NSDUH)
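    Part of what makes assessments like the PHQ-9 and GAD-7 referenced above easy to automate is that they are fixed-scale questionnaires. Below is a minimal sketch of how a digital intake flow might score a PHQ-9 response set against the standard severity bands; the referral threshold and function names are illustrative assumptions, not drawn from any study cited here:

```python
# Minimal sketch: scoring a PHQ-9 digital questionnaire and flagging risk.
# The PHQ-9 has nine items scored 0-3; totals map to standard severity bands.

PHQ9_BANDS = [(0, "minimal"), (5, "mild"), (10, "moderate"),
              (15, "moderately severe"), (20, "severe")]

def score_phq9(responses: list[int]) -> tuple[int, str]:
    """Return total score and severity band for nine 0-3 item responses."""
    if len(responses) != 9 or any(not 0 <= r <= 3 for r in responses):
        raise ValueError("PHQ-9 expects nine item scores between 0 and 3")
    total = sum(responses)
    severity = next(label for cutoff, label in reversed(PHQ9_BANDS)
                    if total >= cutoff)
    return total, severity

total, severity = score_phq9([2, 1, 3, 2, 1, 0, 2, 1, 1])
if total >= 10:  # a common follow-up threshold; an assumption for this sketch
    print(f"Score {total} ({severity}): flag for clinician follow-up")
```

    In a real product the score would route to a clinician rather than trigger any automated advice, consistent with the note of caution above.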

  • Femtech: Women Overcoming Challenges of Accessibility & Design to Control Their Health-The HSB Blog

    Our Take: While digital tools geared at addressing the needs of female patients, often referred to as “femtech,” have gained in popularity, significant challenges remain as women seek to regain at least partial autonomy over the care of their bodies. These challenges include limited accessibility, biased design, lack of privacy, lack of regulation, and a narrow evidence base. This is particularly true as women increasingly seek more and better information, control, and gender-specific care over their health and bodies, something that has become of utmost concern in the wake of the Dobbs decision. While investment in women’s health has increased dramatically over the last 10 years, “it accounted for just 3% of health technology funding for women-led companies in 2020 – and only 0.64% was allocated to businesses led by women of color,” according to the World Intellectual Property Organization (WIPO). Moreover, femtech sales accounted for approximately $821M in global revenues in 2019, which pales in comparison to the almost $500B currently spent worldwide on women’s healthcare and the $1T projected by 2026, according to an article in The Lancet Digital Health.
    Key Takeaways:
    • The cost of menopause due to lost days of work is nearly $2B, plus almost $25B in annual medical costs (Mayo Clinic)
    • Femtech sales hit $821M globally in 2019, which pales in comparison to the almost $500B currently spent worldwide on women’s healthcare and the $1T projected by 2026 (World Intellectual Property Organization & The Lancet Digital Health)
    • While femtech represents a promising opportunity to address existing gaps in women’s health care, the intended recipients of femtech innovations largely appear to be healthy, affluent, white, cis women (JMIR)
    • 42% of women say they have never discussed menopause with their healthcare provider, yet hormonal changes due to menopause can raise the risk of heart disease, lead to losses in bone mass, and result in concentration problems, among other things (AARP)
    The Problem: Despite the growing femtech sector, there are still gaps in representation and inclusivity. Additionally, not all femtech solutions undergo rigorous scientific validation or regulatory scrutiny. This can result in varying levels of accuracy and reliability in the products and services offered. As a result, some apps and devices may lack clinical evidence to support their claims, potentially leading to misinformation or ineffective outcomes. Integrating femtech solutions within the existing healthcare system and collaborating with healthcare providers can be challenging. Limited interoperability and a lack of standardized protocols can hinder the seamless incorporation of femtech into routine healthcare practices. There is also a risk that overdiagnosis and unnecessary medicalization with certain femtech solutions can have unintended results. For example, fertility tracking apps that provide inaccurate or misleading information about a woman’s fertility status can lead to undue stress or unnecessary medical procedures, including additional ultrasounds or hormonal testing.
    The Backdrop: The rapid advancement of technology, particularly in the fields of digital health, wearable devices, and mobile applications, has created new opportunities for addressing women’s health needs. In addition, the global women’s empowerment movement has played a significant role in bringing attention to women’s health issues and demanding better solutions. Advocacy efforts have highlighted the need for improved healthcare access, research, and awareness of women’s unique health concerns. However, even though innovative solutions in the realm of women’s health have grown in number and breadth, these technologies are not often targeted at meeting the needs of the underrepresented. For example, as noted in “A Framework for Femtech: Guiding Principles for Developing Digital Reproductive Health Tools in the United States,” “while femtech represents a promising opportunity to address existing gaps in women’s health care, the intended recipients of femtech innovations largely appear to be healthy, affluent, white, cis women. In the current model, an opportunity is missed to engage populations who have been historically underserved and bear the largest burden of poor pregnancy and perinatal outcomes.” Research and medical advancements have predominantly focused on male-centric models, leaving women’s health needs understudied or misunderstood. In the words of one observer, “Doctors and clinicians were trained to look at women as if they were just small men.” Femtech aims to bridge this gap and provide tailored solutions specifically designed for women. However, the regulatory environment is uncertain and clinical evidence is often lacking, leading to some treatments being less robust than they should be. For example, in “Fertile Ground: Rethinking Regulatory Standards for Femtech,” the authors state “the FDA classifies apps that dispense fertility and pregnancy information as low-risk, meaning that they do not require agency approval, even if some women use them for contraceptive purposes.” Not only could this leave users vulnerable to serious health issues such as unwanted pregnancies or needless infertility treatments, but it could also potentially lead to false readings causing additional emotional distress and unnecessary testing and procedures.
    Implications: Femtech has the potential to empower women by providing them with easy access to convenient and accurate information related to conditions impacting only women. These tools can help promote education and awareness about various aspects of women’s health, such as menstrual health, fertility, pregnancy, menopause, sexual health, and overall well-being, many of which doctors may be unaware or under-educated about. For example, according to AARP, 42% of women say they have never discussed menopause with their healthcare provider, yet hormonal changes due to menopause can raise the risk of heart disease, lead to losses in bone mass, and result in concentration problems, among other things. Moreover, conditions like menopause have a significant productivity impact on society since women between the ages of 45-65 are often at the height of their careers. One study from Mayo Clinic Women’s Health indicated that the cost to society of menopause is nearly $2B in lost days of work and almost $25B in annual medical costs, and that does not take into account the additional costs of reduced hours at work, loss of employment, early retirement, or the impact of changing jobs. Sadly, it appears that the U.S. is well behind the rest of the world in terms of recognition and coverage of the diagnosis and treatment of conditions specific to women. As noted by Chris Keynon, “the conversation in the United Kingdom is about 2 years more aware in terms of sophistication and awareness [of women’s health issues] than in the U.S.” Improving this and broadening the widespread use of femtech solutions would generate valuable data insights that can be aggregated and anonymized to help destigmatize women’s health, improve practices, and advance scientific understanding of women’s health. While femtech has the potential to positively impact women’s health, it is important to address challenges such as affordability, inclusivity, data privacy, and regulatory considerations to ensure that these technologies are ethical, equitable, and effectively meet the diverse needs of women, particularly the underserved. As noted in “A Framework for Femtech: Guiding Principles for Developing Digital Reproductive Health Tools in the United States,” “in the current model, an opportunity is missed to engage populations who have been historically underserved and bear the largest burden of poor pregnancy and perinatal outcomes.” To combat this, the authors recommend 3 principles: 1) creating interdisciplinary, stakeholder-inclusive teams: reviewing content and functionality iteratively with members of key stakeholder groups (patients, medical experts, community leaders); 2) taking a person-centered approach: designing features and content that incorporate clinical best practices while focusing on users’ informational needs and personal values; and 3) advancing reproductive equity: using advisory panels that include content and lived-experience experts with diverse backgrounds and perspectives. By incorporating principles such as these, femtech solutions can enable personalized and patient-centric healthcare, allowing women to actively manage their health, monitor their health indicators, receive personalized insights, and make informed decisions about their healthcare.
    Related Reading:
    • Can Femtech Transform Women’s Healthcare?
    • Menopause symptoms cost billions in medical expenses and lost days of work, study suggests
    • A Framework for Femtech: Guiding Principles for Developing Digital Reproductive Health Tools in the United States
    • Femtech: Digital Help for Women’s Health Care Across the Life Span

  • Strive Health: At-Home & Virtual Value-Based Kidney Care

    The Driver: Strive Health, a provider of technology-enabled, value-based at-home and virtual kidney care, recently raised $166M in Series C funding in a round led by NEA with participation from CVS Health Ventures, CapitalG, Echo Health Ventures, Town Hall Ventures, Ascension Ventures, and Redpoint. This brings the company’s total funding to $386M since its founding. According to the company, the funding will be used to expand partnerships with Medicare Advantage and commercial payers, as well as to expand into new and existing markets.
    Key Takeaways:
    • More than 1 in 7 adults in the U.S. (15% of the adult population) have kidney disease, and approximately 90% of those with kidney disease don’t know they have it (National Kidney Foundation)
    • Strive is currently responsible for $2.5B of medical spending and has realized a 20% reduction in the total cost of kidney care and a 42% reduction in hospitalizations (Strive Health)
    • Approximately 34M people have undiagnosed, early-stage (stage 1-3) kidney disease, accounting for over 50% of the cost of kidney disease (U.S. Renal Data System)
    • McKinsey estimates that between 15-25% of dialysis spending could be shifted to the home, accounting for about $5B in savings
    The Story: Strive Health was co-founded by CEO Chris Riopelle and Bob Badal, both formerly of DaVita. As noted by the Denver Business Journal, Riopelle and Badal realized that only a small percentage of those with chronic kidney disease, about 10%, are aware of it and have been diagnosed. “[In fact] about 80% of patients start on dialysis by crashing into it: getting so sick that they end up in the hospital, and learning there that their kidneys have shut down and they need to start treatment.” Riopelle and Badal realized there had to be a better way and understood that finding a way to diagnose and intervene with these patients earlier in the process could improve the experience and the quality of care. As noted on the company’s website, Riopelle wants to help those who deserve a better patient journey and is deeply committed to fundamentally changing how kidney care works for the almost 40M patients in need in America. Currently, “kidney care focuses almost entirely on ESKD and in-center dialysis, missing the primary drivers of high cost and poor outcomes. Strive is [attempting to change] the kidney care paradigm to identify patients earlier, prioritize the right care at the right time and drive better outcomes – all while lowering costs.”
    The Differentiators: Strive’s care model looks to combine technology such as artificial intelligence to empower caregivers to, in their words, “methodically reinvent kidney care.” According to the company, Strive’s Care Multiplier uses machine learning models trained on over 100 million patient records to calculate risk scores, end-stage kidney disease crash predictions, admission and readmission predictions, as well as disease progression predictions. In addition, the company states that its artificial intelligence algorithms can identify patients whose kidney disease is undiagnosed, thereby allowing earlier interventions. Strive then uses this data to “form an integrated care delivery system that supports the entire patient journey from chronic kidney disease (CKD) to end-stage kidney disease (ESKD).” As highlighted by MedCity News, “The company provides at-home and virtual support for chronic kidney disease, end-stage kidney disease, dialysis, and kidney transplant and connects the patients with a care team, nurse practitioner, registered nurse, case manager, and care coordinator.” Strive states that it is currently responsible for $2.5B of medical spending and has realized a 20% reduction in the total cost of kidney care, a 42% reduction in hospitalizations, and a 94% overall patient satisfaction rate. The company serves 80,000 CKD and ESRD patients in 30 states and has established partnerships with 600 nephrology providers across 10 states.
    The Big Picture: According to the National Kidney Foundation, approximately 90% of those with kidney disease don’t know they have it, and Medicare costs for all people with all stages of kidney disease were $130 billion. For patients in need of a kidney transplant, this amounts to over $80K per person per year in spending, much of which could have been prevented with early detection and treatment. Moreover, over 50% of the cost of untreated kidney disease is for people with early-stage disease (stage 1-3), where companies like Strive Health and its competitors (Monogram, Somatus) could make a meaningful impact on the cost of care and patient quality of life by helping to move kidney care out of facilities and improving its effectiveness. This would also have a significant financial impact, as McKinsey estimates that between 15-25% of dialysis spending could be shifted to the home, amounting to about $5B in savings. Thus Strive and its competitors are significantly reducing the total cost of care. This is also propelled by regulatory moves such as the 21st Century Cures Act, which now allows dialysis patients to become enrolled in Medicare Advantage plans. By combining the power of improved data and analytics with personalized care, companies like Strive may be able to change the progression of this very costly chronic disease.
    Strive Health Rakes In $166M for Value-based Kidney Care; Strive Health grabs $166M to provide end-to-end kidney care
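    To make the risk-scoring idea concrete, here is a minimal sketch of how a crash-prediction model of this general type could be built on synthetic data; the features, labels, and model choice are illustrative assumptions, not Strive’s actual Care Multiplier:

```python
# Illustrative sketch: predicting which CKD patients may "crash" into dialysis.
# All features and labels below are synthetic stand-ins for real patient data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical features: eGFR, age, diabetes flag, prior-year hospitalizations
X = np.column_stack([
    rng.normal(45, 15, n),   # eGFR (mL/min/1.73m^2)
    rng.normal(62, 12, n),   # age
    rng.integers(0, 2, n),   # diabetic (0/1)
    rng.poisson(0.5, n),     # hospitalizations in the last year
])
# Synthetic label: lower eGFR, diabetes, and hospitalizations raise crash risk
logit = -0.08 * X[:, 0] + 0.5 * X[:, 2] + 0.6 * X[:, 3] + 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]   # per-patient risk scores
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```

    In practice the output would be a ranked list of patients for outreach, so that care teams intervene before the hospital "crash" described above.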

  • Generative AI as Enabler of Culturally Competent Care-The HSB Blog 5/26/23

    Our Take: Differences in language and culture, as well as inaccurate descriptions, can cause patients to refuse help from clinicians of other backgrounds (e.g., Black patients with white doctors, or Chinese patients with English-speaking doctors). Technology like generative AI may help by employing appropriate language, which would help narrow the gap. By processing natural language, generative AI can power language models and chatbots that understand and respond to patients of all backgrounds in a culturally appropriate way; it can be trained on data sets containing cultural backgrounds, dialects, and linguistic nuances, allowing it to understand a variety of accents and dialects. This would enable individuals from different cultural backgrounds to interact effectively with the technology. Giving patients the ability to engage with technology in a way that respects their cultural traditions and language preferences promotes a sense of dignity, respect, and empowerment. By working towards cultural inclusion, technology has the potential not only to reduce differences, but to promote understanding, empathy, and harmony in our increasingly interconnected world.
    Key Takeaways:
    • African Americans and Latinos experience 30% to 40% poorer health outcomes than White Americans
    • Research shows poor care for the underserved is due to fear, lack of access to quality healthcare, distrust of doctors, and symptoms and pains that are often dismissed
    • One study found that Black patients were significantly less likely than white patients to receive analgesics for extremity fractures in the emergency room (57% vs. 74%), despite having similar self-reports of pain
    • For each additional standard deviation improvement in a hospital’s cultural competency score, patient satisfaction surveys showed an increase of 0.9% in nurse communication and 1.3% in staff responsiveness
    The Problem: While culturally appropriate language is important to promote inclusiveness and reduce disparities, there are challenges and potential problems with its implementation. Because of the cultural diversity of the United States, cultural norms and practices vary greatly between communities and regions and do not ensure that everyone living in the United States will be culturally respected. Because of differences in tradition, religion, society, language, and socialization, individuals within various communities may not feel respected or secure. With literally thousands of languages and dialects in the world, each having its own unique cultural background, dealing with linguistic diversity and ensuring culturally appropriate language can become quite a complex task. In health care this has been shown to lead to neglect or underservice in certain communities through intentional and unintentional slights. As society continues to change, social inclusiveness cannot be overlooked as an integral component of care. Inclusiveness is not only an important but a necessary element of care, so that patients feel respected and valued in a system that recognizes the cultural practices and identities of different communities. Research has demonstrated that this leads to improved clinician-patient interactions, compliance, and data sharing by patients. More recently, the health care system has come to recognize the impact that the unique cultural needs of often overlooked groups, such as people with disabilities and lesbian, gay, bisexual, and transgender (LGBT) people, have on the quality and effectiveness of care as well (while not racial or ethnic groups, these provide further evidence of the need for culturally competent care).
    The Backdrop: According to “Understanding Cultural Differences in the U.S.” from the USAHello website, cultures can differ in 18 different ways, including communication, physical contact (shaking hands, personal space), manners, political correctness, family, treatment of women and girls (men and women going to school/work together, sharing tasks), elders (multigenerational homes), marriage (traditions, views on same-sex marriage), health, education, work, time, money, tips, religion, holidays, names, and language. Recognizing and understanding cultural differences is important to create trust and security with patients. Generative AI can help bridge language and cultural barriers that often prevent non-English speakers from accessing essential health services. According to Marcin Frąckiewicz, “generative AI can serve as a virtual health assistant, providing accurate and personalized health advice to users. By making health information more accessible, individuals can make better-informed decisions about their health and well-being.” This can help individuals from different cultural backgrounds interact effectively with technology. Similarly, speech synthesis can provide text-to-speech capabilities in a variety of languages, enabling technology to communicate with users in their preferred language. This can often be done more rapidly and more efficiently than having to locate an appropriate translator. Generative AI can also incorporate multicultural expression into products and services, generating a variety of visual depictions, avatars, and characters of different racial, ethnic, and cultural backgrounds. Inclusion can be promoted through such technologies, allowing users to feel represented and valued. In addition, generative AI can help identify and correct potential instances of cultural insensitivity or technological bias. Moreover, since generative AI is iterative, it allows for continuous improvement, ensuring that culturally appropriate language and experiences are prioritized.
    Implications: While still in the developmental stages, generative AI can be one tool to assist healthcare organizations in delivering culturally competent care for patients. At its most fundamental level, generative AI models can provide translation services to help overcome language barriers and facilitate communication between healthcare providers and patients who speak different languages. This can enhance doctor-patient interaction, improve patient satisfaction, and reduce misunderstanding. Although communications created with generative AI will likely lack nuance and will have difficulty creating empathetic emotional connections with patients, they will be a step in the right direction. Over time we expect generative AI models to be able to provide up-to-date information about medical conditions, treatment guidelines, medications, and research results, collecting and analyzing patient data through remote monitoring, facilitating virtual consultations, and providing real-time information and guidance. Moreover, technologies like generative AI can contribute to remote monitoring and telemedicine initiatives to improve access to health care services, particularly for those living in remote areas or with limited mobility.
    Related Reading:
    • ‘Hurtling into the future’: The potential and thorny ethics of generative AI in healthcare
    • Structural Racism In Historical And Modern US Health Care Policy
    • Can Hospital Cultural Competency Reduce Disparities in Patient Experiences with Care?
    • Racism and discrimination in health care: Providers and patients
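    As a concrete illustration of the translation layer described above, here is a minimal sketch using an open-source machine-translation model; the model choice is an assumption for illustration, and any clinical deployment would require validation and human review:

```python
# Minimal sketch of a translation layer for patient-provider communication.
# The model (Helsinki-NLP opus-mt) is an illustrative open-source choice,
# not a clinical recommendation; real use needs validation and human review.
from transformers import pipeline

# Spanish -> English translator built from an open-source MT model
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")

patient_message = "Me duele el pecho cuando subo escaleras."
english = translator(patient_message)[0]["translation_text"]
print(english)  # e.g., "My chest hurts when I climb stairs."
```

    A production system would run the reverse direction for clinician replies and, per the cautions above, keep a human interpreter in the loop for anything clinically consequential.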

  • Exploring Latine Culture-Reducing Disparities Begins with Cultural Sensitivity-The HSB Blog 5/5/23

    As we were preparing for this week’s “Our Take,” we had a little internal debate about Cinco de Mayo. Was it a Mexican holiday, or was it a Latino holiday? As we explored this, we came across an article from the Washington Post entitled “Cinco de Mayo is not a Mexican holiday. It’s an American one,” which argued “Cinco de Mayo is a celebration created by and for Latino communities in the United States. And the celebration of Cinco de Mayo is more about U.S. Latino history and culture than Mexican history.” As we read this, we realized there is often a tendency to view and categorize other cultures with monolithic and homogeneous labels that are often inadequate. Doing so can lead to broad generalizations that perpetuate the inequities in health care. Increasing cultural sensitization is a way to begin addressing and reducing those inequities. If we are to understand ethnic disparities in health care and deliver culturally appropriate and equitable care, we need to understand the nuances and idiosyncrasies of other cultures. Cinco de Mayo seemed a good place to start, so we found the article “Ethnic Bias and the Latine Experience” in the American Counseling Association’s magazine, Counseling Today. While we don’t necessarily agree with everything in the article, we found this to be one of the most thorough and broad attempts to understand the Latine culture in the U.S., and as a result, we reprint an excerpt from it here with permission.
    Key Takeaways:
    • Only 23% of U.S. adults who self-identify as Hispanic or Latino had heard of the term Latinx, and only 3% said they use Latinx to describe themselves (Pew Research Center)
    • The U.S. Latine population was 62.1 million in 2020, or 19% of all Americans, and is projected to increase to 111.2 million, or 28% of the U.S. population, by 2060 (Pew Research Center & UCLA Latino Policy & Politics Institute)
    • Hispanic is the oldest and most widely used term to describe Spanish-speaking communities. It was created as a “super category” for the 1970 census after Mexican American and other Hispanic organizations advocated for federal data collection
    • In 2019, 61.5% of Latines were of Mexican origin or heritage, while 9.7% were Puerto Rican or of Puerto Rican heritage, with Cubans, Salvadorans, Dominicans, Guatemalans, Colombians, and Hondurans each numbering a million or more
    Ethnic Bias and the Latine Experience
    “She looks Latina.” “He doesn’t look Black.” “They sound Hispanic.” “She doesn’t sound Asian.” “I think they’re mixed.” Conversations all around us bear witness to the inclination to classify people into groups. This categorization of people is built into the fabric of American life, a fabric not originally intended to cover everyone. Inherent advantages and dominance historically favored white male landowners (with the exception of Jewish or Catholic men). Like Indigenous and Black communities, people of Hispanic or Latine descent continue to navigate a system not created for them. (In the next section, we explain why we prefer to use the term Latine as a gender-neutral or nonbinary alternative to Latino.) The objective of this article is to enhance counselors’ cultural sensitivities when providing services to Latine communities. We will discuss the unique discrimination challenges faced by Latines and provide tips for counselor effectiveness. A culturally responsive discussion about the mental health effects of ethnic bias on the Latine experience begins with a definition of key terms. The American Psychological Association’s (APA) Dictionary of Psychology defines ethnic as “denoting or referring to a group of people having a shared social, cultural, linguistic, and usually racial background,” and it can sometimes include the religious background of a group of people. The U.S. census has only two ethnic categories: Hispanic (Latino) and non-Hispanic (non-Latino). Ethnic bias is discrimination against individuals based on their ethnic group, often resulting in inequities.
    Nomenclature is problematic and ever evolving in the U.S. system of categorizing people into racial and ethnic groups. Every racialized group in the United States has gone through numerous label adjustments from within and outside the group. For example, First Nation people have been called Indian, American Indian, Native American, and Indigenous American. People of African descent have been called Colored, Negro, Black, Afro-American, and African American. Similarly, the word choices for the collective Hispanic description have also evolved over the years: Hispanic, Latino/a, Latinx and Latine. These are pan-ethnic terms representing cultural origins — regardless of race — for people with an ancestral heritage from Latin American countries and territories, who according to the Pew Research Center, prefer to be identified by their ancestral land (e.g., Mexican, Cuban, Ecuadorian) rather than by a collective pan-ethnic label. The history, and the debate, of nomenclature for this collective group set the stage for understanding the ethnicization of a large and diverse population.
    Hispanic is the oldest and most widely used term to describe Spanish-speaking communities. According to the Pew Research Center, the term Hispanic was first used after Mexican American and other Hispanic organizations advocated for federal data collection about U.S. residents of Mexico, Cuba, Puerto Rico, Central and South America, and other Spanish-speaking origins. The U.S. Census Bureau responded by creating the “super category” of Hispanic on the 1970 census. The term Latino/a gained popularity in the 1990s to represent communities of people who descend from or live in Latin American regions, regardless of their language of origin (excluding people from Spain). This allowed for gender separation, with Latina representing the female gender and Latino representing the male gender or combined male-female groups. The Pew Research Center noted that Latino first appeared on the U.S. census in 2000 alongside Hispanic, and the two terms are now used interchangeably. While the two terms often overlap, there are exceptions. People from Brazil, French Guiana, Surinam and Guyana, for example, are Latino because these countries are in Latin America, but they are not considered Hispanic because they’re not primarily Spanish-speaking. These regions were colonized by the French, Portuguese, and Italians, so their languages derive from other ancient Latin-based languages instead of Spanish.
    Latinx has been used as a more progressive, gender-neutral or nonbinary alternative to Latino. Latinx emerged as the preferred term for people who saw gender inclusivity and intersectionality represented through use of the letter “x.” Others, however, note that “x” is a letter forced into languages during colonial conquests, so they reject the imposing use of this colonizing letter. Interestingly, for the population it is intended to identify, only 23% of U.S. adults who self-identify as Hispanic or Latino had heard of the term Latinx, according to a Pew survey of U.S. Hispanic adults conducted in December 2019. And only 3% said they use Latinx to describe themselves. In this article, we use the term Latine, the newest word used by this population. Latine has the letter “e” to represent gender neutrality. We like this term because it comes from within the population rather than being assigned by others, and it is void of the controversial “x” introduced by colonists. Next, we look at the meaning and impact of ethnicization and ethnic bias toward Latines in the United States. We explore the ways that bias and discrimination affect the nation’s largest group of minoritized people, and we recommend actionable solutions to enhance counselors’ cultural sensitivities when providing services to Latine communities.
    Latines come from more than 20 Latin American countries and several territories, including the U.S. territory of Puerto Rico. There are seven countries in Central America: El Salvador, Costa Rica, Belize, Guatemala, Honduras, Nicaragua, and Panama. The official language in six of these countries is Spanish, with English being the official language of much of the Caribbean coast including Belize (in addition to Indigenous languages spoken throughout the region). South America has three major territories and 12 countries: Brazil, Argentina, Paraguay, Uruguay, Chile, Bolivia, Peru, Ecuador, Colombia, Venezuela, Guyana and Suriname. The official language in most of these countries is Spanish, followed by Portuguese, although it is estimated that there are over a thousand different tribal languages and dialects spoken in many of these countries.
    The Pew Research Center reported that the U.S. Latine population reached 62.1 million in 2020, accounting for 19% of all Americans. In 2019, 61.5% of all Latines indicated they were of Mexican origin, either born in Mexico or with ancestral roots in Mexico. The next largest group, comprising 9.7% of the U.S. Latine population, are either Puerto Rican born or of Puerto Rican heritage. Cubans, Salvadorans, Dominicans, Guatemalans, Colombians and Hondurans each had a population of a million or more in 2019. Although there are notable similarities, the Latine population is not an ethnic monolith. Latine cultures are diverse, with different foods, folklore, Spanish dialects, religious nuances, rituals and cultural celebrations. Despite the varying cultural experiences, many of the issues facing Latine communities remain the same.
    Copyright Counseling Today, October 2022, American Counseling Association

  • Oshi Health: Virtual Care Comes to Digestive Health (An Update)

    (We originally profiled Oshi Health in October of 2021; please see our original Scouting Report, “Scouting Report-Oshi Health: Virtual Care Comes to Digestive Health,” dated 10/22/21.)
    The Driver: Oshi Health, a digital gastrointestinal care startup, recently raised $30M in Series B funding in a round led by Koch Disruptive Technologies with participation from existing investors including Flare Capital Partners, Bessemer Venture Partners, Frist Cressey Ventures, CVS Health Ventures, and Takeda Digital Ventures. In addition to the institutional investors, individual investors who joined the Series A round included Jonathan Bush, founder and CEO of Zus Health (and cofounder of Athenahealth), and Russell Glass, CEO of Headspace Health. According to the company, the funding will be used to accelerate the next phase of Oshi’s growth, to scale its clinical team nationwide, and to forge relationships with health plans, employers, channel partners, and provider groups.
    Key Takeaways:
    • Approximately 15% of U.S. households have a person in their household who uses diet to manage a health condition
    • A study sponsored by the company and presented at the Institute for Healthcare Improvement in January 2023 found Oshi’s program resulted in all-cause medical cost savings of approximately $11K per patient in just six months
    • According to Vivante Health, $136B per year is spent on GI conditions in the U.S., more than heart disease ($113B), trauma ($103B), or mental health ($99B)
    • Direct costs for IBS alone are as high as $10 billion, and indirect costs can total almost $20 billion
    The Story: CEO and founder Sam Holliday came to the problem by watching his mother’s and sister’s ordeals in managing their own IBS care. Holliday’s experience watching his family deal with the difficulties of managing IBS without clinical assistance prompted his interest in companies like Virta Health that use food as medicine for treating diabetes. Holliday saw the contrast between his family’s experience and Virta’s approach of supporting people in reversing the impact of diabetes on their daily lives. Holliday was fascinated with Virta’s holistic approach to care, which prioritized the user’s experience while saving cost and providing easy access to virtual care that leveraged technology and was data driven. The possibilities that Virta’s virtual model presented for GI spurred the idea of creating Oshi Health. Oshi Health’s platform supports patients by granting them access to GI specialists, prescriptions, and lab work from the comfort of their homes. Oshi Health provides comprehensive and patient-focused care to patients with GI conditions such as Irritable Bowel Syndrome (IBS), Crohn’s disease, Inflammatory Bowel Disease (IBD), and Gastroesophageal Reflux Disease (GERD). Oshi Health works by connecting patients with an integrated team of GI specialists, including board-certified gastroenterologists and registered dieticians, whose clinicians assess symptoms and order lab tests and diagnostics if needed. In addition to the licensed GI doctors and dieticians, patients have the option of speaking with GI-specialized mental health clinicians and nurse practitioners as well. This allows patients to form a customized plan, as many of these conditions often involve a mental health component in addition to a physical one. The customized plan attempts to capture the patient’s needs regarding anxiety, nutrition, or stress. The Oshi Health service extends beyond testing and planning by providing stand-by health coaches and care teams to support patients and help them stay on track. Oshi Health’s platform also offers an app designed to help patients take action and stay organized through their GI care journey. With the app, patients can record their symptoms, quality of life measurements, and other factors known to impact their diet, sleep, or exercise. The app also features useful educational materials and recipes to help patients learn more about their condition. The company states its products and services are currently available to over 20 million people as a preferred in-network virtual gastroenterology clinic for national and regional insurers, as well as their employer customers. In April of this year, Oshi and Aetna announced a partnership that provides Aetna commercial members with in-network access to Oshi’s integrated multidisciplinary care teams. In addition, in March of 2022, Firefly Health, a virtual-first healthcare company, named Oshi as its preferred partner for digestive care. In January 2023, the company announced clinical trial results of a company-sponsored study at the Institute for Healthcare Improvement (IHI). The study demonstrated that virtual multidisciplinary care for gastrointestinal (GI) disorders from Oshi Health resulted in significantly higher levels of patient engagement, satisfaction, and symptom control, resulting in all-cause medical cost savings of approximately $11K per patient in just six months.
    The Differentiators: About 1 in 4 people suffer from diagnosed GI conditions, and many more suffer from chronic undiagnosed symptoms. “GI conditions are really stigmatized,” says Holliday. “Integrated GI care is a missing piece of the healthcare infrastructure. There’s a huge group of people who don’t have anywhere to go to access care that’s proven to work.” By using this virtual-first model and increasing access to care at a lower cost for patients, Oshi Health is attempting to revolutionize GI care. Through its approach Oshi is attempting to tap into the large market for food-related health conditions. For example, according to a report entitled “Let Food Be Thy Medicine: Americans Use Diet to Manage Chronic Ailments,” approximately 15% of U.S. households have a person in their household who uses diet to manage a health condition. Oshi calculates that these conditions drive approximately $135 billion in annual healthcare costs and have a collective impact greater than diabetes, heart disease, and mental health combined. Oshi Health plans to use a community of irritable bowel disease (IBD) patients for disease research, personalized insights, and a new digital therapeutic. Oshi Health is one of the first companies to address GI disorders in a way that is easily accessible for patients through a virtual approach, and it is the only virtual platform exclusively for GI patients. Being virtual helps patients receive treatment from the convenience of their home, reduces stigma, and saves them the time and hassle of commuting for GI treatment, which can involve extensive and costly testing. Patients are in control of their care and have access to a team of medical professionals at their fingertips.
    The Big Picture: Oshi Health’s commitment to making GI care accessible, convenient, and affordable is likely to lower the costs of treatments such as cognitive-behavioral therapy, colonoscopies, X-rays, sonograms, and more in healthcare centers. Oshi Health helps avoid preventable and expensive ER visits as well as unnecessary colonoscopies and endoscopies. For example, according to a study cited by the company, unmanaged digestive symptoms are the #1 cause of emergency department treat-and-release visits. By incorporating access to psychologists in the membership plan, rather than requiring it to be paid out of pocket, Oshi allows patients to access the services of mental health professionals quickly and easily. Patients can schedule appointments within 3 days, and there is always support between visits with care plan implementation. By incorporating both physical and mental health, Oshi Health is attempting to treat not just the symptoms but the causes as well, helping to lower costs versus in-person care, reduce the incidence of GI conditions, and increase the quality of care patients receive. As noted by the company, “Oshi Health is able to intercept and change the trajectory of unmanaged symptom escalations,” helping to drive improvements in outcomes and cost savings. Billions of dollars are saved annually on avoidable treatments and expenses because these patients are receiving the treatment they need rather than physicians ordering expensive tests without knowing the root cause of their problems. Due to the comprehensiveness of the virtual-first care model, physicians, dieticians, health coaches, care coordinators, and psychologists are all involved in the patient’s journey. Under traditional models, patients with GI issues see just a gastroenterologist. By contrast, in Oshi’s model, a holistic team is involved every step of the way, creating a personalized care plan customized for each patient’s lifestyle.
    Oshi Health scores $30M to scale gastrointestinal care company; Virtual GI care startup Oshi Health takes up $30M backed by CVS, Takeda venture arms
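    As an illustration of the kind of structured record a symptom-tracking app like the one described above might capture, here is a hypothetical sketch of a log entry; the field names are assumptions for illustration, not Oshi Health’s actual schema:

```python
# Hypothetical sketch of a GI symptom-log entry; fields are illustrative
# assumptions, not Oshi Health's actual data model.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SymptomLogEntry:
    recorded_at: datetime
    symptoms: list[str]        # e.g., ["bloating", "abdominal pain"]
    severity: int              # patient-reported, 1 (mild) to 10 (severe)
    quality_of_life: int       # patient-reported score, 1 to 10
    lifestyle_factors: dict = field(default_factory=dict)  # diet, sleep, exercise

entry = SymptomLogEntry(
    recorded_at=datetime.now(),
    symptoms=["bloating"],
    severity=4,
    quality_of_life=7,
    lifestyle_factors={"sleep_hours": 6.5, "meals": ["oatmeal", "salad"]},
)
```

    Structured entries like this are what let a care team trend symptoms against diet, sleep, and exercise between visits rather than relying on recall at appointments.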

  • What Sports Analytics Can Teach Us About Integrating AI Into Care

    Our Take: Last week our founder, Jeff Englander, had the pleasure of having dinner with Dr. David Rhew, Global Chief Medical Officer and V.P. of Healthcare for Microsoft. After a broad-ranging discussion about the application of A.I., A/R, V/R, and Cloud to healthcare, they came to realize they were both big sports fans: David of his hometown Detroit teams and Jeff of his hometown Boston teams. Toward the end of dinner, Jeff referenced an article he had written in May of 2018 on “What Steph & LeBron Can Teach Business About Analytics” and how sports demonstrate many practical ways to gain acceptance and integrate analytics into an organization. Based on that discussion, we thought it would be timely to reprint it here:
    As I sat watching the NBA conference finals last night, I began thinking about what I had learned from the MIT Sloan Sports Analytics conferences I went to over the last several years. I thought about how successful sports had been, and the NBA in particular, in applying sports analytics, and what lessons businesses could learn to help them apply analytics to their businesses. Five basic skills stood out that sports franchises had been able to apply to their organizations that were readily transferrable to the business world: intensive focus; deep integration; limited analytic “burden”; trust in the process; and communication and alignment.
    1) Intensive focus - when teams deploy sports analytics they bring incredible focus to the task. One player noted that they do not focus on how to stop LeBron, not even on how to stop LeBron from going left off the dribble, but instead on how to stop LeBron from going left off the dribble coming off the pick and roll. This degree of pinpoint analysis and application of the data has contributed to the success and continued refinement of sports analytics on the court, field, rink, etc.
    2) Deep integration - each of the analytics groups I spoke with attempted to informally integrate their interactions into the daily routines of players and coaches through natural interactions (the Warriors analytics guy used to rebound for Steph Curry at practice). Analytics groups worked to demystify what they were doing and make themselves approachable. The former St. Louis, now L.A., Rams analytics group jokingly dubbed its office the “nerds nest.” By integrating themselves into the players’ (and coaches’) worlds they were able to break down stereotypes and barriers to acceptance of analytics.
    3) Limited analytic “burden” - teams’ data science groups noted that given the amount of data they generate, it’s important to limit the number of insights they present at any one time. One group made it a rule to discuss or review no more than 3 analytical insights per week with players or coaches. This made their work more accessible and more tangible to players and coaches and helped them quantify the value to the front office.
    4) Trust in the process - best illustrated by a player who told the story of working with an analytics group and coaches to design a game plan against an elite offensive player, which he followed and executed to a tee. But that night the opposing player couldn’t be stopped, and in the player’s words, “he dropped 30 on me.” The other panelists pointed out that you can’t go away from your system based on short-term results. As one coach noted, “don’t fail the plan, let the plan fail you… Have faith in the process.”
    5) Communication and alignment - last but not least, teams stressed the need to be aligned and to communicate that concept clearly all throughout the organization. As Scott Brooks, at the time the coach of the Orlando Magic, noted, “we are all in this together, we have to figure this out together.” Surprisingly, at times communication was paramount even for the most successful and highly compensated athletes. For example, at last year’s conference, Chris Bosh, a 5x All-Star and 2x NBA Champion making $18M a year at the time he was referring to, lamented the grueling Miami Heat practices during their near-record 27-game winning streak in 2013, seemingly despite their success (at the time the 2nd longest winning streak in NBA history). When I asked him what would have made it more bearable, he said communication, just better communication on what they were trying to do.
    Clearly, professional sports have very successfully applied analytics to their craft, and there are a number of lessons that businesses can copy as they seek to gain broader and more effective adoption of analytics throughout the value chain.

  • Viz.ai-Applying AI to Reduce Time to Life Saving Treatments

    The Driver: Viz.ai recently raised $40M in growth capital from CIBC Innovation Banking. The additional funding brings Viz’s total fundraising to approximately $292M. Via has developed a software platform based on artificial intelligence (AI) that is designed to improve communication between care teams handling emergency patients (first applied to stroke patients) by helping improve care coordination and dramatically improved response times. The company will use the funds to help increase expansion and power its expansion, including the possibility of acquisitions. Key Takeaways: While the total cost of strokes in the U.S. was approximately $220 billion, the cost due to under-employment was $38.1 billion, and $30.4 billion from premature mortality (Via.ai & Journal of the Neurological Sciences) The risk of having a first stroke is nearly twice as high for blacks as for whites and blacks have the highest rate of death due to stroke (American Stroke Association) Each one minute [delay in care for stroke victims] translates into 2 million brain cells that die (Viz.ai) Stroke is the number one cause of adult disability in the U.S. and the fifth leading cause of death (American Stroke Association ) The Story: Viz.ai was founded by, Dr. Chris Mansi and David Golan. While working as a neurosurgeon in the U.K. Dr. Mansi observed situations in which a successful surgery was performed yet patients would not survive because of extended time lapses between diagnosis and surgery, which was particularly true with strokes. For example, when doctors believe there has been a stroke, they typically would order x-rays and a series of CT scans, and while the scans themselves typically happened quickly, there often was a sizeable delay before the studies could be read by a competent professional. Once the readings were performed by a radiologist, there was often a further delay in care as clinicians had to inform a local stroke center of the diagnosis and then ensure the patient was transferred to that center for treatment. Dr. Mansi met Golan in graduate school at Stanford while studying for his M.B.A. Golan was suspected of having suffered a stroke prior to entering Stanford and the two classmates lamented the lack of available data for stroke treatment. As noted by the company, “Mansi learned how undertreated large vessel occlusion (LVO) strokes were and wanted to be an agent of change.” Mansi and Golan, collaborated on a plan to apply A.I. to increase the data for stroke treatment and viz.ai was born. As noted in a recent Forbes article, the company’s “software cross-references CT images of a patient’s brain with its database of scans to find early signs of LVO strokes. It then alerts doctors, who see [and communicate] about the images on their phones” and allows those clinicians to communicate with specialists at stroke centers and arrange for patients to be transferred there for care. According to the company, this leads to dramatic decreases in the time it takes for patients to go from diagnosis to procedure, commonly referred to as “door to groin puncture times.” Originally developed for LVO’s Viz has now received FDA approval for 7 A.I. imaging solutions and has extended treatment from LVOs to cerebral aneurysms (February 2022), subdural hemorrhage (July 2022) and most recently hypertrophic cardiomyopathy-HCM (pending). 
The Differentiators: As noted above, Viz.ai’s system automatically scans all images in a hospital system for the noted conditions and alerts clinicians if any are detected (an illustrative sketch of this kind of alert loop appears at the end of this piece). In the case of LVOs, the system then allows doctors to view images of patient scans on their phones, exchange messages, and cut crucial time off diagnosis and treatment. As the company notes, this is particularly important for smaller facilities, which often lack specialists to interpret scans and arrange for transitions in care. For example, according to a study in the American Journal of Neuroradiology looking at stroke treatment at a facility using Viz.ai technology, researchers found “robust improvement” in other stroke response metrics, including door-to-device and door-to-recanalization times, and a 22% overall decline in time to treatment. This is particularly important in the case of strokes, which are the number one cause of disability and the fifth leading cause of death, as time is of the essence for stroke victims, with each minute of delay adding one week of disability. Implications: Applying technology to help reduce delays in diagnosis and treatment is one of the most promising applications of artificial intelligence because of the vast amounts of data these systems can process in short periods of time. While over time many hope, and some fear, that these types of technologies will be able to be “taught” how to diagnose and treat illness, in the near term their greatest use lies in augmenting the skills of clinicians by allowing them to focus their attention on areas most in need of an experienced, nuanced diagnosis. This is particularly true for brain injuries, where literally every second and every minute count. For example, as noted by the company, “every one minute [delay in care for stroke victims] translates into 2 million brain cells that die.” Given that the loss of brain cells results in loss of brain function, disability, or worse, the costs to society can be quite high. According to the company, strokes cost the U.S. healthcare system about $220 billion annually, and each LVO patient treated with a timely thrombectomy costs one-tenth as much, or almost $1 million less, than one who isn’t. It is practical examples of the clinical applications of A.I. such as this, which attack very concrete and tangible problems, that are likely to pave the way for acceptance of more complex applications in the healthcare delivery system. Related reading: Viz.ai partners with Us2.ai to integrate echocardiogram analysis tool; Viz.ai secures Bristol Myers Squibb's backing for hypertrophic cardiomyopathy-spotting AI
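As referenced above, here is a minimal, purely illustrative Python sketch of the kind of triage-and-alert loop the article describes. It is not Viz.ai’s actual software: the detection threshold, field names, and notification function are all assumptions for illustration.

# Illustrative sketch only -- not Viz.ai's product. It mimics the workflow
# described above: score each new CT study with an image model, alert the
# on-call stroke team when the score crosses a threshold, and measure the
# "door to groin puncture" interval the article cites.
from dataclasses import dataclass
from datetime import datetime

LVO_THRESHOLD = 0.85  # hypothetical operating point for the detection model

@dataclass
class CTStudy:
    patient_id: str
    arrival_time: datetime  # the "door" time
    lvo_score: float        # assumed probability output of an image model

def triage(study: CTStudy, notify) -> None:
    # Alert the stroke team the moment the model flags a likely LVO,
    # rather than waiting for the scan to reach a radiologist's queue.
    if study.lvo_score >= LVO_THRESHOLD:
        notify(f"Suspected LVO for patient {study.patient_id}: "
               f"score {study.lvo_score:.2f}, arrived {study.arrival_time:%H:%M}")

def door_to_puncture_minutes(arrival: datetime, puncture: datetime) -> float:
    # The response-time metric discussed above, in minutes.
    return (puncture - arrival).total_seconds() / 60

study = CTStudy("pt-001", datetime(2023, 6, 1, 9, 15), 0.93)
triage(study, notify=print)
print(door_to_puncture_minutes(study.arrival_time, datetime(2023, 6, 1, 10, 2)), "minutes")

The point of such a loop is timing: the alert fires as soon as the model scores the scan, rather than after the scan clears a reading queue, which is where the door-to-puncture minutes are saved.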

  • Explainable AI-Making AI Understandable, Transparent, and Trustworthy-The HSB Blog 3/23/23

Our Take: Explainable AI, or AI whose methodology, algorithms, and training data can be understood by humans, can address challenges surrounding AI implementation including lack of trust, bias, fairness, accountability, and lack of transparency, among others. For example, a common complaint about AI models is that they are biased, and that if the data AI systems are trained on is biased or incomplete, the resulting model will perpetuate and even amplify that bias. By providing transparency into how an AI model was trained and what factors went into producing a particular result, explainable AI can help identify and mitigate bias and fairness issues. In addition, it can also increase accountability by making it easier for users and those impacted by models to trace some of the logic and basis for algorithmic decisions. Finally, by enabling humans to better understand AI models and their development, explainable AI can engender more trust in AI, which could accelerate the adoption of AI technologies by helping to ensure these systems were developed with the highest ethical principles of healthcare in mind. Key Takeaways: AI algorithms continuously adjust the weight of inputs to improve prediction accuracy, but that can make understanding how the model reaches its conclusions difficult. One way to address this problem is to design systems that explain how the algorithms reach their predictions. GPT-4 is rumored to have around 1 trillion parameters compared to the 175 billion parameters in GPT-3, both of which are well in excess of what any human brain could process and break down. During the Pandemic, the University of Michigan Hospital had to deactivate its AI sepsis-alerting model when differences in demographic data for patients affected by the pandemic created discrepancies and a series of false alerts. AI models used to supplement diagnostic practices have been effective in biosignal analyses, and studies indicate physicians trust the results when they understand how the AI came to its conclusions. The Problem: The use of artificial intelligence (AI) in healthcare presents both opportunities and challenges. The complex and opaque nature of many AI algorithms, often referred to as "black boxes", can lead to difficulty in understanding the logical processes behind AI's conclusions. This not only poses a challenge for regulatory compliance and legal liability but also impacts users' ability to ensure the systems were developed ethically, are auditable, and eventually their ability to trust the conclusions and purpose of the model itself. However, the implementation of processes to make AI more transparent and explainable can be costly and time-consuming and could result in a requirement, or at least a preference, that model developers disclose proprietary intellectual property that went into creating the systems. This process is made even more complex in the U.S., where the lack of general legislation regarding the fair use of personal data and information can hamper the use of AI in healthcare, particularly in clinical contexts where physicians must explain how AI works and how it is trained to reach conclusions.
The Backdrop: The concept of explainable AI is to provide the human beings involved with using, auditing, and interpreting models a methodology to systematically analyze what data a model was trained on and what predictive factors are more heavily weighted in the model, as well as provide cursory insights into how the algorithms in particular models arrived at their conclusions/recommendations. This in turn allows the human beings interacting with the model to better comprehend and trust its results, instead of the model being viewed as a so-called “black box” where there is limited insight into such factors. In general, many AI algorithms, such as those that utilize deep learning, are often referred to as “black boxes” because they are complex, can have multiple billions and even trillions of parameters upon which calculations are performed, and consequently can be difficult to dissect and interpret. For example, GPT-4 is rumored to have around 1 trillion parameters compared to the 175 billion parameters in GPT-3, both well in excess of what any human brain could process and break down. Moreover, because these systems are trained by feeding vast datasets into models which are then designed to learn, adapt, and change as they process additional calculations, the products of the algorithms are often different from their original design. As a result of the number of parameters the models are working with and the adaptive nature of the machine learning models, even the engineers and data scientists building these systems cannot fully understand the “thought process” behind an AI’s conclusions or explain how these connections are made. However, as AI is increasingly applied to healthcare in a variety of contexts including medical diagnoses, risk stratification, and anomaly detection, it is important that AI developers have methods to ensure they are operating efficiently, impartially, and lawfully in line with regulatory standards, both at the model development stage and when models are being rolled into use. As noted in an article published in Nature Medicine, starting the AI development cycle with an interpretable system architecture is necessary because inherent explainability is more compatible with the ethics of healthcare itself than methods that retroactively approximate explainability from black-box algorithms. Explainability, although more costly and time-consuming to implement in the development process, ultimately benefits both AI companies and the patients they will eventually serve far more than if they were to forgo it. Adopting a multi-stakeholder view, laypeople will find it difficult to make sense of the litany of data that AI systems are trained on, and that they recite as part of their generated results, especially if the individual interpreting those results lacks knowledge and training in computer science and programming. By creating AI with transparency and explainability, developers also create responsible AI that may eventually give way to larger-scale implementation of AI in a variety of industries, but especially healthcare, where increasing digitization is generating more patient data than ever before along with the need to manage and protect this data in appropriate ways. Creating AI that is explainable ultimately increases end-user trust, improves auditability, and creates additional opportunities for constructive use of AI in healthcare solutions.
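As a deliberately simplified illustration of surfacing which factors a model relied most heavily upon, the sketch below uses permutation importance from scikit-learn on synthetic data. This is one common post-hoc technique, not the method of any vendor or paper cited here, and post-hoc rankings only approximate what an inherently interpretable architecture provides.

# Sketch: rank features by how much held-out accuracy drops when each one
# is shuffled in turn. Synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Large accuracy drops mark the inputs the model leans on most heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")

The readable ranking this produces is the kind of artifact that lets non-technologists see what drives a model without anyone having to reveal proprietary source code.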
Such transparency is one way to reduce the hesitation and risks associated with traditional “black box” AI by making legal and regulatory compliance easier, providing the ability for detailed documentation of operating practices, and allowing organizations to create or preserve their reputations for trust and transparency. While a large number of AI-enabled clinical decision support systems are predominantly used to provide supporting advice for physicians making important diagnostic and triage decisions, a study in Scientific Reports found that this advice actually helped improve physicians’ diagnostic accuracy, with physician-plus-AI performing better than when physicians received human advice concerning the interpretation of patient data (sometimes referred to as the “freestyle chess effect”). AI models used to supplement diagnostic practices have been effective in biosignal analyses, such as that of electrocardiogram results, detecting biosignal irregularities in patients as quickly and accurately as a human clinician can. For example, a study from the International Journal of Cardiology found that physicians are more inclined to trust the generated results when they can understand how the explainable AI came to its conclusion. As noted in the Columbia Law Journal, however, while the most obvious way to make an AI model explainable would be to reveal the source code for the machine learning model, that “will often prove unsatisfactory (because of the way machine learning works and because most people will not be able to understand the code)” and because commercial organizations will not want to reveal their trade secrets. As the article notes, another approach is to “create a second system alongside the original ‘black box’ model, sometimes called a ‘surrogate model.’” However, a surrogate model only closely approximates the original model and does not use the same internal weights (a brief illustrative sketch follows at the end of this passage). As such, given the limited risk tolerance in healthcare, we doubt such a solution would be acceptable. Implications: As evidenced by all the buzz around ChatGPT, with the recent introduction of GPT-4 and its integration into products such as Microsoft’s Copilot, and Google’s integration of Bard with Google Workspace, AI products will increasingly become ubiquitous in all aspects of our lives, including healthcare. As this happens, AI developers and companies will have to work hard to ensure that these products are transparent and do not purposely or inadvertently contain bias. Along those lines, when working in healthcare in particular, AI companies will have to ensure that they implement frameworks for responsible data use which include: 1) ensuring the minimization of bias and discrimination for the benefit of marginalized groups by enforcing non-discrimination and consumer laws in data analysis; 2) providing insight into the factors affecting decision-making algorithms; and 3) requiring organizations to hold themselves accountable to fairness standards and conduct regular internal assessments. In addition, as noted in an article from the Congress of Industrial Organizations, in Europe AI developers could be held to legal requirements surrounding transparency without risking IP concerns under Article 22 of the General Data Protection Regulation, which codifies an individual’s right not to be subject to decisions based solely on automated processing and requires the supervision of a human in order to minimize overreliance on and blind faith in such algorithms.
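For illustration, here is the minimal sketch of the surrogate-model idea referenced above, assuming scikit-learn and synthetic data: fit a shallow, human-readable decision tree to mimic a black-box model’s predictions, then report fidelity, i.e., how often the surrogate agrees with the black box.

# Sketch: a "surrogate model" trained on the black box's outputs, not the
# true labels. Synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=1)

black_box = GradientBoostingClassifier(random_state=1).fit(X, y)
bb_preds = black_box.predict(X)

# The surrogate learns to imitate the black box's decisions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, bb_preds)

fidelity = (surrogate.predict(X) == bb_preds).mean()
print(f"surrogate fidelity: {fidelity:.1%}")  # agreement with the black box
print(export_text(surrogate))                 # the human-readable approximation

The fidelity figure is precisely the weakness identified above: anything short of 100% means the “explanation” describes an approximation of the model rather than the model itself.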
In addition, one of the issues with AI models is data shift, which occurs when machine learning systems underperform or yield false results because of mismatches between the datasets they were trained on and the real-world data they actually collect and process in practice (a minimal monitoring sketch follows at the end of this piece). For example, as challenges to individuals’ health conditions continue to evolve and new issues emerge, it is important that care providers consider population shifts of disease and how various groups are affected differently. During the Pandemic, the University of Michigan Hospital had to deactivate its AI sepsis-alerting model when differences in demographic data gathered from patients affected by the pandemic created discrepancies with the data the AI system had been trained on, leading to a series of false alerts. As noted in an article in the New England Journal of Medicine, the pandemic fundamentally altered the way the AI viewed and understood the relationship between fevers and bacterial sepsis. Episodes like this underscore the need for high-quality, unbiased, and diverse data with which to train models. In addition, given that the regulation of machine learning models and neural networks in healthcare is continuing to evolve, developers must ensure that they continuously monitor and apply new regulations as they evolve, particularly with respect to adaptive AI and informed consent. Developers must also ensure that models are tested both in development and post-production to confirm that there is no model drift. With the use of AI models in health care there are special questions that repeatedly need to be asked and answered. Are AI models properly trained to account for the personal aspects of care delivery and to consider the individual effects of clinical decision-making, ethically balancing the needs of the many against the needs of the few? Is the data collected and processed by AI secure and safe from malicious actors, and is it accurate enough that the potential for harm is properly mitigated, particularly against historically underserved or underrepresented groups? Finally, what does the use of these models and these particular algorithms mean for the doctor-patient relationship and the trust vested in our medical professionals? How will decision-making and care be impacted when using AI that may not be sufficiently explainable and transparent for doctors themselves to understand the thought process behind, and therefore trust, the results that are generated? These questions will undoubtedly persist as long as the growth in AI usage continues, and it is important that AI is adopted responsibly and with the necessary checks and balances to preserve justice and fairness for the patients it will serve. Related reading: Stop explaining black-box machine learning models for high-stakes decisions and use interpretable models instead; Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays; Explainable artificial intelligence to detect atrial fibrillation using electrocardiogram; The Judicial Demand for Explainable Artificial Intelligence; Explainability for artificial intelligence in healthcare: a multidisciplinary perspective; Enhancing trust in artificial intelligence: Audits and explanations can help; Art. 22 GDPR – Automated individual decision-making, including profiling - General Data Protection Regulation (GDPR); The Clinician and Dataset Shift in Artificial Intelligence; Dissecting racial bias in an algorithm used to manage the health of populations
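As referenced above, here is a hedged sketch of a basic dataset-shift check: compare the distribution of a feature at training time against live intake data and raise a flag when they diverge. It uses scipy’s two-sample Kolmogorov-Smirnov test; the feature, numbers, and significance threshold are illustrative, not drawn from the Michigan system.

# Sketch: flag a possible data shift between training and live data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_temps = rng.normal(37.2, 0.6, 5000)  # body temperatures seen in training
live_temps = rng.normal(37.9, 0.8, 500)       # shifted intake data (simulated)

stat, p_value = ks_2samp(training_temps, live_temps)
if p_value < 0.01:
    print(f"Possible data shift (KS={stat:.3f}, p={p_value:.2e}); "
          "review the model before trusting its alerts.")

A check like this, run continuously after deployment, is one concrete way to test for model drift in post-production, as the passage above recommends.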

  • Cognosos-Analyzing and Optimizing Asset Visibility and Management

The Driver: Cognosos recently raised $25M in a venture round led by Riverwood Capital. As part of the funding, Joe De Pinho and Eric Ma from Riverwood will join Cognosos’ Board. Cognosos has developed a cloud-based platform of real-time location services (RTLS) and process optimization software. The fundraising round brings Cognosos’ total funding to $38.1M. The proceeds will be used to allow Cognosos to double its staff from the current 50 to 100 as well as continue to expand in healthcare and automotive manufacturing, among other industries. Key Takeaways: Cognosos grew revenue by over 226% in 2022 according to the Atlanta Business Journal and expects revenue to double again year-over-year in the coming year (Cognosos) Between 2020 and 2022 the average price of healthcare services increased 2.4%, while the Producer Price Index (PPI) increased 14.3%, leading to a further squeeze on healthcare supply chains as per-patient input costs have exploded (University of Arkansas) A single Cognosos gateway can provide support for up to 100,000 square feet and manage location information for up to 10,000 assets In 2021 it is estimated that at least one-third of healthcare providers operated under negative margins, with a combined loss of $54 billion in net income (Kaufman-Hall) The Story: Cognosos was founded in 2014 and is based on technology developed by the founders at Georgia Tech’s Smart Antenna lab for radio astronomy. According to the company’s website, while investigating techniques in radio astronomy to combine signals from multiple dishes, the founders of Cognosos discovered a way to use software-defined radio (SDR) and cloud-based signal processing. This dramatically lowered the cost and power requirements for wireless sensor transmitters. The result, called RadioCloud, was created with the belief that businesses of all sizes should be able to use ubiquitous, low-cost wireless technology to harness the power of the Internet of Things (IoT). The name Cognosos is derived from the Latin verb cognoscere, meaning “to become aware of, to find.” The company states that it has over 100 customers and 4,000 registered users and grew revenue by over 226% in 2022, according to the Atlanta Business Journal. The Differentiators: Cognosos’ real-time location services (RTLS) technology and process optimization software allows companies to track their assets’ movement, improve operational visibility, and safely maximize productivity. The company’s RTLS combines Bluetooth Low Energy technology with AI and its proprietary long-range wireless networking technology to help reduce deployment costs and improve tracking ability (a generic sketch of the underlying radio math appears at the end of this piece). The company has 10 U.S. patents and states that a single gateway can provide support for up to 100,000 square feet and manage location information for up to 10,000 assets. Its technology is easily deployable without having to pull wires or move tiling or other fixtures, minimizing disruption to operations, and it can leverage existing Bluetooth infrastructure, including Bluetooth-enabled fixtures. The company states that this will allow it to bring critical assets online much more rapidly than competitors (ex: weeks vs. months). The company’s major clients are in healthcare, automotive, logistics, and manufacturing. Implications: For years hospitals and the healthcare industry have struggled to effectively track, manage, and optimize their assets.
The industry is rife with stories of nurses, aides, and others squirreling monitors and other devices away in secret hiding places so that they don't have to spend time and unnecessary energy tracking down what they need. Now more than ever, with hospitals facing increased financial pressures, workforce shortages, and employee dissatisfaction, a solution that improves and eases the management of such assets is needed. Given Cognosos’ ability to limit disruption to clinical operations and patient care while helping to reduce costs and improve employee productivity, the company should be well positioned going forward. In addition, since Cognosos’ product is not hardware-based but instead relies on the Cloud and AI software to manage assets, it can provide real-time asset tracking with lower infrastructure costs. Our belief is that solutions like Cognosos’, which leverage both the Cloud and AI to help address healthcare’s back-office and supply chain challenges, will be the first to be widely adopted, hastening the adoption of these technologies within the industry and paving the way for broader adoption of clinical technology down the road.
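While Cognosos’ actual location algorithms are proprietary and not disclosed, the generic physics behind BLE-based RTLS can be sketched: a gateway reads a tag’s received signal strength (RSSI) and converts it to an approximate distance with the standard log-distance path-loss model, and readings from several gateways can then be combined (e.g., by trilateration) to place an asset on a floor plan. The calibration constants below are assumptions for illustration.

# Sketch: rough distance from one gateway's RSSI reading, using the
# log-distance path-loss model: d = 10 ** ((rssi_at_1m - rssi) / (10 * n))
def estimate_distance_m(rssi_dbm: float,
                        rssi_at_1m: float = -59.0,   # assumed per-gateway calibration
                        path_loss_n: float = 2.2) -> float:  # assumed indoor path-loss exponent
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_n))

for rssi in (-59, -70, -81):
    print(f"RSSI {rssi} dBm -> ~{estimate_distance_m(rssi):.1f} m")

Accuracy in real systems comes from everything layered on top of this basic model, filtering, AI-based fingerprinting of the radio environment, and long-range networking, which is where vendors differentiate.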

  • What Clinicians and Administrators Need to Know When Implementing AI-The HSB Blog 3/9/23

Our Take: There are several basic issues and challenges in deploying AI that all clinicians and administrators should be aware of and inquire about to ensure that they are being properly considered when AI is implemented in their organization. Applications of artificial intelligence in healthcare hold great promise to increase both the scale of medical discoveries and the efficiency of healthcare infrastructure. As such, healthcare-related AI research and investment have exploded over the last several years. For example, according to the State of AI Report 2020, academic publications in biology around AI technologies such as deep learning, natural language processing (NLP), and computer vision have grown over 50% a year since 2017. In addition, 99% of healthcare institutions surveyed by CB Insights are either currently deploying (38%) or planning to deploy AI (61%) in the near future. However, as witnessed by recent errors discovered surrounding the application of an AI-based sepsis model, while AI can improve the quality of care, improve access, and reduce costs, models must be implemented correctly or they will be of questionable value or even dangerous. Key Takeaways: According to Forrester's "The Cloud, Data, and AI Imperative for Healthcare" report, the 3 greatest challenges to implementing AI are: 1) integrating insights into existing clinical workflows; 2) consolidating fragmented data; and 3) achieving clinically reliable clean data Researchers working to uncover insights into prescribing patterns for certain antipsychotic medications found that approximately 27% of prescriptions were missing dosages Even after doing work to standardize and label patient data, in at least one broad study almost 10% of items in the data repository didn’t have proper identifiers Academic publications in biology around AI technologies such as deep learning, natural language processing (NLP), and computer vision have grown over 50% a year since 2017 The Problem: While it is commonly accepted that computers can outperform humans in terms of computational speed, in its current state many would argue that artificial intelligence is really “augmented intelligence”, defined by the IEEE as “a subsection of AI machine learning developed to enhance human intelligence rather than operate independently of or outright replace it.” Current AI models are still highly dependent upon the quantity and quality of the data available for them to be trained on, the inherent assumptions underlying the models, and the human biases (intentional and unintentional) of those developing the models, along with a number of other factors. As noted in a recent review of the book “I, Warbot” about computational warfare by King’s College AI lecturer Kenneth Payne, “these gizmos exhibit ‘exploratory creativity’-essentially a brute force calculation of probabilities. That is fundamentally different from ‘transformational creativity’, which entails the ability to consider a problem in a wholly new way and requires playfulness, imagination and a sense of meaning.” As such, those creating AI models for healthcare need to ensure they set the guardrails for their use and audit their models both pre- and post-development to ensure they conform to existing laws and best practices. The Backdrop: When implementing an AI project there are a number of steps and considerations that should be taken into account to ensure its success.
While it is important to identify the best use case and approach for any kind of project, given the cost of the technical talent involved, the level of computational infrastructure typically needed (if done internally), and the potential to influence leadership attitudes towards the use and viability of AI as an organizational tool, it is even more important here. As noted above, one of the most important keys to implementing an AI project is the quantity and quality of the data resources available to the firm. Data should be looked at with respect to both quality (to ensure that it is free of missing, incoherent, unreliable, or incorrect values) and quantity. In terms of data quality, as noted in “Artificial Intelligence Basics: A Non-Technical Introduction”, data can be: 1) noisy (data sets with conflicting data), 2) dirty (data sets with inconsistent and erroneous data), 3) sparse (data sets with missing or no values at all), or 4) inadequate (data sets that contain inadequate or biased data). As noted in “Extracting and Utilizing Electronic Health Data from Epic for Research”, “to provide the cleanest and most robust datasets for statistical analysis, numerous statistical techniques including similarity calculations and fuzzy matching are used to clean, parse, map, and validate the raw EHR data”, EHR data being generally the largest source of healthcare data for AI research. When looking to implement AI it is important to consider and understand the levels of data loss and the ability to correct for it (see the audit sketch below). For example, researchers looking to apply AI to uncover insights into prescribing patterns for second-generation antipsychotic medications (SGAs) found that approximately 27% of the prescriptions in their data set were missing dosages, and even after undertaking a 3-step correction procedure, 1% were still missing dosages. While this may be deemed an acceptable number, it is important to be aware of the data loss in order to properly evaluate whether it is within tolerable limits. In terms of inadequate data, ensuring that data is free of bias is extremely important. While we have all recently been made keenly aware of the impact of racial and ethnic bias on models (ex: facial recognition models trained only on Caucasians), there are a number of other biases for which models should be evaluated. According to “7 Types of Data Bias in Machine Learning” these include: 1) sample bias (not representing the desired population accurately), 2) exclusion bias (the intentional or unintentional exclusion of certain variables from data prior to processing), 3) measurement bias (ex: poorly chosen measurements that create systematic distortions of data, like poorly phrased surveys), 4) recall bias (when similar data is inconsistently labeled), 5) observer bias (when the labelers of data let their personal views influence data classification/annotation), 6) racial bias (when data samples skew in favor of or against certain ethnic or demographic groups), and 7) association bias (when a machine learning model reinforces a bias present in its training data).
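To make the data-quality step concrete, here is the small audit sketch referenced above, written in pandas. The column names, toy values, and the 5% tolerance are invented for illustration; the point is simply to quantify data loss, as with the missing-dosage example, before deciding whether it is within tolerable limits.

# Sketch: measure per-column missingness and flag anything over a tolerance.
import pandas as pd

df = pd.DataFrame({
    "patient_id": ["a", "b", "c", "d"],
    "sga_drug":   ["olanzapine", "risperidone", None, "quetiapine"],
    "dosage_mg":  [10.0, None, None, 300.0],
})

TOLERANCE = 0.05  # assumed policy: no more than 5% missing per column
missing_rates = df.isna().mean()
for col, rate in missing_rates.items():
    if rate > TOLERANCE:
        print(f"WARNING: {col} is {rate:.0%} missing -- correct or document before modeling")

An audit like this, run before any correction procedure, gives the team the numbers it needs to judge whether the remaining data loss is acceptable.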
In addition to data quality, data quantity is just as imperative: to properly train machine learning models, you need a sufficiently large number of observations to create an accurate predictor of the parameters you’re trying to forecast. While the precise number of observations needed will vary based on the complexity of the data you’re using, the complexity of the model you want to build, and the amount of “statistical noise” generated by the data itself, an article in the Journal of Machine Learning Research suggested that at least 100,000 observations are needed to train a regression or classification model. Moreover, it is important to note that numerous data points are not captured or sufficiently documented in healthcare. For example, as noted in the above-referenced article on extracting and utilizing Epic EHR data, based on research at the Cleveland Clinic in 2018, even after doing significant work to standardize and label patient data, “approximately 9% [1,000 out of 32,000 data points per patient] of columns in the data repository” were not using the assigned identifiers. While it is likely that methods have improved since this research was performed, given the size and resources that an institution like the Cleveland Clinic could bring to bear on the problem, it indicates the larger scale of the problem. Once the model has been developed there should be a process in place to ensure that the model is transparent and explainable, by creating a mechanism that allows non-technologists to understand and assess the factors the model used and the parameters it relied most heavily upon in coming to its conclusions. For example, as noted in the State of AI Report 2020, “AI research is less open than you think, only 15% of papers publish their [algorithmic] code” used to weight and create models. In addition, there should be a system of controls, policies, and audits in place that provides feedback on potential errors in the application of the model as well as disparate impact or bias in its conclusions. Implications: As noted in “Artificial Intelligence Basics: A Non-Technical Introduction”, it’s important to have realistic expectations for what can be accomplished by an AI project and how to plan for it. In the book, the author Tom Taulli references Andrew Ng, the former Head of Google Brain, who suggests the following parameters: an AI project should take between 6-12 months to complete, have an industry-specific focus, should notably help the company, doesn’t have to be transformative, and should have high-quality data points. In our opinion, it is particularly important to form collaborative, cross-platform teams of data scientists, physicians, and other front-line clinicians (particularly those closest to patients, like nurses) to get as broad input on the problem as possible. While AI holds great promise, proponents will have to prove themselves by running targeted pilots and should be careful not to overreach at the risk of poisoning the well of opportunity. As so astutely pointed out in “5 Steps for Planning A Healthcare Artificial Intelligence Project”: “artificial intelligence isn’t something that can be passively infused into an organization like a teabag into a cup of hot water. AI must be deployed carefully, piece by piece, in a measured and measurable way.” Data scientists need to ensure that the models they create produce relevant output that provides context and the ability for clinicians to have a meaningful impact upon the results, and not just generate additional alerts that will go unheeded.
For example, as Rob Bart, Chief Medical Information Officer at UPMC, noted in a recent presentation at HIMSS, data should provide “personalized health information, personalized data” and should have “situational awareness in order to turn data into better consumable information for clinical decision making” in healthcare. Along those lines, it is important to take a realistic assessment of “where your organization lies on the maturity curve”: how good is your data, and how deep is your bench of data scientists and clinicians available to work on an AI project and to inventory, clean, and prepare your data? AI talent is highly compensated and in heavy demand. Do you have the resources necessary to build and sustain a team internally, or will you need to hire external consultants? How will you select and manage those consultants? All of these are questions that need to be carefully considered and answered before undertaking the project. In addition, healthcare providers need to consider the special relationship between clinician and patient and the need to preserve trust, transparency, and privacy. While AI holds tremendous allure for healthcare, and the potential to overcome, and in fact make up for, the industry’s underinvestment in information technology relative to other industries, all of this needs to be done with a well-thought-out, coherent, and justified strategy as its foundation. Related Readings: Artificial Intelligence Basics: A Non-Technical Introduction, Tom Taulli (publisher’s site); Artificial Intelligence (AI): Healthcare’s New Nervous System; An Interdisciplinary Approach to Reducing Errors in Extracted Electronic Health Record Data for Research; 5 Steps for Planning a Healthcare Artificial Intelligence Project

  • R-Zero-Making Intelligent Air Disinfection More Economical and Efficient

The Driver: R-Zero recently raised $105M in a Series C financing led by investment firm CDPQ with participation from BMO Financial Group, Qualcomm Ventures, Upfront Ventures, DBL Partners, World Innovation Lab, Mayo Clinic, Bedrock Capital, SOSV, and legendary venture capital investor John Doerr. The Series C financing brings the total amount raised by R-Zero to more than $170M since its founding in 2020. The company will use the funds to scale deployments of its disinfection and risk modeling technology to meet growing demand across public and private sectors, including hospitals, senior care communities, parks and recreation, other government facilities, and college and corporate campuses. Key Takeaways: According to the company, R-Zero’s technology neutralizes 99.9% of airborne and surface microorganisms R-Zero’s UV-C products cost anywhere from $3K to $28K compared with traditional institutional UV-C technology which can cost anywhere from $60K to $125K Using R-Zero's products results in more than 90% fewer greenhouse gas emissions (GHG) and waste compared to HVAC and chemical approaches For every dollar employers spend on health care, they’re spending 61 cents on illness-related absences and reduced productivity (Integrated Benefits Institute) The Story: R-Zero was co-founded by Grant Morgan, Eli Harris, and Ben Boyer. Morgan, who has an engineering background, worked briefly as CTO of GIST, was V.P. of product and engineering at iCracked, and was previously in R&D in medical devices. Harris co-founded EcoFlow (an energy solutions company) and worked in partnerships and business development at drone company DJI, while Boyer had been a managing director at early-growth-stage VC Tenaya Capital. According to the company, the co-founders applied their experience to innovating an outdated legacy industry to make hospital-grade UVC technology accessible to small and medium-sized businesses. Prior to R-Zero these units could cost anywhere from $60K-$125K and often lacked the connected infrastructure and analytics necessary to optimize performance and provide risk analytics for their users (ex: how frequently and heavily rooms are being used and when to use disinfection to help mitigate risk). The Differentiators: As noted above, typical institutional ultraviolet (UV) disinfectant lighting technology can be expensive and has the potential to be harmful (high-powered UVC lights can cause eye injuries if people are exposed to them for long periods of time); however, R-Zero has found ways to mitigate these issues. First, as noted in Forbes, its products run anywhere from $28K for its most expensive device, the Arc, down to $5K for the Beam and $3K for the Vive. Moreover, while the Arc can only be used to disinfect an empty room due to the wavelength of UVC light, the Beam creates a disinfection zone above people in a room, while the Vive can be used to combat harmful microorganisms when people are in a room. In addition, according to the company, R-Zero’s technology neutralizes 99.9% of airborne and surface microorganisms and does so with 90% fewer greenhouse gas emissions and waste compared to HVAC and chemical approaches. As a result, R-Zero can help improve indoor air quality in hospitals and other medical facilities, factories, warehouses, and other workplaces more efficiently and effectively than outdated technologies. Implications: As noted above, R-Zero’s technology will help hospitals and senior care facilities cost-effectively sanitize treatment spaces in ways that can’t necessarily be done with current technology.
Moreover, as medical care increasingly moves to outpatient settings, the ongoing workforce shortage will challenge these facilities to find ways to keep themselves clean and disinfected and avoid disease transmission. For example, the company claims that its customers have been reducing labor costs by 30%-40%, a number which will likely only get higher given the current labor situation. In addition, even in facilities that have the necessary workforce, it is often difficult to optimize staff time to ensure that offices are sanitized and used to maximum capacity. Utilizing devices like the R-Zero Beam or Vive can allow medium-to-small facilities to constantly and efficiently disinfect rooms, making them immediately available for use. Also, by removing the burdensome task of having already overworked clinical or janitorial staff spend time sanitizing rooms, R-Zero’s technology can help improve employee productivity and satisfaction at a time when both are stretched thin. Related reading: This Startup Wants To Bring Disinfecting UV Light Into “Every Physical Space”; R-Zero Raises $105 Million Series C to Improve the Indoor Air We Breathe; This startup built an ultraviolet device that can disinfect a restaurant in minutes
