
  • Femtech: Women Overcoming Challenges of Accessibility & Design to Control Their Health-The HSB Blog

    Our Take: While digital tools geared toward addressing the needs of female patients, often referred to as “femtech,” have gained in popularity, significant challenges remain as women seek to regain at least partial autonomy over the care of their bodies. These challenges include limited accessibility, biased design, lack of privacy, lack of regulation, and a narrow evidence base. This is particularly true as women increasingly seek more and better information, control, and gender-specific care over their health and bodies, something that has become of utmost concern in the wake of the Dobbs decision. While investment in women’s health has increased dramatically over the last 10 years, “it accounted for just 3% of health technology funding for women-led companies in 2020 – and only 0.64% was allocated to businesses led by women of color,” according to the World Intellectual Property Organization (WIPO). Moreover, femtech sales accounted for approximately $821M in global revenues in 2019, which pales in comparison to the almost $500B currently spent worldwide on women’s healthcare and the $1T projected by 2026 by an article in The Lancet Digital Health.
Key Takeaways:

    • The cost of menopause due to lost days of work is nearly $2B, plus almost $25B in annual medical costs (Mayo Clinic)
    • Femtech sales hit $821M globally in 2019, which pales in comparison to the almost $500B currently spent worldwide on women’s healthcare and the $1T projected by 2026 (World Intellectual Property Organization & The Lancet Digital Health)
    • While femtech represents a promising opportunity to address existing gaps in women’s health care, the intended recipients of femtech innovations largely appear to be healthy, affluent, white, cis women (JMIR)
    • 42% of women say they have never discussed menopause with their healthcare provider, yet hormonal changes due to menopause can raise the risk of heart disease, lead to losses in bone mass, and result in concentration problems, among other things (AARP)

The Problem: Despite the growing femtech sector, there are still gaps in representation and inclusivity. Additionally, not all femtech solutions undergo rigorous scientific validation or regulatory scrutiny, which can result in varying levels of accuracy and reliability in the products and services offered. As a result, some apps and devices may lack clinical evidence to support their claims, potentially leading to misinformation or ineffective outcomes. Integrating femtech solutions within the existing healthcare system and collaborating with healthcare providers can also be challenging: limited interoperability and a lack of standardized protocols can hinder the seamless incorporation of femtech into routine healthcare practices. There is also a risk that overdiagnosis and unnecessary medicalization with certain femtech solutions can have unintended results. For example, fertility tracking apps that provide inaccurate or misleading information about a woman's fertility status can lead to undue stress or unnecessary medical procedures, including additional ultrasounds or hormonal testing.
The Backdrop: The rapid advancement of technology, particularly in the fields of digital health, wearable devices, and mobile applications, has created new opportunities for addressing women's health needs. In addition, the global women's empowerment movement has played a significant role in bringing attention to women's health issues and demanding better solutions. Advocacy efforts have highlighted the need for improved healthcare access, research, and awareness of women's unique health concerns. However, even though innovative solutions in the realm of women's health have grown in number and breadth, these technologies are often not targeted at meeting the needs of the underrepresented. For example, as noted in “A Framework for Femtech: Guiding Principles for Developing Digital Reproductive Health Tools in the United States”: “while femtech represents a promising opportunity to address existing gaps in women’s health care, the intended recipients of femtech innovations largely appear to be healthy, affluent, white, cis women. In the current model, an opportunity is missed to engage populations who have been historically underserved and bear the largest burden of poor pregnancy and perinatal outcomes.” Research and medical advancements have predominantly focused on male-centric models, leaving women's health needs understudied or misunderstood. In the words of one observer, “Doctors and clinicians were trained to look at women as if they were just small men.” Femtech aims to bridge this gap and provide tailored solutions specifically designed for women. However, the regulatory environment is uncertain and clinical evidence is often lacking, leaving some treatments less robust than they should be.
For example, in “Fertile Ground: Rethinking Regulatory Standards for Femtech”, the authors state “the FDA classifies apps that dispense fertility and pregnancy information as low-risk, meaning that they do not require agency approval, even if some women use them for contraceptive purposes.” Not only could this leave users vulnerable to serious health issues such as unwanted pregnancies or needless infertility treatments, but it could also potentially lead to false readings causing additional emotional distress and unnecessary testing and procedures.

Implications: Femtech has the potential to empower women by providing them with easy access to convenient and accurate information related to conditions impacting only women. These tools can help promote education and awareness about various aspects of women's health, such as menstrual health, fertility, pregnancy, menopause, sexual health, and overall well-being, many of which doctors may be unaware of or under-educated about. For example, according to AARP, 42% of women say they have never discussed menopause with their healthcare provider, yet hormonal changes due to menopause can raise the risk of heart disease, lead to losses in bone mass, and result in concentration problems, among other things. Moreover, conditions like menopause have a significant productivity impact on society, since women between the ages of 45-65 are often at the height of their careers. One study from Mayo Clinic Women’s Health indicated that the cost to society of menopause is nearly $2B in lost days of work and almost $25B in annual medical costs, and that does not take into account the additional costs of reduced hours at work, loss of employment, early retirement, or the impact of changing jobs. Sadly, it appears that the U.S. is well behind the rest of the world in terms of recognition and coverage of the diagnosis and treatment of conditions specific to women.
As noted by Chris Keynon, “the conversation in the United Kingdom is about 2 years more aware in terms of sophistication and awareness [of women’s health issues] than in the U.S.” Improving this and broadening the widespread use of femtech solutions would generate valuable data insights that can be aggregated and anonymized to help destigmatize women’s health, improve practices, and advance scientific understanding of women's health. While femtech has the potential to positively impact women's health, it's important to address challenges such as affordability, inclusivity, data privacy, and regulatory considerations to ensure that these technologies are ethical, equitable, and effectively meet the diverse needs of women, particularly the underserved. As noted in “A Framework for Femtech: Guiding Principles for Developing Digital Reproductive Health Tools in the United States”, “in the current model, an opportunity is missed to engage populations who have been historically underserved and bear the largest burden of poor pregnancy and perinatal outcomes.” To combat this, the authors recommend three principles: 1) creating interdisciplinary, stakeholder-inclusive teams: review content and functionality iteratively with members of key stakeholder groups (patients, medical experts, community leaders); 2) taking a person-centered approach: designing features and content that incorporate clinical best practices while focusing on users' informational needs and personal values; and 3) advancing reproductive equity: using advisory panels that include content and lived-experience experts with diverse backgrounds and perspectives. By incorporating principles such as these, femtech solutions can enable personalized and patient-centric healthcare, allowing women to actively manage their health, monitor their health indicators, receive personalized insights, and make informed decisions about their healthcare. Related Reading: Can Femtech Transform Women’s Healthcare?
Menopause symptoms cost billions in medical expenses and lost days of work, study suggests
A Framework for Femtech: Guiding Principles for Developing Digital Reproductive Health Tools in the United States
Femtech: Digital Help for Women’s Health Care Across the Life Span

  • Strive Health: At-Home & Virtual Value-Based Kidney Care

    The Driver: Strive Health, a provider of technology-enabled value-based at-home and virtual kidney care, recently raised $166M in Series C funding in a round led by NEA that included participation from CVS Health Ventures, CapitalG, Echo Health Ventures, Town Hall Ventures, Ascension Ventures, and Redpoint. This brings the company’s total funding to $386M since its founding. According to the company, the funding will be used to expand partnerships with Medicare Advantage and commercial payers, as well as to expand into new and existing markets.

Key Takeaways:

    • More than 1 in 7 adults in the U.S. (15% of the adult population) have kidney disease, and approximately 90% of those with kidney disease don't know they have it (National Kidney Foundation)
    • Strive is currently responsible for $2.5B of medical spending and has realized a 20% reduction in the total cost of kidney care and a 42% reduction in hospitalizations (Strive Health)
    • Approximately 34M people have undiagnosed early-stage (stage 1-3) kidney disease, accounting for over 50% of the cost of kidney disease (U.S. Renal Data System)
    • McKinsey estimates that between 15-25% of dialysis spending could be shifted to the home, accounting for about $5B in savings

The Story: Strive Health was co-founded by CEO Chris Riopelle and Bob Badal, both formerly of DaVita. As noted by the Denver Business Journal, Riopelle and Badal realized that only a small percentage of those with chronic kidney disease, about 10%, are aware of it and have been diagnosed. “[In fact] about 80% of patients start on dialysis by crashing into it: getting so sick that they end up in the hospital, and learning there that their kidneys have shut down and they need to start treatment.” Riopelle and Badal realized there had to be a better way and understood that finding a way to diagnose and intervene with these patients earlier in the process could improve the experience and the quality of care.
As noted on the company’s website, Riopelle wants to help those who deserve a better patient journey and is deeply committed to fundamentally changing how kidney care works for the almost 40M patients in need in America. Currently, “kidney care focuses almost entirely on ESKD and in-center dialysis, missing the primary drivers of high cost and poor outcomes. Strive is [attempting to change] the kidney care paradigm to identify patients earlier, prioritize the right care at the right time and drive better outcomes – all while lowering costs.” The Differentiators: Strive’s care model looks to combine technology such as artificial intelligence to empower caregivers to, in their words, “methodically reinvent kidney care.” According to the company, Strive’s Care Multiplier uses machine learning models trained on over 100 million patient records to calculate risk scores, end-stage kidney disease crash predictions, admission and readmission predictions, as well as disease progression predictions. In addition, the company states that its artificial intelligence algorithms can identify patients whose kidney disease is undiagnosed, thereby allowing earlier interventions. Strive then uses this data to “form an integrated care delivery system that supports the entire patient journey from chronic kidney disease (CKD) to end-stage kidney disease (ESKD).” As highlighted by MedCity News, “The company provides at-home and virtual support for chronic kidney disease, end-stage kidney disease, dialysis, and kidney transplant and connects the patients with a care team, nurse practitioner, registered nurse, case manager, and care coordinator.” Strive states that it is currently responsible for $2.5B of medical spending and has realized a 20% reduction in the total cost of kidney care, a 42% reduction in hospitalizations, and a 94% overall patient satisfaction rate.
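The kind of risk scoring described above can be illustrated with a toy logistic model. To be clear, this is a minimal sketch under stated assumptions: the coefficients and features (age, eGFR, log albumin-to-creatinine ratio) are hand-picked and hypothetical, since Strive's actual Care Multiplier models and training data are proprietary and not public.

```python
from math import exp

# Hypothetical, hand-set coefficients for illustration only. A production model
# would be learned from historical patient records, not written by hand.
COEFFS = {"intercept": -4.0, "age": 0.03, "egfr": -0.05, "acr_log": 0.8}

def ckd_risk_score(age: float, egfr: float, acr_log: float) -> float:
    """Toy logistic model mapping patient features to a 0-1 risk score."""
    z = (COEFFS["intercept"]
         + COEFFS["age"] * age        # risk rises with age
         + COEFFS["egfr"] * egfr      # risk falls as kidney function (eGFR) rises
         + COEFFS["acr_log"] * acr_log)  # risk rises with albuminuria
    return 1.0 / (1.0 + exp(-z))

def flag_for_outreach(patients, threshold=0.3):
    """Return IDs of patients whose score exceeds a chosen outreach threshold."""
    return [p["id"] for p in patients
            if ckd_risk_score(p["age"], p["egfr"], p["acr_log"]) >= threshold]
```

A real system would layer many such models (crash prediction, readmission, progression) over a much richer feature set, but the workflow is the same: score every patient, then route the highest-risk ones to earlier clinical intervention.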
The company serves 80,000 CKD and ESRD patients in 30 states and has established partnerships with 600 nephrology providers across 10 states. The Big Picture: According to the National Kidney Foundation, approximately 90% of those with kidney disease don’t know they have it, and Medicare costs for all people with all stages of kidney disease were $130 billion. For patients in need of a kidney transplant, this amounts to over $80K per person per year in spending, much of which could have been prevented with early detection and treatment. Moreover, over 50% of the cost of untreated kidney disease is for people with early-stage disease (stage 1-3), where companies like Strive Health and its competitors (Monogram, Somatus) could make a meaningful impact on the cost of care and patient quality of life by helping to move kidney care out of facilities and improving its effectiveness. This would also have a significant financial impact, as McKinsey estimates that between 15-25% of dialysis spending could be shifted to the home, amounting to about $5B in savings. Thus Strive and its competitors could significantly reduce the total cost of care. This trend is also propelled by regulatory moves such as the 21st Century Cures Act, which now allows dialysis patients to enroll in Medicare Advantage plans. By combining the power of improved data and analytics with personalized care, companies like Strive may be able to change the progression of this very costly chronic disease. Related Reading: Strive Health Rakes In $166M for Value-based Kidney Care
Strive Health grabs $166M to provide end-to-end kidney care

  • Generative AI as Enabler of Culturally Competent Care-The HSB Blog 5/26/23

    Our Take: Differences in language and culture, as well as inaccurate descriptions, can cause patients to refuse help from clinicians of other backgrounds (for example, Black patients with white doctors, or Chinese patients with English-speaking doctors); technology like generative AI may help by employing appropriate language, which would help narrow the gap. By using generative AI to process natural language, developers can build language models and chatbots that understand and respond to patients of all backgrounds in a culturally appropriate way: generative AI can be trained on data sets containing cultural backgrounds, dialects, and linguistic nuances, allowing it to understand a variety of accents and dialects. This would enable individuals from different cultural backgrounds to interact effectively with the technology. Giving patients the ability to engage with technology in a way that respects their cultural traditions and language preferences promotes a sense of dignity, respect, and empowerment. By working towards cultural inclusion, technology has the potential not only to reduce differences, but to promote understanding, empathy, and harmony in our increasingly interconnected world. Key Takeaways: African Americans and Latinos experience 30% to 40% poorer health outcomes than White Americans. Research shows poor care for the underserved stems from fear, lack of access to quality healthcare, distrust of doctors, and symptoms and pains that are often dismissed. One study found that Black patients were significantly less likely than white patients to receive analgesics for extremity fractures in the emergency room (57% vs.
74%), despite having similar self-reports of pain. Each additional standard deviation improvement in the cultural-competency scores hospitals received translated into an increase of 0.9% in nurse communication and 1.3% in staff responsiveness on patient satisfaction surveys. The Problem: While culturally appropriate language is important to promote inclusiveness and reduce disparities, there are challenges and potential problems with its implementation. Because of the cultural diversity of the United States, cultural norms and practices vary greatly between communities and regions, and no single approach ensures that everyone living in the United States will feel culturally respected. Because of differences in tradition, religion, society, language, and socialization, individuals within various communities may not feel respected or secure. With literally thousands of languages and dialects in the world, each having its own unique cultural background, dealing with linguistic diversity and ensuring culturally appropriate language can become quite a complex task. In health care, this has been shown to lead to neglect or underservice in certain communities through intentional and unintentional slights. As society continues to change, social inclusiveness cannot be overlooked as an integral component of care. Inclusiveness is not only an important but a necessary element of care, so that patients feel respected and valued in a system that recognizes the cultural practices and identities of different communities. Research has demonstrated that this leads to improved clinician-patient interactions, compliance, and data sharing by patients.
More recently, the health care system has come to recognize the impact that the unique cultural needs of often-overlooked groups, such as people with disabilities and lesbian, gay, bisexual, and transgender (LGBT) people, have on the quality and effectiveness of care as well (while not racial or ethnic groups, these provide further evidence of the need for culturally competent care). The Backdrop: According to “Understanding Cultural Differences in the U.S.” from the USAHello website, cultures can differ in 18 different ways, including communication, physical contact (shaking hands, personal space), manners, political correctness, family, treatment of women and girls (men and women going to school/work together, sharing tasks), elders (multigenerational homes), marriage (traditions, views on same-sex marriage), health, education, work, time, money, tips, religion, holidays, names, and language. Recognizing and understanding cultural differences is important to create trust and security with patients. Generative AI can help bridge language and cultural barriers that often prevent non-English speakers from accessing essential health services. According to Marcin Frąckiewicz, “generative AI can serve as a virtual health assistant, providing accurate and personalized health advice to users. By making health information more accessible, individuals can make better-informed decisions about their health and well-being.” Similarly, speech synthesis can provide text-to-speech capabilities in a variety of languages, enabling technology to communicate with users in their preferred language. This can often be done more rapidly and more efficiently than having to locate an appropriate translator. Generative AI can also build multicultural expression into products and services.
These can include a variety of visual depictions, avatars, and characters of different racial, ethnic, and cultural backgrounds. Inclusion can be promoted through such technologies, allowing users to feel represented and valued. In addition, generative AI can help identify and correct potential instances of cultural insensitivity or technological bias. Moreover, since generative AI development is iterative, it allows for continuous improvement, ensuring that culturally appropriate language and experience are prioritized. Implications: While still in the developmental stages, generative AI can be one tool to assist healthcare organizations in delivering culturally competent care for patients. At its most fundamental level, generative AI models can provide translation services to help overcome language barriers between healthcare providers and patients who speak different languages. This can enhance doctor-patient interaction, improve patient satisfaction, and reduce misunderstanding. Although communications created with generative AI will likely lack nuance and will have difficulty creating empathetic emotional connections with patients, they are a step in the right direction. Over time, we expect generative AI models to be able to provide up-to-date information about medical conditions, treatment guidelines, medications, and research results, to collect and analyze patient data through remote monitoring, to facilitate virtual consultations, and to provide real-time information and guidance. Moreover, technologies like generative AI can contribute to remote monitoring and telemedicine initiatives to improve access to health care services, particularly for those living in remote areas or with limited mobility.
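As a concrete illustration of the translation and cultural tailoring described above, a care application might assemble instructions for a generative language model along these lines. This is only a sketch: the function, the profile fields (language, reading level, cultural notes), and the wording are hypothetical assumptions, not any vendor's actual API, and the call to the model itself is deliberately omitted.

```python
def build_patient_prompt(message: str, language: str, reading_level: str,
                         cultural_notes: str) -> str:
    """Wrap a clinical message in instructions on language, tone, and culture
    before sending it to a generative model (model call not shown)."""
    return (
        f"Translate and adapt the following message for a patient.\n"
        f"Target language: {language}. Reading level: {reading_level}.\n"
        f"Cultural considerations: {cultural_notes}\n"
        f"Preserve all medical facts exactly; do not add new advice.\n"
        f"Message: {message}"
    )

# Example: a blood-pressure update adapted for a Spanish-speaking patient
# who prefers formal address. All profile values here are illustrative.
prompt = build_patient_prompt(
    "Your blood pressure readings are improving; continue your medication.",
    language="Spanish (Mexican dialect)",
    reading_level="6th grade",
    cultural_notes="Patient prefers formal address (usted).",
)
```

The key design point is that cultural and linguistic preferences live in the patient profile, not in the clinical message, so the same clinical content can be adapted per patient while the medical facts are kept fixed.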
Related Reading: ‘Hurtling into the future’: The potential and thorny ethics of generative AI in healthcare
Structural Racism In Historical And Modern US Health Care Policy
Can Hospital Cultural Competency Reduce Disparities in Patient Experiences with Care?
Racism and discrimination in health care: Providers and patients

  • Exploring Latine Culture-Reducing Disparities Begins with Cultural Sensitivity-The HSB Blog 5/5/23

    As we were preparing for this week’s “Our Take”, we had a little internal debate about Cinco De Mayo. Was it a Mexican holiday, or was it a Latino holiday? As we explored this, we came across an article from the Washington Post entitled “Cinco de Mayo is not a Mexican holiday. It's an American one” which argued “Cinco de Mayo is a celebration created by and for Latino communities in the United States. And the celebration of Cinco de Mayo is more about U.S. Latino history and culture than Mexican history.” As we read this, we realized there is often a tendency to view and categorize other cultures with monolithic and homogeneous labels that are often inadequate. Doing so can lead to broad generalizations that perpetuate the inequities in health care. Increasing cultural sensitization is a way to begin addressing and reducing those inequities. If we are to understand ethnic disparities in health care and deliver culturally appropriate and equitable care, we need to understand the nuances and idiosyncrasies of other cultures. Cinco de Mayo seemed a good place to start so we found the article “Ethnic Bias and the Latine Experience” in the American Counseling Association’s magazine, Counseling Today. While we don’t necessarily agree with everything in the article, we found this to be one of the most thorough and broad attempts to understand the Latine culture in the U.S. and as a result, we reprint an excerpt from it here with permission. Key Takeaways: Only 23% of U.S. adults who self-identify as Hispanic or Latino had heard of the term Latinx, and only 3% said they use Latinx to describe themselves (Pew Research Center) The U.S. Latine population was 62.1 million in 2020, or 19% of all Americans and is projected to increase to 111.2 million, or 28% of the U.S. population by 2060 (Pew Research Center & UCLA Latino Policy & Politics Institute). Hispanic is the oldest and most widely used term to describe Spanish-speaking communities. 
It was created as a “super category” for the 1970 census after Mexican American and other Hispanic organizations advocated for federal data collection. In 2019, 61.5% of Latines were of Mexican origin or heritage, while 9.7% were Puerto Rican or of Puerto Rican heritage, and Cubans, Salvadorans, Dominicans, Guatemalans, Colombians, and Hondurans each numbered a million or more in 2019. Ethnic Bias and the Latine Experience “She looks Latina.” “He doesn’t look Black.” “They sound Hispanic.” “She doesn’t sound Asian.” “I think they’re mixed.” Conversations all around us bear witness to the inclination to classify people into groups. This categorization of people is built into the fabric of American life, a fabric not originally intended to cover everyone. Inherent advantages and dominance historically favored white male landowners (with the exception of Jewish or Catholic men). Like Indigenous and Black communities, people of Hispanic or Latine descent continue to navigate a system not created for them. (In the next section, we explain why we prefer to use the term Latine as a gender-neutral or nonbinary alternative to Latino.) The objective of this article is to enhance counselors’ cultural sensitivities when providing services to Latine communities. We will discuss the unique discrimination challenges faced by Latines and provide tips for counselor effectiveness. A culturally responsive discussion about the mental health effects of ethnic bias on the Latine experience begins with a definition of key terms. The American Psychological Association’s (APA) Dictionary of Psychology defines ethnic as “denoting or referring to a group of people having a shared social, cultural, linguistic, and usually racial background,” and it can sometimes include the religious background of a group of people. The U.S. census has only two ethnic categories: Hispanic (Latino) and non-Hispanic (non-Latino).
Ethnic bias is discrimination against individuals based on their ethnic group, often resulting in inequities. Nomenclature is problematic and ever evolving in the U.S. system of categorizing people into racial and ethnic groups. Every racialized group in the United States has gone through numerous label adjustments from within and outside the group. For example, First Nation people have been called Indian, American Indian, Native American, and Indigenous American. People of African descent have been called Colored, Negro, Black, Afro-American, and African American. Similarly, the word choices for the collective Hispanic description have also evolved over the years: Hispanic, Latino/a, Latinx and Latine. These are pan-ethnic terms representing cultural origins — regardless of race — for people with an ancestral heritage from Latin American countries and territories, who according to the Pew Research Center, prefer to be identified by their ancestral land (e.g., Mexican, Cuban, Ecuadorian) rather than by a collective pan-ethnic label. The history, and the debate, of nomenclature for this collective group set the stage for understanding the ethnicization of a large and diverse population. Hispanic is the oldest and most widely used term to describe Spanish-speaking communities. According to the Pew Research Center, the term Hispanic was first used after Mexican American and other Hispanic organizations advocated for federal data collection about U.S. residents of Mexico, Cuba, Puerto Rico, Central and South America, and other Spanish-speaking origins. The U.S. Census Bureau responded by creating the “super category” of Hispanic on the 1970 census. The term Latino/a gained popularity in the 1990s to represent communities of people who descend from or live in Latin American regions, regardless of their language of origin (excluding people from Spain). 
This allowed for gender separation, with Latina representing the female gender and Latino representing the male gender or combined male-female groups. The Pew Research Center noted that Latino first appeared on the U.S. census in 2000 alongside Hispanic, and the two terms are now used interchangeably. While the two terms often overlap, there are exceptions. People from Brazil, French Guiana, Surinam and Guyana, for example, are Latino because these countries are in Latin America, but they are not considered Hispanic because they’re not primarily Spanish-speaking. These regions were colonized by the French, Portuguese, and Italians, so their languages derive from other ancient Latin-based languages instead of Spanish. Latinx has been used as a more progressive, gender-neutral or nonbinary alternative to Latino. Latinx emerged as the preferred term for people who saw gender inclusivity and intersectionality represented through use of the letter “x.” Others, however, note that “x” is a letter forced into languages during colonial conquests, so they reject the imposing use of this colonizing letter. Interestingly, for the population it is intended to identify, only 23% of U.S. adults who self-identify as Hispanic or Latino had heard of the term Latinx, according to a Pew survey of U.S. Hispanic adults conducted in December 2019. And only 3% said they use Latinx to describe themselves. In this article, we use the term Latine, the newest word used by this population. Latine has the letter “e” to represent gender neutrality. We like this term because it comes from within the population rather than being assigned by others, and it is void of the controversial “x” introduced by colonists. Next, we look at the meaning and impact of ethnicization and ethnic bias toward Latines in the United States.
We explore the ways that bias and discrimination affect the nation’s largest group of minoritized people, and we recommend actionable solutions to enhance counselors’ cultural sensitivities when providing services to Latine communities. Latines come from more than 20 Latin American countries and several territories, including the U.S. territory of Puerto Rico. There are seven countries in Central America: El Salvador, Costa Rica, Belize, Guatemala, Honduras, Nicaragua, and Panama. The official language in six of these countries is Spanish, with English being the official language of much of the Caribbean coast including Belize (in addition to Indigenous languages spoken throughout the region). South America has three major territories and 12 countries: Brazil, Argentina, Paraguay, Uruguay, Chile, Bolivia, Peru, Ecuador, Colombia, Venezuela, Guyana and Suriname. The official language in most of these countries is Spanish, followed by Portuguese, although it is estimated that there are over a thousand different tribal languages and dialects spoken in many of these countries. The Pew Research Center reported that the U.S. Latine population reached 62.1 million in 2020, accounting for 19% of all Americans. In 2019, 61.5% of all Latines indicated they were of Mexican origin, either born in Mexico or with ancestral roots in Mexico. The next largest group, comprising 9.7% of the U.S. Latine population, are either Puerto Rican born or of Puerto Rican heritage. Cubans, Salvadorans, Dominicans, Guatemalans, Colombians and Hondurans each had a population of a million or more in 2019. Although there are notable similarities, the Latine population is not an ethnic monolith. Latine cultures are diverse, with different foods, folklore, Spanish dialects, religious nuances, rituals and cultural celebrations. Despite the varying cultural experiences, many of the issues facing Latine communities remain the same. Copyright Counseling Today, October 2022, American Counseling Association

  • Oshi Health: Virtual Care Comes to Digestive Health (An Update)

    (We originally profiled Oshi Health in October of 2021; please see our original Scouting Report “Scouting Report-Oshi Health: Virtual Care Comes to Digestive Health” dated 10/22/21 here). The Driver: Oshi Health, a digital gastrointestinal care startup, recently raised $30M in Series B funding in a round led by Koch Disruptive Technologies with participation from existing investors including Flare Capital Partners, Bessemer Venture Partners, Frist Cressey Ventures, CVS Health Ventures, and Takeda Digital Ventures. In addition to the institutional investors, individual investors who joined the round included Jonathan Bush, founder and CEO of Zus Health (and cofounder of Athenahealth), and Russell Glass, CEO of Headspace Health. According to the company, the funding will be used to accelerate the next phase of Oshi’s growth, to scale its clinical team nationwide, and to forge relationships with health plans, employers, channel partners, and provider groups.

Key Takeaways:

    • Approximately 15% of U.S. households have a person who uses diet to manage a health condition
    • A study sponsored by the company and presented at the Institute for Healthcare Improvement in January 2023 found Oshi’s program resulted in all-cause medical cost savings of approximately $11K per patient in just six months
    • According to Vivante Health, $136B per year is spent on GI conditions in the U.S., more than heart disease ($113B), trauma ($103B), or mental health ($99B)
    • Direct costs for IBS alone are as high as $10 billion, and indirect costs can total almost $20 billion

The Story: CEO and founder Sam Holliday encountered GI issues while observing his mother and sister’s ordeal managing their own IBS care. Holliday’s experience watching his family deal with the difficulties of managing IBS without clinical assistance prompted his interest in companies like Virta Health that use food as medicine for treating diabetes.
Holliday saw the contrast between his family’s experience and Virta’s approach of supporting people in reversing the impact of diabetes on their daily lives. Holliday was fascinated by Virta’s holistic approach to care, which prioritized the user's experience while saving cost and providing easy access to virtual care that leveraged technology and was data driven. The possibilities that Virta’s virtual model presented for GI spurred the idea of creating Oshi Health. Oshi Health’s platform supports patients by granting them access to GI specialists, prescriptions, and lab work from the comfort of their homes. Oshi Health provides comprehensive and patient-focused care to patients with GI conditions such as Irritable Bowel Syndrome (IBS), Crohn’s disease, Inflammatory Bowel Disease (IBD), and Gastroesophageal Reflux Disease (GERD). Oshi Health works by connecting patients with an integrated team of GI specialists including board-certified gastroenterologists and registered dieticians. Oshi Health has clinicians who assess symptoms and order lab tests and diagnostics if needed. In addition to the licensed GI doctors and dieticians, patients have the option of speaking with GI-specialized mental health clinicians and nurse practitioners as well. This allows patients to form a customized plan, as many of these conditions often involve a mental health component in addition to a physical one. The customized plan attempts to capture the patient’s needs regarding anxiety, nutrition, or stress. The Oshi Health service extends beyond testing and planning by providing stand-by health coaches and care teams to support patients and help them stay on track. Oshi Health’s platform also offers an app designed to help patients take action and stay organized throughout their GI care journey. With the app, patients can record their symptoms, quality-of-life measurements, and other factors known to impact their diet, sleep, or exercise. 
The app also features useful educational materials and recipes to help patients learn more about their condition. The company states their products and services are currently available to over 20 million people as a preferred in-network virtual gastroenterology clinic for national and regional insurers, as well as their employer customers. In April of this year, Oshi and Aetna announced a partnership that provides Aetna commercial members with in-network access to Oshi’s integrated multidisciplinary care teams. In addition, in March of 2022, Firefly Health, a virtual-first healthcare company, named Oshi as its preferred partner for digestive care. In January 2023, the company announced clinical trial results of a company-sponsored study at the Institute for Healthcare Improvement (IHI). The study demonstrated that virtual multidisciplinary care for gastrointestinal (GI) disorders from Oshi Health resulted in significantly higher levels of patient engagement, satisfaction, and symptom control, resulting in all-cause medical cost savings of approximately $11K per patient in just six months. The Differentiators: About 1 in 4 people suffer from diagnosed GI conditions and many more suffer from chronic undiagnosed symptoms. “GI conditions are really stigmatized,” says Holliday. “Integrated GI care is a missing piece of the healthcare infrastructure. There’s a huge group of people who don’t have anywhere to go to access care that’s proven to work.” By using this virtual-first model and increasing access to care at a lower cost for patients, Oshi Health is attempting to revolutionize GI care. Through its approach Oshi is attempting to tap into the large market for food-related health conditions. For example, according to a report entitled “Let Food Be Thy Medicine: Americans Use Diet to Manage Chronic Ailments,” approximately 15% of U.S. households include someone who uses diet to manage a health condition. 
Oshi calculates that these conditions drive approximately $135 billion in annual healthcare costs and have a collective impact greater than diabetes, heart disease, and mental health combined. Oshi Health plans to use a community of inflammatory bowel disease (IBD) patients for disease research, personalized insights, and a new digital therapeutic. Oshi Health is one of the first companies to address GI disorders virtually, making care easily accessible for patients. It is the only virtual platform exclusively for GI patients. Being virtual helps patients receive treatment from the convenience of their home, reduces stigma, and saves them the time and hassle of commuting for GI treatment, which can involve extensive and costly testing. Patients are in control of their care and have access to a team of medical professionals at their fingertips. The Big Picture: Oshi Health’s commitment to making GI care accessible, convenient, and affordable is likely to lower the costs of treatments such as cognitive-behavioral therapy, colonoscopies, X-rays, sonograms, and more in healthcare centers. Oshi Health helps avoid preventable and expensive ER visits as well as unnecessary colonoscopies and endoscopies. For example, according to a study cited by the company, unmanaged digestive symptoms are the #1 cause of emergency department treat-and-release visits. By incorporating access to psychologists in the membership plan rather than requiring it to be paid out of pocket, Oshi allows patients to access the services of mental health professionals quickly and easily. Patients can schedule appointments within 3 days and there is always support between visits with care plan implementation. By incorporating both physical and mental health, Oshi Health is attempting to treat not just the symptoms but the causes as well, helping to lower costs versus in-person care, which can reduce the incidence of unmanaged GI conditions and increase the quality of care patients receive. 
As noted by the company, “Oshi Health is able to intercept and change the trajectory of unmanaged symptom escalations,” helping to drive improvements in outcomes and cost savings. Billions of dollars are saved annually on avoidable treatments and expenses because these patients are receiving the treatment they need rather than physicians ordering expensive tests without knowing the root cause of their problems. Due to the comprehensiveness of the virtual-first care model, physicians, dieticians, health coaches, care coordinators, and psychologists are all involved in the patient's journey. Under traditional models, patients with GI issues see just a gastroenterologist. By contrast, in Oshi’s model, a holistic team is involved every step of the way, creating a personalized care plan customized to each patient’s lifestyle. Oshi Health scores $30M to scale gastrointestinal care company, Virtual GI care startup Oshi Health takes up $30M backed by CVS, Takeda venture arms

  • What Sports Analytics Can Teach Us About Integrating AI Into Care

    Our Take: Last week, our founder Jeff Englander had the pleasure of having dinner with Dr. David Rhew, Global Chief Medical Officer and V.P. of Healthcare for Microsoft. After a broad-ranging discussion about the application of A.I., A/R, V/R, and Cloud to healthcare, they came to realize they were both big sports fans, David of his hometown Detroit teams and Jeff of his hometown Boston teams. Toward the end of dinner, Jeff referenced an article he had written in May of 2018 on “What Steph & LeBron Can Teach Business About Analytics” and how sports demonstrate many practical ways to gain acceptance and integrate analytics into an organization. Based on that discussion, we thought it would be timely to reprint it here: As I sat watching the NBA conference finals last night, I began thinking about what I had learned from the MIT Sloan Sports Analytics conferences I went to over the last several years. I thought about how successful sports, and the NBA in particular, had been in applying analytics and what lessons businesses could learn to help them apply analytics to their own organizations. Five basic skills stood out that sports franchises had been able to apply to their organizations that were readily transferrable to the business world: intensive focus; deep integration; limited analytic “burden”; trust in the process; and communication and alignment. 1) Intensive focus - when teams deploy sports analytics they bring incredible focus to the task. One player noted that they do not focus on how to stop LeBron, not even on how to stop LeBron from going left off the dribble, but instead how to stop LeBron from going left off the dribble coming off the pick and roll. This degree of pinpoint analysis and application of the data has contributed to the success and continued refinement of sports analytics on the court, field, rink, etc. 
2) Deep integration - each of the analytics groups I spoke with attempted to informally integrate their interactions into the daily routines of players and coaches through natural interactions (the Warriors analytics guy used to rebound for Steph Curry at practice). Analytics groups worked to demystify what they were doing and make themselves approachable. The former St. Louis, now L.A., Rams analytics group jokingly dubbed its office the “nerds nest”. By integrating themselves into the players’ (and coaches’) worlds they were able to break down stereotypes and barriers to acceptance of analytics. 3) Limited analytic “burden” - teams’ data science groups noted that given the amount of data they generate, it’s important to limit the number of insights they present at any one time. One group made it a rule to discuss or review no more than 3 analytical insights per week with players or coaches. This made their work more accessible and more tangible to players/coaches and helped them quantify the value to the front office. 4) Trust in the process - best illustrated by a player who told the story of working with an analytics group and coaches to design a game plan against an elite offensive player, which he followed and executed to a tee. But that night the opposition player couldn’t be stopped and, in the player’s words, “he dropped 30 on me”. The other panelists pointed out that you can’t go away from your system based on short-term results. As one coach noted, “don’t fail the plan, let the plan fail you…Have faith in the process.” 5) Communication and alignment - last but not least, teams stressed the need to be aligned and to communicate that concept clearly throughout the organization. As Scott Brooks, at the time the coach of the Orlando Magic, noted, “we are all in this together, we have to figure this out together”. Surprisingly, at times communication was paramount even for the most successful and highly compensated athletes. 
For example, at last year’s conference, Chris Bosh, a 5x All-Star and 2x NBA Champion who was making $18M a year at the time, lamented the grueling Miami Heat practices during their near-record 27-game winning streak in 2013 (at the time the 2nd-longest winning streak in NBA history), seemingly despite their success. When I asked him what would have made it more bearable, he said communication, just better communication on what they were trying to do. Clearly, professional sports have very successfully applied analytics to their craft and there are a number of lessons that businesses can copy as they seek to gain broader and more effective adoption of analytics throughout the value chain.

  • Viz.ai-Applying AI to Reduce Time to Life Saving Treatments

    The Driver: Viz.ai recently raised $40M in growth capital from CIBC Innovation Banking. The additional funding brings Viz’s total fundraising to approximately $292M. Viz has developed a software platform based on artificial intelligence (AI) that is designed to improve communication between care teams handling emergency patients (first applied to stroke patients) by helping improve care coordination and dramatically reduce response times. The company will use the funds to power its expansion, including the possibility of acquisitions. Key Takeaways: While the total cost of strokes in the U.S. was approximately $220 billion, the cost due to under-employment was $38.1 billion, and $30.4 billion was from premature mortality (Viz.ai & Journal of the Neurological Sciences) The risk of having a first stroke is nearly twice as high for blacks as for whites and blacks have the highest rate of death due to stroke (American Stroke Association) Each one minute [delay in care for stroke victims] translates into 2 million brain cells that die (Viz.ai) Stroke is the number one cause of adult disability in the U.S. and the fifth leading cause of death (American Stroke Association) The Story: Viz.ai was founded by Dr. Chris Mansi and David Golan. While working as a neurosurgeon in the U.K., Dr. Mansi observed situations in which a successful surgery was performed yet patients would not survive because of extended time lapses between diagnosis and surgery, which was particularly true with strokes. For example, when doctors believe there has been a stroke, they typically order x-rays and a series of CT scans, and while the scans themselves typically happen quickly, there often is a sizeable delay before the studies can be read by a competent professional. 
Once the readings were performed by a radiologist, there was often a further delay in care as clinicians had to inform a local stroke center of the diagnosis and then ensure the patient was transferred to that center for treatment. Dr. Mansi met Golan in graduate school at Stanford while studying for his M.B.A. Golan was suspected of having suffered a stroke prior to entering Stanford, and the two classmates lamented the lack of available data for stroke treatment. As noted by the company, “Mansi learned how undertreated large vessel occlusion (LVO) strokes were and wanted to be an agent of change.” Mansi and Golan collaborated on a plan to apply A.I. to increase the data for stroke treatment, and Viz.ai was born. As noted in a recent Forbes article, the company’s “software cross-references CT images of a patient’s brain with its database of scans to find early signs of LVO strokes. It then alerts doctors, who see [and communicate] about the images on their phones” and allows those clinicians to communicate with specialists at stroke centers and arrange for patients to be transferred there for care. According to the company, this leads to dramatic decreases in the time it takes for patients to go from diagnosis to procedure, commonly referred to as “door to groin puncture time.” Originally developed for LVOs, Viz has now received FDA approval for 7 A.I. imaging solutions and has extended treatment from LVOs to cerebral aneurysms (February 2022), subdural hemorrhage (July 2022), and most recently hypertrophic cardiomyopathy-HCM (pending). The Differentiators: As noted above, Viz.ai’s system automatically scans all images in a hospital system for the noted conditions, then alerts clinicians if any are detected. In the case of LVOs the system then allows doctors to view images of patient scans on their phones, exchange messages, and cut crucial time off diagnosis and treatment. 
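The detect-and-alert workflow described above, where a model scores each scan and an above-threshold result immediately pages the stroke team rather than waiting for a routine radiology read, can be sketched roughly as follows. This is an illustrative sketch only; the threshold value, model output, and care-team roles are invented assumptions, not Viz.ai's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical alert threshold -- an assumption for illustration.
LVO_ALERT_THRESHOLD = 0.85

@dataclass
class ScanResult:
    patient_id: str
    lvo_probability: float  # output of an image-classification model

def triage(scan: ScanResult) -> list:
    """Return the roles to notify for a given scan.

    In a real system the notification would go out over a secure
    messaging channel; here we just return who would be paged.
    """
    if scan.lvo_probability >= LVO_ALERT_THRESHOLD:
        # Suspected large vessel occlusion: page the stroke team
        # at the comprehensive stroke center immediately.
        return ["on-call neurointerventionalist",
                "stroke coordinator",
                "transfer center"]
    return []  # below threshold: routine radiology read, no alert

alerts = triage(ScanResult("pt-001", 0.93))
```

The point of the sketch is the routing, not the model: the time saved comes from bypassing the queue of studies awaiting a manual read whenever the score crosses the threshold.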
As the company notes, this is particularly important for smaller facilities, which often lack specialists to interpret scans and arrange for transitions in care. For example, according to a study in the American Journal of Neuroradiology looking at stroke treatment at a facility using Viz.ai technology, researchers found “robust improvement” in other stroke response metrics, including door-to-device and door-to-recanalization times, and a 22% overall decline in time to treatment. This is particularly important in the case of strokes, which are the number one cause of disability and the fifth leading cause of death, as time is of the essence for stroke victims, with each minute of delay adding one week of disability. Implications: Applying technology to help reduce delays in diagnosis and treatment is one of the most promising applications of artificial intelligence because of the vast amounts of data these systems can process in short periods of time. While over time many hope, and some fear, that these types of technologies will be able to be “taught” how to diagnose and treat illness, in the near term their greatest use lies in augmenting the skills of clinicians by allowing them to focus their attention on areas most in need of an experienced, nuanced diagnosis. This is particularly true for brain injuries, where literally every second and every minute count. For example, as noted by the company, “every one minute [delay in care for stroke victims] translates into 2 million brain cells that die.” Given that the loss of brain cells results in loss of brain function, disability, or worse, the costs to society can be quite high. According to the company, strokes cost the U.S. healthcare system about $220 billion annually, and each LVO patient treated with a timely thrombectomy costs one-tenth, or almost $1 million less, than those who aren’t. It is practical examples of the clinical applications of A.I. 
such as this, which attack very concrete and tangible problems, that are likely to pave the way for acceptance of more complex applications in the healthcare delivery system. Viz.ai partners with Us2.ai to integrate echocardiogram analysis tool; Viz.ai secures Bristol Myers Squibb's backing for hypertrophic cardiomyopathy-spotting AI

  • Explainable AI-Making AI Understandable, Transparent, and Trustworthy-The HSB Blog 3/23/23

    Our Take: Explainable AI, or AI whose methodology, algorithms, and training data can be understood by humans, can address challenges surrounding AI implementation including lack of trust, bias, fairness, accountability, and lack of transparency, among others. For example, a common complaint about AI models is that they are biased and that if the data AI systems are trained on is biased or incomplete, the resulting model will perpetuate and even amplify that bias. By providing transparency into how an AI model was trained and what factors went into producing a particular result, explainable AI can help identify and mitigate bias and fairness issues. In addition, it can also increase accountability by making it easier for users and those impacted by models to trace some of the logic and basis for algorithmic decisions. Finally, by enabling humans to better understand AI models and their development, explainable AI can engender more trust in AI, which could accelerate the adoption of AI technologies by helping to ensure these systems were developed with the highest ethical principles of healthcare in mind. Key Takeaways: AI algorithms continuously adjust the weight of inputs to improve prediction accuracy, but that can make understanding how the model reaches its conclusions difficult. One way to address this problem is to design systems that explain how the algorithms reach their predictions. GPT-4 is rumored to have around 1 trillion parameters compared to the 175 billion parameters in GPT-3, both of which are well in excess of what any human brain could process and break down. During the pandemic, the University of Michigan hospital had to deactivate its AI sepsis-alerting model when differences in demographic data for patients affected by the pandemic created discrepancies and a series of false alerts. 
AI models used to supplement diagnostic practices have been effective in biosignal analyses, and studies indicate physicians trust the results when they understand how the AI came to its conclusion The Problem: The use of artificial intelligence (AI) in healthcare presents both opportunities and challenges. The complex and opaque nature of many AI algorithms, often referred to as "black boxes", can lead to difficulty in understanding the logical processes behind AI's conclusions. This not only poses a challenge for regulatory compliance and legal liability but also impacts users’ ability to ensure the systems were developed ethically, are auditable, and eventually their ability to trust the conclusions and purpose of the model itself. However, the implementation of processes to make AI more transparent and explainable can be costly and time-consuming and could potentially result in a requirement, or at least a preference, that model developers disclose proprietary intellectual property that went into creating the systems. This process is made even more complex in the U.S., where the lack of general legislation regarding the fair use of personal data and information can hamper the use of AI in healthcare, particularly in clinical contexts where physicians must explain how AI works and how it is trained to reach conclusions. The Backdrop: The concept of explainable AI is to provide human beings involved with using, auditing, and interpreting models a methodology to systematically analyze what data a model was trained on and what predictive factors are more heavily weighted in the models, as well as provide cursory insights into how algorithms in particular models arrived at their conclusions/recommendations. This in turn would allow the human beings interacting with the model to better comprehend and trust the results of a particular AI model instead of the model being viewed as a so-called “black box” where there is limited insight into such factors. 
In general, many AI algorithms, such as those that utilize deep learning, are often referred to as “black boxes” because they are complex, can have multiple billions and even trillions of parameters upon which calculations are performed, and consequently can be difficult to dissect and interpret. For example, GPT-4 is rumored to have around 1 trillion parameters compared to the 175 billion parameters in GPT-3, both well in excess of what any human brain could process and break down. Moreover, because these systems are trained by feeding vast datasets into models which are then designed to learn, adapt, and change as they process additional calculations, the products of the algorithms are often different from their original design. As a result of the number of parameters the models are working with and the adaptive nature of the machine learning models, the engineers and data scientists building these systems cannot fully understand the “thought process” behind an AI’s conclusions or explain how these connections are made. However, as AI is increasingly applied to healthcare in a variety of contexts including medical diagnoses, risk stratification, and anomaly detection, it is important that AI developers have methods to ensure they are operating efficiently, impartially, and lawfully in line with regulatory standards, both at the model development stage and when models are being rolled into use. As noted in an article published in Nature Medicine, starting the AI development cycle with an interpretable system architecture is necessary because inherent explainability is more compatible with the ethics of healthcare itself than methods to retroactively approximate explainability from black box algorithms. Explainability, although more costly and time-consuming to implement in the development process, ultimately benefits both AI companies and the patients they will eventually serve far more than if they were to forgo it. 
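As a toy illustration of what an "inherently interpretable" architecture can mean in practice, consider a model whose weights are fixed and inspectable, so every prediction can be decomposed into per-feature contributions. The features and weights below are invented for illustration and are not drawn from any clinical model:

```python
# Invented feature weights for a toy risk score -- illustrative only.
WEIGHTS = {"age_over_65": 2.0, "elevated_heart_rate": 1.5, "abnormal_temp": 1.0}

def risk_score(features: dict) -> float:
    """Sum of weighted binary features: trivially auditable."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features: dict) -> dict:
    """Per-feature contribution to the score -- the 'explanation'."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

patient = {"age_over_65": 1, "elevated_heart_rate": 1, "abnormal_temp": 0}
score = risk_score(patient)          # 2.0 + 1.5 + 0.0 = 3.5
contributions = explain(patient)     # shows exactly where the 3.5 came from
```

Unlike a network with billions of parameters, there is nothing here to reverse-engineer: the audit trail is the model itself, which is the trade-off interpretable-first design accepts in exchange for lower expressive power.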
Adopting a multiple-stakeholder view, the layman will find it difficult to make sense of the litany of data that AI models are trained on, and that they recite as part of their generated results, especially if the individual interpreting these results lacks knowledge and training in computer science and programming. By creating AI with transparency and explainability, developers also create responsible AI that may eventually give way to the larger-scale implementation of AI in a variety of industries, but especially healthcare, where increasing digitization is generating more patient data than ever before along with the need to manage and protect this data in appropriate ways. Creating AI that is explainable ultimately increases end-user trust, improves auditability, and creates additional opportunities for constructive use of AI for healthcare solutions. This is one way to reduce the hesitation and risks associated with traditional “black box” AI by making legal and regulatory compliance easier, providing the ability for detailed documentation of operating practices, and allowing organizations to create or preserve their reputations for trust and transparency. While a large number of AI-enabled clinical decision support systems are predominantly used to provide supporting advice for physicians making important diagnostic and triage decisions, a study from Scientific Reports found that this advice actually helped improve physicians’ diagnostic accuracy, with physician plus AI performing better than when physicians received human advice concerning the interpretation of patient data (sometimes referred to as the “freestyle chess effect”). AI models used to supplement diagnostic practices have been effective in biosignal analyses, such as that of electrocardiogram results, to detect biosignal irregularities in patients as quickly and accurately as a human clinician can. 
For example, a study from the International Journal of Cardiology found that physicians are more inclined to trust the generated results when they can understand how the explainable AI came to its conclusion. As noted in the Columbia Law Journal, however, while the most obvious way to make an AI model explainable would be to reveal the source code for the machine learning model, that approach “will often prove unsatisfactory (because of the way machine learning works and because most people will not be able to understand the code)” and because commercial organizations will not want to reveal their trade secrets. As the article notes, another approach is to “create a second system alongside the original ‘black box’ model, sometimes called a ‘surrogate model.’” However, a surrogate model only closely approximates the model itself and does not use the same internal weights as the model itself. As such, given the limited risk tolerance in healthcare, we doubt such a solution would be acceptable. Implications: As evidenced by all the buzz around ChatGPT with the recent introduction of GPT-4 and its integration into products such as Microsoft’s Copilot, and Google’s integration of Bard with Google Workspace, AI products will increasingly become ubiquitous in all aspects of our lives including healthcare. As this happens, AI developers and companies will have to work hard to ensure that these products are transparent and do not purposely or inadvertently contain bias. 
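One concrete form of the surrogate-model idea discussed above is local approximation: probe the opaque model around a single input and fit a linear stand-in whose coefficients serve as the explanation for that one prediction (this is also why a surrogate only approximates the original, since it never sees the internal weights). A minimal sketch, where the "black box" is a stand-in function rather than a real clinical model:

```python
def black_box(x1: float, x2: float) -> float:
    # Stand-in for an opaque model; imagine a deep network here.
    return 0.3 * x1 * x1 + 0.7 * x2

def local_surrogate(f, x1: float, x2: float, eps: float = 1e-4) -> dict:
    """Finite-difference linear surrogate around one input point.

    The returned coefficients approximate how much each feature
    locally moves the prediction -- a crude, local 'explanation'
    that never inspects the model's internal weights.
    """
    base = f(x1, x2)
    return {
        "x1": (f(x1 + eps, x2) - base) / eps,
        "x2": (f(x1, x2 + eps) - base) / eps,
    }

coeffs = local_surrogate(black_box, 1.0, 2.0)
# Near x1=1 the x1 coefficient is roughly 0.6; the x2 coefficient is 0.7.
```

Note the limitation the text raises: the surrogate is faithful only near the probed point, so two different inputs can yield different "explanations" from the same underlying model.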
Along those lines, when working in healthcare in particular, AI companies will have to ensure that they implement frameworks for responsible data use which include: 1) ensuring the minimization of bias and discrimination for the benefit of marginalized groups by enforcing non-discrimination and consumer laws in data analysis; 2) providing insight into the factors affecting decision-making algorithms; and 3) requiring organizations to hold themselves accountable to fairness standards and conduct regular internal assessments. In addition, as noted in an article from the Congress of Industrial Organizations, in Europe AI developers could be held to legal requirements surrounding transparency without risking IP concerns under Article 22 of the General Data Protection Regulation, which codifies an individual’s right not to be subject to decisions based solely on automated processing and requires the supervision of a human in order to minimize overreliance and blind faith in such algorithms. In addition, one of the issues with AI models is data shift, which occurs when machine learning systems underperform or yield false results due to mismatches between the datasets they were trained on and the real-world data they actually collect and process in practice. For example, as challenges to individuals’ health conditions continue to evolve and new issues emerge, it is important that care providers consider population shifts of disease and how various groups are affected differently. During the pandemic, the University of Michigan Hospital had to deactivate its AI sepsis-alerting model when differences in demographic data gathered from patients affected by the pandemic created discrepancies with the data the AI system had been trained on, leading to a series of false alerts. As noted in an article in the New England Journal of Medicine, the pandemic had fundamentally altered the way the AI viewed and understood the relationship between fevers and bacterial sepsis. 
Episodes like this underscore the need for high-quality, unbiased, and diverse data in order to train models. In addition, given that the regulation of machine learning models and neural networks in healthcare is continuing to evolve, developers must ensure that they continuously monitor and apply new regulations as they evolve, particularly with respect to adaptive AI and informed consent. In addition, developers must ensure that models are tested both in development and post-production to ensure that there is no model drift. With the use of AI models in health care, there are questions that repeatedly need to be asked and answered. Are AI models properly trained to account for the personal aspects of care delivery and consider the individual effects of clinical decision-making, ethically balancing the needs of the many over the needs of the few? Is the data collected and processed by AI secure and safe from malicious actors, and is it accurate enough so that the potential for harm is properly mitigated, particularly against historically underserved or underrepresented groups? Finally, what does the use of these models and these particular algorithms mean with regard to the doctor-patient relationship and the trust vested in our medical professionals? How will decision-making and care be impacted when using AI that may not be sufficiently explainable and transparent enough for doctors themselves to understand the reasoning behind, and therefore trust, the results that are generated? These questions will undoubtedly persist as long as the growth in AI usage continues, and it is important that AI is adopted responsibly and with the necessary checks and balances to preserve justice and fairness for the patients it will serve. 
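The post-production testing for data shift described above often boils down to comparing the distribution a model was trained on against what it sees in the wild. One common, simple metric is the Population Stability Index; the bins and the 0.25 alert threshold below are illustrative assumptions, not a universal standard:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions.

    Inputs are bin proportions that each sum to 1. A common rule of
    thumb (an assumption, not a universal standard): PSI > 0.25
    signals a significant shift worth investigating.
    """
    eps = 1e-6  # avoid log(0) for empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Training-time vs. production-time proportions of, say, patient ages.
train_bins = [0.25, 0.50, 0.25]
prod_bins = [0.10, 0.40, 0.50]   # the population has shifted older
shifted = psi(train_bins, prod_bins) > 0.25
```

Running a check like this on each input feature, on a schedule, is one lightweight way a team could have caught a pandemic-era demographic shift before the false alerts did.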
Related reading: Stop explaining black-box machine learning models for high-stakes decisions and use interpretable models instead Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays Explainable artificial intelligence to detect atrial fibrillation using electrocardiogram The Judicial Demand for Explainable Artificial Intelligence Explainability for artificial intelligence in healthcare: a multidisciplinary perspective Enhancing trust in artificial intelligence: Audits and explanations can help Art. 22 GDPR – Automated individual decision-making, including profiling - General Data Protection Regulation (GDPR) The Clinician and Dataset Shift in Artificial Intelligence Dissecting racial bias in an algorithm used to manage the health of populations

  • Cognosos-Analyzing and Optimizing Asset Visibility and Management

    The Driver: Cognosos recently raised $25M in a venture round led by Riverwood Capital. As part of the funding, Joe De Pinho and Eric Ma from Riverwood will join Cognosos’ Board. Cognosos has developed a cloud-based platform of real-time location services (RTLS) and process optimization software. The fundraising round brings Cognosos’ total funding to $38.1M. The proceeds will be used to allow Cognosos to double its staff from the current 50 to 100 as well as continue to expand in healthcare and automotive manufacturing, among other industries. Key Takeaways: Cognosos grew revenue by over 226% in 2022 according to the Atlanta Business Journal and expects to double it year-over-year in the coming year (Cognosos) Between 2020 and 2022 the average price of healthcare services increased 2.4%, while the Producer Price Index (PPI) increased 14.3%, leading to a further squeeze on healthcare supply chains as per-patient input costs have exploded (University of Arkansas) A single Cognosos gateway can provide support for up to 100,000 square feet and manage location information for up to 10,000 assets In 2021 it is estimated that at least one-third of healthcare providers operated under negative margins, with a combined loss of $54 billion in net income (Kaufman Hall) The Story: Cognosos was founded in 2014 and is based on technology developed by the founders at Georgia Tech’s Smart Antenna lab for radio astronomy. According to the company’s website, while investigating techniques in radio astronomy to combine signals from multiple dishes, the founders of Cognosos discovered a way to use software-defined radio (SDR) and cloud-based signal processing. This dramatically lowered the cost and power requirements for wireless sensor transmitters. The result, called RadioCloud, was created with the belief that businesses of all sizes should be able to use ubiquitous, low-cost wireless technology to harness the power of the Internet of Things (IoT). 
The name Cognosos is derived from the Latin verb cognoscere, meaning “to become aware of, to find.” The company states that it has over 100 customers and 4,000 registered users, and grew revenue by over 226% in 2022 according to the Atlanta Business Journal. The Differentiators: Cognosos’ real-time location services (RTLS) technology and process optimization software allows companies to track their assets’ movement, improve operational visibility, and safely increase and maximize productivity. The company’s RTLS combines Bluetooth Low Energy technology with AI and its proprietary long-range wireless networking technology to help reduce costs of deployment and improve tracking ability. The company has 10 U.S. patents and states that a single gateway can provide support for up to 100,000 square feet and manage location information for up to 10,000 assets. Their technology is easily deployable without having to pull wires, move tiling, or disturb other fixtures, minimizing any disruptions to operations and allowing them to leverage existing Bluetooth infrastructure, including Bluetooth-enabled fixtures. The company states that this technology will allow them to bring critical assets on-line much more rapidly than competitors (ex: weeks vs. months). The company’s major clients are in healthcare, automotive, logistics, and manufacturing. Implications: For years hospitals and healthcare industries have struggled to effectively track and manage their assets. The industry is rife with stories of nurses, aides, and others squirreling monitors and other devices away in secret hiding places so that they don't have to spend time and unnecessary energy tracking down what they need. Now more than ever, with hospitals facing increased financial pressures, workforce shortages, and employee dissatisfaction, a solution that improves and eases the management of such assets is needed. 
Given Cognosos’ ability to limit the disruption to clinical operations and patient care while helping to reduce costs and improve employee productivity, they should be positioned well going forward. In addition, since Cognosos’ product is not hardware-based but instead relies on the Cloud and AI software to manage assets, they can provide real-time asset tracking with lower infrastructure costs. Our belief is that solutions like Cognosos’, which leverage both the Cloud and AI to help address healthcare’s back-office and supply chain challenges, will be among the first to be widely adopted, hastening the adoption of these technologies within the industry and paving the way for broader adoption of clinical technology down the road.

  • R-Zero-Making Intelligent Air Disinfection More Economical and Efficient

    The Driver: R-Zero recently raised $105M in a Series C financing led by investment firm CDPQ with participation from BMO Financial Group, Qualcomm Ventures, Upfront Ventures, DBL Partners, World Innovation Lab, Mayo Clinic, Bedrock Capital, SOSV and legendary venture capital investor John Doerr. The Series C financing brings the total amount raised by R-Zero to more than $170M since its founding in 2020. The company will use the funds to scale deployments of its disinfection and risk modeling technology to meet growing demand across public and private sectors, including hospitals, senior care communities, parks and recreation, other government facilities, and college and corporate campuses. Key Takeaways: According to the company, R-Zero’s technology neutralizes 99.9% of airborne and surface microorganisms R-Zero’s UV-C products cost anywhere from $3K to $28K compared with traditional institutional UV-C technology which can cost anywhere from $60K to $125K Using R-Zero's products results in more than 90% fewer greenhouse gas emissions (GHG) and waste compared to HVAC and chemical approaches For every dollar employers spend on health care, they’re spending 61 cents on illness-related absences and reduced productivity (Integrated Benefits Institute) The Story: R-Zero was co-founded by Grant Morgan, Eli Harris and Ben Boyer. Morgan, who has an engineering background, worked briefly as CTO of GIST, was V.P. of product and engineering at iCracked, and was previously in R&D in medical devices. Harris co-founded EcoFlow (an energy solutions company) and worked in partnerships and BD at drone company DJI, while Boyer was a managing director at early-growth-stage VC Tenaya Capital. According to the company, the co-founders applied their experience to innovating an outdated legacy industry to make hospital-grade UVC technology accessible to small and medium-sized businesses. 
Prior to R-Zero these units could cost anywhere from $60K-$125K and often lacked the connected infrastructure and analytics necessary to optimize performance and provide risk analytics for their users (ex: how frequently and heavily rooms are being used and when to use disinfection to help mitigate risk). The Differentiators: As noted above, typical institutional ultraviolet (UV) disinfectant lighting technology can be expensive and has the potential to be harmful (high-powered UVC lights can cause eye injuries if people are exposed to them for long periods of time); however, R-Zero has found ways to mitigate these issues. First, as noted in Forbes, its products run anywhere from $28K for its most expensive device, the Arc, to the Beam at $5K and the Vive at $3K. Moreover, while the Arc can only be used to disinfect an empty room due to the wavelength of UVC light, the Beam creates a disinfection zone above people in a room while the Vive can be used to combat harmful microorganisms when people are in a room. In addition, according to the company, R-Zero’s technology neutralizes 99.9% of airborne and surface microorganisms and does so with 90% fewer greenhouse gas emissions and waste compared to HVAC and chemical approaches. As a result, R-Zero can help improve indoor air quality in hospitals and other medical facilities, factories, warehouses, and other workplaces more efficiently and effectively than outdated technologies. Implications: As noted above, R-Zero’s technology will help hospitals and senior care facilities cost-effectively sanitize treatment spaces, which can’t necessarily be done with current technology. Moreover, as medical care increasingly moves to outpatient settings, the ongoing workforce shortage will challenge these facilities to find ways to keep themselves clean and disinfected and avoid disease transmission. 
For example, the company claims that their customers have been reducing labor costs by 30%-40%, a number which will likely only get higher given the current labor situation. In addition, even in facilities that have the necessary workforce, it is often difficult to optimize staff time to ensure that offices are sanitized and used to maximum capacity. Utilizing devices like the R-Zero Beam or Vive can allow medium-to-small size facilities to disinfect rooms constantly and efficiently, making them immediately available for use. Also, by removing the burdensome task of having already overworked clinical or janitorial staff spend time sanitizing rooms, R-Zero’s technology can help improve employee productivity and satisfaction at a time when both are stretched thin. Related Reading: This Startup Wants To Bring Disinfecting UV Light Into “Every Physical Space”, R-Zero Raises $105 Million Series C to Improve the Indoor Air We Breathe, This startup built an ultraviolet device that can disinfect a restaurant in minutes

  • Prescribe Fit – Attacking the Root Cause of MSK Issues

    The Driver: Prescribe Fit is a virtual/telehealth-based orthopedic health startup focused on patients dealing with orthopedic bone and muscle injuries. Prescribe Fit raised $4M in seed funding. The round was led by Tamarind Hill with participation from the Grote Family as well as Mike Kaufman, the former CEO of Cardinal Health. According to the company, proceeds of the funding will be used to aggressively expand the company, as well as broaden and accelerate product development. Key Takeaways: According to the Bone and Joint Initiative USA, 124 million Americans suffer from a musculoskeletal disorder On average, 1 out of 4 elderly adults fall each year and over 800,000 people end up in the hospital due to a fall injury per the U.S. CDC Patients who experienced falls had longer hospital stays and were more frequently discharged to other healthcare facilities, instead of their primary residence according to a study by the Hospital for Special Surgery According to an article in the Journal of Medicine, fear of falling often develops after experiencing a fall, and developing a fear of falling can cause older adults to avoid physical activity, experience more difficulty with activities of daily living, and become less able to perform exercises. The Story: Originally started as a weight loss coaching startup in January 2020, Prescribe Fit was able to secure only one client after enduring the shutdown of all non-essential health services during the Pandemic. Co-founded by CEO Brock Leonti, who previously owned a home health agency for approximately six years, the company worked at that time to help treat obesity and served primary care doctors. While the company was limited to just one client during the Pandemic, they were able to test and refine their model as well as a number of treatment models. 
As part of that, the company gleaned a number of insights including how to successfully use remote patient monitoring technology and the need for limited administrative burden on physicians. The Differentiator: Based on its experience and what it had learned during the Pandemic, in August 2022 Prescribe Fit transitioned its business model to focus solely on orthopedic practices and the treatment of the root causes of MSK issues. According to Leonti, this includes helping orthopedic patients reduce blood pressure, blood sugar, and weight at home and partnering with orthopedic practices to improve their patients’ mental acuity, flexibility, and endurance. As noted in the Columbus Business Journal, “Prescribe Fit has a team of nurses and care coordinators who meet remotely with patients and ‘edit’ their daily routines so their behavior changes stick.” This includes having patients take pictures of their meals and then having coordinators indicate where they may be able to reduce portion sizes or substitute healthier items in their diets. According to Leonti, this has allowed orthopedic patients to obtain 5.4% average weight loss in just 16 weeks and create personalized at-home health plans resulting in 80%+ of patients staying engaged for 9+ months, both of which help improve MSK issues. Implications: According to the Bone and Joint Initiative USA, 124 million Americans suffer from an MSK disorder, yet many end up treating the symptoms without addressing the root cause. In part this is due to the limited availability of orthopedic specialists and other clinicians to address these issues. By connecting these patients, via specialists’ offices, with nurses and other case managers who can address specific dietary and behavior issues that are contributing to these conditions (ex: lack of exercise or inappropriate exercise routines), Prescribe Fit is helping improve the quality of care while lowering the cost. 
Moreover, since patients are being monitored by clinicians using remote patient monitoring (RPM) and chronic disease management tools, physicians are able to create an additional reimbursement stream (while paying Prescribe Fit a management fee). As the U.S. population ages, a larger proportion of the population will have to deal with MSK issues that can lead to falls and injuries, which can often compound into other issues. By addressing these issues and helping patients strengthen bones and improve muscle tone, Prescribe Fit may help reduce the incidence (and cost) of such issues. Related Reading: Health tech startup Prescribe FIT raises $4M in oversubscribed seed round, Weight-loss coaching startup Prescribe Fit doubles with focus on orthopedics

  • ChatGPT in Healthcare: Where it is Now and A Roadmap for Where it is Going-The HSB Blog 2/2/23

    Our Take: AI chatbots such as ChatGPT and tools being developed like it have significant promise in revolutionizing the way care is delivered and the way that patients and care providers can connect with each other. Due to their ease of use and equity of access, patients from all backgrounds can receive effective care, particularly in the fields of medical administration & diagnosis, mental health treatment, patient monitoring, and a variety of other clinical contexts. However, as ChatGPT, the most advanced publicly available AI chatbot yet, is still in its beta stage, it is important to keep in mind that these chatbots operate on a statistical basis and lack real knowledge, which leads them to frequently give inaccurate information and make up solutions via inferences based on the data they are trained on, raising concerns as to whether they can be trusted to deliver correct information. This is especially true for patients with chronic conditions who may be putting their health in danger by following chatbots’ advice that could be seriously flawed. Key Takeaways: Chatbots have the potential to reduce annual US healthcare spending by 5-10%, translating to $200-360 billion in savings (NBER) In a study evaluating the performance of virtual assistants in helping patients maintain physical activity, diet, and track medication, 79% of participants reported virtual assistants had the potential to change their lifestyle (International Journal of Environmental Research and Public Health) Artificial intelligence solutions in healthcare are easier to access than ever before and care providers are quickly adopting AI chatbots to solve deficiencies in manpower and equity of access for tasks they see as easy to automate. 
As noted by STATNews, “ChatGPT was trained on a dataset from late 2021, [so] its knowledge is currently stuck in 2021…it would be crucial for the system to be updated in near real-time for it to remain highly accurate for health care” The Problem: With new developments in advanced medicine and technology including artificial intelligence and machine learning tools, care providers are working hard to adapt these systems to healthcare, particularly where they have the potential to address an ongoing workforce shortage. Moreover, as populations age, the gap between the incidence of disease and treatment options broadens. For example, according to the Journal of Preventing Chronic Disease, in 2018, an estimated 51.8% of US adults had at least one of the ten most commonly diagnosed chronic conditions, and 27.2% of adults had multiple chronic conditions. As a result, hospitals and physicians (providers) are seeing greater levels of care utilization and a need to connect these patients with care resources that address their conditions and/or prevent the conditions from becoming more severe. In addition, given the inefficiencies and disparities in the delivery of care in the U.S. healthcare system (ex: lack of providers in certain rural areas, long wait times for specialists), technology may be best positioned to address these deficiencies and improve outcomes. Over time, as these tools become more sophisticated, they can be used for initial triage with escalation to clinicians for a higher level of care, increasing the use and application of human intelligence/experience where it may be most needed. The Backdrop: The advent of AI chatbots holds great promise in changing the way we deliver and manage care, especially for practices that lack the resources to handle large numbers of patients and the amount of data they generate. 
According to Salesforce, a chatbot (coined from the term “chat robot”) is a computer program that simulates human conversation either by voice or text communication, and is designed to solve a problem. Early versions of chatbots were used to engage customers alongside the classic customer service channels like phone, email, and social media. Current chatbots such as ChatGPT leverage machine learning to continually refine and improve their performance using data provided and analyzed by an algorithm. As noted in WIRED magazine, the technology at the core of ChatGPT is not new: “it is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web.” What makes ChatGPT stand out is “it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5” (which provides an intuitive interface for users to have a conversation with AI). “This tweak has unlocked a new capacity to respond to all kinds of questions, giving the powerful AI model a compelling new interface just about anyone can use.” After creating an account with OpenAI (the developers behind ChatGPT), all users have to do is type their query into a search bar to begin using ChatGPT’s services. Although ChatGPT is still in beta, its capabilities are impressive and it has surpassed any previously publicly available AI chatbot to date. ChatGPT makes it easy to learn, as it can quickly summarize any topic the user wishes, saving hours of research and digging through links to understand a certain topic. It can help people compose written materials on anything they wish, including essays, stories, speeches, resumes, and more. It is also good at helping people come up with ideas, and since AI is particularly good at dealing with volume, it can provide a litany of possible solutions to humans looking for those solutions. 
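Beyond the search-bar interface described above, developers can reach the same GPT-3.5 models programmatically. The snippet below is a minimal sketch, not an official example: it only assembles the JSON body that OpenAI's chat completions endpoint expects for a single-turn query (the endpoint URL and `gpt-3.5-turbo` model name are taken from OpenAI's public documentation; the "helpful assistant" system prompt is our own illustrative choice). Actually sending the request requires a personal API key, so that step appears only as a comment.

```python
import json

# OpenAI's documented chat completions endpoint (illustrative only; not called here)
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_query: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body for a single-turn chat completion request."""
    return {
        "model": model,
        "messages": [
            # The system message steers the assistant's behavior;
            # the user message carries the actual question.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_query},
        ],
    }

payload = build_chat_request("Summarize the pros and cons of chatbots in telehealth.")
print(json.dumps(payload, indent=2))

# Sending the request (requires your own API key; intentionally not executed):
# requests.post(API_URL, headers={"Authorization": f"Bearer {API_KEY}"}, json=payload)
```

This mirrors the conversational framing the article describes: each request is just a list of role-tagged messages, which is what lets "just about anyone" — or any application — query the model.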
Perhaps the largest change it will bring lies in computer programming. ChatGPT and other AI chatbots have been found to be particularly good at writing and fixing computer code, and there is evidence that using AI assistance in coding could cut total programming time in half according to research conducted by GitHub. For certain healthcare administrative tasks, chatbots seem to have a bright future and can connect patients to their care providers in ways that weren’t possible before. Access to healthcare services is one of the most apparent ways, particularly for those living in rural and remote areas far away from the nearest hospital. According to the Journal of Public Health, evidence clearly shows that Americans living in rural areas have higher levels of chronic disease, worse health outcomes, and poorer access to digital health solutions as compared with urban and suburban areas. Not only do individuals living in rural areas live much farther away from their nearest hospital, but the facilities themselves often lack the medical personnel and outpatient services common at more urban hospitals, which contributes to the inconsistencies in care and outcomes. Similarly, given the increased administrative burden that accompanies the digitization of healthcare and healthcare records, doctors are increasingly occupied by a deluge of tasks, some of which are more suited to automation than others. For example, certain tasks like appointment scheduling, medical records management, and responding to routine and frequently asked patient questions aren’t always the most effective use of medical professionals’ time and could be handled in a consumer-friendly and efficient manner by chatbots like ChatGPT. Given the easy way that users are able to interact with ChatGPT, there is also the potential to eliminate some of the traditional barriers to the delivery of healthcare, particularly the one-to-many issue given clinician shortages. 
However, this will not happen near term and will require some refinement. First, as noted by STATNews, “ChatGPT was trained on a dataset from late 2021, [so] its knowledge is currently stuck in 2021…even if the company is able to regularly update the vast swaths of information the tool considers across the internet, it would be crucial for the system to be updated in near real-time for it to remain highly accurate for health care uses.” In addition, the article quoted Elliot Bolton from Stanford’s Center for Research on Foundation Models, who noted that ChatGPT is “susceptible to inventing facts and inventing things, and so text might look [plausible], but may have factual errors.” Bearing that in mind, should ChatGPT follow the path of other chatbots in medicine, it does have potential in a number of clinical settings, particularly in the field of mental health. Here it is important to note that neither ChatGPT nor other chatbots possess the skills of a licensed and trained mental health professional or the ability to make a nuanced diagnosis, so they should not be used for diagnosis, drug therapy, or treatment of patients in severe distress. That said, the study of chatbots in healthcare has been most extensive around mental health, with most systems designed to “empower or improve mental health, perform mental illness screening systems, perform behavior change techniques and in programs to reduce/treat smoking and/or alcohol dependence.” [Review of AI in QAS]. For example, a study from the Journal of Medical Internet Research reported that chatbots have seen promising results in mental health treatment, particularly for depression and anxiety. 
Among other things, “participants reported that chatbots are useful for 1) practicing conversations in a private place; 2) learning; 3) making users feel better; 4) preparing users for interactions with health care providers; 5) implementing the learned skills in daily life; 6) facilitating a sense of accountability from daily check-in; and 7) keeping the learned skills more prominently in users’ minds.” Similarly, in a literature review published in the Canadian Journal of Psychiatry assessing the impact of chatbots in a variety of psychiatric studies, numerous applications were found, including the efficacy of chatbots in helping patients recall details from traumatic experiences, decreasing self-reported anxiety or depression with the use of cognitive behavioral therapy, decreasing alcohol consumption, and helping people who may be reluctant to share their experiences with others to talk through their trauma in a healthy way. However, as pointed out by Mashable, ChatGPT wasn’t designed to provide therapeutic care and “while the chatbot is knowledgeable about mental health and may respond with empathy, it can’t diagnose users with a specific mental condition, nor can it reliably and accurately provide treatment details.” In addition to the general cautions about ChatGPT noted previously (it was only trained on data through 2021 and it may invent facts and things), Mashable notes three additional cautions when using ChatGPT for help with mental illness: 1) It was not designed to function as a therapist and can’t diagnose, noting “therapists may frequently acknowledge when they don’t know an answer to a client’s questions, in contrast to a seemingly all-knowing chatbot” in order to get patients to reflect on their circumstances and discover their own insights; 2) ChatGPT may be knowledgeable about mental health, but it is not always comprehensive or right, pointing out that ChatGPT responses can provide incorrect information and that it was unclear what clinical 
information or treatment protocols it had been trained on; 3) there are [existing] alternatives to using ChatGPT for mental health help, including chatbots which are specifically designed for mental health like Woebot and Wysa, which offer AI-guided therapy for a fee. While it is important to keep these cautions and challenges in mind, they also provide a roadmap of the areas where ChatGPT is likely to be most effective once these issues are addressed. Similarly, chatbots are also good for monitoring patients and tracking symptoms and behaviors, and many are used as virtual assistants to check in on patients’ well-being while ensuring they are adhering to their treatment schedule. A study published in the International Journal of Environmental Research and Public Health evaluated the performance of a virtual health assistant in helping patients maintain physical activity and a healthy diet and track their medication. Results revealed that 79% of participants believed that virtual health assistants have the potential to help change their lifestyles. However, some common complaints were that the chatbot didn’t have as much personality as a real human would, it performed poorly when participants initiated spontaneous communication outside of pre-programmed “check-in” times, and it lacked the ability to provide more personalized feedback. Implications: AI-based chatbots like ChatGPT have the potential to address many of the challenges facing the medical community and help alleviate issues faced due to the workforce shortage. As many have noted, a report by the National Bureau of Economic Research stated that chatbots have the potential to reduce annual US healthcare spending by 5-10%, translating to an estimated $200-360 billion in savings. 
In addition, due to their 24/7 availability, chatbots provide the ability to respond to questions and concerns of patients at any hour, addressing pressing medical issues and reaching people in a non-intrusive way. Moreover, as AI technology continues to develop, an increasing number of healthcare providers are beginning to leverage these solutions to solve persistent industry problems such as high costs, medical personnel shortages, and equity in care delivery. Chatbots are well poised to fill these roles and increase efficiency in the process given they can perform many routine tasks at a level similar to humans. Generally, assessments of physician opinions on the use of chatbots in healthcare indicate a willingness to continue their use, and a study published in the Journal of Medical Internet Research reported that of 100 physicians surveyed regarding the effectiveness of chatbots, 78% believed them to be effective for scheduling appointments, 76% believed them to be helpful in locating nearby health clinics, and 71% believed they could aid in providing medication information. Given current workforce shortages, chatbots can act as virtual assistants to medical professionals and have the potential to greatly expand a physician’s capabilities and reduce the need for auxiliary staff to attend to such matters. Although AI chatbot platforms and algorithmic solutions show great promise in optimizing routine work tasks, current technology is not yet sufficient to allow independent operation as there are certain nuances that are best addressed by humans. Also, as one research review found, “acceptance of new technology among clinicians is limited, which possibly can be explained by misconceptions about the accuracy of the technology.” Along with the opportunities for ChatGPT in healthcare, there are a number of challenges to implementing the technology. 
According to a study from the Journal of Medicine, Health Care, and Philosophy, since chatbots lack the lived experience, empathy, and understanding of unique situations that real-world medical professionals have, they should not be trusted to provide detailed patient assessments, analyze emergency health situations, or triage patients because they may inadvertently give false or flawed advice without knowledge of the personal factors affecting patients’ health conditions. Some clinicians are worried that they may one day be replaced by AI-powered machines or chatbots, which lack the personal touch and often the specific facts and data that make in-person consultations significantly more effective. While over time AI may be able to mimic human responses, chatbots will still need to be developed so they can effectively react to unexpected and unusual user inputs, provide detailed and factual responses, and deliver a wide range of variability in their responses in order to have a future in clinical practices. This will ultimately require further developer input and more experience interacting with patients in order to adequately personalize chatbot conversations. Additionally, despite safeguards put in place by developers, like the many pre-programmed controls to decline requests that it cannot handle and the ability to block “unsafe” and “sensitive” content, an article published in WIRED Magazine noted that ChatGPT will sometimes fabricate facts, restate incorrect information, and exhibit hateful statements & bias that previous users may have expressed to it, leading to unfair treatment of certain groups. The article noted that the safeguards put in place by ChatGPT’s developers can easily be bypassed using different wording to ask a question and emphasized the need for strong and comprehensive protections to prevent abuse of these systems. 
In addition, there is also the need for data security to ensure patient privacy, as all of this data will be fed to private companies developing these tools. As the aforementioned Mashable article noted about using ChatGPT for mental health advice, while “therapists are prohibited by law from sharing client information, people who use ChatGPT…do not have the same privacy protections.” Related Reading: ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw, A Process Evaluation Examining the Performance, Adherence, and Acceptability of a Physical Activity and Diet Artificial Intelligence Virtual Health Assistant, The Potential Impact of Artificial Intelligence on Healthcare Spending, Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape
