


  • Strive Health: At-Home & Virtual Value-Based Kidney Care

    The Driver: Strive Health, a provider of technology-enabled, value-based at-home and virtual kidney care, recently raised $166M in Series C funding in a round led by NEA with participation from CVS Health Ventures, CapitalG, Echo Health Ventures, Town Hall Ventures, Ascension Ventures, and Redpoint. This brings the company’s total funding to $386M since its founding. According to the company, the funding will be used to expand partnerships with Medicare Advantage and commercial payers, as well as to expand into new and existing markets.

    Key Takeaways:
      • More than 1 in 7 adults in the U.S. (15% of the adult population) have kidney disease, and approximately 90% of those with kidney disease don't know they have it (National Kidney Foundation)
      • Strive is currently responsible for $2.5B of medical spending and has realized a 20% reduction in the total cost of kidney care and a 42% reduction in hospitalizations (Strive Health)
      • Approximately 34M people have undiagnosed, early-stage (stage 1-3) kidney disease, accounting for over 50% of the cost of kidney disease (U.S. Renal Data System)
      • McKinsey estimates that between 15-25% of dialysis spending could be shifted to the home, accounting for about $5B in savings

    The Story: Strive Health was co-founded by CEO Chris Riopelle and Bob Badal, both formerly of DaVita. As noted by the Denver Business Journal, Riopelle and Badal realized that only a small percentage of those with chronic kidney disease, about 10%, are aware of it and have been diagnosed. In fact, “about 80% of patients start on dialysis by crashing into it: getting so sick that they end up in the hospital, and learning there that their kidneys have shut down and they need to start treatment.” Riopelle and Badal realized there had to be a better way and understood that finding a way to diagnose and intervene with these patients earlier in the process could improve the experience and the quality of care.
As noted on the company’s website, Riopelle wants to help those who deserve a better patient journey and is deeply committed to fundamentally changing how kidney care works for the almost 40M patients in need in America. Currently, “kidney care focuses almost entirely on ESKD and in-center dialysis, missing the primary drivers of high cost and poor outcomes. Strive is [attempting to change] the kidney care paradigm to identify patients earlier, prioritize the right care at the right time and drive better outcomes – all while lowering costs.” The Differentiators: Strive’s care model combines technologies such as artificial intelligence to empower caregivers to, in the company’s words, “methodically reinvent kidney care”. According to the company, Strive’s Care Multiplier uses machine learning models trained on over 100 million patient records to calculate risk scores, end-stage kidney disease crash predictions, admission and readmission predictions, and disease progression predictions. In addition, the company states that its artificial intelligence algorithms can identify patients whose kidney disease is undiagnosed, thereby allowing earlier interventions. Strive then uses this data to “form an integrated care delivery system that supports the entire patient journey from chronic kidney disease (CKD) to end-stage kidney disease (ESKD).” As highlighted by MedCity News, “The company provides at-home and virtual support for chronic kidney disease, end-stage kidney disease, dialysis, and kidney transplant and connects the patients with a care team, nurse practitioner, registered nurse, case manager, and care coordinator.” Strive states that it is currently responsible for $2.5B of medical spending and has realized a 20% reduction in the total cost of kidney care, a 42% reduction in hospitalizations, and a 94% overall patient satisfaction rate.
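The kind of risk scoring described above (claims- and lab-derived features feeding a supervised model that outputs a crash or progression probability) can be sketched with a toy logistic scorer. The features, coefficients, and intercept below are invented for illustration only; they are not Strive's actual Care Multiplier model.

```python
import math

# Hypothetical coefficients for a toy CKD "crash" risk model.
# These values are illustrative, not from any real clinical model.
COEFFS = {
    "age": 0.03,            # per year of age
    "egfr_decline": 0.08,   # per mL/min/1.73m^2 lost over the past year
    "diabetes": 0.9,        # comorbidity indicator (0 or 1)
    "hypertension": 0.6,    # comorbidity indicator (0 or 1)
}
INTERCEPT = -6.0

def crash_risk(patient: dict) -> float:
    """Return a 0-1 score of the risk that a patient 'crashes' into
    dialysis, using a simple logistic model over the features above."""
    z = INTERCEPT + sum(COEFFS[k] * patient.get(k, 0) for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

# A rapidly declining older patient with comorbidities scores higher
# than a stable younger patient, which is the behavior a care team
# would use to prioritize early outreach.
high = crash_risk({"age": 70, "egfr_decline": 12, "diabetes": 1, "hypertension": 1})
low = crash_risk({"age": 50, "egfr_decline": 1, "diabetes": 0, "hypertension": 0})
```

In a production system the coefficients would of course be learned from historical patient records rather than hand-set, and the score would feed a workflow for prioritizing clinical outreach.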
The company serves 80,000 CKD and ESRD patients in 30 states and has established partnerships with 600 nephrology providers across 10 states. The Big Picture: According to the National Kidney Foundation, approximately 90% of those with kidney disease don’t know they have it, and Medicare costs for all people with all stages of kidney disease were $130 billion. For patients in need of a kidney transplant, this amounts to over $80K in spending per person per year, much of which could have been prevented with early detection and treatment. Moreover, over 50% of the cost of untreated kidney disease is for people with early-stage (stage 1-3) disease, where companies like Strive Health and its competitors (Monogram, Somatus) could make a meaningful impact on the cost of care and patient quality of life by helping to move kidney care out of facilities and improving its effectiveness. This would also have a significant financial impact, as McKinsey estimates that between 15-25% of dialysis spending could be shifted to the home, amounting to about $5B in savings. Thus Strive and its competitors could significantly reduce the total cost of care. This is also propelled by regulatory moves such as the 21st Century Cures Act, which now allows dialysis patients to enroll in Medicare Advantage plans. By combining the power of improved data and analytics with personalized care, companies like Strive may be able to change the progression of this very costly chronic disease. Related Reading: Strive Health Rakes In $166M for Value-based Kidney Care; Strive Health grabs $166M to provide end-to-end kidney care
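As a quick back-of-the-envelope check on the McKinsey figure, the sketch below assumes roughly $28B in total annual U.S. dialysis spending; that total is our own assumption for illustration, not a number from the article.

```python
# Sanity-check the "~$5B shifted to home dialysis" estimate.
# TOTAL_DIALYSIS_SPEND (~$28B/year) is an assumed figure for illustration.
TOTAL_DIALYSIS_SPEND = 28e9

low_shift = 0.15 * TOTAL_DIALYSIS_SPEND   # 15% of spending shifted home
high_shift = 0.25 * TOTAL_DIALYSIS_SPEND  # 25% of spending shifted home

# 15-25% of ~$28B is roughly $4.2B-$7.0B, which brackets the ~$5B cited.
print(f"${low_shift / 1e9:.1f}B - ${high_shift / 1e9:.1f}B")
```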

  • Generative AI as Enabler of Culturally Competent Care-The HSB Blog 5/26/23

    Our Take: Differences in language and culture, as well as inaccurate or insensitive communication, can lead patients to refuse or distrust care from clinicians of other backgrounds (for example, Black patients with white doctors, or Chinese-speaking patients with English-speaking doctors). Technology like generative AI may help narrow that gap by employing culturally appropriate language. By using generative AI to process natural language, language models and chatbots can be developed to understand and respond to patients of all backgrounds in a culturally appropriate way. Generative AI can be trained on data sets containing cultural backgrounds, dialects, and linguistic nuances, allowing it to understand a variety of accents and dialects and enabling individuals from different cultural backgrounds to interact effectively with the technology. Giving patients the ability to engage with technology in a way that respects their cultural traditions and language preferences promotes a sense of dignity, respect, and empowerment. By working toward cultural inclusion, technology has the potential not only to reduce differences, but to promote understanding, empathy, and harmony in our increasingly interconnected world.

    Key Takeaways:
      • African Americans and Latinos experience 30% to 40% poorer health outcomes than White Americans
      • Research shows poor care for the underserved stems from fear, lack of access to quality healthcare, distrust of doctors, and symptoms and pains that are often dismissed
      • One study found that Black patients were significantly less likely than white patients to receive analgesics for extremity fractures in the emergency room (57% vs. 74%), despite having similar self-reports of pain
      • For each additional standard deviation of improvement in hospitals' cultural competency scores, patient satisfaction surveys showed an increase of 0.9% in nurse communication and 1.3% in staff responsiveness

    The Problem: While culturally appropriate language is important to promote inclusiveness and reduce disparities, there are challenges and potential problems with its implementation. Because cultural norms and practices vary greatly between communities and regions in the culturally diverse United States, they alone cannot ensure that everyone living in the country will feel culturally respected. Because of differences in tradition, religion, society, language, and socialization, individuals within various communities may not feel respected or secure. With literally thousands of languages and dialects in the world, each with its own unique cultural background, dealing with linguistic diversity and ensuring culturally appropriate language can become quite a complex task. In health care, this has been shown to lead to neglect or underservice in certain communities through intentional and unintentional slights. As society continues to change, social inclusiveness cannot be overlooked as an integral component of care. Inclusiveness is a necessary element of care so that patients feel respected and valued in a system that recognizes the cultural practices and identities of different communities. Research has demonstrated that this leads to improved clinician-patient interactions, compliance, and data sharing by patients.
More recently, the health care system has come to recognize the impact that the unique cultural needs of often-overlooked groups, such as people with disabilities and lesbian, gay, bisexual, and transgender (LGBT) people, have on the quality and effectiveness of care as well (while not racial or ethnic groups, these provide further evidence of the need for culturally competent care). The Backdrop: According to “Understanding Cultural Differences in the U.S.” from the USAHello website, cultures can differ in 18 different ways, including communication, physical contact (shaking hands, personal space), manners, political correctness, family, treatment of women and girls (men and women going to school/work together, sharing tasks), elders (multigenerational homes), marriage (traditions, views on same-sex marriage), health, education, work, time, money, tips, religion, holidays, names, and language. Recognizing and understanding cultural differences is important to create trust and security with patients. Generative AI can help bridge language and cultural barriers that often prevent non-English speakers from accessing essential health services. According to Marcin Frąckiewicz, “generative AI can serve as a virtual health assistant, providing accurate and personalized health advice to users. By making health information more accessible, individuals can make better-informed decisions about their health and well-being.” Similarly, speech synthesis can provide text-to-speech capabilities in a variety of languages, enabling technology to communicate with users in their preferred language. This can often be done more rapidly and more efficiently than having to locate an appropriate translator. Generative AI can also support multicultural expression in products and services, for example by generating a variety of visual depictions, avatars, and characters of different racial, ethnic, and cultural backgrounds. Inclusion can be promoted through such technologies, allowing users to feel represented and valued. In addition, generative AI can help identify and correct potential instances of cultural insensitivity or technological bias. Moreover, since generative AI development is iterative, it allows for continuous improvement, ensuring that culturally appropriate language and experiences are prioritized. Implications: While still in the developmental stages, generative AI can be one tool to assist healthcare organizations in delivering culturally competent care. At its most fundamental level, generative AI models can provide translation services to help overcome language barriers and facilitate communication between healthcare providers and patients who speak different languages. This can enhance doctor-patient interaction, improve patient satisfaction, and reduce misunderstanding. Although communications created with generative AI will likely lack nuance and will have difficulty creating empathetic emotional connections with patients, they are a step in the right direction. Over time, we expect generative AI models to be able to provide up-to-date information about medical conditions, treatment guidelines, medications, and research results. Moreover, by collecting and analyzing patient data through remote monitoring, facilitating virtual consultations, and providing real-time information and guidance, technologies like generative AI can contribute to telemedicine initiatives that improve access to health care services, particularly for those living in remote areas or with limited mobility.
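To make the translation step concrete, here is a minimal sketch of preferred-language message selection with an English fallback. The template table, message kinds, and language tags are all hypothetical; a real deployment would generate or translate text with a generative model and keep clinician review in the loop for safety.

```python
# Hypothetical localized templates keyed by (message kind, language tag).
# In practice these would be produced by a generative/translation model.
TEMPLATES = {
    ("appointment_reminder", "en"): "Your appointment is on {date}.",
    ("appointment_reminder", "es"): "Su cita es el {date}.",
    ("appointment_reminder", "zh"): "您的预约时间是{date}。",
}

def localized_message(kind: str, lang: str, **fields) -> str:
    """Render a patient message in the patient's preferred language,
    falling back to English when no localized template exists."""
    template = TEMPLATES.get((kind, lang)) or TEMPLATES[(kind, "en")]
    return template.format(**fields)
```

Even this trivial fallback logic captures the key design point in the passage above: the patient's language preference, not the system's default, drives the interaction.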
Related Reading: “Hurtling into the future”: The potential and thorny ethics of generative AI in healthcare; Structural Racism In Historical And Modern US Health Care Policy; Can Hospital Cultural Competency Reduce Disparities in Patient Experiences with Care?; Racism and discrimination in health care: Providers and patients

  • Exploring Latine Culture-Reducing Disparities Begins with Cultural Sensitivity-The HSB Blog 5/5/23

    As we were preparing for this week’s “Our Take”, we had a little internal debate about Cinco De Mayo. Was it a Mexican holiday, or was it a Latino holiday? As we explored this, we came across an article from the Washington Post entitled “Cinco de Mayo is not a Mexican holiday. It's an American one,” which argued “Cinco de Mayo is a celebration created by and for Latino communities in the United States. And the celebration of Cinco de Mayo is more about U.S. Latino history and culture than Mexican history.” As we read this, we realized there is often a tendency to view and categorize other cultures with monolithic and homogeneous labels that are often inadequate. Doing so can lead to broad generalizations that perpetuate the inequities in health care. Increasing cultural sensitization is a way to begin addressing and reducing those inequities. If we are to understand ethnic disparities in health care and deliver culturally appropriate and equitable care, we need to understand the nuances and idiosyncrasies of other cultures. Cinco de Mayo seemed a good place to start, so we found the article “Ethnic Bias and the Latine Experience” in the American Counseling Association’s magazine, Counseling Today. While we don’t necessarily agree with everything in the article, we found this to be one of the most thorough and broad attempts to understand the Latine culture in the U.S. and, as a result, we reprint an excerpt from it here with permission.

    Key Takeaways:
      • Only 23% of U.S. adults who self-identify as Hispanic or Latino had heard of the term Latinx, and only 3% said they use Latinx to describe themselves (Pew Research Center)
      • The U.S. Latine population was 62.1 million in 2020, or 19% of all Americans, and is projected to increase to 111.2 million, or 28% of the U.S. population, by 2060 (Pew Research Center & UCLA Latino Policy & Politics Institute)
      • Hispanic is the oldest and most widely used term to describe Spanish-speaking communities.
It was created as a “super category” for the 1970 census after Mexican American and other Hispanic organizations advocated for federal data collection. In 2019, 61.5% of Latines were of Mexican origin or heritage, 9.7% were Puerto Rican or of Puerto Rican heritage, and Cubans, Salvadorans, Dominicans, Guatemalans, Colombians, and Hondurans each numbered a million or more. Ethnic Bias and the Latine Experience “She looks Latina.” “He doesn’t look Black.” “They sound Hispanic.” “She doesn’t sound Asian.” “I think they’re mixed.” Conversations all around us bear witness to the inclination to classify people into groups. This categorization of people is built into the fabric of American life, a fabric not originally intended to cover everyone. Inherent advantages and dominance historically favored white male landowners (with the exception of Jewish or Catholic men). Like Indigenous and Black communities, people of Hispanic or Latine descent continue to navigate a system not created for them. (In the next section, we explain why we prefer to use the term Latine as a gender-neutral or nonbinary alternative to Latino.) The objective of this article is to enhance counselors’ cultural sensitivities when providing services to Latine communities. We will discuss the unique discrimination challenges faced by Latines and provide tips for counselor effectiveness. A culturally responsive discussion about the mental health effects of ethnic bias on the Latine experience begins with a definition of key terms. The American Psychological Association’s (APA) Dictionary of Psychology defines ethnic as “denoting or referring to a group of people having a shared social, cultural, linguistic, and usually racial background,” and it can sometimes include the religious background of a group of people. The U.S. census has only two ethnic categories: Hispanic (Latino) and non-Hispanic (non-Latino).
Ethnic bias is discrimination against individuals based on their ethnic group, often resulting in inequities. Nomenclature is problematic and ever evolving in the U.S. system of categorizing people into racial and ethnic groups. Every racialized group in the United States has gone through numerous label adjustments from within and outside the group. For example, First Nation people have been called Indian, American Indian, Native American, and Indigenous American. People of African descent have been called Colored, Negro, Black, Afro-American, and African American. Similarly, the word choices for the collective Hispanic description have also evolved over the years: Hispanic, Latino/a, Latinx and Latine. These are pan-ethnic terms representing cultural origins — regardless of race — for people with an ancestral heritage from Latin American countries and territories, who according to the Pew Research Center, prefer to be identified by their ancestral land (e.g., Mexican, Cuban, Ecuadorian) rather than by a collective pan-ethnic label. The history, and the debate, of nomenclature for this collective group set the stage for understanding the ethnicization of a large and diverse population. Hispanic is the oldest and most widely used term to describe Spanish-speaking communities. According to the Pew Research Center, the term Hispanic was first used after Mexican American and other Hispanic organizations advocated for federal data collection about U.S. residents of Mexico, Cuba, Puerto Rico, Central and South America, and other Spanish-speaking origins. The U.S. Census Bureau responded by creating the “super category” of Hispanic on the 1970 census. The term Latino/a gained popularity in the 1990s to represent communities of people who descend from or live in Latin American regions, regardless of their language of origin (excluding people from Spain). 
This allowed for gender separation, with Latina representing the female gender and Latino representing the male gender or combined male-female groups. The Pew Research Center noted that Latino first appeared on the U.S. census in 2000 alongside Hispanic, and the two terms are now used interchangeably. While the two terms often overlap, there are exceptions. People from Brazil, French Guiana, Surinam and Guyana, for example, are Latino because these countries are in Latin America, but they are not considered Hispanic because they’re not primarily Spanish-speaking. These regions were colonized by the French, Portuguese, and Italians, so their languages derive from other ancient Latin-based languages instead of Spanish. Latinx has been used as a more progressive, gender-neutral or nonbinary alternative to Latino. Latinx emerged as the preferred term for people who saw gender inclusivity and intersectionality represented through use of the letter “x.” Others, however, note that “x” is a letter forced into languages during colonial conquests, so they reject the imposing use of this colonizing letter. Interestingly, for the population it is intended to identify, only 23% of U.S. adults who self-identify as Hispanic or Latino had heard of the term Latinx, according to a Pew survey of U.S. Hispanic adults conducted in December 2019. And only 3% said they use Latinx to describe themselves. In this article, we use the term Latine, the newest word used by this population. Latine has the letter “e” to represent gender neutrality. We like this term because it comes from within the population rather than being assigned by others, and it is void of the controversial “x” introduced by colonists. Next, we look at the meaning and impact of ethnicization and ethnic bias toward Latines in the United States.
We explore the ways that bias and discrimination affect the nation’s largest group of minoritized people, and we recommend actionable solutions to enhance counselors’ cultural sensitivities when providing services to Latine communities. Latines come from more than 20 Latin American countries and several territories, including the U.S. territory of Puerto Rico. There are seven countries in Central America: El Salvador, Costa Rica, Belize, Guatemala, Honduras, Nicaragua, and Panama. The official language in six of these countries is Spanish, with English being the official language of much of the Caribbean coast including Belize (in addition to Indigenous languages spoken throughout the region). South America has three major territories and 12 countries: Brazil, Argentina, Paraguay, Uruguay, Chile, Bolivia, Peru, Ecuador, Colombia, Venezuela, Guyana and Suriname. The official language in most of these countries is Spanish, followed by Portuguese, although it is estimated that there are over a thousand different tribal languages and dialects spoken in many of these countries. The Pew Research Center reported that the U.S. Latine population reached 62.1 million in 2020, accounting for 19% of all Americans. In 2019, 61.5% of all Latines indicated they were of Mexican origin, either born in Mexico or with ancestral roots in Mexico. The next largest group, comprising 9.7% of the U.S. Latine population, are either Puerto Rican born or of Puerto Rican heritage. Cubans, Salvadorans, Dominicans, Guatemalans, Colombians and Hondurans each had a population of a million or more in 2019. Although there are notable similarities, the Latine population is not an ethnic monolith. Latine cultures are diverse, with different foods, folklore, Spanish dialects, religious nuances, rituals and cultural celebrations. Despite the varying cultural experiences, many of the issues facing Latine communities remain the same. Copyright Counseling Today, October 2022, American Counseling Association

  • Oshi Health: Virtual Care Comes to Digestive Health (An Update)

    (We originally profiled Oshi Health in October of 2021; please see our original Scouting Report, “Scouting Report-Oshi Health: Virtual Care Comes to Digestive Health,” dated 10/22/21 here). The Driver: Oshi Health, a digital gastrointestinal care startup, recently raised $30M in Series B funding in a round led by Koch Disruptive Technologies with participation from existing investors including Flare Capital Partners, Bessemer Venture Partners, Frist Cressey Ventures, CVS Health Ventures, and Takeda Digital Ventures. In addition to the institutional investors, individual investors who joined the Series A round included Jonathan Bush, founder and CEO of Zus Health (and cofounder of Athenahealth), and Russell Glass, CEO of Headspace Health. According to the company, the funding will be used to accelerate the next phase of Oshi’s growth, to scale its clinical team nationwide, and to forge relationships with health plans, employers, channel partners, and provider groups.

    Key Takeaways:
      • Approximately 15% of U.S. households have a person who uses diet to manage a health condition
      • A study sponsored by the company and presented at the Institute for Healthcare Improvement in January 2023 found Oshi’s program resulted in all-cause medical cost savings of approximately $11K per patient in just six months
      • According to Vivante Health, $136B per year is spent on GI conditions in the U.S., more than heart disease ($113B), trauma ($103B), or mental health ($99B)
      • Direct costs for IBS alone are as high as $10 billion, and indirect costs can total almost $20 billion

    The Story: CEO and founder Sam Holliday encountered GI issues while observing his mother and sister’s ordeal managing their own IBS care. Holliday’s experience watching his family deal with the difficulties of managing IBS without clinical assistance prompted his interest in companies like Virta Health that use food as medicine for treating diabetes.
Holliday saw the contrast between his family’s experience and Virta’s approach of supporting people in reversing the impact of diabetes on their daily lives. Holliday was fascinated with Virta’s holistic approach to care, which prioritized the user's experience while saving costs and providing easy access to virtual care that leveraged technology and was data driven. The possibilities that Virta’s virtual model presented for GI care spurred the idea of creating Oshi Health. Oshi Health’s platform supports patients by granting them access to GI specialists, prescriptions, and lab work from the comfort of their homes. Oshi Health provides comprehensive, patient-focused care to patients with GI conditions such as Irritable Bowel Syndrome (IBS), Crohn’s disease, Inflammatory Bowel Disease (IBD), and Gastroesophageal Reflux Disease (GERD). Oshi Health works by connecting patients with an integrated team of GI specialists, including board-certified gastroenterologists and registered dieticians. Oshi Health has clinicians who assess symptoms and order lab tests and diagnostics if needed. In addition to the licensed GI doctors and dieticians, patients have the option of speaking with GI-specialized mental health clinicians and nurse practitioners as well. This allows patients to form a customized plan, as many of these conditions often involve a mental health component in addition to a physical one. The customized plan attempts to capture the patient’s needs regarding anxiety, nutrition, or stress. The Oshi Health service extends beyond testing and planning by providing stand-by health coaches and care teams to support patients and help them stay on track. Oshi Health’s platform also offers an app designed to help patients take action and stay organized through their GI care journey. With the app, patients can record their symptoms, quality-of-life measurements, and other factors known to impact their diet, sleep, or exercise.
The app also features useful educational materials and recipes to help patients learn more about their condition. The company states its products and services are currently available to over 20 million people as a preferred in-network virtual gastroenterology clinic for national and regional insurers, as well as their employer customers. In April of this year, Oshi and Aetna announced a partnership that provides Aetna commercial members with in-network access to Oshi’s integrated multidisciplinary care teams. In addition, in March of 2022, Firefly Health, a virtual-first healthcare company, named Oshi as its preferred partner for digestive care. In January 2023, the company announced results of a company-sponsored clinical trial at the Institute for Healthcare Improvement (IHI). The study demonstrated that virtual multidisciplinary care for gastrointestinal (GI) disorders from Oshi Health resulted in significantly higher levels of patient engagement, satisfaction, and symptom control, producing all-cause medical cost savings of approximately $11K per patient in just six months. The Differentiators: About 1 in 4 people suffer from diagnosed GI conditions, and many more suffer from chronic undiagnosed symptoms. “GI conditions are really stigmatized,” says Holliday. “Integrated GI care is a missing piece of the healthcare infrastructure. There’s a huge group of people who don’t have anywhere to go to access care that’s proven to work.” By using this virtual-first model and increasing access to care at a lower cost for patients, Oshi Health is attempting to revolutionize GI care. Through its approach Oshi is attempting to tap into the large market for food-related health conditions. For example, according to a report entitled “Let Food Be Thy Medicine: Americans Use Diet to Manage Chronic Ailments,” approximately 15% of U.S. households have a person who uses diet to manage a health condition.
Oshi calculates that these conditions drive approximately $135 billion in annual healthcare costs and have a collective impact greater than diabetes, heart disease, and mental health combined. Oshi Health plans to use a community of inflammatory bowel disease (IBD) patients for disease research, personalized insights, and a new digital therapeutic. Oshi Health is one of the first companies to address GI disorders in a way that is easily accessible for patients through a virtual approach, and it is the only virtual platform exclusively for GI patients. Being virtual helps patients receive treatment from the convenience of their home, reduces stigma, and saves them the time and hassle of commuting for GI treatment, which can involve extensive and costly testing. Patients are in control of their care and have access to a team of medical professionals at their fingertips. The Big Picture: Oshi Health’s commitment to making GI care accessible, convenient, and affordable should help lower the costs of treatments such as cognitive-behavioral therapy, colonoscopies, X-rays, sonograms, and more in healthcare centers. Oshi Health helps avoid preventable and expensive ER visits as well as unnecessary colonoscopies and endoscopies. For example, according to a study cited by the company, unmanaged digestive symptoms are the #1 cause of emergency department treat-and-release visits. By incorporating access to psychologists in the membership plan, rather than requiring it to be paid out of pocket, Oshi allows patients to access the services of mental health professionals quickly and easily. Patients can schedule appointments within 3 days, and there is always support between visits with care plan implementation. By incorporating both physical and mental health, Oshi Health is attempting to treat not just the symptoms but the causes as well, helping to lower costs versus in-person care while reducing the incidence of GI conditions and increasing the quality of care patients receive.
As noted by the company, “Oshi Health is able to intercept and change the trajectory of unmanaged symptom escalations,” helping to drive improvements in outcomes and cost savings. Billions of dollars can be saved annually on avoidable treatments and expenses when these patients receive the treatment they need rather than physicians ordering expensive tests without knowing the root cause of their problems. Due to the comprehensiveness of the virtual-first care model, physicians, dieticians, health coaches, care coordinators, and psychologists are all involved in the patient's journey. Under traditional models, patients with GI issues see just a gastroenterologist. By contrast, in Oshi’s model, a holistic team is involved every step of the way, creating a personalized care plan customized for each patient’s lifestyle. Related Reading: Oshi Health scores $30M to scale gastrointestinal care company; Virtual GI care startup Oshi Health takes up $30M backed by CVS, Takeda venture arms

  • What Sports Analytics Can Teach Us About Integrating AI Into Care

    Our Take: Last week, our founder Jeff Englander had the pleasure of having dinner with Dr. David Rhew, Global Chief Medical Officer and V.P. of Healthcare for Microsoft. After a broad-ranging discussion about the application of AI, AR, VR, and cloud to healthcare, they came to realize they were both big sports fans, David of his hometown Detroit teams and Jeff of his hometown Boston teams. Toward the end of dinner, Jeff referenced an article he had written in May of 2018, “What Steph & LeBron Can Teach Business About Analytics,” on how sports demonstrate many practical ways to gain acceptance for and integrate analytics into an organization. Based on that discussion, we thought it would be timely to reprint it here: As I sat watching the NBA conference finals last night, I began thinking about what I had learned from the MIT Sloan Sports Analytics conferences I attended over the last several years. I thought about how successful sports, and the NBA in particular, had been in applying sports analytics, and what lessons businesses could learn to help them apply analytics to their own operations. Five basic skills stood out that sports franchises had been able to apply to their organizations that were readily transferrable to the business world: intensive focus; deep integration; limited analytic “burden”; trust in the process; and communication and alignment. 1) Intensive focus - when teams deploy sports analytics they bring incredible focus to the task. One player noted that they do not focus on how to stop LeBron, not even on how to stop LeBron from going left off the dribble, but instead on how to stop LeBron from going left off the dribble coming off the pick and roll. This degree of pinpoint analysis and application of the data has contributed to the success and continued refinement of sports analytics on the court, field, rink, etc.
2) Deep integration - each of the analytics groups I spoke with attempted to informally integrate their interactions into the daily routines of players and coaches through natural interactions (the Warriors’ analytics guy used to rebound for Steph Curry at practice). Analytics groups worked to demystify what they were doing and make themselves approachable. The former St. Louis, now L.A., Rams analytics group jokingly dubbed its office the “nerd’s nest”. By integrating themselves into the players’ (and coaches’) worlds they were able to break down stereotypes and barriers to acceptance of analytics. 3) Limited analytic “burden” - teams’ data science groups noted that, given the amount of data they generate, it’s important to limit the number of insights they present at any one time. One group made it a rule to discuss or review no more than 3 analytical insights per week with players or coaches. This made their work more accessible and more tangible to players/coaches and helped them quantify the value to the front office. 4) Trust in the process - best illustrated by a player who told the story of working with an analytics group and coaches to design a game plan against an elite offensive player, which he followed and executed to a tee. But that night the opposition player couldn’t be stopped and, in the player’s words, “he dropped 30 on me”. The other panelists pointed out that you can’t go away from your system based on short-term results. As one coach noted, “don’t fail the plan, let the plan fail you…Have faith in the process.” 5) Communication and alignment - last but not least, teams stressed the need to be aligned and to communicate that concept clearly all throughout the organization. As Scott Brooks, at the time the coach of the Orlando Magic, noted, “we are all in this together, we have to figure this out together”. Surprisingly, at times communication was paramount even for the most successful and highly compensated athletes. 
For example, at last year’s conference, Chris Bosh, a 5x All-Star and 2x NBA Champion making $18M a year at the time, lamented the grueling Miami Heat practices during their 27-game winning streak in 2013 (at the time the 2nd longest winning streak in NBA history), seemingly despite their success. When I asked him what would have made it more bearable, he said communication, just better communication on what they were trying to do. Clearly, professional sports have very successfully applied analytics to their craft, and there are a number of lessons that businesses can copy as they seek to gain broader and more effective adoption of analytics throughout the value chain.

  • AI to Reduce Time to Life-Saving Treatments

    The Driver: Viz recently raised $40M in growth capital from CIBC Innovation Banking. The additional funding brings Viz’s total fundraising to approximately $292M. Viz has developed a software platform based on artificial intelligence (AI) that is designed to improve communication between care teams handling emergency patients (first applied to stroke patients) by helping improve care coordination and dramatically improve response times. The company will use the funds to power its expansion, including the possibility of acquisitions. Key Takeaways: The total cost of strokes in the U.S. was approximately $220 billion, including $38.1 billion due to under-employment and $30.4 billion from premature mortality (Viz & Journal of the Neurological Sciences) The risk of having a first stroke is nearly twice as high for Blacks as for whites, and Blacks have the highest rate of death due to stroke (American Stroke Association) Each one minute [delay in care for stroke victims] translates into 2 million brain cells that die (Viz) Stroke is the number one cause of adult disability in the U.S. and the fifth leading cause of death (American Stroke Association) The Story: Viz was founded by Dr. Chris Mansi and David Golan. While working as a neurosurgeon in the U.K., Dr. Mansi observed situations in which a successful surgery was performed yet patients would not survive because of extended time lapses between diagnosis and surgery, which was particularly true with strokes. For example, when doctors believe there has been a stroke, they typically order x-rays and a series of CT scans, and while the scans themselves typically happen quickly, there often is a sizeable delay before the studies can be read by a competent professional. 
Once the readings were performed by a radiologist, there was often a further delay in care as clinicians had to inform a local stroke center of the diagnosis and then ensure the patient was transferred to that center for treatment. Dr. Mansi met Golan in graduate school at Stanford while studying for his M.B.A. Golan was suspected of having suffered a stroke prior to entering Stanford, and the two classmates lamented the lack of available data for stroke treatment. As noted by the company, “Mansi learned how undertreated large vessel occlusion (LVO) strokes were and wanted to be an agent of change.” Mansi and Golan collaborated on a plan to apply A.I. to increase the data for stroke treatment, and Viz was born. As noted in a recent Forbes article, the company’s “software cross-references CT images of a patient’s brain with its database of scans to find early signs of LVO strokes. It then alerts doctors, who see [and communicate] about the images on their phones” and allows those clinicians to communicate with specialists at stroke centers and arrange for patients to be transferred there for care. According to the company, this leads to dramatic decreases in the time it takes for patients to go from diagnosis to procedure, commonly referred to as “door to groin puncture time.” Originally developed for LVOs, Viz has now received FDA approval for 7 A.I. imaging solutions and has extended treatment from LVOs to cerebral aneurysms (February 2022), subdural hemorrhage (July 2022), and most recently hypertrophic cardiomyopathy (HCM, pending). The Differentiators: As noted above, Viz’s system automatically scans all images in a hospital system for the noted conditions, then alerts clinicians if any are detected. In the case of LVOs, the system then allows doctors to view images of patient scans on their phones, exchange messages, and cut crucial time off diagnosis and treatment. 
As the company notes, this is particularly important for smaller facilities, which often lack specialists to interpret scans and arrange for transitions in care. For example, in a study in the American Journal of Neuroradiology looking at stroke treatment at a facility using Viz’s technology, researchers found “robust improvement” in stroke response metrics, including door-to-device and door-to-recanalization times, and a 22% overall decline in time to treatment. This is particularly important in the case of strokes, which are the number one cause of disability and the fifth leading cause of death, as time is of the essence for stroke victims, with each minute of delay adding one week of disability. Implications: Applying technology to help reduce delays in diagnosis and treatment is one of the most promising applications of artificial intelligence because of the vast amounts of data these systems can process in short periods of time. While over time many hope, and some fear, that these types of technologies will be able to be “taught” how to diagnose and treat illness, in the near term their greatest use lies in augmenting the skills of clinicians by allowing them to focus their attention on areas most in need of an experienced, nuanced diagnosis. This is particularly true for brain injuries, where literally every second and every minute count. For example, as noted by the company, “every one minute [delay in care for stroke victims] translates into 2 million brain cells that die.” Given that the loss of brain cells results in loss of brain function, disability, or worse, the costs to society can be quite high. According to the company, strokes cost the U.S. healthcare system about $220 billion annually, and each LVO patient “that you are able to treat with a timely thrombectomy” costs one-tenth, or almost $1 million less, than those who aren’t. It is practical examples of the clinical applications of A.I. 
such as this, which attack very concrete and tangible problems, that are likely to pave the way for acceptance of more complex applications in the healthcare delivery system. Related reading: partners with to integrate echocardiogram analysis tool; secures Bristol Myers Squibb's backing for hypertrophic cardiomyopathy-spotting AI

  • Explainable AI-Making AI Understandable, Transparent, and Trustworthy-The HSB Blog 3/23/23

    Our Take: Explainable AI, or AI whose methodology, algorithms, and training data can be understood by humans, can address challenges surrounding AI implementation including lack of trust, bias, fairness, accountability, and lack of transparency, among others. For example, a common complaint about AI models is that they are biased: if the data that AI systems are trained on is biased or incomplete, the resulting model will perpetuate and even amplify that bias. By providing transparency into how an AI model was trained and what factors went into producing a particular result, explainable AI can help identify and mitigate bias and fairness issues. In addition, it can also increase accountability by making it easier for users and those impacted by models to trace some of the logic and basis for algorithmic decisions. Finally, by enabling humans to better understand AI models and their development, explainable AI can engender more trust in AI, which could accelerate the adoption of AI technologies by helping to ensure these systems were developed with the highest ethical principles of healthcare in mind. Key Takeaways: AI algorithms continuously adjust the weight of inputs to improve prediction accuracy, but that can make understanding how the model reaches its conclusions difficult. One way to address this problem is to design systems that explain how the algorithms reach their predictions. GPT-4 is rumored to have around 1 trillion parameters compared to the 175 billion parameters in GPT-3, both of which are well in excess of what any human brain could process and break down. During the pandemic, the University of Michigan hospital had to deactivate its AI sepsis-alerting model when differences in demographic data for patients affected by the pandemic created discrepancies and a series of false alerts. 
AI models used to supplement diagnostic practices have been effective in biosignal analyses, and studies indicate physicians trust the results when they understand how the AI came to its conclusion. The Problem: The use of artificial intelligence (AI) in healthcare presents both opportunities and challenges. The complex and opaque nature of many AI algorithms, often referred to as "black boxes", can lead to difficulty in understanding the logical processes behind AI's conclusions. This not only poses a challenge for regulatory compliance and legal liability but also impacts users’ ability to ensure the systems were developed ethically and are auditable, and eventually their ability to trust the conclusions and purpose of the model itself. However, the implementation of processes to make AI more transparent and explainable can be costly and time-consuming and could potentially result in a requirement, or at least a preference, that model developers disclose proprietary intellectual property that went into creating the systems. This process is made even more complex in the U.S., where the lack of general legislation regarding the fair use of personal data and information can hamper the use of AI in healthcare, particularly in clinical contexts where physicians must explain how AI works and how it is trained to reach conclusions. The Backdrop: The concept of explainable AI is to provide human beings involved with using, auditing, and interpreting models a methodology to systematically analyze what data a model was trained on and what predictive factors are more heavily weighted in the models, as well as provide cursory insights into how the algorithms in particular models arrived at their conclusions/recommendations. This in turn would allow the human beings interacting with the model to better comprehend and trust the results of a particular AI model instead of the model being viewed as a so-called “black box” with limited insight into such factors. 
In general, many AI algorithms, such as those that utilize deep learning, are often referred to as “black boxes” because they are complex, can have multiple billions and even trillions of parameters upon which calculations are performed, and consequently can be difficult to dissect and interpret. For example, GPT-4 is rumored to have around 1 trillion parameters compared to the 175 billion parameters in GPT-3, both well in excess of what any human brain could process and break down. Moreover, because these systems are trained by feeding vast datasets into models which are then designed to learn, adapt, and change as they process additional calculations, the products of the algorithms are often different from their original design. As a result of the number of parameters the models are working with and the adaptive nature of the machine learning models, engineers and data scientists building these systems cannot fully understand the “thought process” behind an AI’s conclusions or explain how these connections are made. However, as AI is increasingly applied to healthcare in a variety of contexts, including medical diagnoses, risk stratification, and anomaly detection, it is important that AI developers have methods to ensure models are operating efficiently, impartially, and lawfully, in line with regulatory standards, both at the model development stage and when models are rolled into use. As noted in an article published in Nature Medicine, starting the AI development cycle with an interpretable system architecture is necessary because inherent explainability is more compatible with the ethics of healthcare itself than methods to retroactively approximate explainability from black-box algorithms. Explainability, although more costly and time-consuming to implement in the development process, ultimately benefits both AI companies and the patients they will eventually serve far more than if they were to forgo it. 
Adopting a multiple-stakeholder view, the layman will find it difficult to make sense of the litany of data that AI models are trained on, and that they recite as part of their generated results, especially if the individual interpreting these results lacks knowledge and training in computer science and programming. By creating AI with transparency and explainability, developers also create responsible AI that may eventually give way to larger-scale implementation of AI in a variety of industries, but especially healthcare, where increasing digitization is generating more patient data than ever before, along with the need to manage and protect this data in appropriate ways. Creating AI that is explainable ultimately increases end-user trust, improves auditability, and creates additional opportunities for constructive use of AI in healthcare solutions. This is one way to reduce the hesitation and risks associated with traditional “black box” AI by making legal and regulatory compliance easier, providing the ability for detailed documentation of operating practices, and allowing organizations to create or preserve their reputations for trust and transparency. While a large number of AI-enabled clinical decision support systems are predominantly used to provide supporting advice for physicians in making important diagnostic and triage decisions, a study in the journal Scientific Reports found that this advice actually helped improve physicians’ diagnostic accuracy, with physician plus AI performing better than when physicians received human advice concerning the interpretation of patient data (sometimes referred to as the “freestyle chess effect”). AI models used to supplement diagnostic practices have been effective in biosignal analyses, such as that of electrocardiogram results, detecting biosignal irregularities in patients as quickly and accurately as a human clinician can. 
For example, a study from the International Journal of Cardiology found that physicians are more inclined to trust generated results when they can understand how the explainable AI came to its conclusion. As noted in the Columbia Law Journal, however, while the most obvious way to make an AI model explainable would be to reveal the source code for the machine learning model, that approach “will often prove unsatisfactory (because of the way machine learning works and because most people will not be able to understand the code)” and because commercial organizations will not want to reveal their trade secrets. As the article notes, another approach is to “create a second system alongside the original ‘black box’ model, sometimes called a ‘surrogate model.’” However, a surrogate model only closely approximates the original model and does not use the same internal weights. As such, given the limited risk tolerance in healthcare, we doubt such a solution would be acceptable. Implications: As evidenced by all the buzz around ChatGPT with the recent introduction of GPT-4 and its integration into products such as Microsoft’s Copilot, and Google’s integration of Bard with Google Workspace, AI products will increasingly become ubiquitous in all aspects of our lives, including healthcare. As this happens, AI developers and companies will have to work hard to ensure that these products are transparent and do not purposely or inadvertently contain bias. 
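To make the surrogate-model concept concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not any vendor's actual system: the "black box" rule, the synthetic patients, and the one-threshold "stump" fitting procedure are all invented for the example. It probes an opaque risk model with synthetic data, then fits simple surrogate rules to see which input best reproduces the black box's own outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "black box" risk model (a stand-in for a proprietary
# algorithm): internally it weights blood pressure far more than age.
def black_box(age, systolic_bp):
    score = 0.2 * age + 0.8 * systolic_bp
    return (score > 130).astype(int)  # 1 = "high risk"

# Probe the black box with synthetic patients.
age = rng.uniform(20, 90, 5000)
bp = rng.uniform(90, 180, 5000)
labels = black_box(age, bp)

def best_stump(feature, labels):
    """Fit a one-threshold surrogate rule; return (fidelity, threshold)."""
    best = (0.0, 0.0)
    for t in np.quantile(feature, np.linspace(0.01, 0.99, 99)):
        fidelity = np.mean((feature > t).astype(int) == labels)
        best = max(best, (fidelity, t))
    return best

fid_age, _ = best_stump(age, labels)
fid_bp, t_bp = best_stump(bp, labels)

# A BP-only rule mimics the black box far better than an age-only rule,
# hinting at which input drives its decisions.
print(f"age-only surrogate fidelity: {fid_age:.2f}")
print(f"BP-only surrogate fidelity: {fid_bp:.2f} (threshold ~{t_bp:.0f} mmHg)")
```

Note that the surrogate never sees the black box's internal weights; it only approximates its behavior, which is precisely why the article cited above questions whether such approximations are acceptable in high-stakes clinical settings.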
Along those lines, when working in healthcare in particular, AI companies will have to ensure that they implement frameworks for responsible data use which include: 1) ensuring the minimization of bias and discrimination for the benefit of marginalized groups by enforcing non-discrimination and consumer laws in data analysis; 2) providing insight into the factors affecting decision-making algorithms; and 3) requiring organizations to hold themselves accountable to fairness standards and conduct regular internal assessments. In addition, as noted in an article from the Congress of Industrial Organizations, in Europe AI developers could be held to legal requirements surrounding transparency without risking IP concerns under Article 22 of the General Data Protection Regulation, which codifies an individual’s right not to be subject to decisions based solely on automated processing and requires the supervision of a human in order to minimize overreliance on and blind faith in such algorithms. A further issue with AI models is data shift, which occurs when machine learning systems underperform or yield false results due to mismatches between the datasets they were trained on and the real-world data they actually collect and process in practice. For example, as challenges to individuals’ health conditions continue to evolve and new issues emerge, it is important that care providers consider population shifts of disease and how various groups are affected differently. During the pandemic, the University of Michigan hospital had to deactivate its AI sepsis-alerting model when differences in demographic data gathered from patients affected by the pandemic created discrepancies with the data the AI system had been trained on, leading to a series of false alerts. As noted in an article in the New England Journal of Medicine, the pandemic fundamentally altered the way the AI viewed and understood the relationship between fevers and bacterial sepsis. 
Episodes like this underscore the need for high-quality, unbiased, and diverse data to train models. In addition, given that the regulation of machine learning models and neural networks in healthcare continues to evolve, developers must ensure that they continuously monitor and apply new regulations as they emerge, particularly with respect to adaptive AI and informed consent. Developers must also ensure that models are tested both in development and post-production to ensure that there is no model drift. With the use of AI models in health care, there are questions that repeatedly need to be asked and answered. Are AI models properly trained to account for the personal aspects of care delivery and to consider the individual effects of clinical decision-making, ethically balancing the needs of the many against the needs of the few? Is the data collected and processed by AI secure and safe from malicious actors, and is it accurate enough that the potential for harm is properly mitigated, particularly against historically underserved or underrepresented groups? Finally, what does the use of these models and these particular algorithms mean for the doctor-patient relationship and the trust vested in our medical professionals? How will decision-making and care be impacted when using AI that may not be sufficiently explainable and transparent for doctors themselves to understand the reasoning behind, and therefore trust, the results that are generated? These questions will undoubtedly persist as long as the growth in AI usage continues, and it is important that AI is adopted responsibly and with the necessary checks and balances to preserve justice and fairness for the patients it will serve. 
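The kind of dataset shift that tripped up the sepsis model can often be caught with simple distribution monitoring before it causes false alerts. Below is a minimal sketch, assuming synthetic temperature readings and a hand-rolled two-sample Kolmogorov-Smirnov statistic; the feature, the distributions, and the 0.1 alert threshold are all illustrative, not taken from any production system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical feature: body-temperature readings the model was trained on,
# versus live readings during an outbreak when fevers are far more common.
train = rng.normal(36.8, 0.4, 2000)   # training-era distribution
live = rng.normal(37.4, 0.6, 2000)    # shifted live distribution

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

ALERT = 0.1  # illustrative threshold; a real system would calibrate this

drift = ks_statistic(train, live)
stable = ks_statistic(train, rng.normal(36.8, 0.4, 2000))
print(f"shifted feature KS = {drift:.2f} (alert: {drift > ALERT})")
print(f"stable feature KS = {stable:.2f} (alert: {stable > ALERT})")
```

In practice a deployment would run a check like this per input feature on a schedule and tune the threshold (for example via the KS p-value) rather than hard-coding it.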
Related reading: Stop explaining black-box machine learning models for high-stakes decisions and use interpretable models instead; Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays; Explainable artificial intelligence to detect atrial fibrillation using electrocardiogram; The Judicial Demand for Explainable Artificial Intelligence; Explainability for artificial intelligence in healthcare: a multidisciplinary perspective; Enhancing trust in artificial intelligence: Audits and explanations can help; Art. 22 GDPR – Automated individual decision-making, including profiling - General Data Protection Regulation (GDPR); The Clinician and Dataset Shift in Artificial Intelligence; Dissecting racial bias in an algorithm used to manage the health of populations

  • Cognosos-Analyzing and Optimizing Asset Visibility and Management

    The Driver: Cognosos recently raised $25M in a venture round led by Riverwood Capital. As part of the funding, Joe De Pinho and Eric Ma from Riverwood will join Cognosos’ Board. Cognosos has developed a cloud-based platform of real-time location services (RTLS) and process optimization software. The round brings Cognosos’ total funding to $38.1M. The proceeds will allow Cognosos to double its staff from the current 50 to 100 as well as continue to expand in healthcare and automotive manufacturing, among other industries. Key Takeaways: Cognosos grew revenue by over 226% in 2022 according to the Atlanta Business Journal and expects to double it again in the coming year (Cognosos) Between 2020 and 2022 the average price of healthcare services increased 2.4%, while the Producer Price Index (PPI) increased 14.3%, leading to a further squeeze on healthcare supply chains as per-patient input costs have exploded (University of Arkansas) A single Cognosos gateway can cover up to 100,000 square feet and manage location information for up to 10,000 assets (Cognosos) In 2021, it is estimated that at least one-third of healthcare providers operated under negative margins, with a combined loss of $54 billion in net income (Kaufman Hall) The Story: Cognosos was founded in 2014 and is based on technology developed by the founders at Georgia Tech’s Smart Antenna lab for radio astronomy. According to the company’s website, while investigating techniques in radio astronomy to combine signals from multiple dishes, the founders of Cognosos discovered a way to use software-defined radio (SDR) and cloud-based signal processing. This dramatically lowered the cost and power requirements for wireless sensor transmitters. The result, called RadioCloud, was created with the belief that businesses of all sizes should be able to use ubiquitous, low-cost wireless technology to harness the power of the Internet of Things (IoT). 
The name Cognosos is derived from the Latin verb cognoscere, meaning “to become aware of, to find.” The company states that it has over 100 customers and 4,000 registered users, and it grew revenue by over 226% in 2022 according to the Atlanta Business Journal. The Differentiators: Cognosos’ real-time location services (RTLS) technology and process optimization software allow companies to track their assets’ movement, improve operational visibility, and safely increase and maximize productivity. The company’s RTLS combines Bluetooth Low Energy technology with AI and its proprietary long-range wireless networking technology to help reduce deployment costs and improve tracking ability. The company has 10 U.S. patents and states that a single gateway can provide support for up to 100,000 square feet and manage location information for up to 10,000 assets. Its technology is easily deployable without having to pull wires or move tiling or other fixtures, minimizing disruptions to operations and allowing customers to leverage existing Bluetooth infrastructure, including Bluetooth-enabled fixtures. The company states that this will allow it to bring critical assets online much more rapidly than competitors (ex: weeks vs. months). The company’s major clients are in healthcare, automotive, logistics, and manufacturing. Implications: For years hospitals and healthcare systems have struggled to effectively manage and optimize their assets. The industry is rife with stories of nurses, aides, and others squirreling monitors and other devices away in secret hiding places so that they don't have to spend time and unnecessary energy tracking down what they need. Now more than ever, with hospitals facing increased financial pressures, workforce shortages, and employee dissatisfaction, a solution that improves and eases the management of such assets is needed. 
Given Cognosos' ability to limit disruption to clinical operations and patient care while helping to reduce costs and improve employee productivity, the company should be well positioned going forward. In addition, since Cognosos’ product is not hardware-based but instead relies on cloud and AI software to manage assets, it can provide real-time asset tracking with lower infrastructure costs. Our belief is that solutions like Cognosos’ which leverage both the cloud and AI to address healthcare’s back-office and supply chain challenges will be among the first to be widely adopted, hastening the adoption of these technologies within the industry and paving the way for broader adoption of clinical technology down the road.

  • What Clinicians and Administrators Need to Know When Implementing AI-The HSB Blog (repost)

    Our Take: There are several basic issues and challenges in deploying AI that all clinicians and administrators should be aware of and inquire about to ensure that they are being properly considered when AI is implemented in their organization. Applications of artificial intelligence in healthcare hold great promise to increase both the scale of medical discoveries and the efficiency of healthcare infrastructure. As such, healthcare-related AI research and investment have exploded over the last several years. For example, according to the State of AI Report 2020, academic publications in biology around AI technologies such as deep learning, natural language processing (NLP), and computer vision have grown over 50% a year since 2017. In addition, 99% of healthcare institutions surveyed by CB Insights are either currently deploying (38%) or planning to deploy (61%) AI in the near future. However, as witnessed by recently discovered errors surrounding the application of an AI-based sepsis model, while AI can improve the quality of care, improve access, and reduce costs, models must be implemented correctly or they will be of questionable value and may even be dangerous. 
Key Takeaways: According to Forrester's "The Cloud, Data, and AI Imperative for Healthcare" report, the 3 greatest challenges to implementing AI are: 1) integrating insights into existing clinical workflows; 2) consolidating fragmented data; and 3) achieving clinically reliable, clean data Researchers working to uncover insights into prescribing patterns for certain antipsychotic medications found that approximately 27% of prescriptions were missing dosages Even after work to standardize and label patient data, in at least one broad study almost 10% of items in the data repository didn’t have proper identifiers Academic publications in biology around AI technologies such as deep learning, natural language processing (NLP), and computer vision have grown over 50% a year since 2017 The Problem: While it is commonly accepted that computers can outperform humans in terms of computational speed, in its current state many would argue that artificial intelligence is really “augmented intelligence,” defined by the IEEE as “a subsection of AI machine learning developed to enhance human intelligence rather than operate independently of or outright replace it.” Current AI models are still highly dependent upon the quantity and quality of the data available for them to be trained on, the inherent assumptions underlying the models, and the human biases (intentional and unintentional) of those developing the models, along with a number of other factors. As noted in a recent review of the book “I, Warbot” about computational warfare by King's College AI lecturer Kenneth Payne, “these gizmos exhibit ‘exploratory creativity'-essentially a brute force calculation of probabilities. 
That is fundamentally different from ‘transformational creativity,’ which entails the ability to consider a problem in a wholly new way and requires playfulness, imagination and a sense of meaning.” As such, those creating AI models for healthcare need to ensure they set the guardrails for their use and audit their models both pre- and post-development to ensure they conform to existing laws and best practices. The Backdrop: When implementing an AI project there are a number of steps and considerations that should be taken into account to ensure its success. While it is important to identify the best use case and project type for any kind of project, given the cost of the technical talent involved, the level of computational infrastructure typically needed (if done internally), and the potential to influence leadership attitudes toward the use and viability of AI as an organizational tool, it is even more important here. As noted above, one of the most important keys to implementing an AI project is the quantity and quality of the data resources available to the firm. Data should be looked at with respect to both quality (to ensure that it is free of missing, incoherent, unreliable, or incorrect values) and quantity. In terms of data quality, as noted in “Artificial Intelligence: A Non-Technical Introduction,” data can be: 1) noisy (data sets with conflicting data), 2) dirty (data sets with inconsistent and erroneous data), 3) sparse (data with missing or no values at all), or 4) inadequate (data sets containing insufficient or biased data). As noted in “Extracting and Utilizing Electronic Health Data from Epic for Research,” “to provide the cleanest and most robust datasets for statistical analysis, numerous statistical techniques including similarity calculations and fuzzy matching are used to clean, parse, map, and validate the raw EHR data,” which is generally the largest source of healthcare data for AI research. 
When looking to implement AI it is important to consider and understand the levels of data loss and the ability to correct for it. For example, researchers looking to apply AI to uncover insights into prescribing patterns for second-generation antipsychotic medications (SGAs) found that approximately 27% of the prescriptions in their data set were missing dosages, and even after undertaking a 3-step correction procedure, 1% were still missing dosages. While this may be deemed an acceptable number, it is important to be aware of the data loss and know this information in order to properly evaluate whether it is within tolerable limits. In terms of inadequate data, ensuring that data is free of bias is extremely important. While we have all recently been made keenly aware of the impact of racial and ethnic bias on models (ex: facial recognition models trained only on Caucasians), there are a number of other biases which models should be evaluated for. According to “7 Types of Data Bias in Machine Learning” these include: 1) sample bias (not representing the desired population accurately), 2) exclusion bias (the intentional or unintentional exclusion of certain variables from data prior to processing), 3) measurement bias (ex: due to poorly chosen measurements that create systematic distortions of data, like poorly phrased surveys), 4) recall bias (when similar data is inconsistently labeled), 5) observer bias (when the labelers of data let their personal views influence data classification/annotation), 6) racial bias (when data samples skew in favor of or against certain ethnic or demographic groups), and 7) association bias (when a machine learning model reinforces a bias present in the data). In addition to data quality, data quantity is just as imperative. For example, in order to properly train machine learning models, you need to have a sufficiently large number of observations to create an accurate predictor of the parameters you’re trying to forecast. 
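The data-loss accounting described above (27% missing before correction, 1% after) amounts to measuring a missing-value rate before and after each correction step. A toy sketch, with invented records and a deliberately simple fill-in rule (using the modal observed dose for the same drug), might look like this:

```python
from statistics import mode

# Invented prescription records; None marks a missing dosage.
records = [
    {"drug": "quetiapine", "dose_mg": 300},
    {"drug": "quetiapine", "dose_mg": None},   # correctable: same drug seen elsewhere
    {"drug": "olanzapine", "dose_mg": 10},
    {"drug": "olanzapine", "dose_mg": None},   # correctable
    {"drug": "aripiprazole", "dose_mg": None}, # not correctable: no reference dose
]

def missing_rate(recs):
    return sum(r["dose_mg"] is None for r in recs) / len(recs)

before = missing_rate(records)  # 3 of 5 records missing a dose

# Hypothetical correction step: fill a missing dose with the modal
# dose observed for the same drug elsewhere in the data set.
by_drug = {}
for r in records:
    if r["dose_mg"] is not None:
        by_drug.setdefault(r["drug"], []).append(r["dose_mg"])
for r in records:
    if r["dose_mg"] is None and r["drug"] in by_drug:
        r["dose_mg"] = mode(by_drug[r["drug"]])

after = missing_rate(records)  # 1 of 5 still missing
print(f"missing before: {before:.0%}, after: {after:.0%}")
```

The point of the sketch is the bookkeeping, not the imputation rule: reporting the residual rate is what lets a team judge whether the remaining loss is within tolerable limits.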
While the precise number of observations needed will vary based on the complexity of the data you’re using, the complexity of the model you want to build, and the amount of “statistical noise” generated by the data itself, an article in the Journal of Machine Learning Research suggested that at least 100,000 observations are needed to train a regression or classification model. Moreover, it is important to note that numerous data points are not captured or sufficiently documented in healthcare. For example, as noted in the above-referenced article on extracting and utilizing Epic EHR data for study, based on research at the Cleveland Clinic in 2018, even after doing significant work to standardize and label patient data, “approximately 9% [1,000 out of 32,000 data points per patient] of columns in the data repository” were not using the assigned identifiers. While it is likely that methods have improved since this research was performed, given the size and resources that an institution like the Cleveland Clinic brought to bear on the problem, it indicates the broader scale of the problem. Once the model has been developed there should be a process in place to ensure that the model is transparent and explainable by creating a mechanism that allows non-technologists to understand and assess the factors the model used and the parameters it relied most heavily upon in coming to its conclusions. For example, as noted by the State of AI Report 2020, “AI research is less open than you think, only 15% of papers publish their [algorithmic] code” used to weight and create models. In addition, there should be a system of controls, policies, and audits in place that provides feedback as to potential errors in the application of the model as well as disparate impact or bias in its conclusions. 
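A toy version of the identifier audit behind the Cleveland Clinic figure above might look like the following. The "LOINC:"-prefix naming convention and the column names are invented stand-ins; real repositories track identifier mappings in separate metadata, but the audit logic, counting columns that lack an assigned identifier, is the same:

```python
# Invented repository columns; the "LOINC:" prefix is a hypothetical
# convention marking columns with an assigned standard identifier.
columns = [
    "LOINC:2345-7_glucose",
    "LOINC:718-7_hemoglobin",
    "free_text_note",          # no identifier assigned
    "LOINC:2160-0_creatinine",
    "misc_flag",               # no identifier assigned
]

unlabeled = [c for c in columns if not c.startswith("LOINC:")]
share = len(unlabeled) / len(columns)
print(f"{len(unlabeled)} of {len(columns)} columns lack identifiers ({share:.0%})")
```

Running an audit like this routinely, rather than once, is what turns the 9% finding from a surprise into a tracked quality metric.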
Implications: As noted in “Artificial Intelligence Basics: A Non-Technical Introduction” it’s important to have realistic expectations for what can be accomplished by an AI project and how to plan for it. In the book, the author Tom Taulli references Andrew Ng, the former Head of Google Brain, who suggests the following parameters: an AI project should take between 6-12 months to complete, have an industry-specific focus, should notably help the company, doesn’t have to be transformative, and should have high-quality data points. In our opinion, it is particularly important to form collaborative, cross-functional teams of data scientists, physicians, and other front-line clinicians (particularly those closest to patients, like nurses) to get as broad input on the problem as possible. While AI holds great promise, proponents will have to prove themselves by running targeted pilots and should be careful not to overreach at the risk of poisoning the well of opportunity. As so astutely pointed out in “5 Steps for Planning A Healthcare Artificial Intelligence Project”: “artificial intelligence isn’t something that can be passively infused into an organization like a teabag into a cup of hot water. AI must be deployed carefully, piece by piece, in a measured and measurable way.” Data scientists need to ensure that the models they create produce relevant output that provides context and the ability for clinicians to have a meaningful impact upon the results, and not just generate additional alerts that will go unheeded. For example, as Rob Bart, Chief Medical Information Officer at UPMC, noted in a recent presentation at HIMSS, data should provide “personalized health information, personalized data” and should have “situational awareness in order to turn data into better consumable information for clinical decision making” in healthcare. 
Along those lines, it is important to take a realistic assessment of “where your organization lies on the maturity curve”: how good is your data, and how deep is your bench of data scientists and clinicians available to work on an AI project in order to inventory, clean and prepare your data? AI talent is highly compensated and in heavy demand. Do you have the resources necessary to build and sustain a team internally, or will you need to hire external consultants? How will you select and manage those consultants? All of these are questions that need to be carefully considered and answered before undertaking the project. In addition, healthcare providers need to consider the special relationship between clinician and patient and the need to preserve trust, transparency, and privacy. While AI holds a tremendous allure for healthcare, and the potential for it to overcome, and in fact make up for, healthcare’s underinvestment in information technology relative to other industries, all of this needs to be done with a well-thought-out, coherent and justified strategy as its foundation. Related Readings: Artificial Intelligence Basics: A Non-Technical Introduction. Tom Taulli (publisher’s site) Artificial Intelligence (AI): Healthcare’s New Nervous System An Interdisciplinary Approach to Reducing Errors in Extracted Electronic Health Record Data for Research 5 Steps for Planning a Healthcare Artificial Intelligence Project

  • R-Zero-Making Intelligent Air Disinfection More Economical and Efficient

The Driver: R-Zero recently raised $105M in a Series C financing led by investment firm CDPQ with participation from BMO Financial Group, Qualcomm Ventures, Upfront Ventures, DBL Partners, World Innovation Lab, Mayo Clinic, Bedrock Capital, SOSV and legendary venture capital investor John Doerr. The Series C financing brings the total amount raised by R-Zero to more than $170M since its founding in 2020. The company will use the funds to scale deployments of its disinfection and risk modeling technology to meet growing demand across public and private sectors, including hospitals, senior care communities, parks and recreation, other government facilities, and college and corporate campuses. Key Takeaways: According to the company, R-Zero’s technology neutralizes 99.9% of airborne and surface microorganisms R-Zero’s UV-C products cost anywhere from $3K to $28K compared with traditional institutional UV-C technology which can cost anywhere from $60K to $125K Using R-Zero's products results in more than 90% fewer greenhouse gas emissions (GHG) and waste compared to HVAC and chemical approaches For every dollar employers spend on health care, they’re spending 61 cents on illness-related absences and reduced productivity (Integrated Benefits Institute) The Story: R-Zero was co-founded by Grant Morgan, Eli Harris and Ben Boyer. Morgan, who has an engineering background, had worked briefly as CTO of GIST, had been V.P. of product and engineering at iCracked, and was previously in medical device R&D. Harris co-founded EcoFlow (an energy solutions company) and had been in partnerships and BD at drone company DJI, while Boyer had been a managing director at early-growth-stage VC Tenaya Capital. According to the company, the co-founders applied their experience to innovating an outdated legacy industry to make hospital-grade UVC technology accessible to small and medium-sized businesses. 
Prior to R-Zero these units could cost anywhere from $60-$125K and often lacked the connected infrastructure and analytics necessary to optimize performance and provide risk analytics for their users (ex: how frequently and heavily rooms are being used and when to use disinfection to help mitigate risk). The Differentiators: As noted above, typical institutional ultraviolet (UV) disinfectant lighting technology can be expensive and has the potential to be harmful (high-powered UVC lights can cause eye injuries if people are exposed to them for long periods of time); however, R-Zero has found ways to mitigate these issues. First, as noted in Forbes, its products run anywhere from $28K for its most expensive device, the Arc, to the Beam at $5K and the Vive at $3K. Moreover, while the Arc can only be used to disinfect an empty room due to the wavelength of UVC light, the Beam creates a disinfection zone above people in a room, while the Vive can be used to combat harmful microorganisms when people are in a room. In addition, according to the company, R-Zero’s technology neutralizes 99.9% of airborne and surface microorganisms and does so with 90% fewer greenhouse gas emissions and waste compared to HVAC and chemical approaches. As a result, R-Zero can help improve indoor air quality in hospitals and other medical facilities, factories, warehouses, and other workplaces more efficiently and effectively than outdated technologies. Implications: As noted above, R-Zero’s technology will help hospitals and senior care facilities cost-effectively sanitize treatment spaces, which can’t necessarily be done with current technology. Moreover, as medical care increasingly moves to outpatient settings, the ongoing workforce shortage will challenge these facilities to find ways to keep themselves clean and disinfected and avoid disease transmission. 
For example, the company claims that their customers have been reducing labor costs by 30%-40%, a number which will likely only get higher given the current labor situation. In addition, even in facilities that have the necessary workforce, it is often difficult to optimize staff time to ensure that offices are sanitized and used to maximum capacity. Utilizing devices like the R-Zero Beam or Vive can allow small-to-medium-sized facilities to disinfect rooms constantly and efficiently, making them immediately available for use. Also, by removing the burdensome task of having already overworked clinical or janitorial staff spend time sanitizing rooms, R-Zero’s technology can help improve employee productivity and satisfaction at a time when both are stretched thin. Related Readings: This Startup Wants To Bring Disinfecting UV Light Into “Every Physical Space”, R-Zero Raises $105 Million Series C to Improve the Indoor Air We Breathe, This startup built an ultraviolet device that can disinfect a restaurant in minutes
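Taking the figures quoted in this piece at face value, a back-of-the-envelope payback calculation might look like the following. The device price and labor-savings rate come from the article; the annual cleaning-labor baseline is a purely hypothetical assumption, not a company figure:

```python
# Rough payback sketch using the article's numbers; the labor baseline
# below is an invented assumption for illustration only.
device_cost = 28_000              # top-end device price cited in the article
annual_cleaning_labor = 120_000   # hypothetical facility labor baseline
labor_savings_rate = 0.30         # low end of the 30%-40% customer claim

annual_savings = annual_cleaning_labor * labor_savings_rate
payback_years = device_cost / annual_savings
print(f"payback: {payback_years:.1f} years")
```

Under these assumptions the most expensive device pays for itself in under a year; a facility with a smaller labor budget would see a proportionally longer payback.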

  • 4 Ways AI Could Revolutionize The Future of Drug Development…No Chat Needed-The HSB Blog 2/16/23

    Our Take: The drug development process is a time-consuming and expensive endeavor fraught with failures even with proper planning and execution. With an average of $1-2 billion spent per successful drug and a development period of 10-15 years, the high cost and lengthy timeline are barriers to entry for many drug manufacturers. However, the integration of artificial intelligence (AI) into the drug development process has the potential to modernize the industry. AI can help researchers in a variety of ways, such as by analyzing large datasets, predicting biological processes, identifying new drug targets, and assisting in the design of new drug molecules. Furthermore, AI can assist in data mining, generating regulatory documents, and identifying suitable candidates for clinical trials. The implications of these developments are significant, as AI has the potential to improve the speed and efficiency of drug development, ultimately leading to the production of more effective treatments, although care must be taken to ensure its factual accuracy and validity as an increasing number of companies adopt AI solutions. Key Takeaways: Most drugs take between 10-15 years to be developed at an average cost of $1-2B before receiving [U.S.] approval for clinical use (Chinese Academy of Medical Sciences and the Chinese Pharmaceutical Association) It is estimated that 85% of the human proteome is considered undruggable and finding effective pharmaceuticals to target these proteins is considered exceptionally hard, or impossible (The Cambridge Crystallographic Data Centre) Machine learning methods such as eToxPred correctly predict synthetic accessibility and toxicity of drug compounds with accuracy as high as 72% (BMC Pharmacology and Toxicology) The use of Machine Learning in drug discovery could save approximately $300-400M per drug (U.S. 
Government Accountability Office) The Problem: Drug development is a time-consuming, costly process rife with failures even with strong strategic planning and execution of the process. For example, as noted in a recent article in Acta Pharmaceutica Sinica B (the journal of the Chinese Academy of Medical Sciences and the Chinese Pharmaceutical Association), most drugs take between 10-15 years to be developed, with an average cost of $1-2 billion spent before finally receiving federal approval for clinical use. In theory, during “clinical drug development, a delicate balance needs to be achieved among clinical dose, efficacy, and toxicity to optimize the benefit/risk ratios in patients. [Ideally] a drug candidate would have high potency and specificity to inhibit its molecular target [supplying] high drug exposure in disease-targeted tissues to achieve adequate efficacy at an optimal dose (ideally at low doses), and minimal drug exposure in healthy tissues to avoid toxicity at optimal doses (even at high doses).” However, while this is easy to specify in theory, in practice it becomes difficult to execute. For example, according to the article, “analyses of clinical trial data from 2010 to 2017 show four possible reasons attributed to the 90% clinical failures of drug development: lack of clinical efficacy (40%–50%), unmanageable toxicity (30%), poor drug-like properties (10%–15%), and lack of commercial needs and poor strategic planning (10%).” Consequently, each of the five stages of drug development, 1) discovery & development, 2) preclinical research, 3) clinical research, 4) FDA review & approval, and 5) FDA post-approval drug safety monitoring, requires significant funding and resources yet can have precarious returns. 
As noted in an article in Nature Reviews Drug Discovery, while the probability of going from phase III to launch has risen from 49% to 62% between the periods 2010–2012 and 2015–2017, the probability of a compound going from phase II trials to phase III trials has remained essentially the same at about 25% during this same period. The process is voluminous and requires analysis of large amounts of varying types of data. As noted in a report from the GAO on the benefits and challenges of machine learning in drug development, there are multiple types of data relevant to drug development, including data from biomedical research to better understand the biology of diseases, the pharmacology of potential drugs, and the toxicity of known compounds, as well as the various forms of patient data necessary to conduct trials and analyze efficacy. As a result, drug developers are faced with the task of analyzing ever-increasing amounts of data to produce similar or declining returns on their research, leading them to seek new ways to search for and analyze potential candidates, such as the application of AI to the drug discovery process. The Backdrop: AI has the potential to solve a variety of industry problems and is being used in drug development to rapidly speed up the process of creating and assessing the effects of these novel compounds. While some authors have identified at least 10 ways AI can help in the drug discovery process (see Machine Learning in Drug Discovery: A Review), some of the more common uses that researchers associate with the use of AI in drug development include: 1) helping to find promising new drug candidates in lead and biomarker discovery, 2) data analytics and prediction (ex: classification, clustering, and prediction) of effective candidates for further analysis, 3) using AI (and capabilities like digital twins) to improve the speed and efficacy of preclinical development, and 4) the detection and understanding of the potential for adverse effects. 
For example, by feeding this data to AI tools, which find associations between patients’ genotypes and phenotypes, researchers are able to discover new biomarkers that allow for patient stratification as well as the identification of biochemically active genomic regions that respond best to certain drugs. As noted in the Journal of Signal Transduction and Targeted Therapy, not only can AI transform and interpret this data into potential biological processes that could be utilized in the pharmacodynamics of a certain drug compound more rapidly than human researchers, it can do so far more accurately than human researchers given AI’s ability to discover patterns and relationships. In terms of designing the drugs themselves, AI tools can be used to save researchers a lot of time. AI that has been trained using advanced biology and chemistry data is assisting in identifying new drug targets and helping to build applicable new drug molecules. A significant problem in the process of drug discovery is the proper identification of genomic regions that could be useful in regard to potential drug targets, and an estimated 80% of the human genome is yet untested or simply undruggable. Understanding and examining large volumes of biological data resulting from the genomics, proteomics and experimental interpretation of a certain drug target is a lofty task to complete with traditional methods, and the complex biological networks are difficult to fully break down and map completely. By analyzing a target’s gene expression, protein-protein interactions, results from clinical trials and disease biology, these AI algorithms can predict if the target is suitable for drug interactions and build molecules with specific properties, activities and toxicities that can help identify suitable candidates as per Research and Markets’ report on AI in drug target discovery and validation. 
In the preclinical and clinical spheres, AI is rapidly adapting to the needs of researchers in order to set up and analyze the data from the experimental trials needed for a drug to receive approval from the FDA and prove its efficacy so that pharmaceutical companies can create a product that works. The development and testing of new drugs creates terabytes to petabytes of biological data at each stage of development, which is ideally suited to AI tools’ ability to work with large datasets. Pfizer, one of the largest pharmaceutical companies in the world, which has been utilizing AI for data mining purposes, has reported that AI runs much faster and more accurately than any human researchers are capable of and provides the added benefit of helping the company to meet regulatory and quality control requirements, such as generating the reams of materials necessary to be submitted during the development process. Moreover, as noted in a recent article in Trends in Pharmacological Sciences, outside of the drug development process itself, AI can be used to identify and access the patient records of those who are most likely to benefit from clinical trials, reducing the time to identify suitable trial candidates and improving success rates. This is extremely important to the success and speed of trials given that approximately 48% of trials miss enrollment targets and 49% of patients drop out of trials before completion (thereby making the identification of suitable candidates key to enrolling sufficient numbers to account for this). Additionally, the use of AI in remote patient monitoring solutions such as wearable devices, virtual outpatient services, and more can help to monitor patients and predict adverse health events, thereby making pharmacovigilance more effective and cheaper. According to “Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning in Drug Development” from the U.S. 
Government Accountability Office, the use of Machine Learning in drug discovery could save approximately $300-400M per drug. Implications: As noted above, AI has the potential to dramatically speed up drug discovery while simultaneously helping to reduce the cost and improve the efficiency compared to traditional technologies currently being leveraged to find new drug molecules. With increased public interest in and popularity of AI solutions, including but not limited to those in healthcare, new tools are being developed that are already showing great promise. MIT researchers created a geometric deep-learning model called EquiBind that is an estimated 1,200 times faster than one of the fastest state-of-the-art computational models. EquiBind outperformed the current state-of-the-art model, QuickVina2-W, in successfully simulating the binding of drug molecules to protein-coding genes and, by using cutting-edge geometric reasoning, saved significant amounts of time usually spent in computation. This advancement will ultimately allow AI to better understand and apply concepts of molecular physics, leading to better predictions and generalizations fueled by the vast amounts of collected information that is difficult and time-consuming for humans to accurately sift through. EquiBind is only one of a multitude of AI tools being developed for drug research, and as AI continues to improve on previous iterations and synthesize increasingly larger volumes of data, this will translate into far greater efficiency and time savings than can be achieved with current industry standards. In addition, AI will have applications in quality control as machine learning methods are used to evaluate drug candidates for toxicity and side effects. 
For example, according to an article in BMC Pharmacology and Toxicology, a technology called eToxPred can correctly predict the synthetic accessibility and toxicity of drug compounds with accuracy as high as 72%. Over time, as the adoption of AI accelerates in drug development, there is the potential for the development of even more personalized medicines tailored to the specific needs and genome of patients. Given the vast amounts of patient data collected and stored by hospitals, insurers, and others in healthcare, and as the industry increasingly digitizes, there is a significant volume of data that is underutilized and which could be informing better care practices, including drug discovery. However, as with the application of AI in any industry, this must be done with ethical considerations in mind and specific policies and protocols in place in terms of data privacy, algorithmic bias, and transparency. AI can identify patients that are most likely to respond positively to a particular drug, which could lead to treating individuals sooner, rather than possibly having to wait to participate in clinical trials (once again under the right safety protocols). The idea of more targeted and personalized healthcare remains intriguing, but it must be pursued with accountability and transparency in mind, so that clinicians and patients understand how and why the algorithms work the way they do. If so, there is the potential to fundamentally change the way we develop new drugs and get these experimental treatments to those who desperately need them more quickly and cheaply than ever before. Related Reading: Artificial Intelligence in Health Care Benefits and Challenges of Machine Learning in Drug Development Why 90% of clinical drug development fails and how to improve it? 
Artificial Intelligence: On a mission to Make Clinical Drug Development Faster and Smarter Artificial Intelligence for Clinical Trial Design Artificial intelligence model finds potential drug molecules a thousand times faster eToxPred: a machine learning-based approach to estimate the toxicity of drug candidates
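The trial-recruitment idea described in this piece, identifying records of patients most likely to qualify for a trial, has a simple rule-based core that can be sketched in a few lines. Real systems apply NLP to unstructured clinical notes; here the criteria, diagnosis codes, and patient records are invented, structured stand-ins:

```python
# Hypothetical inclusion/exclusion criteria and patient records.
criteria = {"min_age": 18, "max_age": 75, "required_dx": "T2D", "exclude_dx": {"CKD"}}

patients = [
    {"id": "p1", "age": 54, "dx": {"T2D"}},
    {"id": "p2", "age": 80, "dx": {"T2D"}},          # fails: over max age
    {"id": "p3", "age": 61, "dx": {"T2D", "CKD"}},   # fails: excluded comorbidity
    {"id": "p4", "age": 47, "dx": {"HTN"}},          # fails: missing required dx
]

def eligible(p, c):
    """Apply age bounds, required diagnosis, and exclusion list."""
    return (c["min_age"] <= p["age"] <= c["max_age"]
            and c["required_dx"] in p["dx"]
            and not (p["dx"] & c["exclude_dx"]))

shortlist = [p["id"] for p in patients if eligible(p, criteria)]
print(shortlist)
```

The value the article attributes to AI comes one layer below this filter: extracting reliable age, diagnosis, and exclusion signals from messy records so that a rule like `eligible` has trustworthy inputs.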

  • Prescribe Fit – Attacking the Root Cause of MSK Issues

The Driver: Prescribe Fit is a virtual/telehealth-based orthopedic health startup focused on patients dealing with orthopedic bone and muscle injuries. Prescribe Fit raised $4 million in seed funding. The round was led by Tamarind Hill with participation from the Grote Family as well as Mike Kaufman, the former CEO of Cardinal Health. According to the company, proceeds of the funding will be used to aggressively expand the company, as well as broaden and accelerate product development. Key Takeaways: According to the Bone and Joint Initiative USA, 124 million Americans suffer from a musculoskeletal disorder On average, 1 out of 4 elderly adults falls each year and over 800,000 people end up in the hospital due to a fall injury per the U.S. CDC Patients who experienced falls had longer hospital stays and were more frequently discharged to other healthcare facilities, instead of their primary residence, according to a study by the Hospital for Special Surgery According to an article in the Journal of Medicine, fear of falling often develops after experiencing a fall, and developing a fear of falling can cause older adults to avoid physical activity, experience more difficulty with activities of daily living, and become less able to perform exercises. The Story: Originally started as a weight loss coaching startup in January 2020, Prescribe Fit was able to secure only one client after enduring the shutdown of all non-essential health services during the Pandemic. Co-founded by CEO Brock Leonti, who previously owned a home health agency for approximately six years, the company worked at that time to help treat obesity and served primary care doctors. While the company was limited to just one client during the Pandemic, it was able to test and refine its model as well as a number of treatment approaches. 
As part of that, the company gleaned a number of insights, including how to successfully use remote patient monitoring technology and the need to limit the administrative burden on physicians. The Differentiator: Based on its experience and what it had learned during the Pandemic, in August 2022 Prescribe Fit transitioned its business model to focus solely on orthopedic practices and the treatment of the root causes of MSK issues. According to Leonti, this includes helping orthopedic patients reduce blood pressure, blood sugar & weight at home and partnering with orthopedic practices to improve their patients’ mental acuity, flexibility & endurance. As noted in the Columbus Business Journal, “Prescribe Fit has a team of nurses and care coordinators who meet remotely with patients and ‘edit’ their daily routines so their behavior changes stick.” This includes having patients take pictures of their meals and then having coordinators indicate where they may be able to reduce portion sizes or substitute healthier items in their diets. According to Leonti, this has allowed orthopedic patients to achieve 5.4% average weight loss in just 16 weeks and create personalized at-home health plans resulting in 80%+ of patients staying engaged for 9+ months, both of which help improve MSK issues. Implications: According to the Bone and Joint Initiative USA, 124 million Americans suffer from an MSK disorder but will often end up treating the symptoms and not addressing the root cause. In part this is due to the limited availability of orthopedic specialists and other clinicians to address these issues. By connecting these patients, via specialists’ offices, with nurses and other case managers who can address the specific dietary and behavioral issues that are contributing to these conditions (ex: lack of exercise or inappropriate exercise routines), Prescribe Fit is helping improve the quality of care while lowering the cost. 
Moreover, since patients are being monitored by clinicians using remote patient monitoring (RPM) and chronic disease management tools, physicians are able to create an additional reimbursement stream (while paying Prescribe Fit a management fee). As the U.S. gets older demographically, a larger proportion of the population will have to deal with MSK issues that can lead to falls and injuries, which can often compound into other issues. By addressing these issues and helping patients strengthen bones and improve muscle tone, Prescribe Fit may help reduce the incidence (and cost) of such issues. Related Readings: Health tech startup Prescribe FIT raises $4M in oversubscribed seed round, Weight-loss coaching startup Prescribe Fit doubles with focus on orthopedics

bottom of page