- Generative AI as Enabler of Culturally Competent Care-The HSB Blog 5/26/23
Our Take: Differences in language, culture, and inaccurate or insensitive descriptions can cause patients to refuse help from clinicians of other backgrounds (ex: Black patients/white doctors, Chinese patients/English-speaking doctors). Technology like generative AI may help by employing appropriate language, which would help narrow the gap. By using generative AI to process natural language, developers can build language models and chatbots that understand and respond to patients of all backgrounds in a culturally appropriate way. Generative AI can be trained on data sets containing cultural backgrounds, dialects, and linguistic nuances, allowing it to understand a variety of accents and dialects. This would enable individuals from different cultural backgrounds to interact effectively with the technology. Giving patients the ability to engage with technology in a way that respects their cultural traditions and language preferences promotes a sense of dignity, respect, and empowerment. By working towards cultural inclusion, technology has the potential not only to reduce differences, but to promote understanding, empathy, and harmony in our increasingly interconnected world.

Key Takeaways:
African Americans and Latinos experience 30% to 40% poorer health outcomes than White Americans
Research shows poor care for the underserved stems from fear, lack of access to quality healthcare, distrust of doctors, and symptoms and pains that are often dismissed
One study found that Black patients were significantly less likely than white patients to receive analgesics for extremity fractures in the emergency room (57% vs. 74%), despite having similar self-reports of pain
Each additional standard deviation improvement in the score hospitals received for cultural competency translated into an increase of 0.9% in nurse communication and 1.3% in staff responsiveness on patient satisfaction surveys

The Problem: While culturally appropriate language is important to promote inclusiveness and reduce disparities, there are challenges and potential problems with its implementation. Because of the cultural diversity of the United States, cultural norms and practices vary greatly between communities and regions, and there is no guarantee that everyone living in the United States will feel culturally respected. Because of differences in tradition, religion, society, language, and socialization, individuals within various communities may not feel respected or secure. With literally thousands of languages and dialects in the world, each having its own unique cultural background, dealing with linguistic diversity and ensuring culturally appropriate language can become quite a complex task. In health care this has been shown to lead to neglect or underservice in certain communities through intentional and unintentional slights. As society continues to change, social inclusiveness cannot be overlooked as an integral component of care. Inclusiveness is not only an important but a necessary element of care so that patients feel respected and valued in a system that recognizes the cultural practices and identities of different communities. Research has demonstrated that this leads to improved clinician-patient interactions, compliance, and data sharing by patients.
More recently, the health care system has come to recognize the impact that the unique cultural needs of often overlooked groups, such as people with disabilities and lesbian, gay, bisexual, and transgender (LGBT) people, have on the quality and effectiveness of care as well (while not racial or ethnic groups, these provide further evidence of the need for culturally competent care).

The Backdrop: According to "Understanding Cultural Differences in the U.S." from the USAHello website, cultures can differ in 18 different ways including: communication, physical contact (shaking hands, personal space), manners, political correctness, family, treatment of women & girls (men and women going to school/work together, sharing tasks), elders (multigenerational homes), marriage (traditions, views on same-sex marriage), health, education, work, time, money, tips, religion, holidays, names, and language. Recognizing and understanding cultural differences is important to create trust and security with patients. Generative AI can help bridge language and cultural barriers that often prevent non-English speakers from accessing essential health services. According to Marcin Frąckiewicz, "generative AI can serve as a virtual health assistant, providing accurate and personalized health advice to users. By making health information more accessible, individuals can make better-informed decisions about their health and well-being." This would allow individuals from different cultural backgrounds to interact effectively with technology. Similarly, speech synthesis can provide text-to-speech capabilities in a variety of languages, enabling technology to communicate with users in their preferred language. This can often be done more rapidly and more efficiently than when having to locate an appropriate translator. Generative AI can also support multicultural expression by integrating into products and services a variety of visual depictions, avatars, and characters of different racial, ethnic, and cultural backgrounds. Inclusion can be promoted through such technologies, allowing users to feel represented and valued. In addition, generative AI can help identify and correct potential instances of cultural insensitivity or technological bias. Moreover, since generative AI is iterative, it allows for continuous improvement, ensuring that culturally appropriate language and experience are prioritized.

Implications: While still in the developmental stages, generative AI can be one tool to assist healthcare organizations in delivering culturally competent care for patients. At its most fundamental level, generative AI models can provide translation services to help overcome language barriers and support communication between healthcare providers and patients who speak different languages. This can enhance doctor-patient interaction, improve patient satisfaction, and reduce misunderstanding. Although communications created with generative AI will likely lack nuance and will have difficulty creating empathetic emotional connections with patients, they will be a step in the right direction. Over time we expect generative AI chatbots and other generative AI models to be able to provide up-to-date information about medical conditions, treatment guidelines, medications, and research results, as well as to support care by collecting and analyzing patient data through remote monitoring, facilitating virtual consultations, and providing real-time information and guidance.
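To make the translation-and-adaptation use case described above more concrete, here is a minimal sketch of how patient instructions might be routed through a generative model with the patient's preferred language and cultural context included in the prompt. The call_llm() helper and the patient profile fields are hypothetical placeholders rather than any particular vendor's API, and any real use would require clinical review of the output.

```python
from dataclasses import dataclass

@dataclass
class PatientProfile:
    preferred_language: str      # e.g. "Spanish"
    cultural_notes: str          # e.g. dietary practices, family involvement in care
    reading_level: str           # e.g. "6th grade"

def build_prompt(instructions: str, profile: PatientProfile) -> str:
    """Compose a prompt asking a generative model to translate and culturally
    adapt discharge instructions without changing their clinical content."""
    return (
        "You are assisting a care team. Rewrite the following discharge "
        f"instructions in {profile.preferred_language} at a {profile.reading_level} "
        "reading level. Preserve all clinical details exactly. Where relevant, "
        f"acknowledge these cultural considerations: {profile.cultural_notes}.\n\n"
        f"Instructions:\n{instructions}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a call to whichever generative AI service
    # an organization has vetted; swap in the real client here.
    raise NotImplementedError

if __name__ == "__main__":
    profile = PatientProfile(
        preferred_language="Spanish",
        cultural_notes="patient's adult daughter helps manage medications",
        reading_level="6th grade",
    )
    prompt = build_prompt("Take one 500 mg metformin tablet twice daily with food.", profile)
    print(prompt)  # in practice: adapted = call_llm(prompt), followed by clinician review
```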
Moreover, technologies like generative AI can contribute to remote monitoring and telemedicine initiatives that improve access to health care services, particularly for those living in remote areas or with limited mobility.

Related Reading: 'Hurtling into the future': The potential and thorny ethics of generative AI in healthcare; Structural Racism In Historical And Modern US Health Care Policy; Can Hospital Cultural Competency Reduce Disparities in Patient Experiences with Care?; Racism and discrimination in health care: Providers and patients
- Exploring Latine Culture-Reducing Disparities Begins with Cultural Sensitivity-The HSB Blog 5/5/23
As we were preparing for this week's "Our Take", we had a little internal debate about Cinco De Mayo. Was it a Mexican holiday, or was it a Latino holiday? As we explored this, we came across an article from the Washington Post entitled "Cinco de Mayo is not a Mexican holiday. It's an American one," which argued "Cinco de Mayo is a celebration created by and for Latino communities in the United States. And the celebration of Cinco de Mayo is more about U.S. Latino history and culture than Mexican history." As we read this, we realized there is often a tendency to view and categorize other cultures with monolithic and homogeneous labels that are often inadequate. Doing so can lead to broad generalizations that perpetuate the inequities in health care. Increasing cultural sensitization is a way to begin addressing and reducing those inequities. If we are to understand ethnic disparities in health care and deliver culturally appropriate and equitable care, we need to understand the nuances and idiosyncrasies of other cultures. Cinco de Mayo seemed a good place to start, so we found the article "Ethnic Bias and the Latine Experience" in the American Counseling Association's magazine, Counseling Today. While we don't necessarily agree with everything in the article, we found this to be one of the most thorough and broad attempts to understand the Latine culture in the U.S., and as a result we reprint an excerpt from it here with permission.

Key Takeaways:
Only 23% of U.S. adults who self-identify as Hispanic or Latino had heard of the term Latinx, and only 3% said they use Latinx to describe themselves (Pew Research Center)
The U.S. Latine population was 62.1 million in 2020, or 19% of all Americans, and is projected to increase to 111.2 million, or 28% of the U.S. population, by 2060 (Pew Research Center & UCLA Latino Policy & Politics Institute)
Hispanic is the oldest and most widely used term to describe Spanish-speaking communities. It was created as a "super category" for the 1970 census after Mexican American and other Hispanic organizations advocated for federal data collection
In 2019, 61.5% of Latines were of Mexican origin or heritage, while 9.7% were Puerto Rican or of Puerto Rican heritage, and Cubans, Salvadorans, Dominicans, Guatemalans, Colombians, and Hondurans each numbered a million or more

Ethnic Bias and the Latine Experience

"She looks Latina." "He doesn't look Black." "They sound Hispanic." "She doesn't sound Asian." "I think they're mixed." Conversations all around us bear witness to the inclination to classify people into groups. This categorization of people is built into the fabric of American life, a fabric not originally intended to cover everyone. Inherent advantages and dominance historically favored white male landowners (with the exception of Jewish or Catholic men). Like Indigenous and Black communities, people of Hispanic or Latine descent continue to navigate a system not created for them. (In the next section, we explain why we prefer to use the term Latine as a gender-neutral or nonbinary alternative to Latino.) The objective of this article is to enhance counselors' cultural sensitivities when providing services to Latine communities. We will discuss the unique discrimination challenges faced by Latines and provide tips for counselor effectiveness. A culturally responsive discussion about the mental health effects of ethnic bias on the Latine experience begins with a definition of key terms.
The American Psychological Association's (APA) Dictionary of Psychology defines ethnic as "denoting or referring to a group of people having a shared social, cultural, linguistic, and usually racial background," and it can sometimes include the religious background of a group of people. The U.S. census has only two ethnic categories: Hispanic (Latino) and non-Hispanic (non-Latino). Ethnic bias is discrimination against individuals based on their ethnic group, often resulting in inequities. Nomenclature is problematic and ever evolving in the U.S. system of categorizing people into racial and ethnic groups. Every racialized group in the United States has gone through numerous label adjustments from within and outside the group. For example, First Nation people have been called Indian, American Indian, Native American, and Indigenous American. People of African descent have been called Colored, Negro, Black, Afro-American, and African American. Similarly, the word choices for the collective Hispanic description have also evolved over the years: Hispanic, Latino/a, Latinx and Latine. These are pan-ethnic terms representing cultural origins — regardless of race — for people with an ancestral heritage from Latin American countries and territories, who, according to the Pew Research Center, prefer to be identified by their ancestral land (e.g., Mexican, Cuban, Ecuadorian) rather than by a collective pan-ethnic label. The history, and the debate, of nomenclature for this collective group set the stage for understanding the ethnicization of a large and diverse population. Hispanic is the oldest and most widely used term to describe Spanish-speaking communities. According to the Pew Research Center, the term Hispanic was first used after Mexican American and other Hispanic organizations advocated for federal data collection about U.S. residents of Mexico, Cuba, Puerto Rico, Central and South America, and other Spanish-speaking origins. The U.S. Census Bureau responded by creating the "super category" of Hispanic on the 1970 census. The term Latino/a gained popularity in the 1990s to represent communities of people who descend from or live in Latin American regions, regardless of their language of origin (excluding people from Spain). This allowed for gender separation, with Latina representing the female gender and Latino representing the male gender or combined male-female groups. The Pew Research Center noted that Latino first appeared on the U.S. census in 2000 alongside Hispanic, and the two terms are now used interchangeably. While the two terms often overlap, there are exceptions. People from Brazil, French Guiana, Surinam and Guyana, for example, are Latino because these countries are in Latin America, but they are not considered Hispanic because they're not primarily Spanish-speaking. These regions were colonized by the French, Portuguese, and Italians, so their languages derive from other ancient Latin-based languages instead of Spanish. Latinx has been used as a more progressive, gender-neutral or nonbinary alternative to Latino. Latinx emerged as the preferred term for people who saw gender inclusivity and intersectionality represented through use of the letter "x." Others, however, note that "x" is a letter forced into languages during colonial conquests, so they reject the imposing use of this colonizing letter. Interestingly, for the population it is intended to identify, only 23% of U.S.
adults who self-identify as Hispanic or Latino had heard of the term Latinx, according to a Pew survey of U.S. Hispanic adults conducted in December 2019. And only 3% said they use Latinx to describe themselves. In this article, we use the term Latine, the newest word used by this population. Latine has the letter "e" to represent gender neutrality. We like this term because it comes from within the population rather than being assigned by others, and it is void of the controversial "x" introduced by colonists. Next, we look at the meaning and impact of ethnicization and ethnic bias toward Latines in the United States. We explore the ways that bias and discrimination affect the nation's largest group of minoritized people, and we recommend actionable solutions to enhance counselors' cultural sensitivities when providing services to Latine communities. Latines come from more than 20 Latin American countries and several territories, including the U.S. territory of Puerto Rico. There are seven countries in Central America: El Salvador, Costa Rica, Belize, Guatemala, Honduras, Nicaragua, and Panama. The official language in six of these countries is Spanish, with English being the official language of much of the Caribbean coast including Belize (in addition to Indigenous languages spoken throughout the region). South America has three major territories and 12 countries: Brazil, Argentina, Paraguay, Uruguay, Chile, Bolivia, Peru, Ecuador, Colombia, Venezuela, Guyana and Suriname. The official language in most of these countries is Spanish, followed by Portuguese, although it is estimated that there are over a thousand different tribal languages and dialects spoken in many of these countries. The Pew Research Center reported that the U.S. Latine population reached 62.1 million in 2020, accounting for 19% of all Americans. In 2019, 61.5% of all Latines indicated they were of Mexican origin, either born in Mexico or with ancestral roots in Mexico. The next largest group, comprising 9.7% of the U.S. Latine population, are either Puerto Rican born or of Puerto Rican heritage. Cubans, Salvadorans, Dominicans, Guatemalans, Colombians and Hondurans each had a population of a million or more in 2019. Although there are notable similarities, the Latine population is not an ethnic monolith. Latine cultures are diverse, with different foods, folklore, Spanish dialects, religious nuances, rituals and cultural celebrations. Despite the varying cultural experiences, many of the issues facing Latine communities remain the same.

Copyright Counseling Today, October 2022, American Counseling Association
- Oshi Health: Virtual Care Comes to Digestive Health (An Update)
(We originally profiled Oshi Health in October of 2021, please see our original Scouting Report "Scouting Report-Oshi Health: Virtual Care Comes to Digestive Health" dated 10/22/21 here).

The Driver: Oshi Health, a digital gastrointestinal care startup, recently raised $30M in Series B funding in a round led by Koch Disruptive Technologies with participation from existing investors including Flare Capital Partners, Bessemer Venture Partners, Frist Cressey Ventures, CVS Health Ventures, and Takeda Digital Ventures. In addition to the institutional investors, individual investors who joined the Series A round included Jonathan Bush, founder and CEO of Zus Health (and cofounder of Athenahealth), and Russell Glass, CEO of Headspace Health. According to the company, the funding will be used to accelerate the next phase of Oshi's growth, to scale its clinical team nationwide and forge relationships with health plans, employers, channel partners, and provider groups.

Key Takeaways:
Approximately 15% of U.S. households have a member who uses diet to manage a health condition
A study sponsored by the company and presented at the Institute for Healthcare Improvement in January 2023 found Oshi's program resulted in all-cause medical cost savings of approximately $11K per patient in just six months
According to Vivante Health, $136B per year is spent on GI conditions in the US; that is more than heart disease ($113B), trauma ($103B), or mental health ($99B)
Direct costs for IBS alone are as high as $10 billion and indirect costs can total almost $20 billion

The Story: CEO and founder Sam Holliday became familiar with GI issues while observing his mother's and sister's ordeals managing their own IBS care. Holliday's experience watching his family deal with the difficulties of managing IBS without clinical assistance prompted his interest in companies like Virta Health that use food as medicine for treating diabetes. Holliday saw the contrast between his family's experience and Virta's approach of supporting people in reversing the impact of diabetes on their daily lives. Holliday was fascinated with Virta's holistic, data-driven approach to care, which prioritized the user's experience while saving costs and providing easy access to technology-enabled virtual care. The possibilities that Virta's virtual model presented for GI spurred the idea of creating Oshi Health. Oshi Health's encompassing platform supports patients by granting them access to GI specialists, prescriptions, and lab work from the comfort of their homes. Oshi Health provides comprehensive and patient-focused care to patients with GI conditions such as Irritable Bowel Syndrome (IBS), Crohn's disease, Inflammatory Bowel Disease (IBD), and Gastroesophageal Reflux Disease (GERD). Oshi Health works by connecting patients with an integrated team of GI specialists including board-certified gastroenterologists and registered dieticians. Oshi Health's clinicians assess symptoms and order lab tests and diagnostics if needed. In addition to the licensed GI doctors and dieticians, patients have the option of speaking with GI-specialized mental health clinicians and nurse practitioners as well. This allows patients to form a customized plan, as many of these conditions often involve a mental health component in addition to a physical one. The customized plan attempts to capture the patient's needs regarding anxiety, nutrition, or stress.
The Oshi Health service extends beyond testing and planning by providing stand-by health coaches and care teams to support patients and help them stay on track. Oshi Health's platform also offers an app designed to help patients take action and stay organized through their GI care journey. With the app, patients can record their symptoms, quality of life measurements, and other factors known to impact their diet, sleep, or exercise. The app also features useful educational materials and recipes to help patients learn more about their condition. The company states their products and services are currently available to over 20 million people as a preferred in-network virtual gastroenterology clinic for national and regional insurers, as well as their employer customers. In April of this year, Oshi and Aetna announced a partnership that provides Aetna commercial members with in-network access to Oshi's integrated multidisciplinary care teams. In addition, in March of 2022, Firefly Health, a virtual-first healthcare company, named Oshi as its preferred partner for digestive care. In January 2023, the company announced clinical trial results of a company-sponsored study at the Institute for Healthcare Improvement (IHI). The study demonstrated that virtual multidisciplinary care for gastrointestinal (GI) disorders from Oshi Health resulted in significantly higher levels of patient engagement, satisfaction, and symptom control, resulting in all-cause medical cost savings of approximately $11K per patient in just six months.

The Differentiators: About 1 in 4 people suffer from diagnosed GI conditions and many more suffer from chronic undiagnosed symptoms. "GI conditions are really stigmatized," says Holliday. "Integrated GI care is a missing piece of the healthcare infrastructure. There's a huge group of people who don't have anywhere to go to access care that's proven to work." By using this virtual-first model and increasing access to care at a lower cost for patients, Oshi Health is attempting to revolutionize GI care. Through its approach Oshi is attempting to tap into the large market for food-related health conditions. For example, according to a report entitled "Let Food Be Thy Medicine: Americans Use Diet to Manage Chronic Ailments," approximately 15% of U.S. households have a member who uses diet to manage a health condition. Oshi calculates that these conditions drive approximately $135 billion in annual healthcare costs and have a collective impact greater than diabetes, heart disease and mental health combined. Oshi Health plans to use a community of inflammatory bowel disease (IBD) patients for disease research, personalized insights, and a new digital therapeutic. Oshi Health is one of the first companies to address GI disorders in a way that is easily accessible for patients through a virtual approach. It is the only virtual platform exclusively for GI patients. Being virtual helps patients receive treatment from the convenience of their home, reduces stigma, and saves them the time and hassle of commuting for GI treatment, which can involve extensive and costly testing. Patients are in control of their care and have access to a team of medical professionals at their fingertips.

The Big Picture: Oshi Health's commitment to making GI care accessible, convenient, and affordable is likely to lower the costs of treatments such as cognitive-behavioral therapy, colonoscopies, X-rays, sonograms, and more in healthcare centers.
Oshi Health helps avoid preventable and expensive ER visits as well as unnecessary colonoscopies and endoscopies. For example, according to a study cited by the company, unmanaged digestive symptoms are the #1 cause of emergency department treat-and-release visits. By incorporating access to psychologists in the membership plan rather than requiring it to be paid for out of pocket, Oshi allows patients to access the services of mental health professionals quickly and easily. Patients can schedule appointments within 3 days and there is always support between visits with care plan implementation. By incorporating both physical and mental health, Oshi Health is attempting to treat not just the symptoms but the causes as well, helping to lower costs versus in-person care while lowering the incidence of GI flare-ups and increasing the quality of care patients receive. As noted by the company, "Oshi Health is able to intercept and change the trajectory of unmanaged symptom escalations," helping to drive improvements in outcomes and cost savings. Billions of dollars are saved annually on avoidable treatments and expenses because these patients are receiving the treatment they need rather than physicians ordering expensive tests without knowing the root cause of their problems. Due to the comprehensiveness of the virtual-first care model, physicians, dieticians, health coaches, care coordinators, and psychologists are all involved in the patient's journey. Under traditional models, patients with GI issues see just a gastroenterologist. By contrast, in Oshi's model, a holistic team is involved every step of the way, creating a personalized care plan customized for each patient's lifestyle.

Related Reading: Oshi Health scores $30M to scale gastrointestinal care company; Virtual GI care startup Oshi Health takes up $30M backed by CVS, Takeda venture arms
- What Sports Analytics Can Teach Us About Integrating AI Into Care
Our Take: Last week, our founder Jeff Englander had the pleasure of having dinner with Dr. David Rhew, Global Chief Medical Officer and V.P. of Healthcare for Microsoft. After a broad-ranging discussion about the application of A.I., A.R., V.R., and Cloud to healthcare, they came to realize they were both big sports fans, David of his hometown Detroit teams and Jeff of his hometown Boston teams. Toward the end of dinner, Jeff referenced an article he had written in May of 2018 on "What Steph & LeBron Can Teach Business About Analytics" and how sports demonstrate many practical ways to gain acceptance and integrate analytics into an organization. Based on that discussion, we thought it would be timely to reprint it here:

As I sat watching the NBA conference finals last night, I began thinking about what I had learned from the MIT Sloan Sports Analytics conferences I went to over the last several years. I thought about how successful sports, and the NBA in particular, had been in applying analytics and what lessons businesses could learn to help them apply analytics to their own operations. Five basic skills stood out that sports franchises had been able to apply to their organizations and that were readily transferable to the business world: intensive focus; deep integration; limited analytic "burden"; trust in the process; and communication and alignment.

1) Intensive focus - when teams deploy sports analytics they bring incredible focus to the task. One player noted that they do not focus on how to stop LeBron, not even on how to stop LeBron from going left off the dribble, but instead on how to stop LeBron from going left off the dribble coming off the pick and roll. This degree of pinpoint analysis and application of the data has contributed to the success and continued refinement of sports analytics on the court, field, rink, etc.

2) Deep integration - each of the analytics groups I spoke with attempted to informally integrate their interactions into the daily routines of players and coaches through natural interactions (the Warriors analytics guy used to rebound for Steph Curry at practice). Analytics groups worked to demystify what they were doing and make themselves approachable. The former St. Louis, now L.A., Rams analytics group jokingly dubbed its office the "nerds nest". By integrating themselves into the players' (and coaches') worlds they were able to break down stereotypes and barriers to acceptance of analytics.

3) Limited analytic "burden" - teams' data science groups noted that, given the amount of data they generate, it's important to limit the number of insights they present at any one time. One group made it a rule to discuss or review no more than 3 analytical insights per week with players or coaches. This made their work more accessible and more tangible to players and coaches and helped them quantify the value to the front office.

4) Trust in the process - best illustrated by a player who told the story of working with an analytics group and coaches to design a game plan against an elite offensive player, which he followed and executed to a tee. But that night the opposition player couldn't be stopped and, in the player's words, "he dropped 30 on me". The other panelists pointed out that you can't go away from your system based on short-term results.
As one coach noted, "don't fail the plan, let the plan fail you… Have faith in the process."

5) Communication and alignment - last but not least, teams stressed the need to be aligned and to communicate that concept clearly throughout the organization. As Scott Brooks, at the time the coach of the Orlando Magic, noted, "we are all in this together, we have to figure this out together". Surprisingly, at times communication was paramount even for the most successful and highly compensated athletes. For example, at last year's conference, Chris Bosh, a 5x All-Star and 2x NBA Champion making $18M a year at the time he was referring to, lamented the grueling Miami Heat practices during their near-record 27-game winning streak in 2013, seemingly despite their success (at the time the 2nd longest winning streak in NBA history). When I asked him what would have made it more bearable, he said communication, just better communication on what they were trying to do.

Clearly, professional sports have very successfully applied analytics to their craft and there are a number of lessons that businesses can copy as they seek to gain broader and more effective adoption of analytics throughout the value chain.
- Viz.ai-Applying AI to Reduce Time to Life Saving Treatments
The Driver: Viz.ai recently raised $40M in growth capital from CIBC Innovation Banking. The additional funding brings Viz's total fundraising to approximately $292M. Viz has developed a software platform based on artificial intelligence (AI) that is designed to improve communication between care teams handling emergency patients (first applied to stroke patients), helping improve care coordination and dramatically reduce response times. The company will use the funds to power its expansion, including the possibility of acquisitions.

Key Takeaways:
Of the approximately $220 billion total cost of strokes in the U.S., $38.1 billion was due to under-employment and $30.4 billion to premature mortality (Viz.ai & Journal of the Neurological Sciences)
The risk of having a first stroke is nearly twice as high for Blacks as for whites, and Blacks have the highest rate of death due to stroke (American Stroke Association)
Each one-minute [delay in care for stroke victims] translates into 2 million brain cells that die (Viz.ai)
Stroke is the number one cause of adult disability in the U.S. and the fifth leading cause of death (American Stroke Association)

The Story: Viz.ai was founded by Dr. Chris Mansi and David Golan. While working as a neurosurgeon in the U.K., Dr. Mansi observed situations in which a successful surgery was performed yet patients would not survive because of extended time lapses between diagnosis and surgery, which was particularly true with strokes. For example, when doctors believe there has been a stroke, they typically order x-rays and a series of CT scans, and while the scans themselves typically happen quickly, there is often a sizeable delay before the studies can be read by a competent professional. Once the readings are performed by a radiologist, there is often a further delay in care as clinicians have to inform a local stroke center of the diagnosis and then ensure the patient is transferred to that center for treatment. Dr. Mansi met Golan in graduate school at Stanford while studying for his M.B.A. Golan was suspected of having suffered a stroke prior to entering Stanford, and the two classmates lamented the lack of available data for stroke treatment. As noted by the company, "Mansi learned how undertreated large vessel occlusion (LVO) strokes were and wanted to be an agent of change." Mansi and Golan collaborated on a plan to apply A.I. to increase the data for stroke treatment, and Viz.ai was born. As noted in a recent Forbes article, the company's "software cross-references CT images of a patient's brain with its database of scans to find early signs of LVO strokes. It then alerts doctors, who see [and communicate] about the images on their phones" and allows those clinicians to communicate with specialists at stroke centers and arrange for patients to be transferred there for care. According to the company, this leads to dramatic decreases in the time it takes for patients to go from diagnosis to procedure, commonly referred to as "door to groin puncture times." Originally developed for LVOs, Viz has now received FDA approval for 7 A.I. imaging solutions and has extended treatment from LVOs to cerebral aneurysms (February 2022), subdural hemorrhage (July 2022) and most recently hypertrophic cardiomyopathy-HCM (pending).
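The coordination loop described above (a model flags a suspected LVO, the care team is notified, and the hand-off is timed) can be pictured with a small sketch. This is an illustrative outline only, with hypothetical thresholds, function names, and notification hooks; it is not Viz.ai's actual system.

```python
import time
from dataclasses import dataclass, field

LVO_ALERT_THRESHOLD = 0.85  # hypothetical probability cutoff for alerting

@dataclass
class StudyEvent:
    study_id: str
    arrival_ts: float                 # when the CT study arrived ("door" time)
    timestamps: dict = field(default_factory=dict)

def notify_stroke_team(study: StudyEvent, score: float) -> None:
    # Placeholder for a real paging / push-notification integration.
    print(f"ALERT {study.study_id}: suspected LVO (score={score:.2f}) sent to on-call stroke team")
    study.timestamps["alert_sent"] = time.time()

def triage_study(study: StudyEvent, lvo_score: float) -> None:
    """Route a CT study based on a model's suspected-LVO score and record
    the delay between study arrival and team notification."""
    if lvo_score >= LVO_ALERT_THRESHOLD:
        notify_stroke_team(study, lvo_score)
        minutes = (study.timestamps["alert_sent"] - study.arrival_ts) / 60
        print(f"door-to-notification time: {minutes:.1f} min")
    else:
        print(f"{study.study_id}: no alert (score={lvo_score:.2f}), routed to routine read")

if __name__ == "__main__":
    study = StudyEvent(study_id="CT-001", arrival_ts=time.time() - 240)  # arrived 4 minutes ago
    triage_study(study, lvo_score=0.91)
```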
The Differentiators: As noted above, Viz.ai's system automatically analyzes all images in a hospital system for the noted conditions and alerts clinicians if any are detected. In the case of LVOs, the system then allows doctors to view images of patient scans on their phones, exchange messages, and cut crucial time off diagnosis and treatment. As the company notes, this is particularly important for smaller facilities, which often lack specialists to interpret scans and arrange for transitions in care. For example, according to a study in the American Journal of Neuroradiology looking at stroke treatment at a facility using Viz.ai technology, researchers found "robust improvement" in other stroke response metrics, including door-to-device and door-to-recanalization times, and a 22% overall decline in time to treatment. This is particularly important in the case of strokes, which are the number one cause of adult disability and the fifth leading cause of death, as time is of the essence for stroke victims, with each minute of delay adding one week of disability.

Implications: Applying technology to help reduce delays in diagnosis and treatment is one of the most promising applications of artificial intelligence because of the vast amounts of data these systems can process in short periods of time. While over time many hope, and some fear, that these types of technologies will be able to be "taught" how to diagnose and treat illness, in the near term their greatest use lies in augmenting the skills of clinicians by allowing them to focus their attention on areas most in need of an experienced, nuanced diagnosis. This is particularly true for brain injuries, where literally every second and every minute count. For example, as noted by the company, "every one minute [delay in care for stroke victims] translates into 2 million brain cells that die." Given that the loss of brain cells results in loss of brain function, disability, or worse, the costs to society can be quite high. According to the company, strokes cost the U.S. healthcare system about $220 billion annually, and each LVO patient treated with a timely thrombectomy costs roughly one-tenth as much, or almost $1 million less, than those who aren't. It is practical examples of the clinical applications of A.I. such as this, which attack very concrete and tangible problems, that are likely to pave the way for acceptance of more complex applications in the healthcare delivery system.

Related Reading: Viz.ai partners with Us2.ai to integrate echocardiogram analysis tool; Viz.ai secures Bristol Myers Squibb's backing for hypertrophic cardiomyopathy-spotting AI
- Explainable AI-Making AI Understandable, Transparent, and Trustworthy-The HSB Blog 3/23/23
Our Take: Explainable AI, or AI whose methodology, algorithms, and training data can be understood by humans, can address challenges surrounding AI implementation including lack of trust, bias, fairness, accountability, and lack of transparency, among others. For example, a common complaint about AI models is that they are biased and that if the data AI systems are trained on is biased or incomplete, the resulting model will perpetuate and even amplify that bias. By providing transparency into how an AI model was trained and what factors went into producing a particular result, explainable AI can help identify and mitigate bias and fairness issues. In addition, it can also increase accountability by making it easier for users and those impacted by models to trace some of the logic and basis for algorithmic decisions. Finally, by enabling humans to better understand AI models and their development, explainable AI can engender more trust in AI, which could accelerate the adoption of AI technologies by helping to ensure these systems were developed with the highest ethical principles of healthcare in mind.

Key Takeaways:
AI algorithms continuously adjust the weight of inputs to improve prediction accuracy, but that can make understanding how the model reaches its conclusions difficult. One way to address this problem is to design systems that explain how the algorithms reach their predictions.
GPT-4 is rumored to have around 1 trillion parameters compared to the 175 billion parameters in GPT-3, both of which are well in excess of what any human brain could process and break down.
During the Pandemic, the University of Michigan hospital had to deactivate its AI sepsis-alerting model when differences in demographic data for patients affected by the pandemic created discrepancies and a series of false alerts.
AI models used to supplement diagnostic practices have been effective in biosignal analyses, and studies indicate physicians trust the results when they understand how the AI came to its conclusion.

The Problem: The use of artificial intelligence (AI) in healthcare presents both opportunities and challenges. The complex and opaque nature of many AI algorithms, often referred to as "black boxes", can lead to difficulty in understanding the logical processes behind AI's conclusions. This not only poses a challenge for regulatory compliance and legal liability but also impacts users' ability to ensure the systems were developed ethically, are auditable, and, eventually, their ability to trust the conclusions and purpose of the model itself. However, the implementation of processes to make AI more transparent and explainable can be costly and time-consuming and could potentially result in a requirement, or at least a preference, that model developers disclose proprietary intellectual property that went into creating the systems. This process is made even more complex in the U.S., where the lack of general legislation regarding the fair use of personal data and information can hamper the use of AI in healthcare, particularly in clinical contexts where physicians must explain how AI works and how it is trained to reach conclusions.
The Backdrop: The concept of explainable AI is to provide human beings involved with using, auditing, and interpreting models a methodology to systematically analyze what data a model was trained on and what predictive factors are more heavily weighted in the model, as well as provide cursory insights into how the algorithms in particular models arrived at their conclusions and recommendations. This in turn would allow the human beings interacting with the model to better comprehend and trust the results of a particular AI model instead of the model being viewed as a so-called "black box" where there is limited insight into such factors. In general, many AI algorithms, such as those that utilize deep learning, are often referred to as "black boxes" because they are complex, can have multiple billions and even trillions of parameters upon which calculations are performed, and consequently can be difficult to dissect and interpret. For example, GPT-4 is rumored to have around 1 trillion parameters compared to the 175 billion parameters in GPT-3, both well in excess of what any human brain could process and break down. Moreover, because these systems are trained by feeding vast datasets into models which are then designed to learn, adapt, and change as they process additional calculations, the products of the algorithms are often different from their original design. As a result of the number of parameters the models are working with and the adaptive nature of the machine learning models, the engineers and data scientists building these systems cannot fully understand the "thought process" behind an AI's conclusions or explain how these connections are made. However, as AI is increasingly applied to healthcare in a variety of contexts including medical diagnoses, risk stratification, and anomaly detection, it is important that AI developers have methods to ensure models are operating efficiently, impartially, and lawfully in line with regulatory standards, both at the model development stage and when models are being rolled into use. As noted in an article published in Nature Medicine, starting the AI development cycle with an interpretable system architecture is necessary because inherent explainability is more compatible with the ethics of healthcare itself than methods to retroactively approximate explainability from black box algorithms. Explainability, although more costly and time-consuming to implement in the development process, ultimately benefits both AI companies and the patients they will eventually serve far more than if they were to forgo it. Adopting a multi-stakeholder view, a layperson will find it difficult to make sense of the litany of data that AI models are trained on, and that they recite as part of their generated results, especially if the individual interpreting these results lacks knowledge and training in computer science and programming. By creating AI with transparency and explainability, developers also create responsible AI that may eventually give way to the larger-scale implementation of AI in a variety of industries, but especially healthcare, where growing digitization is generating more patient data than ever before along with the need to manage and protect this data in appropriate ways. Creating AI that is explainable ultimately increases end user trust, improves auditability, and creates additional opportunities for constructive use of AI in healthcare solutions.
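As a small illustration of the "which predictive factors are weighted most heavily" idea described above, the sketch below trains a model on synthetic data and reports a permutation importance for each input. It is a generic scikit-learn example on made-up features, not a model from any of the studies cited here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical data (e.g., vitals and lab values).
feature_names = ["age", "heart_rate", "temperature", "lactate", "wbc_count"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:12s} importance: {score:.3f}")
```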
This is one way to reduce the hesitation and risks associated with traditional "black box" AI by making legal and regulatory compliance easier, providing the ability for detailed documentation of operating practices, and allowing organizations to create or preserve their reputations for trust and transparency. While a large number of AI-enabled clinical decision support systems are predominantly used to provide supporting advice for physicians making important diagnostic and triage decisions, a study from the journal Scientific Reports found that this actually helped improve physicians' diagnostic accuracy, with physician plus AI performing better than when physicians received human advice concerning the interpretation of patient data (sometimes referred to as the "freestyle chess effect"). AI models used to supplement diagnostic practices have been effective in biosignal analyses, such as that of electrocardiogram results, detecting biosignal irregularities in patients as quickly and accurately as a human clinician can. For example, a study from the International Journal of Cardiology found that physicians are more inclined to trust the generated results when they can understand how the explainable AI came to its conclusion. As noted in the Columbia Law Journal, however, while the most obvious way to make an AI model explainable would be to reveal the source code for the machine learning model, that actually "will often prove unsatisfactory (because of the way machine learning works and because most people will not be able to understand the code)" and because commercial organizations will not want to reveal their trade secrets. As the article notes, another approach is to "create a second system alongside the original 'black box' model, sometimes called a 'surrogate model.'" However, a surrogate model only approximates the original model and does not use the same internal weights as the model itself. As such, given the limited risk tolerance in healthcare, we doubt such a solution on its own would be acceptable.

Implications: As noted by all the buzz around ChatGPT with the recent introduction of GPT-4 and its integration into products such as Microsoft's Copilot, and Google's integration of Bard with Google Workspace, AI products will increasingly become ubiquitous in all aspects of our lives, including healthcare. As this happens, AI developers and companies will have to work hard to ensure that these products are transparent and do not purposely or inadvertently contain bias. Along those lines, when working in healthcare in particular, AI companies will have to ensure that they implement frameworks for responsible data use which include: 1) ensuring the minimization of bias and discrimination for the benefit of marginalized groups by enforcing non-discrimination and consumer laws in data analysis; 2) providing insight into the factors affecting decision-making algorithms; and 3) requiring organizations to hold themselves accountable to fairness standards and conduct regular internal assessments. In addition, as noted in an article from the Congress of Industrial Organizations, in Europe AI developers could be held to legal requirements surrounding transparency without risking IP concerns under Article 22 of the General Data Protection Regulation, which codifies an individual's right not to be subject to decisions based solely on automated processing and requires the supervision of a human in order to minimize overreliance on and blind faith in such algorithms.
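Returning to the "surrogate model" approach described above, here is a minimal, generic sketch: a shallow decision tree is fit to mimic a black-box model's predictions so its rules can be read, with a fidelity score showing how closely it tracks the original. The data and feature names are synthetic placeholders, and, as noted above, a surrogate only approximates the underlying model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["age", "bmi", "systolic_bp", "a1c", "prior_admissions"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=1)

# 1) The "black box": an ensemble model whose internals are hard to read.
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

# 2) The surrogate: a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# 3) Fidelity: how often the surrogate agrees with the black box (not with ground truth).
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=feature_names))
```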
In addition, one of the issues with AI models is data shift, which occurs when machine learning systems underperform or yield false results because of mismatches between the datasets they were trained on and the real-world data they actually collect and process in practice. For example, as challenges to individuals' health conditions continue to evolve and new issues emerge, it is important that care providers consider population shifts of disease and how various groups are affected differently. During the Pandemic, the University of Michigan Hospital had to deactivate its AI sepsis-alerting model when differences in demographic data gathered from patients affected by the pandemic created discrepancies with the data the AI system had been trained on, leading to a series of false alerts. As noted in an article in the New England Journal of Medicine, the pandemic had fundamentally altered the way the AI viewed and understood the relationship between fevers and bacterial sepsis. Episodes like this underscore the need for high-quality, unbiased, and diverse data in order to train models. In addition, given that the regulation of machine learning models and neural networks in healthcare is continuing to evolve, developers must ensure that they continuously monitor and apply new regulations as they evolve, particularly with respect to adaptive AI and informed consent. Developers must also ensure that models are tested both in development and post-production to ensure that there is no model drift. With the use of AI models in health care, there are special questions that repeatedly need to be asked and answered. Are AI models properly trained to account for the personal aspects of care delivery and to consider the individual effects of clinical decision-making, ethically balancing the needs of the many against the needs of the few? Is the data collected and processed by AI secure and safe from malicious actors, and is it accurate enough that the potential for harm is properly mitigated, particularly against historically underserved or underrepresented groups? Finally, what does the use of these models and these particular algorithms mean with regard to the doctor-patient relationship and the trust vested in our medical professionals? How will decision-making and care be impacted when using AI that may not be sufficiently explainable and transparent for doctors themselves to understand the thought process behind, and therefore trust, the results that are generated? These questions will undoubtedly persist as long as the growth in AI usage continues, and it is important that AI is adopted responsibly and with the necessary checks and balances to preserve justice and fairness for the patients it will serve.

Related reading: Stop explaining black-box machine learning models for high-stakes decisions and use interpretable models instead; Non-task expert physicians benefit from correct explainable AI advice when reviewing X-rays; Explainable artificial intelligence to detect atrial fibrillation using electrocardiogram; The Judicial Demand for Explainable Artificial Intelligence; Explainability for artificial intelligence in healthcare: a multidisciplinary perspective; Enhancing trust in artificial intelligence: Audits and explanations can help; Art. 22 GDPR – Automated individual decision-making, including profiling - General Data Protection Regulation (GDPR); The Clinician and Dataset Shift in Artificial Intelligence; Dissecting racial bias in an algorithm used to manage the health of populations
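As a footnote to the data-shift and model-drift discussion above, distribution checks of this kind can be automated. The sketch below compares each feature's training distribution against recent production data using a two-sample Kolmogorov-Smirnov test and flags features that appear to have drifted; the features, threshold, and data are illustrative placeholders, and real deployments would pair such checks with clinical review.

```python
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # illustrative significance threshold for flagging drift

def detect_drift(train: dict, live: dict, alpha: float = DRIFT_P_VALUE) -> list:
    """Return the features whose live distribution differs significantly
    from the training distribution (two-sample KS test per feature)."""
    drifted = []
    for feature, train_values in train.items():
        stat, p_value = ks_2samp(train_values, live[feature])
        if p_value < alpha:
            drifted.append((feature, stat, p_value))
    return drifted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "training era" vs. "pandemic era" data: the temperature distribution shifts.
    train = {"temperature": rng.normal(37.0, 0.6, 5000), "heart_rate": rng.normal(80, 12, 5000)}
    live = {"temperature": rng.normal(37.8, 0.9, 1000), "heart_rate": rng.normal(81, 12, 1000)}
    for feature, stat, p in detect_drift(train, live):
        print(f"possible drift in '{feature}': KS={stat:.3f}, p={p:.2e}")
```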
- Cognosos-Analyzing and Optimizing Asset Visibility and Management
The Driver: Cognosos recently raised $25M in a venture round led by Riverwood Capital. As part of the funding, Joe De Pinho and Eric Ma from Riverwood will join Cognosos' Board. Cognosos has developed a cloud-based platform of real-time location services (RTLS) and process optimization software. The fundraising round brings Cognosos' total funding to $38.1M. The proceeds will be used to allow Cognosos to double its staff from the current 50 to 100 as well as continue to expand in healthcare and automotive manufacturing, among other industries.

Key Takeaways:
Cognosos grew revenue by over 226% in 2022 according to the Atlanta Business Journal and expects to double it year-over-year in the coming year (Cognosos)
Between 2020 and 2022 the average price of healthcare services increased by 2.4%, while the Producer Price Index (PPI) increased by 14.3%, leading to a further squeeze on healthcare supply chains as per-patient input costs have exploded (University of Arkansas)
A single Cognosos gateway can provide support for up to 100,000 square feet and manage location information for up to 10,000 assets
In 2021 it is estimated that at least one-third of healthcare providers operated under negative margins, with a combined loss of $54 billion in net income (Kaufman-Hall)

The Story: Cognosos was founded in 2014 and is based on technology developed by the founders at Georgia Tech's Smart Antenna lab for radio astronomy. According to the company's website, while investigating techniques in radio astronomy to combine signals from multiple dishes, the founders of Cognosos discovered a way to use software-defined radio (SDR) and cloud-based signal processing. This dramatically lowered the cost and power requirements for wireless sensor transmitters. The result, called RadioCloud, was created with the belief that businesses of all sizes should be able to use ubiquitous, low-cost wireless technology to harness the power of the Internet of Things (IoT). The name Cognosos is derived from the Latin verb cognoscere, meaning "to become aware of, to find." The company states that it has over 100 customers and 4,000 registered users and grew revenue by over 226% in 2022, according to the Atlanta Business Journal.

The Differentiators: Cognosos' real-time location services (RTLS) technology and process optimization software allow companies to track their assets' movement, improve operational visibility, and safely maximize productivity. The company's RTLS combines Bluetooth Low Energy technology with AI and its proprietary long-range wireless networking technology to help reduce deployment costs and improve tracking ability (a rough sketch of the general location-estimation idea appears at the end of this post). The company has 10 U.S. patents and states that a single gateway can provide support for up to 100,000 square feet and manage location information for up to 10,000 assets. The technology is easily deployable without having to pull wires or move tiling or other fixtures, minimizing any disruptions to operations and allowing customers to leverage existing Bluetooth infrastructure, including Bluetooth-enabled fixtures. The company states that this technology allows it to bring critical assets online much more rapidly than competitors (ex: weeks vs. months). The company's major clients are in healthcare, automotive, logistics, and manufacturing.

Implications: For years hospitals and healthcare organizations have struggled to effectively manage and optimize their assets.
The industry is rife with stories of nurses, aides, and others squirreling monitors and other devices away in secret hiding places so that they don't have to spend time and unnecessary energy tracking down what they need. Now more than ever, with hospitals facing increased financial pressures, workforce shortages, and employee dissatisfaction, a solution that improves and eases the management of such assets is needed. Given Cognosos' ability to limit disruption to clinical operations and patient care while helping to reduce costs and improve employee productivity, the company should be positioned well going forward. In addition, since Cognosos' product is not hardware-based but instead relies on the Cloud and AI software to manage assets, it can provide real-time asset tracking with lower infrastructure costs. Our belief is that solutions like Cognosos', which leverage both the Cloud and AI to help address healthcare's back-office and supply chain challenges, will be among the first to be widely adopted, hastening the adoption of these technologies within the industry and paving the way for broader adoption of clinical technology down the road.
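As referenced above, here is a rough sketch of the general idea behind estimating an asset's position from Bluetooth beacon signal strengths, using a weighted centroid of the gateways that hear the tag. The gateway coordinates, the RSSI-to-weight conversion, and the path-loss constants are illustrative assumptions only; Cognosos' actual algorithms are proprietary and certainly more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class GatewayReading:
    x: float          # gateway position in meters (site floor plan coordinates)
    y: float
    rssi_dbm: float   # received signal strength from the asset's BLE tag

def rssi_to_weight(rssi_dbm: float, tx_power_dbm: float = -59.0, path_loss_n: float = 2.0) -> float:
    """Convert RSSI to an approximate inverse-distance weight using a
    simple log-distance path-loss model (illustrative constants)."""
    est_distance = 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))
    return 1.0 / max(est_distance, 0.1)

def estimate_position(readings: list[GatewayReading]) -> tuple[float, float]:
    """Weighted centroid of the gateways that heard the tag: gateways with
    stronger signal (i.e., closer) pull the estimate toward themselves."""
    weights = [rssi_to_weight(r.rssi_dbm) for r in readings]
    total = sum(weights)
    x = sum(w * r.x for w, r in zip(weights, readings)) / total
    y = sum(w * r.y for w, r in zip(weights, readings)) / total
    return x, y

if __name__ == "__main__":
    readings = [
        GatewayReading(x=0.0, y=0.0, rssi_dbm=-55.0),   # strong signal: tag is nearby
        GatewayReading(x=30.0, y=0.0, rssi_dbm=-78.0),
        GatewayReading(x=0.0, y=30.0, rssi_dbm=-81.0),
    ]
    print("estimated position (m): (%.1f, %.1f)" % estimate_position(readings))
```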
- R-Zero-Making Intelligent Air Disinfection More Economical and Efficient
The Driver: R-Zero recently raised $105M in a Series C financing led by investment firm CDPQ with participation from BMO Financial Group, Qualcomm Ventures, Upfront Ventures, DBL Partners, World Innovation Lab, Mayo Clinic, Bedrock Capital, SOSV and legendary venture capital investor John Doerr. The Series C financing brings the total amount raised by R-Zero to more than $170M since its founding in 2020. The company will use the funds to scale deployments of its disinfection and risk modeling technology to meet growing demand across public and private sectors, including hospitals, senior care communities, parks and recreation, other government facilities, and college and corporate campuses.

Key Takeaways:
According to the company, R-Zero's technology neutralizes 99.9% of airborne and surface microorganisms
R-Zero's UV-C products cost anywhere from $3K to $28K compared with traditional institutional UV-C technology which can cost anywhere from $60K to $125K
Using R-Zero's products results in more than 90% fewer greenhouse gas emissions (GHG) and waste compared to HVAC and chemical approaches
For every dollar employers spend on health care, they're spending 61 cents on illness-related absences and reduced productivity (Integrated Benefits Institute)

The Story: R-Zero was co-founded by Grant Morgan, Eli Harris and Ben Boyer. Morgan, who has an engineering background, had worked briefly as CTO of GIST, had been V.P. of product and engineering at iCracked, and was previously in R&D in medical devices. Harris co-founded EcoFlow (an energy solutions company) and had worked in partnerships and BD at drone company DJI, while Boyer had been an MD at early-growth stage VC Tenaya Capital. According to the company, the co-founders applied their experience to innovating an outdated legacy industry to make hospital-grade UVC technology accessible to small and medium-sized businesses. Prior to R-Zero these units could cost anywhere from $60K to $125K and often lacked the connected infrastructure and analytics necessary to optimize performance and provide risk analytics for their users (ex: how frequently and heavily rooms are being used and when to use disinfection to help mitigate risk).

The Differentiators: As noted above, typical institutional ultraviolet (UV) disinfectant lighting technology can be expensive and has the potential to be harmful (high-powered UVC lights can cause eye injuries if people are exposed to them for long periods of time); however, R-Zero has found ways to mitigate these issues. First, as noted in Forbes, its products run anywhere from $28K for its most expensive device, the Arc, to the Beam at $5K and the Vive at $3K. Moreover, while the Arc can only be used to disinfect an empty room due to the wavelength of UVC light, the Beam creates a disinfection zone above people in a room while the Vive can be used to combat harmful microorganisms when people are in a room. In addition, according to the company, R-Zero's technology neutralizes 99.9% of airborne and surface microorganisms and does so with 90% fewer greenhouse gas emissions and waste compared to HVAC and chemical approaches. As a result, R-Zero can help improve indoor air quality in hospitals and other medical facilities, factories, warehouses, and other workplaces more efficiently and effectively than outdated technologies.

Implications: As noted above, R-Zero's technology will help hospitals and senior care facilities cost-effectively sanitize treatment spaces in ways that can't necessarily be done with current technology.
Moreover, as medical care increasingly moves to outpatient settings, the ongoing workforce shortage will challenge these facilities to find ways to keep themselves clean and disinfected and avoid disease transmission. For example, the company claims that their customers have been reducing labor costs by 30%-40%, a figure that will likely only increase given the current labor market. In addition, even in facilities that have the necessary workforce, it is often difficult to optimize staff time to ensure that offices are sanitized and used to maximum capacity. Utilizing devices like the R-Zero Beam or Vive can allow small-to-medium-sized facilities to disinfect rooms constantly and efficiently, making them immediately available for use. Also, by removing the burdensome task of having already overworked clinical or janitorial staff spend time sanitizing rooms, R-Zero's technology can help improve employee productivity and satisfaction at a time when both are stretched thin. This Startup Wants To Bring Disinfecting UV Light Into "Every Physical Space"; R-Zero Raises $105 Million Series C to Improve the Indoor Air We Breathe; This startup built an ultraviolet device that can disinfect a restaurant in minutes
- Prescribe Fit – Attacking the Root Cause of MSK Issues
The Driver: Prescribe Fit is a virtual/telehealth-based orthopedic health startup focused on patients dealing with orthopedic bone and muscle injuries. Prescribe Fit raised $4 million in seed funding. The round was led by Tamarind Hill with participation from the Grote Family as well as Mike Kaufman, the former CEO of Cardinal Health. According to the company, proceeds of the funding will be used to aggressively expand the company, as well as broaden and accelerate product development. Key Takeaways: According to the Bone and Joint Initiative USA, 124 million Americans suffer from a musculoskeletal disorder On average, 1 out of 4 elderly adults falls each year and over 800,000 people end up in the hospital due to a fall injury per the U.S. CDC Patients who experienced falls had longer hospital stays and were more frequently discharged to other healthcare facilities, instead of their primary residence according to a study by the Hospital for Special Surgery According to an article in the Journal of Medicine, fear of falling often develops after experiencing a fall and developing a fear of falling can cause older adults to avoid physical activity, experience more difficulty with activities of daily living, and become less able to perform exercises. The Story: Originally started as a weight loss coaching startup in January 2020, Prescribe Fit was able to secure only one client after enduring the shutdown of all non-essential health services during the Pandemic. Co-founded by CEO Brock Leonti, who previously owned a home health agency for approximately six years, the company worked at that time to help treat obesity and served primary care doctors. While the company was limited to just one client during the Pandemic, it was able to test and refine its model as well as a number of treatment approaches. As part of that, the company gleaned a number of insights including how to successfully use remote patient monitoring technology and the need for limited administrative burden on physicians. The Differentiator: Based on its experience and what it had learned during the Pandemic, in August 2022 Prescribe Fit transitioned its business model to focus solely on orthopedic practices and the treatment of the root causes of MSK issues. According to Leonti, this includes helping orthopedic patients reduce blood pressure, blood sugar & weight at home and partnering with orthopedic practices to improve their patients' mental acuity, flexibility & endurance. As noted in the Columbus Business Journal, "Prescribe Fit has a team of nurses and care coordinators who meet remotely with patients and "edit" their daily routines so their behavior changes stick." This includes having patients take pictures of their meals and then having coordinators indicate where they may be able to reduce portion sizes or substitute healthier items in their diets. According to Leonti, this has allowed orthopedic patients to obtain 5.4% average weight loss in just 16 weeks and create personalized at-home health plans resulting in 80%+ of patients staying engaged for 9+ months, both of which help improve MSK issues. Implications: According to the Bone and Joint Initiative USA, 124 million Americans suffer from an MSK disorder but will often end up treating the symptoms and not addressing the root cause. In part this is due to the limited availability of orthopedic specialists and other clinicians to address these issues. 
By connecting these patients via specialists' offices with nurses and other case managers who can address specific dietary and behavior issues that are contributing to these conditions (ex: lack of exercise or inappropriate exercise routines), Prescribe Fit is helping improve the quality of care while lowering the cost. Moreover, since patients are being monitored by clinicians using remote patient monitoring (RPM) and chronic disease management tools, physicians are able to create an additional reimbursement stream (while paying Prescribe Fit a management fee). As the U.S. population ages, a larger proportion of the population will have to deal with MSK issues that can lead to falls and injuries, which can often compound into other issues. By addressing these issues and helping patients strengthen bones and improve muscle tone, Prescribe Fit may help reduce the incidence (and cost) of such issues. Health tech startup Prescribe FIT raises $4M in oversubscribed seed round, Weight-loss coaching startup Prescribe Fit doubles with focus on orthopedics
- ChatGPT in Healthcare: Where it is Now and A Roadmap for Where it is Going-The HSB Blog 2/2/23
Our Take: AI chatbots such as ChatGPT and tools that are being developed like it have significant promise in revolutionizing the way care is delivered and the way that patients and care providers can connect with each other. Due to their ease of use and equity of access, patients from all backgrounds can receive effective care, particularly in the fields of medical administration & diagnosis, mental health treatment, patient monitoring, and a variety of other clinical contexts. However, as ChatGPT, the most advanced publicly available AI chatbot yet, is still in its beta stage, it is important to keep in mind that these chatbots operate on a statistical basis and lack real knowledge, which leads them to frequently give inaccurate information and make up answers via inferences based on the data they are trained on, raising concerns as to whether they can be trusted to deliver correct information. This is especially true for patients with chronic conditions who may be putting their health in danger by following chatbots' advice that could be seriously flawed. Key Takeaways: Chatbots have the potential to reduce annual US healthcare spending by 5-10%, translating to $200-360 billion in savings (NBER) In a study evaluating the performance of virtual assistants in helping patients maintain physical activity and diet and track medication, 79% of participants reported virtual assistants had the potential to change their lifestyle (International Journal of Environmental Research and Public Health) Artificial intelligence solutions in healthcare are easier to access than ever before and care providers are quickly adopting AI chatbots to solve deficiencies in manpower and equity of access for tasks they see as easy to automate. As noted by STATNews, "ChatGPT was trained on a dataset from late 2021, [so] its knowledge is currently stuck in 2021…it would be crucial for the system to be updated in near real-time for it to remain highly accurate for health care" The Problem: With new developments in advanced medicine and technology, including artificial intelligence and machine learning tools, care providers are working hard to adapt these systems to healthcare, particularly where they have the potential to address an ongoing workforce shortage. Moreover, as populations age, the gap between incidence of disease and treatment options broadens. For example, according to the journal Preventing Chronic Disease, in 2018 an estimated 51.8% of US adults had at least one of the ten most commonly diagnosed chronic conditions, and 27.2% of adults had multiple chronic conditions. As a result, hospitals and physicians (providers) are seeing greater levels of care utilization and a need to connect these patients with care resources that address their conditions and/or prevent the conditions from becoming more severe. In addition, given the inefficiencies and disparities in the delivery of care in the U.S. healthcare system (ex: lack of providers in certain rural areas, long wait times for specialists), technology may be best positioned to address these deficiencies and improve outcomes. Over time, as these tools become more sophisticated, they can be used for initial triage with escalation to clinicians for a higher level of care, increasing the use and application of human intelligence and experience where it may be most needed. 
The Backdrop: The advent of AI chatbots holds great promise in changing the way we deliver and manage care, especially for practices that lack the resources to handle large numbers of patients and the amount of data they generate. According to Salesforce, a chatbot (coined from the term "chat robot") is a computer program that simulates human conversation either by voice or text communication and is designed to solve a problem. Early versions of chatbots were used to engage customers alongside the classic customer service channels like phone, email, and social media. Current chatbots such as ChatGPT leverage machine learning to continually refine and improve their performance using data provided and analyzed by an algorithm. As noted in WIRED magazine, the technology at the core of ChatGPT is not new; "it is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web." What makes ChatGPT stand out is that "it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5 (which provides an intuitive interface for users to have a conversation with AI). This tweak has unlocked a new capacity to respond to all kinds of questions, giving the powerful AI model a compelling new interface just about anyone can use." After creating an account with OpenAI (the developer behind ChatGPT), all users have to do is type their query into a search bar to begin using ChatGPT's services (a simple programmatic sketch of this kind of interaction appears below). Although ChatGPT is still in beta, its capabilities are impressive and it has surpassed any previously publicly available AI chatbot to date. ChatGPT makes it easy to learn, as it can quickly summarize almost any topic the user wishes, saving hours of research and digging through links to understand a certain topic. It can help people compose written materials on anything they wish, including essays, stories, speeches, resumes and more. It is also good at helping people come up with ideas, and since AI is particularly good at dealing with volume, it can provide a litany of possible solutions for people looking for them. Perhaps the largest change it will bring lies in computer programming. ChatGPT and other AI chatbots have been found to be particularly good at writing and fixing computer code, and there is evidence that using AI assistance in coding could cut total programming time in half according to research conducted by GitHub. For certain healthcare administrative tasks, chatbots seem to have a bright future and can connect patients to their care providers in ways that weren't possible before. Improving access to healthcare services is one of the most apparent ways, particularly for those living in rural and remote areas far away from the nearest hospital. According to the Journal of Public Health, evidence clearly shows that Americans living in rural areas have higher levels of chronic disease, worse health outcomes, and poorer access to digital health solutions as compared with urban and suburban areas. Not only do individuals living in rural areas live much farther away from their nearest hospital, but the facilities themselves often lack the medical personnel and outpatient services common at more urban hospitals, which contributes to the inconsistencies in care and outcomes. 
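To make the interaction model described above concrete, the sketch below shows roughly how a patient-facing application might send a plainly worded question to a hosted large language model and read back the generated reply. It is a minimal illustration only, assuming access to OpenAI's publicly documented chat completions REST endpoint and an API key stored in an environment variable; the model name, system prompt, and error handling are illustrative assumptions rather than a production integration.

```python
# Minimal sketch (not production code): sending a patient-facing question to a
# hosted large language model via OpenAI's chat completions REST endpoint.
# Assumes the OPENAI_API_KEY environment variable is set; the model name and
# prompt wording are illustrative assumptions.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def ask_health_chatbot(question: str) -> str:
    payload = {
        "model": "gpt-3.5-turbo",  # assumed model name for illustration
        "messages": [
            # The system prompt steers tone and adds a safety reminder.
            {"role": "system",
             "content": ("You are a patient education assistant. Answer in plain "
                         "language and remind the user to confirm anything "
                         "clinical with their care team.")},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # lower temperature = more conservative wording
    }
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    response = requests.post(API_URL, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    # The generated reply lives in the first choice's message content.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_health_chatbot("How should I prepare for a fasting blood test?"))
```

The same request shape underlies most chatbot-style integrations: a short instruction setting the assistant's role, the user's question, and a generated reply that the application can display, log, or route onward.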
Similarly, given the increased administrative burden that accompanies the digitization of healthcare and healthcare records, doctors are increasingly occupied by a deluge of tasks, many of which are better suited to automation than others. For example, certain tasks like appointment scheduling, medical records management, and responding to routine and frequently asked patient questions aren't always the most effective use of medical professionals' time and could be handled in a consumer-friendly and efficient manner by chatbots like ChatGPT (see the routing sketch below). Given how easily users are able to interact with ChatGPT, there is also the potential to eliminate some of the traditional barriers to the delivery of healthcare, particularly the one-to-many issue given clinician shortages. However, this will not happen in the near term and will require some refinement. First, as noted by STATNews, "ChatGPT was trained on a dataset from late 2021, [so] its knowledge is currently stuck in 2021…even if the company is able to regularly update the vast swaths of information the tool considers across the internet, it would be crucial for the system to be updated in near real-time for it to remain highly accurate for health care uses." In addition, the article quoted Elliot Bolton from Stanford's Center for Research on Foundation Models, who noted that ChatGPT is "susceptible to inventing facts and inventing things, and so text might look [plausible], but may have factual errors." Bearing that in mind, should ChatGPT follow the path of other chatbots in medicine it does have potential in a number of clinical settings, particularly in the field of mental health. Here it is important to note that neither ChatGPT nor other chatbots possess the skills of a licensed and trained mental health professional or the ability to make a nuanced diagnosis, so they should not be used for diagnosis, drug therapy or treatment of patients in severe distress. That said, the study of chatbots in healthcare has been most extensive around mental health, with most systems designed to "empower or improve mental health, perform mental illness screening systems, perform behavior change techniques and in programs to reduce/treat smoking and/or alcohol dependence." [Review of AI in QAS]. For example, a study from the Journal of Medical Internet Research reported that chatbots have seen promising results in mental health treatment, particularly for depression and anxiety. Among other things, "participants reported that chatbots are useful for 1) practicing conversations in a private place; 2) learning; 3) making users feel better; 4) preparing users for interactions with health care providers; 5) implementing the learned skills in daily life; 6) facilitating a sense of accountability from daily check-in; and 7) keeping the learned skills more prominently in users' minds." Similarly, in a literature review published in the Canadian Journal of Psychiatry assessing the impact of chatbots in a variety of psychiatric studies, numerous applications were found, including the efficacy of chatbots in helping patients recall details from traumatic experiences, decreasing self-reported anxiety or depression with the use of cognitive behavioral therapy, decreasing alcohol consumption, and helping people who may be reluctant to share their experiences with others to talk through their trauma in a healthy way. 
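As a rough illustration of how the routine administrative tasks described above might be separated from requests that need a human, the sketch below routes incoming patient messages with a simple keyword screen: clearly urgent or clinical messages are escalated to staff, while routine scheduling or FAQ messages are marked as safe for a chatbot to handle. The categories, keywords, and default behavior are illustrative assumptions, far simpler than what a real deployment would require.

```python
# Illustrative sketch only: a naive keyword screen that decides whether a
# patient message is a routine administrative request a chatbot could handle
# or something that should be escalated to clinical or front-desk staff.
# Keywords and categories are assumptions for demonstration, not clinical rules.
from dataclasses import dataclass

URGENT_TERMS = {"chest pain", "can't breathe", "suicidal", "overdose", "bleeding"}
ROUTINE_TERMS = {"appointment", "reschedule", "refill", "billing", "directions", "hours"}

@dataclass
class RoutingDecision:
    handle_automatically: bool
    reason: str

def route_message(message: str) -> RoutingDecision:
    text = message.lower()
    # Safety first: anything that looks urgent or clinical goes to a human.
    if any(term in text for term in URGENT_TERMS):
        return RoutingDecision(False, "possible urgent/clinical content - escalate to staff")
    if any(term in text for term in ROUTINE_TERMS):
        return RoutingDecision(True, "routine administrative request - chatbot can respond")
    # Unknown intent: default to a human rather than risk a wrong automated answer.
    return RoutingDecision(False, "unrecognized request - escalate to staff")

if __name__ == "__main__":
    for msg in ["Can I reschedule my appointment to Friday?",
                "I have chest pain and feel dizzy."]:
        print(msg, "->", route_message(msg))
```

The key design choice, echoed throughout this post, is that the automated path is the narrow exception and escalation to a person is the default whenever intent is unclear.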
However, as pointed out by Mashable, ChatGPT wasn't designed to provide therapeutic care and "while the chatbot is knowledgeable about mental health and may respond with empathy, it can't diagnose users with a specific mental condition, nor can it reliably and accurately provide treatment details." In addition to the general cautions about ChatGPT noted previously (it was only trained on data through 2021 and it may invent facts and things), Mashable notes three additional cautions when using ChatGPT for help with mental illness: 1) It was not designed to function as a therapist and can't diagnose, noting "therapists may frequently acknowledge when they don't know an answer to a client's questions, in contrast to a seemingly all-knowing chatbot" in order to get patients to reflect on their circumstances and discover their own insights; 2) ChatGPT may be knowledgeable about mental health, but it is not always comprehensive or right, pointing out that ChatGPT responses can provide incorrect information and that it was unclear what clinical information or treatment protocols it had been trained on; 3) there are [existing] alternatives to using ChatGPT for mental health help; these include chatbots specifically designed for mental health, like Woebot and Wysa, which offer AI-guided therapy for a fee. While it is important to keep these cautions and challenges in mind, they also provide a roadmap of the areas in which ChatGPT is likely to be most effective once these issues are addressed. Similarly, chatbots are also good for monitoring patients and tracking symptoms and behaviors, and many are used as virtual assistants to monitor patients' well-being while ensuring they are adhering to their treatment schedules (see the adherence sketch below). A study published in the International Journal of Environmental Research and Public Health evaluated the performance of a virtual health assistant designed to help patients maintain physical activity and a healthy diet and track their medication. Results revealed that 79% of participants believed that virtual health assistants have the potential to help change their lifestyles. However, some common complaints were that the chatbot didn't have as much personality as a real human would, that it performed poorly when participants initiated spontaneous communication outside of pre-programmed "check-in" times, and that it lacked the ability to provide more personalized feedback. Implications: AI-based chatbots like ChatGPT have the potential to address many of the challenges facing the medical community and help alleviate issues faced due to the workforce shortage. As many have noted, a report by the National Bureau of Economic Research stated that chatbots have the potential to reduce annual US healthcare spending by 5-10%, translating to an estimated $200-360 billion in savings. In addition, due to their 24/7 availability, chatbots provide the ability to respond to questions and concerns of patients at any hour, addressing pressing medical issues and reaching people in a non-intrusive way. Moreover, as AI technology continues to develop, an increasing number of healthcare providers are beginning to leverage these solutions to solve persistent industry problems such as high costs, medical personnel shortages, and equity in care delivery. Chatbots are well poised to fill these roles and increase efficiency in the process, given they can perform at a level similar to humans for many routine tasks. 
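To illustrate the kind of adherence tracking described in the virtual health assistant study above, the sketch below records daily medication check-ins and computes a simple adherence rate that could trigger a follow-up message. The data structures and the 80% follow-up threshold are purely illustrative assumptions, not details from the study.

```python
# Illustrative sketch: tracking daily medication check-ins from a virtual
# health assistant and flagging low adherence for follow-up.
# The 80% threshold and data structures are assumptions for demonstration.
from datetime import date, timedelta

def adherence_rate(taken_dates: set, start: date, end: date) -> float:
    """Fraction of days between start and end (inclusive) with a logged dose."""
    total_days = (end - start).days + 1
    taken = sum(1 for i in range(total_days) if start + timedelta(days=i) in taken_dates)
    return taken / total_days

if __name__ == "__main__":
    today = date.today()
    week_start = today - timedelta(days=6)
    # Simulated check-ins: the patient confirmed a dose on 5 of the last 7 days.
    logged = {week_start + timedelta(days=i) for i in (0, 1, 2, 4, 6)}
    rate = adherence_rate(logged, week_start, today)
    print(f"7-day adherence: {rate:.0%}")
    if rate < 0.8:  # assumed follow-up threshold
        print("Adherence below target - schedule a check-in message or nurse call.")
```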
Generally, assessments of physician opinions on the use of chatbots in healthcare indicate a willingness to continue their use, and a study published in the Journal of Medical Internet Research reported that of 100 physicians surveyed regarding the effectiveness of chatbots, 78% believed them to be effective for scheduling appointments, 76% believed them to be helpful in locating nearby health clinics, and 71% believed they could aid in providing medication information. Given current workforce shortages, chatbots can act as virtual assistants to medical professionals and have the potential to greatly expand a physician's capabilities and reduce the need for auxiliary staff to attend to such matters. Although AI chatbot platforms and algorithmic solutions show great promise in optimizing routine work tasks, current technology is not yet sufficient to allow independent operation, as there are certain nuances that are best addressed by humans. Also, as one research review found, "acceptance of new technology among clinicians is limited, which possibly can be explained by misconceptions about the accuracy of the technology." Along with the opportunities for ChatGPT in healthcare, there are a number of challenges to implementing the technology. According to a study from the journal Medicine, Health Care and Philosophy, since chatbots lack the lived experience, empathy, and understanding of unique situations that real-world medical professionals have, they should not be trusted to provide detailed patient assessments, analyze emergency health situations, or triage patients, because they may inadvertently give false or flawed advice without knowledge of the personal factors affecting patients' health conditions. Some clinicians are worried that they may one day be replaced by AI-powered machines or chatbots, which lack the personal touch and often the specific facts and data that make in-person consultations significantly more effective. While over time AI may be able to mimic human responses, chatbots will still need to be developed so they can effectively react to unexpected and unusual user inputs, provide detailed and factual responses, and deliver a wide range of variability in their responses if they are to have a future in clinical practices. This will ultimately require further developer input and more experience interacting with patients in order to adequately personalize chatbot conversations. Additionally, despite safeguards put in place by developers, like the many pre-programmed controls to decline requests that the system cannot handle and the ability to block "unsafe" and "sensitive" content, an article published in WIRED Magazine noted that ChatGPT will sometimes fabricate facts, restate incorrect information, and exhibit hateful statements & bias that previous users may have expressed to it, leading to unfair treatment of certain groups. The article noted that the safeguards put in place by ChatGPT's developers can easily be bypassed using different wording to ask a question and emphasized the need for strong and comprehensive protections to prevent abuse of these systems. 
In addition, there is also the need for data security to ensure patient privacy, as all of this data will be fed to the private companies developing these tools. As the aforementioned Mashable article noted about using ChatGPT for mental health advice, "therapists are prohibited by law from sharing client information, people who use ChatGPT…do not have the same privacy protections." Related Reading: ChatGPT's Most Charming Trick Is Also Its Biggest Flaw; A Process Evaluation Examining the Performance, Adherence, and Acceptability of a Physical Activity and Diet Artificial Intelligence Virtual Health Assistant; The Potential Impact of Artificial Intelligence on Healthcare Spending; Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape
- Array Behavioral Care-Virtual Psychiatry and Therapy Across the Continuum
The Driver: Array Behavioral Care recently raised $25 million in its Series C round. The round was led by CVS Health with participation from existing investors Wells Fargo Strategic Capital, Harbour Point Capital, Health Velocity Capital, HLM Venture Partners, OSF Healthcare, and OCA Ventures. Array Behavioral Care provides therapy and telepsychiatry-based virtual care for patients with behavioral health concerns and mental health issues. This new round of funding will help the company scale its brand and platform, grow its team, and improve its technology. This will help expand its services into new markets and reach Americans suffering and in need of mental health services and high-quality behavioral health care. Key Takeaways: There are over 6,300 areas in America that lack access to mental health provider care, with those areas covering over 150M Americans (VMG Health) The number of Americans stating that they use telehealth services for mental healthcare rose 10%, to 59%, between 2020 and 2021 (Cross River Therapy) Approximately 50% of Americans will be diagnosed with a mental illness at some point in their life and 1 in 25 Americans are currently living with a mental illness (CDC) Close to 25 percent of respondents reported they are not able to receive the treatment they need for their mental health conditions due to lack of services, shortages in psychiatrists, and lack of mental health workers (Mental Health America) The Story: Array Behavioral Care was founded in 1999 in New Jersey by chief medical officer and co-founder Dr. James Varrell. As noted on the company's website, Dr. Varrell was an early advocate for telepsychiatry and, while working as a psychiatrist, began developing the predecessor to Array, InSight Telepsychiatry, with the help of his friend Geoffrey Boyce (current CEO and co-founder of Array), who wrote the business plan. In 2019 InSight Telepsychiatry joined forces with Regroup Telehealth, another telepsychiatry provider in the Chicago, IL area. Together they became the largest telepsychiatry clinician organization in the country. In January 2021 the companies relaunched under the name Array Behavioral Care. Varrell's vision was to provide better access to telepsychiatry care. He also wanted to focus on better access for patients living in underserved communities as well as remote areas around the country. He believes his company's focus on providing quality telebehavioral care is necessary to help counter the provider shortage and the increasing mental health crisis in the population. The Differentiator: Array Behavioral Care, one of the early providers of telemental health and telepsychiatry, has been in the business for over 20 years. In fact, according to the company, its first telepsychiatry encounter dates back to 1999 with a rural hospital. Array's goal is to collaborate with healthcare systems, hospitals, community-based medical care organizations, and insurers. As such, it is able to offer its services across a range of care settings, including hospitals, community health centers, nursing facilities, and substance-use medical centers, among others. The company's focus is to provide high-standard behavioral health services through increased services, access, and better patient results, all through its range of partnerships. 
Array also provides administrative, operational, and technical support, attempting to take that burden away from doctors so they can focus on patient care, and provides telepsychiatry services across all 50 states. The Big Picture: With the rise in social isolation, ongoing social and economic changes due to the Pandemic, as well as the ongoing threat of endemic COVID itself, mental health conditions such as anxiety, depression, and substance abuse have come into even more focus. Due to these pressures, mental illness has surged in America since COVID began. With the increased mental health crisis there is an increased recognition of the need for mental health resources, psychological help, and additional therapeutic treatment options. According to the Pew Research Center, 41% of adults have reported feeling increased levels of mental health anguish and despair since the outbreak of COVID. In addition, 58% of young adults experienced mental health despair and distress in the period from March 2020 to September 2022, and 22% of adults reported not feeling optimistic about the future at all or only on rare occasions. Given the breadth of its services, from counseling to telepsychiatry, as well as the broad population it serves (available in all 50 states), Array seeks to be the care option for individuals suffering from mental illness. By partnering with organizations such as CVS and Humana, Array hopes to expand access to its network of over 40,000 doctors, nurse practitioners, pharmacists, and more. Through such partnerships, Array states that it has the ability to provide access to 90M people across its network. Given the increased incidence of mental illness and the chronic shortage of providers, solutions like Array are needed to close this gap. Array Behavioral Care announces $25M funding round led by CVS Health; Array Behavioral Care raises $25M to expand its telepsychiatry business
- A Glimpse into the Future: 3D Modeling for Clinician Training & Bioprinting-The HSB Blog 1/19/23
Our Take: The 3D modeling and bioprinting industry is emerging as a unique and multifaceted solution for medical training, planning and executing complex surgeries, and creating biologically necessary, personalized tissues and organs for patients. Innovations in this industry are yielding encouraging results in making diagnoses more accurate, improving clinician knowledge, and giving patients better health outcomes, because this technology gives care providers easier access to the resources they need to improve care, albeit at prices that are restrictive for most organizations. Additionally, 3D bioprinting is far from the panacea it is purported to be, as it is not yet possible to fully print complex, vascularized structures such as fully functioning human organs, limiting care providers to the creation of basic tissue and biomimetic structures designed to temporarily fix a patient's issues while they wait for organ transplants or other treatment. Ethical concerns exist as well, as many people are uncomfortable with the idea of "playing God," so to speak, and using pluripotent human stem cells to create any type of organic structure. Regardless, innovation is expected to continue along with market growth, and given a few decades, 3D modeling and bioprinting will likely become more common in the healthcare industry. Key Takeaways: 3D bioprinters can cost up to $65,000, with software costing up to $15,000 and high hourly fees to capture and obtain CT scans from a healthcare provider (Frontiers in Pediatrics) 3D modeling provides a number of advantages for training medical students and clinicians and allows for proficiency-based training in a variety of contexts The digital transfer of 3D models generated from medical imaging allows for the creation of easily shareable design blueprints, and machine learning has been used to create training databases and digital twins These technologies give rise to ethical concerns around quality, safety, and human enhancement, as well as technical concerns about the lack of suitable biomaterials and the complexity of the biostructures being printed (Journal of Biomedical Imaging and Bioengineering) The Problem: Healthcare providers are always searching for novel solutions to problems that arise when coordinating and delivering care. Technological innovations drive new changes in the market and introduce new ways to diagnose and treat the issues that patients face, and in the context of medical imaging there is ample room for improvement. Doctors across the globe currently rely on 2D images derived from computed tomography (CT) and magnetic resonance imaging (MRI) scans, which essentially flatten 3D data into 2D views, leaving more uncertainty and room for interpretation as radiologists convey the information they gather to clinicians who lack their background and experience. Advancements in 3D modeling in turn empower fields such as tissue engineering and regenerative medicine. These technologies, which aim to artificially create functional tissue constructs, integrate these new methods to analyze data and to build and edit human tissues, and benefit further from ongoing advancements in medical imaging. Using 3D modeling in conjunction with the organ bioprinting technologies that rely on these models has the potential to yield large returns for the healthcare industry and could result in significant changes. 
The Backdrop: 3D bioprinting is the layer-by-layer printing of functional 3D tissue constructs using a unique type of bioink known as tissue spheroids. These spheroids lack biological scaffolds and can easily adapt to the correct geometric shape required to bond with other cells. This in turn produces greater cell-cell interaction, cell growth, cell differentiation, and resistance to environmental factors due to the high cell density achieved through bioprinting, according to a study from the International Journal of Bioprinting. These biological advancements are accompanied by advancements in 3D digital imaging as well, which allow the data collected to be transformed into the 3D images necessary to print tissue in the first place. The digital transfer of 3D models generated from medical imaging allows for the creation of design blueprints that let other clinicians replicate similar results, provided they possess the appropriate technology. In addition, as noted by Procedia Engineering, computer-assisted design software, including predictive simulations, is utilized in both the printing and post-printing process to assist in optimizing the printability of bioinks and can reduce the number of experimental trials required to bioprint tissues. Computer-aided design (CAD) data is combined with computer numerical control, specialized mechanical technology, and material science in order to print biomimetic and complex tissue structures using the traditional 3D printing technique of layered overlay, allowing clinicians to replicate anatomical structures with relative biomechanical accuracy considering the fledgling nature of this technology. As noted in the journal Advanced Science, training programs deploy sensors and real-time feedback systems to provide more comprehensive feedback to guide instruction and help to better delineate the typical workflow. In tandem with the sensors and real-time feedback, machine learning is being leveraged to create training databases from large datasets and even digital twins, which can be used to assist in the planning and execution phases of complicated surgeries, according to the International Journal of Bioprinting. At present, 3D bioprinting is primarily used by the pharmaceutical industry to design in vitro models to test new drugs in place of animal models. This assists in making the experimentation process quicker while minimizing mistakes and maximizing cost savings, given animal models are generally considered accurate and reliable tools to determine toxicity and model disease but can be expensive and have ethical issues. This technology can also provide patients with personalized organic tissue and organs designed and created from their own cellular material, significantly lowering the risk of organ rejection. As noted by AAPS PharmSciTech, the growth in demand for human treatment using 3D printed tissues is driven by the medical demands of aging Americans, rising demand for organ donors, ethical concerns around animal testing, clinical wound care, and joint repair and replacement procedures. This growth is significant, with an expected compound annual growth rate of 15.8% from 2022 to 2030 per Grand View Research. The applications of the 3D imaging and modeling technologies that enable bioprinting hold great promise in and of themselves. For example, 3D imaging can enable clinicians to perform a variety of complex treatments more easily than before. 
Using CT and MRI scans, radiologists can create 3D reference models that help surgeons better prepare for their job and visualize new solutions that may be harder to deduce from 2D scans. During one procedure in 2016, Dr. Michael Eames used 3D imaging to create a digital twin of the arm of a child who was suffering from improperly healed bones that prevented the rotation of his arm and caused intense pain. Once the orthopedic team created the digital twin of the arm, they could see that it was only necessary to reshape the child's bones, an insight that ended up decreasing the procedure time from 4 hours to 30 minutes and returning 90% of the arm's range of movement to the patient only 4 weeks after his surgery. Compared to similar surgeries not conducted using 3D modeling methods, self-reported postoperative pain and scarring were much lower, ultimately leading to lower costs for both hospital and patient, a shorter recovery time, and greater patient satisfaction, according to a press release from Axial3D. Improving training outcomes for medical students is another important application of this technology, allowing for proficiency-based training in a variety of surgical contexts as an increasing number of training curricula begin to adopt simulation as part of their programs. Basic simulators, which help new students hone their surgical skills, are available as teaching tools, and some simulators, meant for skilled surgeons to perfect their strategy before entering the operating room, are able to fully simulate entire procedures such as joint replacements and fracture fixation, according to an article published in the Journal of Future Medicine. Using 3D simulators ultimately changes the nature of learning and gives students a more individual approach to their coursework, as their hands-on experience will no longer be limited to unwieldy manuals, predetermined lab times, and doctor shadowing. 3D models allow for interactive manipulation, a better understanding of spatial relationships, and the utilization of novel methods of visualization for learning anatomy that trainees have reported enjoying more. However, despite the plethora of benefits of this technology, price is a limiting factor if educational institutions wish to print these 3D models. For example, an article in the Journal of the American College of Radiology found that each 3D model cost approximately $3,000 per procedure, with an operating cost of over $200,000 per year to run the 3D printing service. Skilled technicians and talented 3D designers are also needed to properly utilize specialized software that can interpret and reimagine 2-dimensional CT and MRI scans in great 3-dimensional detail, according to an article published in the Annals of 3D Printed Medicine. 
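To give a rough sense of the kind of conversion that specialized software performs, the sketch below takes a voxel volume (here a synthetic stand-in for stacked CT slices), extracts an isosurface with the marching cubes algorithm, and exports it as an STL mesh that CAD or slicing software could consume. It assumes the scikit-image and trimesh libraries are available; the synthetic volume and threshold value are illustrative assumptions, and a real workflow would start from DICOM data and involve segmentation, cleanup, and clinical review.

```python
# Illustrative sketch: turning a voxel volume into a printable surface mesh.
# A real pipeline would load DICOM slices (e.g., with pydicom), segment the
# anatomy of interest, and validate the model clinically; here a synthetic
# sphere stands in for the imaging data.
import numpy as np
from skimage import measure   # marching cubes isosurface extraction
import trimesh                # mesh handling and STL export

# Synthetic "scan": a 64^3 volume whose values fall off from the center,
# so thresholding it yields a sphere-like structure.
grid = np.indices((64, 64, 64)).astype(float)
distance = np.sqrt(((grid - 32.0) ** 2).sum(axis=0))
volume = 1000.0 - 20.0 * distance  # higher values toward the center

# Extract the surface at an assumed intensity threshold (analogous to picking
# an intensity level that separates the target anatomy from surrounding tissue).
verts, faces, normals, values = measure.marching_cubes(volume, level=600.0)

# Wrap the vertices and faces in a mesh object and export STL for CAD/printing.
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("anatomy_model.stl")
print(f"Exported mesh with {len(verts)} vertices and {len(faces)} faces")
```

Even in this toy form, the steps mirror the workflow described above: volumetric imaging in, a thresholded surface model out, and a file format downstream design and printing tools can read.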
Innovations in this field are also leading to greater integration of organisms with technology, such as the extremely complex microphysiological devices that integrate sensors within soft tissue structures, developed by the Wyss Institute at Harvard University. This technology can be further adapted to create other vascularized tissues that researchers can use to investigate regenerative medicine and drug testing, with biosensors able to yield more accurate and localized results than previous technologies, according to the Wyss Institute. Additionally, 3D models created using these new methods of medical imagery can easily be shared with other medical practitioners with access to 3D bioprinting technology so that they can benefit in a similar way. However, this raises questions about privacy and the legality of sharing detailed anatomical models of patients' organs, which will require explicit informed consent. Despite these initial promising indications for 3D bioprinting technology, there are a number of challenges the technology will need to overcome before broader adoption. Although it has a bright future in the healthcare industry, current technology is simply not yet sufficient to meet the demands of the modern patient, and the price to use it is often prohibitively high. In addition, a variety of technical challenges exist, including the need to increase the resolution and speed of bioprinter technology, the inability to recreate the organic cellular density of certain tissues and organs, the lack of suitable biomaterials needed to replicate this technology on a much larger scale as pluripotent stem cells are difficult to come by, and of course the complexity of the biostructures being printed, particularly vascularized tissue that must be properly constructed to avoid necrosis, as per an article published in the Journal of Biomedical Imaging and Bioengineering. There are also ethical issues that raise a number of concerns, such as equality, safety, and human enhancement, as outlined by a study from the International Journal of Scientific & Technology Research. Will patients have equal opportunities to access 3D bioprinting technology regardless of socioeconomic status, and how do insurers plan on covering payment for such services, if at all? Is this new technology safe for humans in the long term, and will medical staff receive sufficient training to use it? Finally, will this technology ultimately be used to build better people and improve their bodies by replacing organs with brand new ones, not to mention the inevitable integration of man and machine that is already being assessed in a variety of clinical contexts? These issues must be addressed by care providers, federal regulators, and the patients that will benefit from 3D bioprinting in order to assuage concerns and give legitimacy to a promising new technology with the potential to revolutionize tissue engineering, regenerative medicine, and clinical training. 
Related Readings: 3D Bioprinting of Living Tissues; 3D Bioprinting Strategies, Challenges, and Opportunities to Model the Lung Tissue Microenvironment and Its Function; 3D Printing Helps Surgeon Restore Child's Sporting Ambitions and Reduce Surgery Time from 4 hours to Under 30 minutes; Application of Machine Learning in 3D Bioprinting: Focus on Development of Big Data and Digital Twin; Role of Three-Dimensional Visualization Modalities in Medical Education; Should Society Encourage The Development Of 3D Printing, Particularly 3D Bioprinting Of Tissues And Organs?