The doctor's office of the future will have an AI listening in

Illustration: Annelise Capossela/Axios

Artificial intelligence is breaking into the doctor's office, with new models that can transcribe, analyze and even offer predictions based on written notes and conversations between physicians and their patients.

Why it matters: AI models can increasingly be trained on what we tell our doctors, now that machines are starting to understand our written notes and even our spoken conversations. That will open up new possibilities for care — and new concerns about privacy.

How it works: One of the biggest, if least visible, contributions AI can make is to automatically capture a physician's written or spoken notes.

  • Automated note-taking would be a huge help to overworked medical professionals, many of whom burn out after spending hours manually entering data at the end of their workdays.
  • But the real value comes from the data captured in doctors' conversations with patients and in their written case notes. These AI models can "extract information and then contextualize it" in ways that doctors can act on, says Duane A. Mitchell, director of the University of Florida Clinical and Translational Science Institute (a rough sketch of that extraction step follows this list).
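
To make "extract and contextualize" concrete, here is a minimal sketch using the Hugging Face transformers pipeline API. The checkpoint name is a placeholder rather than a real clinical model, and the sample note and labels are invented for illustration.

```python
# Minimal sketch: pulling structured entities out of a free-text clinical
# note with a token-classification (NER) model. The checkpoint name is a
# placeholder; any model fine-tuned on clinical text could stand in.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="example/clinical-ner",      # hypothetical checkpoint, not a real one
    aggregation_strategy="simple",     # merge sub-word tokens into full spans
)

note = (
    "67-year-old male with type 2 diabetes and hypertension, presenting "
    "with chest pain. Started metformin 500 mg twice daily."
)

# Each hit carries the extracted span, a predicted label (condition,
# medication, dosage, ...) and a confidence score a doctor could review.
for entity in ner(note):
    print(f"{entity['entity_group']:>12}: {entity['word']} "
          f"(score={entity['score']:.2f})")
```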

Details: For example, identifying the right set of patients to enroll in a clinical trial would usually take weeks of manually extracting information from databases; AI models could do the work "within minutes," says Mona Flores, global head of AI at Nvidia (a toy version of that screening step is sketched after the bullet below).

  • By analyzing millions of case histories, AI systems can help predict how patients might respond to different treatments, or alert doctors to likely complications before a surgery.
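
As a toy illustration of why that screening step can collapse from weeks to minutes: once NLP has turned narrative notes into structured fields, eligibility becomes a fast filter over the whole record set. The field names and criteria below are invented, not drawn from any real trial protocol.

```python
# Toy sketch of automated trial-cohort screening over structured records
# (the kind of fields an NLP model might extract from doctors' notes).
from dataclasses import dataclass

@dataclass
class PatientRecord:
    patient_id: str
    age: int
    conditions: set[str]       # extracted from free-text notes by an NLP model
    medications: set[str]

def is_eligible(p: PatientRecord) -> bool:
    """Hypothetical inclusion/exclusion criteria for an imagined diabetes trial."""
    return (
        40 <= p.age <= 75
        and "type 2 diabetes" in p.conditions
        and "insulin" not in p.medications   # an exclusion criterion
    )

records = [
    PatientRecord("p001", 67, {"type 2 diabetes", "hypertension"}, {"metformin"}),
    PatientRecord("p002", 54, {"type 2 diabetes"}, {"insulin"}),
    PatientRecord("p003", 71, {"asthma"}, set()),
]

cohort = [p.patient_id for p in records if is_eligible(p)]
print(cohort)  # -> ['p001']
```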

Driving the news: Some major deals and announcements about AI companies crossing into health care have come out in the past couple of weeks.

  • On April 8, researchers at UF's academic health center announced a collaboration with Nvidia to develop a massive natural language processing (NLP) model — an AI system designed to recognize and understand human language — trained on the records of more than 2 million patients.
  • On Monday, Microsoft announced it would buy Nuance Communications for $19.7 billion. Nuance focuses on AI-driven speech recognition and has a popular product that transcribes and analyzes voice conversations between doctors and patients.
  • On Wednesday, the Mayo Clinic launched its mHealth platform, which aims to connect remote patient monitoring devices with AI resources that can help doctors make clinical decisions about care.

By the numbers: For all our focus on vital signs like blood pressure or cholesterol levels, "80% of health care data exists in text or narrative, and the doctor's note is still the primary way things get documented," says William Hogan, the director of biomedical information and data science at the UF College of Medicine.

  • That means everything from notes about a patient's medical history to a doctor's written impressions of a case — the dark matter of medical data that was mostly beyond the reach of computers until recent improvements in NLP.
  • Platforms like Mayo's mHealth can add to that data hoard by harvesting information from the growing number of remote health monitoring devices patients wear, which let doctors keep tabs on patients outside the clinic.

Mental health is one of the best examples of how AI models might change medicine.

  • "Clinical psychiatry occurs in very much the same way as it did 100 years ago, where a clinician will sit down and talk to a patient and based on that conversation, develop a treatment plan," writes the psychiatrist Daniel Barron in his forthcoming book, "Reading Our Minds: The Rise of Big Data Psychiatry."
  • Instead, Barron envisions a near future in which those conversations are recorded and analyzed by AI models that parse patient speech and even facial expressions for clues about mental illness and how to treat it (a rough sketch of that kind of feature extraction follows this list).
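
For a sense of what "analyzing patient speech" might involve, here is a rough sketch of prosodic feature extraction with the librosa audio library. This is not Barron's method: the file path is a placeholder, and the features shown (pitch variability, loudness) are simply signals commonly studied in this research, reduced to statistics a downstream classifier might consume.

```python
# Rough sketch: extracting prosodic features from a recorded session.
import librosa
import numpy as np

y, sr = librosa.load("session_audio.wav", sr=None)  # placeholder path

# Track the fundamental frequency (pitch) over the recording.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

pitch = f0[voiced_flag]                  # keep only voiced frames
rms = librosa.feature.rms(y=y)[0]        # frame-level loudness

# Summary statistics that could feed a downstream model; pitch variability
# is one signal that has been studied in depression research.
features = {
    "pitch_mean_hz": float(np.nanmean(pitch)),
    "pitch_std_hz": float(np.nanstd(pitch)),
    "rms_mean": float(rms.mean()),
}
print(features)
```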

The catch: How many of us would be comfortable with the idea of an AI listening in and analyzing our conversations with a family doctor, let alone a therapist?

  • It's far from clear that current AI systems can accurately characterize human emotion in speech or facial expressions without being contaminated by bias.
  • America's fractured health care system makes it difficult to connect the different data sets that might reveal a big problem on our hands — witness the CDC's struggles to identify rare side effects of the Johnson & Johnson vaccine.

What's next: "Clinicians and patients need to have a conversation to figure out how best to make use" of data and AI systems, Barron says.

  • "How can we demonstrate whether it's beneficial? How comfortable are we sharing this data and with whom?"

The bottom line: Personal health is one area in which each of us stands to benefit from AI's ability to suck up and analyze vast quantities of data — but it's also where sharing that data feels the most uncomfortable.
