News stories offer cautionary tales for journalists covering AI and tech


AI chatbot

Photograph by Mohamed Hassan via Pixabay

Health care organizations have largely been embracing artificial intelligence programs and tools to assist in areas like searching records and medical documentation. Although the computing technology is powerful and the systems are designed to learn as they go, recent news stories published by National Public Radio, STAT and the Wall Street Journal highlight that these systems are imperfect, and human input is still critical and valuable.

NPR and STAT reported on the National Eating Disorders Association's decision to shut down its volunteer-staffed national helpline and instead use a chatbot (an artificial intelligence program that simulates conversation) named Tessa. (See links to the stories in the resources below.)

The organization's leadership appeared to have had good intentions, reasoning that Tessa could respond faster and help more people. Unfortunately, within one week the chatbot was recommending dieting and weight loss tips, which can be triggering and perpetuate the conditions of people with eating disorders.

After users shared concerns on social media, the association announced it was indefinitely disabling Tessa. At the time of publication, the organization's website still listed information for an independent crisis text line staffed by trained volunteers.

One anonymous volunteer told STAT that in ending the helpline, NEDA missed an opportunity to use AI to automate database searches or find up-to-date provider information, which could have streamlined some of the time-intensive work of gathering resources for callers. Those tasks led to the longer wait times Tessa was meant to reduce.

In another story, the Wall Street Journal interviewed several nurses about their experience with artificial intelligence alerts and algorithms, reporting how the programs' output and recommendations sometimes dangerously contradict nurses' expertise and clinical judgment.

In one case, an oncology nurse received an alert that one of her patients might have sepsis. Although she thought the alert was erroneous, she had to follow protocol by taking a blood sample, potentially exposing the patient to infection and adding to his bill. Another nurse on a call-in advice line listened to a patient's symptoms and, following the protocol suggested by the algorithm, diagnosed the patient with cough, cold and/or flu and scheduled a phone appointment with a physician several hours later. That patient was later diagnosed with pneumonia, acute respiratory failure and renal failure. He died several days later.

“Whether a nurse is confident enough to trust her own judgment to override an algorithm often depends on hospital policy,” reporter Lisa Bannon wrote. The article also mentioned a National Nurses United survey in which 24% of respondents said they had been prompted by a clinical algorithm to make choices they believed were not in patients' best interests.
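To make the dynamic in the WSJ story concrete, here is a minimal, hypothetical sketch of a rule-based clinical alert with a policy-controlled override switch. The thresholds, field names and policy flag are invented for illustration and do not represent any vendor's actual sepsis algorithm or any hospital's real policy.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int        # beats per minute
    temperature_c: float   # degrees Celsius
    respiratory_rate: int  # breaths per minute

# Hypothetical policy switch: some hospitals let nurses dismiss an alert based on
# clinical judgment, while others require the protocol to be followed regardless.
ALLOW_NURSE_OVERRIDE = False

def sepsis_alert(vitals: Vitals) -> bool:
    """Simplified, made-up screening rule (not a real clinical criterion):
    flag the patient if at least two vital signs are out of range."""
    flags = [
        vitals.heart_rate > 90,
        vitals.temperature_c > 38.0 or vitals.temperature_c < 36.0,
        vitals.respiratory_rate > 20,
    ]
    return sum(flags) >= 2

def handle_alert(vitals: Vitals, nurse_disagrees: bool) -> str:
    if not sepsis_alert(vitals):
        return "no alert"
    if nurse_disagrees and ALLOW_NURSE_OVERRIDE:
        return "alert dismissed by nurse; rationale documented"
    # Otherwise protocol must be followed (e.g., drawing a blood sample),
    # even when the nurse believes the alert is erroneous.
    return "follow protocol: draw blood sample"

print(handle_alert(Vitals(heart_rate=95, temperature_c=38.4, respiratory_rate=22),
                   nurse_disagrees=True))
```

With the override flag set to False, the nurse's disagreement has no effect on the required action, which is the situation the oncology nurse described.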

Lessons for journalists

Both news stories serve as good reminders to practice due diligence when reporting on new technologies. When covering health care systems' adoption of artificial intelligence technologies, it's important to go beyond asking why they're adopting the technologies or what they hope to gain. It's essential to ask who is going to mind the store.

Who will be responsible for monitoring the technology to assess how well it's working? How will they assess the technology's performance, and how often will they do so? It's also important to get feedback from users, whether they are health care workers or patients. What do they like or dislike about the technology? How is it helpful, and what are its limitations? Is it meeting its intended goals?

Keeping these questions in mind is crucial, as AI technology only seems to be growing in scope. According to a recent story in HealthTech magazine, emerging uses for AI in health care include:

  • Computer vision technology to automatically monitor surgical patients.
  • Analysis of real-time situational data to predict patient outcomes and adjust care.
  • Voice-activated technology to handle clinician documentation and route inpatient requests to the appropriate department.

In addition, health systems including UNC Health in North Carolina, UW Health in Wisconsin, Stanford Health Care, and UC San Diego Health are piloting generative AI technology to help physicians respond to patients' questions in online portals, Becker's Health IT reported. The technology will craft the initial draft, and physicians can review and edit it before sending. An article from UC San Diego noted that messages will be clearly marked with disclosures stating the message was automatically generated and reviewed by a physician.
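As a rough sketch of that draft-review-send workflow (the function names, placeholder model call and disclosure wording here are hypothetical, not taken from any of these health systems' actual implementations):

```python
from typing import Callable

DISCLOSURE = ("This message was automatically generated and "
              "reviewed and edited by your physician.")

def generate_draft(patient_message: str) -> str:
    """Placeholder for a call to a generative AI model that drafts a reply."""
    return f"Draft reply to: {patient_message!r}"

def send_portal_reply(patient_message: str,
                      physician_edit: Callable[[str], str]) -> str:
    draft = generate_draft(patient_message)
    # The physician reviews (and may completely rewrite) the draft before anything is sent.
    final_text = physician_edit(draft)
    # Per the UC San Diego description, the outgoing message carries a disclosure.
    return f"{final_text}\n\n{DISCLOSURE}"

# Example: the physician replaces the draft entirely with their own wording.
reply = send_portal_reply(
    "Is it safe to take ibuprofen with my new prescription?",
    physician_edit=lambda draft: "Yes, occasional ibuprofen is fine with this medication.",
)
print(reply)
```

The point of the structure is that nothing reaches the patient without passing through the physician-edit step, which is how the pilots are described as working.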

I've also seen news reports of hospitals using AI to create automated transcripts of patient encounters, read and analyze patients' electronic medical records to produce a list of clinical trials they may qualify for, and offer wellness features like guided meditations and suggestions for outdoor recreation. Other programs are designed to help make predictions for conditions like delirium in the intensive care unit, short- and long-term lung cancer risk, and even physician turnover.

Some health systems, such as Northwestern Medicine in Chicago and Duke Health in North Carolina, as well as the Department of Health and Human Services, have added chief AI officers to their executive teams. They, along with biomedical ethicists, would be good sources for journalists covering AI going forward.

Resources:


