Senators Told That AI Is Already Harming Patients

By Gianna Ferrarin · November 9, 2023, 8:52 PM EST

While the health care industry is focused on how new developments in artificial intelligence will reshape the field, some experts believe more attention should be paid to the fact that AI isn't just a hypothetical — it's here, and already influencing patient care.

Testifying on Capitol Hill Wednesday, Christine Huberty, an attorney at nonprofit Greater Wisconsin Agency on Aging Resources, or GWAAR, described the case of Jim, an 81-year-old who'd suffered from COVID-related pneumonia while undergoing chemotherapy.

According to Huberty, doctors recommended that Jim remain in a nursing facility for 30 days, but the algorithm used by his insurer's subcontractor decided he needed only 14.2 to 17.8 days. Jim was ultimately denied coverage on his 16th day of rehabilitation and was released, despite ongoing concerns about his condition.

"Jim's doctors and therapists did not agree with the algorithm's predicted discharge date, nor did they agree with Jim's own decision to return home so soon," Huberty told the Senate Subcommittee on Primary Health and Retirement Security. "AI directed Jim's care."

Huberty was accompanied by a handful of doctors who testified on the current and anticipated impacts of artificial intelligence in health care, ranging from easing administrative burdens in clinical care to the risk of its use in synthesizing biological weapons.

Eleanor Chung, a health care and life sciences associate at Epstein Becker Green, told Law360 that the panel "spoke of processes familiar to our health care system: the needs for auditing and testing, for measuring efficacy and outcomes, and for accountability."

"When those needs are addressed, AI has great potential to improve cost, quality, and access to health care," Chung said in her emailed statement.

Throughout the hearing, Sen. Ed Markey, D-Mass., and other subcommittee members asked the health experts who they thought should bear the burden of proof in demonstrating the safety of AI, how to ensure accountability for bias in algorithms, and what steps Congress can take to prevent biosecurity risks.

Thomas Inglesby, a doctor at the Johns Hopkins Center for Health Security, told the subcommittee that he believes Congress should go beyond the measures outlined in President Joe Biden's recent executive order on the use of AI across a range of industries.

For example, as a condition for funding life sciences research, the executive order directs federal agencies to require that synthetic nucleic acids be procured through manufacturers that adhere to a screening framework. But Inglesby said Congress can go a step further by expanding oversight to companies and other organizations beyond those that are federally funded.

"In terms of the public interest, it doesn't really matter whether someone is receiving federal funding or not," Inglesby told Law360 after the hearing. "What matters is that anyone who orders nucleic acids to build a virus or a pathogen in the United States should be handled the same way."

In his testimony, Inglesby urged Congress to give the U.S. Department of Health and Human Services the authority to require anybody who purchases nucleic acids to order only from manufacturers that screen orders and customers for high-end biological risks.

He also urged Congress to commission a rapid risk assessment to determine whether the executive order will sufficiently address these risks, or whether congressional action is needed to bolster prevention.

The biological risks identified by Inglesby include the use of AI technology to develop biological weapons and pathogens that can cause pandemics. He added that this technology also has the potential for public health benefits, such as improving the speed and precision of vaccine development and predicting the properties of pathogens.

The subcommittee also explored the potential of AI to reduce burdens in clinical care, hearing testimony from University of Kansas doctor Keith Sale, who spoke about AI technology that can transcribe notes from patient visits.

That technology would help reduce "physician burnout" and improve patient outcomes by allowing doctors to focus on patients instead of typing notes during visits, as many currently do to keep up with electronic medical records. He emphasized that he sees AI as a tool to assist physicians, not replace them.

"It is not something that should replace what I decide in practice or how I make decisions that affect my patients," Sale said. "So, ultimately, it is designed to enhance my practice, not replace me in practice."

But at Wednesday's hearing, Huberty repeatedly raised warnings about AI-powered systems already making decisions for patients against their best interests.

"What was most concerning for me was that the other witnesses were often talking about hypotheticals and potential consequences down the line of new AI and untested AI," Huberty told Law360 in an interview. "And I'm sitting there with demonstrated patient harm from use of old AI that's been around for years."

Huberty also told Law360 that cases like Jim's are on the rise. According to Huberty, the Wisconsin-based nonprofit once saw one or two coverage denials per year; now it sees that many in a week. She said the algorithm used by naviHealth, the subcontractor in Jim's case, is widely used by health providers. The health startup was acquired by Optum, a UnitedHealth unit, in 2020.

Huberty said she was concerned about patients who, unlike Jim, are not able to appeal these coverage denials.

"I truly believe that many of them are getting sent home and dying as a result of early discharges," Huberty said.

--Editing by Dave Trumbore.

For a reprint of this article, please contact reprints@law360.com.