How should AI be designed and regulated, and who should it serve?

The U.S. Food and Drug Administration recently unveiled a new set of draft recommendations on clinical decision support software. In the guidance, the agency said it is taking a risk-based approach to categorizing these various CDS tools, many of them powered by artificial intelligence, in hopes of creating an “appropriate regulatory framework that takes into account the realities of how technology advances play a crucial role in the efficient development of digital health technologies.”

Given the vast proliferation of AI and machine learning software across healthcare, and the speed at which it’s evolving, that certainly won’t be the last word from the FDA, or other regulatory agencies, on the subject.

A truly global framework

Indeed, said Robert Havasy, managing director of the Personal Connected Health Alliance, when he looks across the U.S. and around the world he sees the beginnings of a “truly global framework emerging, with common principles among the U.S., Europe and other places,” for the safe and effective deployment of AI in healthcare.

Havasy was speaking at the HIMSS Connected Health Conference during a roundtable discussion about developing approaches to AI regulation and design.

“We’re assessing risk with a global system,” said Havasy. “There are some common principles, one of which seems to be that the risk is presumed to be lower when there are competent individuals who can make their own decisions and understand how a system works.”

As Dr. Art Papier, a dermatologist at the University of Rochester and founder and CEO of the AI company VisualDx, explained, even if an AI algorithm says a mole is 99.9% benign, if the patient says the mole has recently changed, it’s getting removed.

Healthcare is nowhere near the point where “these algorithms are reliable enough to trust” without skilled human intervention, he said.

An explainable process

At VisualDx, said Papier, “we are very process oriented. As we read the FDA guidance, we’re seeing that the FDA really wants to make sure that your process is explainable, and that you’re running your tests and have the data to support the work.”

It’s critical, he said, for AI developers to be “explaining what you’re doing as best you can and surfacing that – so your users don’t have a sense that it’s just a big black box.”

Matias Klein, CEO of Kognition.ai, recognizes that “there’s a lot of fear, uncertainty and doubt out there about what exactly AI is and how it will impact our world.” This fear takes many forms, not just in healthcare but across society.

“There’s the job displacement fear, that AI is going to replace all our jobs,” he said. “There’s the Big Brother fear, where you know the government’s going to spy on us and the AI is going to know everything about us. And then there’s the Skynet fear: Terminator is gonna wake up one day and take over.”

Pop culture views of AI

Those concerns are fueled by pop culture and aren’t based in reality, per se, he acknowledged. “But I think at the end of the day, it speaks to people’s concerns. They are fearful about change and new things, and they want to make sure that it is rolled out in a way that is safe. So I applaud the FDA for seeing this through a risk-based lens and making sure that we regulate AI to protect people so they won’t be harmed.”

Because when developed safely and deployed wisely, “this is an area where AI has radically disruptive potential for humanity, which is to make the world smarter and more secure,” said Klein. “If we apply AI to monitor, whether it be for security and safety or whether it be for clinical indications, I think the ability AI has to process vast amounts of data is an astounding productivity and decision support multiplier. But we have to deploy it in thoughtful ways.”

Healthcare IT News recently spoke with Dr. Jesse Ehrenfeld, board chair of the American Medical Association, about that organization’s positions on the pros and cons of AI and machine learning in clinical practice.

A step in the right direction

At the Connected Health Conference, Ehrenfeld said the AMA sees the FDA guidance as a “step in the right direction; from our perspective, oversight and regulation of all of these healthcare systems has to be based on the risk of harm and the benefits.

“It’s got to take into account things like the intended use, transparency, the ability to understand what the evidence for safety is, the level of automation,” he explained. “And I think there is a real value in having common terminology.

“That has been, I think, a stumbling block in this space across various parts of the ecosystem,” he added. “On the regulatory side, across the developer community, and God help the physicians and patients who are trying to muddle through what these things actually mean when you go and interact with the technology.”

The FDA’s recent work, said Ehrenfeld, advances the effort of “getting to a uniform understanding about what some of these words mean that we’re all talking about.”

Patients are not in the process

But Grace Cordovano, a board-certified patient advocate and founder of Enlightening Result, had a different set of concerns – based around the fact, she said, “that patients are really not included” in AI development, “nor are their care partners.”

“We have people who are living and breathing and navigating the crux of these problems in their healthcare systems, but we’re not putting that as a feedback loop into our data,” Cordovano explained. “And that lived expertise and experience is so valuable. How are we incorporating these real-world findings into what we’re doing? I want to make sure that we’re not just building something to apply to patients, like applying sunscreen.

“There’s this whole movement of participatory medicine where patients want to be a part of it, and they want to be co-designing and they want to be entrepreneurs,” she added. “We need to leverage that as an asset.”

Ehrenfeld agreed – and said the same could be said, in many cases, for physicians.

“We have seen so much technology developed in a vacuum that doesn’t take into account what the clinician is actually trying to solve,” he said. “And so I think it goes both ways. The most effective deployments and technologies are informed by lived experience and an understanding of what actual problems patients are up against.”

Patients taking the initiative

Mark Sendak, population health and data science lead at Duke Institute for Health Innovation, said he’s been intrigued and encouraged recently by how patients have been taking the initiative on AI development of their own.

“One of the most exciting places I’ve seen for AI is with diabetes,” he explained. “There are places where patients are actually leading the way – folks who are hacking into Type 1 diabetes monitors and then developing their own algorithms and running them. Patients are being empowered to use tools and technologies for their disease management.”

Havasy, for his part, pointed out that “the biggest piece of AI development, besides the algorithms, is the dataset used to train them.”

Echoing Cordovano’s point, he said that means “making sure that patients are adequately represented, that their conditions are represented in a way they feel is appropriate, as opposed to just a clinical marker or just a number somewhere.

“Getting that participation early in creating training sets can demonstrate these systems are inclusive,” he explained. “There’s all sorts of fears, not just about patients not being included, but about whether data is gender biased or racially biased, or other things. We’re all worried about how the data builds these systems. Getting people involved in creating those data sets, certifying those data sets, I think, is an important way to get people involved in the process.”

Twitter: @MikeMiliardHITN
Email the writer: [email protected]

Healthcare IT News is a publication of HIMSS Media.
