Some FDA-approved AI medical devices are not ‘adequately’ evaluated, Stanford study says

VentureBeat

Kyle Wiggers

Some AI-powered medical devices approved by the U.S. Food and Drug Administration (FDA) are vulnerable to data shifts and bias against underrepresented patients. That’s according to a Stanford study published in Nature Medicine last week, which found that even as AI becomes embedded in more medical devices — the FDA approved over 65 AI devices last year — the accuracy of these algorithms isn’t necessarily being rigorously studied.

Although the academic community has begun developing guidelines for AI clinical trials, there aren't established practices for evaluating commercial algorithms. In the U.S., the FDA is responsible for approving AI-powered medical devices, and the agency regularly releases information on these devices, including performance data.

The coauthors of the Stanford research created a database of FDA-approved medical AI devices and analyzed how each was tested before it gained approval. Almost all of the AI-powered devices approved by the FDA between January 2015 and December 2020 — 126 out of 130 — underwent only retrospective studies at the time of submission, according to the researchers. And none of...
