Channel: safety – Luke Oakden-Rayner

Medical AI Safety: We have a problem.

For the first time ever, AI systems can directly harm patients. Are we doing enough to prevent a medical AI tragedy, the equivalent of a thalidomide event?

Medical AI Safety: Doing it wrong.

Medical AI has a safety problem; we know for a fact our testing isn't reliable. We've seen how this plays out before.

Half a million x-rays! First impressions of the Stanford and MIT chest x-ray datasets

My first impressions of these datasets. How do they measure up, and how useful might they be?

The best medical AI research (that you probably haven’t heard of)

I discuss a piece of medical AI research that has not received much attention but actually included a proper clinical trial!

Improving Medical AI Safety by Addressing Hidden Stratification

Medical AI testing is unsafe, but addressing hidden stratification may be a way to prevent harm, without upending the current regulatory environment.

The FDA has approved AI-based PET/MRI “denoising”. How safe is this technology?

Super-resolution promises to be one of the most impactful medical imaging AI technologies, but only if it is safe. This week we saw the FDA approve the first MRI super-resolution product, from the same...

Docs are ROCs: a simple fix for a “methodologically indefensible” practice in...

The way we currently report human performance systematically underestimates it, making AI look better than it is.

AI has the worst superpower… medical racism.

Medical AI can detect the racial identity of patients from x-rays. This is extremely concerning, and raises urgent questions about how we test medical AI systems.
