ML Learning – “Deep Fake”

The Corner Office

Wayne Moore

4/11/2023

When using a medical imaging device with AI, how would the physician know whether the machine learning algorithm behind its intended use simply made something up? And when analyzing an ML-generated image, how can one trust that subjective human review is sufficient to determine whether the image meets quality requirements under these new technologies? All stakeholders in AI, including physicians, engineers, and regulatory bodies such as the FDA, have growing concerns that “deep-fake” technology could generate medical images of patients who do not exist. The FDA is also concerned that the physician may not be informed when an image has been manipulated by AI in some way. For example, an AI algorithm may alter the homogeneity of an ultrasound image, which is the very thing a reviewer may be checking to determine acceptable image quality.

As AI continues to mature, this could become a real issue. What may be needed is an entirely new family of tissue phantoms, of TBD design and characteristics, that can be used to quantitatively determine whether the data being produced by AI is trustworthy for human use. Image quality, data integrity, and data quality are different concerns and will require different solutions for verification and validation. At Acertara we are staying on top of the “deep-fake” concern associated with machine learning and the outputs of these algorithms. We are currently considering which image quality metrics we will use and just how these new phantoms must be integrated into the world of AI-based medical imaging. Should be very interesting.
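To make the idea concrete, here is a minimal illustrative sketch, not Acertara’s method (the metrics are, as noted, still being decided), of the kind of quantitative check a phantom could enable: measure the speckle homogeneity of a known uniform phantom region before and after AI processing, and flag the output if the texture changed by more than some tolerance. The function names (`roi_homogeneity`, `flag_suspicious_smoothing`) and the 25% tolerance are hypothetical.

```python
import numpy as np

def roi_homogeneity(image: np.ndarray, roi: tuple) -> float:
    """Coefficient of variation (std/mean) of pixel intensities inside a
    rectangular ROI; lower values indicate a more homogeneous region."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1].astype(np.float64)
    return patch.std() / patch.mean()

def flag_suspicious_smoothing(raw: np.ndarray, processed: np.ndarray,
                              roi: tuple, max_change: float = 0.25) -> bool:
    """Flag the AI-processed image if homogeneity in a known uniform
    phantom region changed by more than `max_change` (assumed 25%)
    relative to the raw acquisition -- a sign the algorithm may have
    altered the very texture a reviewer relies on."""
    cv_raw = roi_homogeneity(raw, roi)
    cv_proc = roi_homogeneity(processed, roi)
    return abs(cv_proc - cv_raw) / cv_raw > max_change
```

Run against a uniform region of a speckle phantom, a check like this would give the reviewer an objective number to act on rather than a purely subjective impression of the image.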

Until Next Month,

Wayne
