Artificial intelligence is often envisioned as a helper to experts such as radiologists, acting as a second set of eyes. Ideally, these systems could tirelessly inspect every region of every scan, while radiologists spend their limited attention budget on the regions of greatest concern.
Unfortunately, it is difficult to anticipate every possible abnormality. As a result, anomaly detection is often touted as an ideal solution because it only requires data from healthy individuals. However, this advantage also makes anomaly detection inherently ill-defined: because the model never sees examples of what it must detect, "anomalous" is specified only negatively, as anything that deviates from the distribution of healthy data.
The goal of this Research Topic is to support the advancement of anomaly detection methods in radiology. To be useful as a second set of eyes, anomaly detection methods need to detect subtle, early signs of disease across a broad range of conditions. This is a lofty goal, but our aim is to motivate the community to take a small step toward it. In particular, we believe that anomaly detection methods that expend time and energy to learn should gain some non-trivial expertise about the normal data. This means they should learn something about the anatomy that is more informative than trivial baselines such as mean-image subtraction or edge detection.
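For concreteness, the mean-image-subtraction baseline we have in mind fits in a few lines. The sketch below is illustrative only (the array shapes and function name are our own, not part of any challenge API); a method with genuine anatomical expertise should produce clearly better anomaly maps than this.

```python
import numpy as np

def mean_subtraction_anomaly_map(healthy_scans: np.ndarray,
                                 test_scan: np.ndarray) -> np.ndarray:
    """Trivial baseline: score each pixel by its absolute deviation
    from the pixel-wise mean of the healthy training scans.

    healthy_scans: (N, H, W) array of co-registered healthy images.
    test_scan:     (H, W) image to be scored.
    """
    mean_image = healthy_scans.mean(axis=0)   # pixel-wise "normal" template
    return np.abs(test_scan - mean_image)     # larger value = more anomalous
```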
Current evaluation datasets sometimes lack the more difficult anomalies that would test a model's expertise, or the annotations necessary to compute accurate performance metrics. The Medical Out-of-Distribution Analysis Challenge (MOOD) addresses this issue with a diverse test dataset containing real and synthetic labeled anomalies. To avoid overfitting, it is important that this dataset be kept private, but this also makes it difficult for researchers to study and iterate on their models' failure cases. We therefore propose a series of publicly available synthetic anomaly tests. These will draw on input from the MOOD community and include cases that are difficult for current reconstruction-based and self-supervised methods.
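As a rough illustration of what a synthetic anomaly can look like (this is our own toy example in the spirit of the simpler MOOD cases; the actual tests will be more varied and harder), one can paste a uniform-intensity sphere into an otherwise healthy volume:

```python
import numpy as np

def insert_uniform_sphere(volume: np.ndarray, center: tuple,
                          radius: int, intensity: float) -> np.ndarray:
    """Return a copy of `volume` with a uniform-intensity sphere pasted in.

    A toy synthetic anomaly; the public test cases are not generated
    by this exact function.
    """
    out = volume.copy()
    zz, yy, xx = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    mask = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
            + (xx - center[2]) ** 2) <= radius ** 2
    out[mask] = intensity  # overwrite the spherical region with a flat value
    return out
```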
We are looking for manuscripts that advance the field of medical anomaly detection. To evaluate these methods on difficult cases and to facilitate comparison across papers, we require all submitted papers to report results on our synthetic test dataset, in addition to whichever clinical datasets the authors choose.
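The exact reporting format is not prescribed here, but as one plausible example, sample-level average precision (the metric used in the MOOD challenge) can be computed with scikit-learn; the file names below are placeholders:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Placeholder files: one anomaly score per case and the matching
# ground-truth labels (0 = normal, 1 = anomalous) from the test set.
scores = np.load("predicted_scores.npy")      # shape: (num_cases,)
labels = np.load("ground_truth_labels.npy")   # shape: (num_cases,)

print("sample-level AP:", average_precision_score(labels, scores))
```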
Our compulsory dataset will be publicly available for authors to analyze, and we hope it can provide insight to improve their designs. That said, we ask authors not to train on the test dataset or to use synthetic training/validation anomalies that mimic the test data. Any synthetic anomalies used for training or validation must be displayed in a figure side by side with samples from our compulsory dataset.
We acknowledge that public synthetic anomaly datasets can be biased or short-lived, so we also warmly welcome papers presenting new, challenging clinical datasets or work illuminating the theoretical limits of anomaly detection.
This Research Topic accepts all article types.
Keywords:
Anomaly Detection, Machine Learning, Out-of-Distribution, Unsupervised Learning, Medical Imaging
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.