Self-Learning Algorithms Could Improve AI-Based Evaluation of Medical Imaging Data
By MedImaging International staff writers | Posted on 22 Dec 2020

Image: nnU-Net handles a broad variety of datasets and target image properties (Photo courtesy of Isensee et al. / Nature Methods)
Scientists have presented a new method for configuring self-learning algorithms for a large number of different imaging datasets – without the need for specialist knowledge or very significant computing power.
In the evaluation of medical imaging data, artificial intelligence (AI) promises to provide support to physicians and help relieve their workload, particularly in the field of oncology. Yet regardless of whether the size of a brain tumor needs to be measured in order to plan treatment or the regression of lung metastases needs to be documented during the course of radiotherapy, computers first have to learn how to interpret the three-dimensional imaging datasets from computed tomography (CT) or magnetic resonance imaging (MRI). They must be able to decide which pixels belong to the tumor and which do not.
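As a concrete illustration (not taken from the study itself), the sketch below shows what that decision amounts to in code: a 3D scan and its hand-drawn tumor annotation are simply arrays of voxels, and the annotation marks which voxels belong to the tumor. The file names are hypothetical placeholders, and nibabel is assumed only as a common reader for NIfTI-format medical images.

```python
# Minimal sketch, assuming hypothetical NIfTI files: a CT volume and its
# voxel-wise tumor annotation as NumPy arrays.
import numpy as np
import nibabel as nib  # widely used reader for NIfTI-format medical images

ct_img = nib.load("patient001_ct.nii.gz")        # hypothetical scan file
label_img = nib.load("patient001_label.nii.gz")  # hand-drawn tumor annotation

ct = ct_img.get_fdata()                   # 3D array, e.g. shape (512, 512, 120)
tumor_mask = label_img.get_fdata() == 1   # boolean array: True where a voxel is tumor

# Once every voxel is classified as tumor or not, quantities such as tumor
# volume become directly computable from the voxel spacing in the header.
voxel_mm3 = float(np.prod(ct_img.header.get_zooms()[:3]))
print(f"Tumor volume: {tumor_mask.sum() * voxel_mm3 / 1000.0:.1f} ml")
```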
AI experts refer to the process of distinguishing between the two as 'semantic segmentation'. For each individual task – for example recognizing a renal carcinoma on CT images or breast cancer on MRI images – scientists need to develop special algorithms that can distinguish between tumor and non-tumor tissue and can make predictions. Imaging datasets for which physicians have already labeled tumors, healthy tissue, and other important anatomical structures by hand are used as training material for machine learning. It takes experience and specialized knowledge to develop segmentation algorithms such as these.
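How well such an algorithm performs is commonly judged by how closely its predicted mask overlaps with the hand-labeled reference, for example using the Dice similarity coefficient. The following sketch uses toy masks rather than real data to show the idea.

```python
# Illustrative sketch: the Dice similarity coefficient, a standard overlap
# measure for comparing a predicted segmentation against a hand-labeled
# reference. The masks below are toy examples, not real data.
import numpy as np

def dice(prediction: np.ndarray, reference: np.ndarray) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred = prediction.astype(bool)
    ref = reference.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

reference = np.zeros((64, 64, 64), dtype=bool)
reference[20:40, 20:40, 20:40] = True      # "ground-truth" tumor region
prediction = np.zeros_like(reference)
prediction[22:42, 20:40, 20:40] = True     # slightly shifted prediction

print(f"Dice score: {dice(prediction, reference):.3f}")
```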
Scientists from the German Cancer Research Center (DKFZ; Heidelberg, Germany) have now developed a method that adapts dynamically and fully automatically to any kind of imaging dataset, allowing even researchers with limited prior expertise to configure self-learning algorithms for specific tasks. The method, known as nnU-Net, can deal with a broad range of imaging data: in addition to conventional imaging methods such as CT and MRI, it can also process images from electron and fluorescence microscopy. Using nnU-Net, the DKFZ researchers obtained the best results in 33 of 53 segmentation tasks in international competitions, despite competing against highly specialized algorithms that experts had tailored to individual problems. The team is making nnU-Net available as an open-source tool that can be downloaded free of charge.
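For readers curious about what configuring nnU-Net for a new task involves in practice, the sketch below assembles a training dataset in the Medical Segmentation Decathlon-style folder layout that nnU-Net's documentation describes (imagesTr/, labelsTr/ and a dataset.json). Field names and file naming follow the 2020-era nnU-Net v1 convention and may differ in later versions; the task name and paths here are hypothetical.

```python
# Hedged sketch: building a task folder in the Decathlon-style layout that
# nnU-Net consumes. Folder names, case IDs and the task name are hypothetical;
# field names follow the 2020-era nnU-Net v1 convention.
import json
from pathlib import Path

task_dir = Path("nnUNet_raw_data/Task101_BrainTumor")   # hypothetical task folder
(task_dir / "imagesTr").mkdir(parents=True, exist_ok=True)
(task_dir / "labelsTr").mkdir(parents=True, exist_ok=True)

cases = ["case_001", "case_002"]  # hand-labeled training cases go here
dataset = {
    "name": "BrainTumor",
    "modality": {"0": "MRI"},                     # one entry per input channel
    "labels": {"0": "background", "1": "tumor"},  # meaning of each voxel label
    "numTraining": len(cases),
    "training": [
        {"image": f"./imagesTr/{c}.nii.gz", "label": f"./labelsTr/{c}.nii.gz"}
        for c in cases
    ],
    "test": [],
}
(task_dir / "dataset.json").write_text(json.dumps(dataset, indent=2))
```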
So far, AI-based evaluation of medical imaging data has mainly been applied in research contexts and has not yet been broadly used in the routine clinical care of cancer patients. However, medical informatics specialists and physicians see considerable potential for its use, for example for highly repetitive tasks, such as those that often need to be performed as part of large-scale clinical studies. nnU-Net can help harness this potential, according to the scientists.
"nnU-Net can be used immediately, can be trained using imaging datasets, and can perform special tasks – without requiring any special expertise in computer science or any particularly significant computing power," explained Klaus Maier-Hein.
Related Links:
German Cancer Research Center (DKFZ)