Just as radiology went digital over 20 years ago, pathology is now making the same transition. Pathology samples provide information about disease microstructure as well as the amount of protein being expressed by cells (biomarkers). These tissue samples therefore play a critical role in cancer diagnosis, staging and treatment. For the last 100 years, pathologists have largely depended on the microscope to examine tissue and disease under magnification. This is starting to change as whole-slide scanners, digital microscopes and advanced archiving software become more commonplace. These tools and technologies have given rise to the field of digital pathology.
As digital pathology gains momentum, there is great potential to improve pathologist workflows and to support remote consultations and education. Just as important, digitization makes it possible to apply image analysis and machine learning algorithms to pathology images, a domain known as computational pathology. In clinical care, computational pathology algorithms can increase diagnostic accuracy and allow interpretation tasks to be completed more efficiently. In the medical and pharmaceutical communities, they can advance personalized medicine by supporting research into disease processes and treatment efficacy through the analysis of large datasets. Given the size and complexity of pathology image data, automated approaches are more efficient, objective and scalable than manual analysis.
Computational pathology is still in its infancy, with the potential to change the way pathology is practiced. Consequently, the field has generated significant interest from both academia and industry, which are committing substantial resources to developing these tools. Although promising, a major hurdle in the development of computational pathology technologies lies in the variability and noise in the data. Because image analysis and machine learning algorithms are quantitative, inconsistency in the data reduces their ability to detect objects robustly and reliably.
In large digital pathology datasets, images have been generated by a variety of scanner and stain vendors and under varying preparation protocols, creating variability in the characteristics of the images. For example, scanner vendors use proprietary colour profiles to translate stain absorption into colour, resulting in scanner-dependent colour characteristics. Acquisition noise, which affects the statistical properties of the imaged intensities, is also largely scanner-dependent. Similarly, stain vendors use differing stain compositions, which creates variability in the way colour is rendered in digital pathology images. Different preparation methods, along with human error, can cause overstaining and background staining.
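To make the link between stain absorption and rendered colour concrete, the sketch below separates an H&E tile into per-stain concentration maps using the classical Beer-Lambert optical-density model of Ruifrok and Johnston (2001), which scikit-image also exposes as skimage.color.rgb2hed. This is a minimal illustration, not a method proposed here: the function name is invented for the example, and the published reference stain matrix is an assumption, since in practice stain vectors are slide- and vendor-dependent, which is precisely the variability described above.

```python
import numpy as np

# Reference H&E stain vectors from Ruifrok & Johnston (2001); each row is
# the optical-density direction (R, G, B) of one stain. Real slides deviate
# from these values, which is the variability at issue in this Topic.
HE_STAIN_MATRIX = np.array([
    [0.650, 0.704, 0.286],   # haematoxylin
    [0.072, 0.990, 0.105],   # eosin
    [0.268, 0.570, 0.776],   # residual / background channel
])

def deconvolve_stains(rgb, stain_matrix=HE_STAIN_MATRIX):
    """Separate an RGB tile into per-stain concentration maps.

    rgb: uint8 array of shape (H, W, 3).
    Returns a float array of shape (H, W, 3) of stain concentrations.
    """
    # Beer-Lambert law: optical density is -log10(I / I0); the +1 avoids log(0).
    od = -np.log10((rgb.astype(np.float64) + 1.0) / 256.0)
    # Each OD pixel is a linear mix of stain vectors: od = c @ M, so c = od @ M^-1.
    concentrations = od.reshape(-1, 3) @ np.linalg.inv(stain_matrix)
    return concentrations.reshape(rgb.shape)
```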
Variability in the colour and noise levels of digital pathology images makes it challenging to apply image analysis and machine learning algorithms reliably to large, multi-institutional datasets. This issue therefore needs to be addressed before computational pathology can be adopted at scale.
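One common retrospective remedy for this colour variability is statistics-based normalization. The sketch below applies Reinhard-style colour transfer, mapping each tile's per-channel mean and standard deviation onto those of a reference tile. As a simplification it works in CIELAB rather than the lαβ space of the original Reinhard et al. (2001) method, and the function name and the assumption of float RGB inputs in [0, 1] are illustrative choices for this example only.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def reinhard_normalize(source_rgb, reference_rgb):
    """Match the colour statistics of one tile to a reference tile.

    Both inputs are float RGB arrays in [0, 1] with shape (H, W, 3).
    """
    src, ref = rgb2lab(source_rgb), rgb2lab(reference_rgb)
    # Per-channel mean/std in Lab space; the epsilon guards flat channels.
    mu_s, sd_s = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-8
    mu_r, sd_r = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    # Shift and scale each channel onto the reference statistics.
    matched = (src - mu_s) / sd_s * sd_r + mu_r
    return np.clip(lab2rgb(matched), 0.0, 1.0)
```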
This Research Topic welcomes original manuscripts that specifically address image variability and the retrospective standardization methods that manage it through automated algorithms. Suggested topics include:
- Robust image analysis and machine learning algorithms,
- Computational pathology tools applied to large datasets,
- Analysis of variability in digital pathology images,
- Preprocessing and artifact reduction methodologies for digital pathology,
- Stain deconvolution, colour normalization,
- Domain adaptation,
- Outcome prediction using standardized data.