
EDITORIAL article

Front. Comput. Sci., 30 May 2022
Sec. Computer Vision
This article is part of the Research Topic Methods and Tools for Bioimage Analysis.

Editorial: Methods and Tools for Bioimage Analysis

  • 1Univ. Bordeaux, CNRS, Interdisciplinary Institute for Neuroscience, IINS, UMR 5297, Bordeaux, France
  • 2Univ. Bordeaux, CNRS, INSERM, Bordeaux Imaging Center, BIC, UAR3420, US 4, Bordeaux, France
  • 3European Bioinformatics Institute, European Molecular Biology Laboratory (EMBL), Cambridge, United Kingdom
  • 4Fondazione Human Technopole, Milan, Italy

Editorial on the Research Topic
Methods and Tools for Bioimage Analysis

A core purpose of acquiring biological imaging data is the quantification of complex phenomena through bioimage analysis. Recent advances in our ability to observe living systems, which enable life scientists to monitor the dynamics of biological phenomena across spatiotemporal scales and at high resolution, have led to a greatly increased demand for methods and tools to analyze microscopy data. In many life-science projects, the bottleneck has now shifted from not having the technology to image interesting phenomena to not being able to extract information from the vast amounts of image data acquired during such imaging experiments.

Bioimage analysis has historically operated at the intersection of computer science, physics, microscopy, medicine, and biology, presenting a rather unique set of challenges on various levels. As demand grew, so did the community, which saw the emergence of national and international networks such as NEUBIAS (http://eubias.org/NEUBIAS/), COBA (https://openbioimageanalysis.org/), and BINA (https://www.bioimagingnorthamerica.org/), all aiming to connect bioimage analysis researchers, help them better align their interests, and ultimately make their work more efficient and less redundant. This has also helped the community to better understand where its strengths lie and where bioimage analysis methods are still missing.

An interesting and quite unique aspect of our work as bioimage analysts is that we need to develop not only methods but also tools, two tasks that are exquisitely time-consuming and require very different skills. Our “users,” on the other hand, are experts in the specific biological systems they study and should ideally not need above-average computational skills. Hence, even the most powerful new computational method is of little to no use if it is not cast into accessible and usable software tools.

We present a Research Topic of useful new methods, all cast into accessible software tools that enable life scientists to analyze their microscopy image data.

In this Research Topic, we collected work presenting methods needed to analyze microscopy image data, all of which are cast into accessible software tools. The contributions cover a wide range of pressing bioimage analysis tasks, addressing either specific or common analysis problems, in some cases by describing novel methodological ideas and in others by making known methods and approaches available in efficient and user-friendly ways. All of the presented work has one thing in common: the promise to enable the analysis of scientific image data in the hands of our users.

Four contributions have a particularly interesting focus on novel methodological approaches and are essential for pushing forward robustness and automation in the bioimage analysis field. Ulicna et al. develop a hybrid deep learning and Bayesian cell tracking approach to reconstruct lineage trees from live-cell microscopy data, offering their novel approach to users in an open, Python-based software tool. The work by Haase demonstrates the utility of defining image filtering operations directly on a grid of biologically relevant objects, such as cells, instead of on image pixels, thereby rendering these filtering operations efficient enough for real-time applications even on sizable image data. In Matskevych et al., the authors observe that the power of deep learning-based segmentation and the simplicity of random forest-based pixel classification can be combined through an intriguing new transfer learning idea. Finally, Klatzow et al. combine several state-of-the-art concepts for surface matching into a fully automated shape correspondence pipeline, demonstrated to efficiently streamline the landmark-free alignment of complex 3D objects for downstream shape analysis.
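To make the object-grid idea behind Haase's contribution a little more concrete, the following minimal sketch shows what a mean filter looks like when it operates on a set of segmented cells rather than on pixels. It is illustrative only: the k-nearest-neighbour definition of a cell's neighbourhood, the function name, and all parameter values are our own assumptions and are not taken from the paper.

```python
# Illustrative sketch: an "object-level" mean filter that smooths a per-cell
# measurement over each cell's spatial neighbours instead of over image pixels.
# The k-nearest-neighbour neighbourhood definition is a simplifying assumption.
import numpy as np
from scipy.spatial import cKDTree

def object_mean_filter(centroids, values, k=6):
    """Replace each cell's value by the mean over itself and its k nearest cells.

    centroids : (n, d) array of cell centre coordinates
    values    : (n,) array with one measurement per cell (e.g., mean intensity)
    """
    tree = cKDTree(centroids)
    _, idx = tree.query(centroids, k=k + 1)  # each cell plus its k nearest neighbours
    return values[idx].mean(axis=1)

# Toy example: 200 random "cells" carrying a noisy measurement.
rng = np.random.default_rng(0)
centroids = rng.uniform(0, 100, size=(200, 2))
values = np.sin(centroids[:, 0] / 10) + rng.normal(0, 0.3, size=200)
print(object_mean_filter(centroids, values)[:5])
```

Because such a filter runs over a few thousand objects rather than millions of pixels, even repeated smoothing passes remain cheap, which is exactly the efficiency argument made above.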

Other work we present excels at making potentially widely applicable ideas and methods more accessible to a wide range of users and/or applicable to truly large datasets. Biological image data is filled with fibrillar structures of various kinds, and quantifying their orientation is often a critical step. Marcotti et al. present a method, with open-source implementations in MATLAB and Python, that allows users to efficiently and accurately quantify both the local and the global alignment of fibrillar structures using a Fourier-based strategy. In Chiaruttini et al., the authors introduce Warpy, a pipeline for solving another pressing alignment problem, namely the registration of whole slides in histopathology. Their solution combines state-of-the-art software platforms (elastix, Fiji and its BigWarp plugin, and QuPath), enabling users to perform semi-automated non-linear registration of their data through robust user interfaces. In Arzt et al., the authors present Labkit, a random forest-based trainable pixel classifier capable of segmenting truly large volumetric image and time-series data. Labkit offers an intuitive user interface and owes its efficiency to BigDataViewer and imglib2, two essential parts of the modern Fiji ecosystem (https://fiji.sc/). While Fiji is still a pillar of many tools and workflows, napari (https://napari.org/) has recently begun to take the bioimage analysis field by storm. It allows users to visualize and manipulate their images in a Python-based environment, with n-dimensionality being one of napari's core principles. D'Antuono and Pisignano present ZELDA, a napari plugin designed to enable users to build customizable bioimage analysis workflows, demonstrated in the context of 3D data. Finally, Ritchie et al. present Tonga, a new software platform that emphasizes ease of use and user-friendliness. Tonga's wizard feature suggests suitable methods to its users depending on the image and the task at hand, with a special focus on the detection and segmentation of nuclei.
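As a pointer to the kind of computation underlying Marcotti et al.'s Fourier-based strategy mentioned above, the sketch below estimates the dominant fibre direction of an image patch from the second moments of its power spectrum. This is a generic, textbook-style construction rather than the authors' implementation; the Hann window, the anisotropy score, and the synthetic test are our own choices.

```python
# Illustrative sketch: estimate the dominant fibre orientation of a 2D patch
# from the second moments of its Fourier power spectrum. Angles are measured
# in array (row/column) coordinates.
import numpy as np

def fourier_orientation(patch):
    """Return (angle_deg in [0, 180), anisotropy in [0, 1]) of the dominant fibre direction."""
    ny, nx = patch.shape
    window = np.hanning(ny)[:, None] * np.hanning(nx)[None, :]   # reduce spectral leakage
    power = np.abs(np.fft.fftshift(np.fft.fft2((patch - patch.mean()) * window))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    v, u = np.meshgrid(fy, fx, indexing="ij")
    # Second-moment matrix of the spectrum; its energy lies perpendicular to the fibres.
    m = np.array([[(power * u * u).sum(), (power * u * v).sum()],
                  [(power * u * v).sum(), (power * v * v).sum()]])
    evals, evecs = np.linalg.eigh(m)
    major = evecs[:, np.argmax(evals)]                            # dominant axis of the spectrum
    angle = (np.degrees(np.arctan2(major[1], major[0])) + 90.0) % 180.0
    anisotropy = (evals.max() - evals.min()) / evals.sum()
    return angle, anisotropy

# Toy example: synthetic stripes running at 30 degrees.
theta = np.radians(30)
yy, xx = np.mgrid[0:128, 0:128]
stripes = np.sin(2 * np.pi * (yy * np.cos(theta) - xx * np.sin(theta)) / 8.0)
print(fourier_orientation(stripes))   # angle close to 30, anisotropy close to 1
```

Applied per window over a tiled image, the same computation yields a local orientation map, and aggregating orientations across windows gives a global alignment measure, conceptually matching the local/global distinction made above.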

The final group of contributions enables specific bioimage quantification investigations. By definition, these pipelines and tools are not intended to be generally applicable, but without them the research they support would become impossibly time-consuming to conduct, and insights hard to gain. Rodriguez et al. present a machine learning-based computer vision approach to quantify the foraging behavior of bees at the entrance of their hive, using deep learning to detect the bees and then tracking them while also estimating their pose. In Rahm et al., the authors introduce a mean-squared displacement analysis to classify single-molecule tracklets as immobile, undergoing confined diffusion, or diffusing freely. Their system can also detect transitions between these modes, allowing them to better understand the molecular system they study. Last but not least, Schmied et al. present SynActJ, an ImageJ plugin combined with an R Shiny app designed for the automatic detection and analysis of synaptic activity in time-lapse movies. SynActJ allows end-to-end analysis, from filter-based synapse segmentation to the full analysis of all previously segmented traces, within a user-friendly tool.
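To give a flavour of the mean-squared displacement (MSD) analysis that Rahm et al. build on, the sketch below computes a time-averaged MSD per tracklet and labels the tracklet from the fitted anomalous exponent. The thresholds, frame interval, and function names are placeholder assumptions rather than values from the paper, and the published pipeline additionally detects transitions between motion modes within a track.

```python
# Illustrative sketch: classify a single-molecule tracklet from its time-averaged
# mean-squared displacement (MSD). Thresholds below are arbitrary placeholders.
import numpy as np

def msd(track, max_lag):
    """Time-averaged MSD of a (t, d) trajectory for lags 1..max_lag (in frames)."""
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def classify_tracklet(track, dt=0.02, immobile_msd=0.01, alpha_confined=0.7):
    """Label a tracklet as 'immobile', 'confined', or 'free'.

    track : (t, 2) array of positions (e.g., in micrometres)
    dt    : frame interval in seconds
    """
    curve = msd(track, max_lag=min(10, len(track) - 1))
    if curve.mean() < immobile_msd:                          # barely moves at all
        return "immobile"
    lags = np.arange(1, len(curve) + 1) * dt
    alpha, _ = np.polyfit(np.log(lags), np.log(curve), 1)    # MSD ~ lag**alpha
    return "confined" if alpha < alpha_confined else "free"

# Toy example: a freely diffusing (Brownian) track.
rng = np.random.default_rng(1)
free_track = np.cumsum(rng.normal(0, 0.05, size=(1000, 2)), axis=0)
print(classify_tracklet(free_track))   # prints 'free'
```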

In summary, this Research Topic provides a collection of bioimage analysis methods and open software tools developed with the needs of our community in mind. We strongly believe that the work presented here demonstrates how broad our field is and how important it is to conduct research that contributes not only novel methods for pressing analysis problems, but also sound solutions that are open, available, and applicable by others to their own data. If our community keeps pushing these virtues, progress in the life sciences will come in ever bigger strides.

Author Contributions

FJ, VU, and FL conceived the Research Topic and wrote the manuscript. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We want to thank all contributors to this Research Topic for their insightful and interesting manuscripts.

Keywords: bioimage analysis, methods, tools, software platform, accessibility

Citation: Levet F, Uhlmann V and Jug F (2022) Editorial: Methods and Tools for Bioimage Analysis. Front. Comput. Sci. 4:931939. doi: 10.3389/fcomp.2022.931939

Received: 29 April 2022; Accepted: 16 May 2022;
Published: 30 May 2022.

Edited and reviewed by: Marcello Pelillo, Ca' Foscari University of Venice, Italy

Copyright © 2022 Levet, Uhlmann and Jug. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Florian Jug, florian.jug@fht.org; Virginie Uhlmann, uhlmann@ebi.ac.uk; Florian Levet, florian.levet@u-bordeaux.fr
