
ORIGINAL RESEARCH article

Front. Neuroinform.
Volume 18 - 2024 | doi: 10.3389/fninf.2024.1429670
This article is part of the Research Topic Emerging Trends in Large-Scale Data Analysis for Neuroscience Research.

LYNSU: Automated 3D neuropil segmentation of fluorescent images for Drosophila brains

Provisionally accepted
Kai-Yi Hsu 1, Chi-Tin Shih 2,3*, Nan-Yow Chen 4*, Chung-Chuan Lo 1,3*
  • 1 Institute of Systems Neuroscience, College of Life Science, National Tsing Hua University, Hsinchu, Taiwan
  • 2 Tunghai University, Taichung, Taiwan
  • 3 Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
  • 4 National Center for High-Performance Computing, Hsinchu, Taiwan


    The brain atlas, which provides information about the distribution of genes, proteins, neurons, or anatomical regions, plays a crucial role in contemporary neuroscience research. To analyze the spatial distribution of these substances based on images from different brain samples, we often need to warp and register individual brain images to a standard brain template. However, warping and registration may introduce spatial errors, thereby severely reducing the accuracy of the analysis. To address this issue, we developed an automated method for segmenting neuropils in the Drosophila brain from fluorescence images in the FlyCircuit database. This technique allows future brain atlas studies to be conducted accurately at the individual level without warping and aligning to a standard brain template. Our method, LYNSU (Locating by YOLO and Segmenting by U-Net), consists of two stages. In the first stage, we use the YOLOv7 model to quickly locate neuropils and extract small-scale 3D images as input for the second-stage model. This stage achieves a 99.4% accuracy rate in neuropil localization. In the second stage, we employ a 3D U-Net model to segment the neuropils. LYNSU achieves high segmentation accuracy with a small training set consisting of images from only 16 brains. We demonstrate LYNSU on six distinct neuropils or structures, achieving segmentation accuracy comparable to professional manual annotations, with a 3D Intersection-over-Union (IoU) of up to 0.869. Our method takes only about 7 seconds to segment a neuropil while performing at a level similar to that of the human annotators. To demonstrate a use case of LYNSU, we applied it to all female Drosophila brains in the FlyCircuit database to investigate the asymmetry of the mushroom bodies (MBs), the learning center of fruit flies. We used LYNSU to segment the bilateral MBs and compared the left and right volumes for each individual. Notably, of 8,703 valid brain samples, 10.14% showed bilateral volume differences exceeding 10%. This study demonstrates the potential of the proposed method for high-throughput anatomical analysis and connectomics construction of the Drosophila brain.
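
    The abstract reports two quantitative measures: segmentation quality as a 3D Intersection-over-Union (IoU) between predicted and manually annotated voxel masks, and mushroom-body asymmetry as a left/right volume difference. The sketch below illustrates one plausible way to compute both from binary voxel masks. It is a minimal NumPy example, not the authors' implementation; the function names are hypothetical, and normalizing the volume difference by the larger hemisphere is an assumption, since the abstract does not specify the denominator.

```python
import numpy as np

def iou_3d(pred: np.ndarray, gt: np.ndarray) -> float:
    """3D Intersection-over-Union between two binary voxel masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

def bilateral_volume_difference(left: np.ndarray, right: np.ndarray,
                                voxel_volume: float = 1.0) -> float:
    """Relative left/right volume difference, normalized by the larger side
    (this normalization is an assumption; the abstract does not specify it)."""
    v_l = left.astype(bool).sum() * voxel_volume
    v_r = right.astype(bool).sum() * voxel_volume
    return abs(v_l - v_r) / max(v_l, v_r)

# Toy masks: a predicted neuropil vs. a manual annotation, and stand-ins for
# left/right mushroom-body masks segmented from the same brain.
gt = np.zeros((64, 64, 64), dtype=bool);   gt[10:40, 10:40, 10:40] = True
pred = np.zeros((64, 64, 64), dtype=bool); pred[12:40, 10:40, 10:40] = True
mb_left, mb_right = gt, pred

print(f"3D IoU: {iou_3d(pred, gt):.3f}")
# A value above 0.10 would count as asymmetric under the 10% criterion.
print(f"Bilateral volume difference: {bilateral_volume_difference(mb_left, mb_right):.1%}")
```

    Under the 10% criterion described above, a brain sample would be counted as asymmetric when the bilateral volume difference exceeds 0.10.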

    Keywords: fluorescence image, U-Net, YOLO, connectomics, image segmentation, anatomical analysis

    Received: 08 May 2024; Accepted: 15 Jul 2024.

    Copyright: © 2024 Hsu, Shih, Chen and Lo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence:
    Chi-Tin Shih, Tunghai University, Taichung, Taiwan
    Nan-Yow Chen, National Center for High-Performance Computing, Hsinchu, Taiwan
    Chung-Chuan Lo, Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.