Automatic segmentation of mitochondria and endolysosomes in volumetric electron microscopy data

https://doi.org/10.1016/j.compbiomed.2020.103693

Highlights

  • A novel public volumetric electron microscopy dataset of cellular ultrastructure.

  • A new state-of-the-art pipeline for segmentation of mitochondria and endolysosomes.

  • Contrast enhancement with transfer learning improves segmentation of unbalanced EM data.

Abstract

Automatic segmentation of intracellular compartments is a powerful technique, which provides quantitative data about presence, spatial distribution, structure and consequently the function of cells. With the recent development of high throughput volumetric data acquisition techniques in electron microscopy (EM), manual segmentation is becoming a major bottleneck of the process. To aid the cell research, we propose a technique for automatic segmentation of mitochondria and endolysosomes obtained from urinary bladder urothelial cells by the dual beam EM technique. We present a novel publicly available volumetric EM dataset – the first of urothelial cells, evaluate several state-of-the-art segmentation methods on the new dataset and present a novel segmentation pipeline, which is based on supervised deep learning and includes mechanisms that reduce the impact of dependencies in the input data, artefacts and annotation errors. We show that our approach outperforms the compared methods on the proposed dataset.

Introduction

Eukaryotic cells are divided into numerous membrane-enclosed compartments, or organelles [1]. Mitochondria produce most of the ATP in the cell, but they are also involved in many other cell functions [2]. Endosomes are intracellular compartments of the endocytotic pathway that transport material from the plasma membrane to lysosomes [1]. Since degradation of endocytosed material takes place in late endosomes and lysosomes, we hereafter use the combined term ‘endolysosomes’. All these compartments are highly dynamic and plastic, constantly undergoing fusion and fission and moving within the cell; these dynamics reflect the physiological state and/or differentiation stage of a cell. Since these processes are important for understanding diseases at the subcellular level [3], [4], robust pipelines for automatic segmentation of intracellular compartments are needed.

In the urothelium, an epithelium covering urinary bladder, cells undergo a unique differentiation from the basal to the superficial cell layer [5]. Normal superficial cells, called umbrella cells, synthesize large amounts of specialized apical plasma membrane that forms a blood–urine permeability barrier [6]. Numerous intracellular compartments, including mitochondria and endolysosomes, contribute to maintaining the barrier [5], [7], [8], [9]. Various diseases of the urinary bladder compromise the barrier [10]. Our studies have shown that bladder cancers or various types of cystitis alter the synthesis, transport and degradation of the apical plasma membrane [11], [12], [13], but changes in the intracellular compartments in large cell volumes have not yet been studied.

Most intracellular compartments are below or at the resolution limit of light microscopes; therefore, their ultrastructure can only be studied by electron microscopy (EM). To study the three-dimensional (3D) ultrastructure of intracellular compartments and their spatial and temporal distribution at nanometre resolution, two EM techniques are particularly suitable [14]. Electron tomography, which is performed with the transmission electron microscope, has voxel dimensions of 1–10 nm; however, the volumes are limited to 1 μm³, which represents only a very limited part of a cell (the volume of a single umbrella cell is approximately 50,000 μm³) [15], [16]. On the other hand, dual beam microscopy, which combines a focused ion beam and a scanning electron microscope (FIB-SEM), enables voxel dimensions of 10 nm–1 μm, but the volumes of material studied are in the range of 1–50 μm³ [16], [17]. The FIB-SEM obtains a stack of serial sections by repeatedly milling away thin layers of material with a focused ion beam and acquiring micrographs of the exposed inner surfaces. The result is a large set of volumetric data on intracellular compartments, which needs to be segmented in order to be understood in the context of cell function.
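The stack-of-sections output described above maps naturally onto a 3D array. A minimal sketch (with an assumed slice count and a hypothetical 10 nm isotropic voxel size, not the acquisition parameters of the UroCell dataset) of assembling FIB-SEM micrographs into a volume:

```python
import numpy as np

# Hypothetical illustration: FIB-SEM acquisition yields one 2D micrograph per
# milled slice; stacking them gives an isotropic 3D volume when the milling
# step matches the lateral pixel size.
voxel_nm = 10.0                      # assumed isotropic voxel size (nm)
slices = [np.zeros((256, 256), dtype=np.uint8) for _ in range(64)]

volume = np.stack(slices, axis=0)    # shape: (z, y, x)
physical_um = tuple(d * voxel_nm / 1000.0 for d in volume.shape)

print(volume.shape)                  # (64, 256, 256)
print(physical_um)                   # physical extent in micrometres per axis
```

With these assumed numbers the imaged block spans only fractions of a micrometre per axis, which illustrates why even "large" FIB-SEM volumes cover a small part of a 50,000 μm³ umbrella cell.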

The manual segmentation of various intracellular compartments of interest on hundreds or thousands of micrographs is very time consuming and prone to bias. Therefore, research on methods for automatic segmentation of microscopy data has recently flourished. As in other image analysis fields, many recent works are based on deep convolutional neural networks (CNNs), which outperform the traditional approaches in most tasks [18]. In 2015, Ronneberger et al. [19] proposed an architecture called the U-Net, designed specifically for two-dimensional (2D) medical image segmentation. The main idea of the U-Net is to combine local and larger contextual information from the input image. Based on this concept, many architectures were proposed for volumetric data: Çiçek et al. [20] proposed the 3D U-Net architecture, and Milletari et al. [21] proposed the V-Net architecture as an extension of the U-Net layout. At almost the same time, Kamnitsas et al. [22] presented an architecture named DeepMedic, also a 3D CNN, but with a dual-pathway design that processes inputs at multiple scales simultaneously. Li et al. [23] presented a volumetric architecture called HighRes3DNet, which uses dilated convolutions.
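The U-Net's core idea of combining local detail with downsampled context can be illustrated without any deep learning framework. The following numpy sketch (not a trainable network, and not the architecture used in this paper) mimics one pooling/upsampling level and a skip connection:

```python
import numpy as np

# Minimal numpy sketch of the U-Net idea: features are downsampled for context,
# upsampled back, and concatenated with the matching encoder features via a
# skip connection so that local detail is preserved alongside coarse context.
def downsample(x):
    """2x2 max pooling on a (H, W) feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling back to the original resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

feat = np.random.rand(8, 8)           # encoder feature map
context = downsample(feat)            # coarser map, larger receptive field
restored = upsample(context)          # back to the original resolution
skip = np.stack([feat, restored])     # channel-wise concatenation: (2, 8, 8)
print(skip.shape)
```

In a real U-Net, learned convolutions replace these fixed operations and the pattern repeats over several resolution levels, but the skip-connection concatenation shown here is what lets the decoder recover fine boundaries.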

All of the described architectures achieve state-of-the-art results in different medical domains. As stated in the review paper [18], which surveyed 380 deep learning papers from the medical image analysis field, CNNs are currently the top-performing approach for many tasks, but the exact architecture is not necessarily the most important determinant of a good solution. The authors claim that it is expert knowledge about the task that provides the advantage, since many researchers use exactly the same architectures with the same types of networks but obtain widely varying results. In the following subsection, we describe methods that have been developed specifically for the segmentation of mitochondria and are thus related to our research.

The field of automated segmentation of volumetric EM data has been largely driven by connectomics, the effort to reconstruct neural wiring diagrams, where the CNNs for the segmentation of cellular boundaries were proposed very early [24], [25]. Successful approaches have been proposed also for segmentation of synapses which is a similar task to mitochondria segmentation [26], [27].

On the basis of related work, several methods have been proposed that are specifically designed for automatic segmentation of mitochondria. Liu et al. [28] presented a method for segmentation from SEM images based on the Mask R-CNN [29]. Their main contribution is in the post-processing of segmentation masks obtained with the deep network. The post-processing is done in three steps: a morphological opening operation first eliminates small regions and smooths large ones, a multi-layer (3D) information fusion algorithm then eliminates mitochondria shorter than a set threshold, and finally an algorithm improves the consistency between adjacent layers. Combining a deep CNN with post-processing was also proposed by Oztel et al. [30]. They developed their own CNN architecture, trained on 32 × 32 × 1 non-overlapping blocks extracted from the training electron microscopy volume. Blocks are assigned a ground truth label based on the percentage of pixels belonging to the mitochondria and non-mitochondria classes. The last fully connected layer of the network outputs two-channel mitochondria versus non-mitochondria class scores, which are then converted to a binary classification. They also present three post-processing steps: 2D spurious detection filtering, boundary refinement, and 3D filtering. All of the described approaches show promising results but, contrary to our method, they do not use 3D spatial information in network training.
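The block-labelling scheme of Oztel et al. can be sketched as follows; the 0.5 labelling threshold and the function names are assumptions for illustration, not values taken from [30]:

```python
import numpy as np

# Sketch: split a 2D section into non-overlapping 32x32 blocks and label each
# block as mitochondria if the fraction of annotated mitochondria pixels
# reaches a threshold (the exact threshold in [30] is an assumption here).
def label_blocks(section, mask, block=32, threshold=0.5):
    h, w = section.shape
    labels = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            frac = mask[y:y + block, x:x + block].mean()
            labels[(y, x)] = int(frac >= threshold)
    return labels

section = np.zeros((64, 64), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=np.uint8)
mask[:32, :32] = 1                       # one fully mitochondrial block
labels = label_blocks(section, mask)
print(labels)                            # {(0, 0): 1, (0, 32): 0, (32, 0): 0, (32, 32): 0}
```

This per-block labelling converts pixel-wise annotations into a patch classification problem, which is why such approaches, unlike ours, cannot exploit 3D context during training.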

While all of the described approaches use 2D convolutions, Haberl et al. [31] presented a 3D convolution based approach called CDeep3M. It is a ready-to-use volumetric segmentation solution employing a cloud-based deep CNN called DeepEM3D [32]. Results of mitochondria segmentation with DeepEM3D do not surpass the state of the art; however, the approach is interesting because it is very robust and achieves good results on different target classes (nuclei, mitochondria, synaptic vesicles, membranes).

Because of small training datasets, a new type of method based on domain adaptation algorithms has arisen. So far, these methods do not outperform the existing algorithms for mitochondria segmentation, but the results are promising. Bermudez-Chacon et al. [33] proposed a domain-adaptive two-stream U-Net. This approach uses training data from a domain with plenty of labelled data to improve segmentation on another domain with less training data. They propose a dual U-Net architecture with one stream for the source domain and another for the target domain; the streams are connected so that they share some of the weights. In [34], the authors propose the Y-Net architecture, which adapts the classical encoder–decoder layout with an added reconstruction decoder in order to align the source and target encoder features. They tested their work by transferring knowledge from isotropic FIB-SEM to anisotropic TEM volumes, as well as from brain EM images to HeLa cells.

Public datasets for evaluation of mitochondria segmentation are scarce. The most widely adopted datasets were developed by Lucchi et al. [35] and Xiao et al. [36]. Currently, the best approach according to evaluations on Lucchi's dataset is the supervoxel-based method of the same authors [37], which uses a nonlinear RBF-SVM classifier to segment mitochondria in 3D and 2D data; it is one of the rare approaches that does not rely on CNNs. The best approach according to evaluations on Xiao's dataset is the deep learning approach of the same authors [36], which exploits 3D spatial information using a variant of the 3D U-Net with residual blocks. To mitigate the problem of vanishing gradients during training, they injected auxiliary classifier layers into the hidden layers.
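At the core of the RBF-SVM classifier used in [37] is the Gaussian RBF kernel. A minimal sketch of the kernel itself (the `gamma` value is hypothetical, and this is only the similarity function, not the full supervoxel pipeline):

```python
import numpy as np

# Gaussian RBF kernel: similarity between two feature vectors that decays
# with their squared Euclidean distance. An SVM with this kernel separates
# classes nonlinearly in the induced feature space.
def rbf_kernel(a, b, gamma=0.5):
    """k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

x = np.array([1.0, 0.0])
print(rbf_kernel(x, x))                       # identical vectors -> 1.0
print(rbf_kernel(x, np.array([0.0, 0.0])))    # decays with distance
```

In the supervoxel setting, `a` and `b` would be feature descriptors of supervoxels rather than raw pixels, which keeps the classification problem tractable on large volumes.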

Segmentation of mitochondria has also been addressed for fluorescence microscopy data, where the target structures are tagged by fluorescent labelling. Some of the most recent advances are presented in [38], where iterative deep learning workflows generate initial high-quality three-dimensional segmentations, which are then used as annotations for training deep learning models.

The motivation for the work presented in this paper is to further research on the segmentation of intracellular compartments. We propose a method for automatic segmentation of two types of intracellular compartments: mitochondria and endolysosomes. We also introduce a novel, publicly available urothelial FIB-SEM dataset (the UroCell dataset), which enlarges the variety of available datasets with annotated intracellular compartments.

The new dataset is, to our knowledge, the first public dataset for segmentation of mitochondria and endolysosomes that is not obtained from brain tissue, as well as the first isotropic dataset with labelled endolysosomes.

We evaluate several state-of-the-art approaches to medical data segmentation on our dataset and propose a new CNN-based segmentation pipeline, which achieves state-of-the-art results for mitochondria and endolysosome segmentation. In our approach, we introduce techniques that increase the robustness of segmentation by reducing the problem of class imbalance, the impact of varying brightness/contrast and image quality in different parts of the dataset, and the impact of unreliable annotations. By making the segmentation pipeline more robust, we demonstrate that our approach can yield state-of-the-art results on other public isotropic datasets as well.
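Our contrast-enhancement step is learned via transfer learning; purely as a classical point of comparison, the following sketch equalises the histogram of an 8-bit section so that brightness/contrast differences between sub-volumes shrink (a simple stand-in for illustration, not the method used in the pipeline):

```python
import numpy as np

# Classical histogram equalisation on an 8-bit image: build the cumulative
# distribution of grey values, normalise it to [0, 1], and use it as a
# look-up table so that the output histogram is approximately uniform.
def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    return (cdf * 255).astype(np.uint8)[img]            # apply as a LUT

# A synthetic low-contrast "section" confined to grey values 0..63.
dark = (np.arange(128 * 128) % 64).reshape(128, 128).astype(np.uint8)
out = equalize(dark)
print(dark.max(), out.max())     # equalisation stretches the dynamic range
```

Global equalisation like this cannot adapt to content the way a learned enhancement can, which motivates the transfer-learning approach mentioned in the highlights.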

Materials and methods

In this section, we describe the novel UroCell dataset, and outline our proposed method for segmentation of intracellular compartments.

Experiments and results

Discussion

Results show that our method yields the best scores when segmenting both types of intracellular compartments in the UroCell dataset. The Dice coefficients presented in Table 1 show how the different proposed mechanisms and their combinations affect the results. The proposed contrast enhancement scores highest for the mitochondria class, but has difficulties with endolysosomes, as it often confuses them with mitochondria or background. If we compare the mean DSC results of
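The Dice similarity coefficient (DSC) referenced above measures the overlap between a predicted binary mask and the ground truth, 2|A∩B| / (|A| + |B|); a minimal sketch:

```python
import numpy as np

# Dice similarity coefficient for binary masks; eps avoids division by zero
# when both masks are empty.
def dice(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

gt = np.zeros((4, 4), dtype=np.uint8); gt[:2, :2] = 1      # 4 pixels
pred = np.zeros((4, 4), dtype=np.uint8); pred[:2, :4] = 1  # 8 pixels, 4 overlap
print(round(dice(pred, gt), 4))   # 2*4 / (8+4) = 0.6667
```

Because the DSC weights the overlap against both mask sizes, it penalises over- and under-segmentation symmetrically, which makes it a common choice for imbalanced segmentation tasks such as this one.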

Conclusions

With our paper, we make the following contributions. We introduce a novel publicly available dataset with manually labelled intracellular compartments, which is to our knowledge the first FIB-SEM dataset not obtained from brain tissue which contains labels for both mitochondria and endolysosomes for the same region. The dataset is, in comparison to other public datasets, more diverse, as it consists of five different sub-volumes from different parts of a cell, as well as annotations for two

CRediT authorship contribution statement

Manca Žerovnik Mekuč: Methodology, Software, Writing - original draft. Ciril Bohak: Writing - review & editing. Samo Hudoklin: Data curation, Writing - original draft, Writing - review & editing. Byeong Hak Kim: Methodology, Software, Writing - original draft. Rok Romih: Data curation, Writing - review & editing. Min Young Kim: Writing - review & editing. Matija Marolt: Methodology, Software, Writing - original draft, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The authors acknowledge the financial support from the Slovenian Research Agency (research core funding No. P3-0108) and the support of the NVIDIA Corporation with donation of a Titan V GPU used for this research. We thank Bruno M. Humbel and Caroline Kizilyaprak for their dual beam expertise. We acknowledge the help of all the volunteers who helped us with manual labelling, especially Eva Boneš.

References (48)

  • Kreft, M.E., et al. Formation and maintenance of blood–urine barrier in urothelium. Protoplasma (2010)

  • Dodmane, P.R., et al. Characterization of intracellular inclusions in the urothelium of mice exposed to inorganic arsenic. Toxicol. Sci. (2014)

  • Vieira, N., et al. Snx31: a novel sorting nexin associated with the uroplakin-degrading multivesicular bodies in terminally differentiated urothelial cells. PLoS One (2014)

  • Liao, Y., et al. Mitochondrial lipid droplet formation as a detoxification mechanism to sequester and degrade excessive urothelial membranes. Mol. Biol. Cell (2019)

  • Romih, R., et al. Recent advances in the biology of the urothelium and applications for urinary bladder dysfunction. BioMed. Res. Int. (2014)

  • Zupančič, D. Heterogeneity of uroplakin localization in human normal urothelium, papilloma and papillary carcinoma. Radiol. Oncol. (2013)

  • Lee, G., et al. Cystitis: from urothelial cell biology to clinical applications. BioMed. Res. Int. (2014)

  • Zupančič, D., et al. Selective binding of lectins to normal and neoplastic urothelium in rat and mouse bladder carcinogenesis models. Protoplasma (2014)

  • Miranda, K., et al. Three dimensional reconstruction by electron microscopy in the life sciences: an introduction for cell and tissue biologists. Mol. Reprod. Dev. (2015)

  • Cantoni, M., et al. Advances in 3D focused ion beam tomography. MRS Bull. (2014)

  • Titze, B., et al. Volume scanning electron microscopy for imaging biological ultrastructure. Biol. Cell (2016)

  • Ronneberger, O., et al. U-Net: convolutional networks for biomedical image segmentation

  • Çiçek, Ö., et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation

  • Milletari, F., et al. V-Net: fully convolutional neural networks for volumetric medical image segmentation
