The described pipeline is designed for the segmentation of electron microscopy datasets of several gigabytes or more, with the goal of extracting whole-cell morphologies. Once the cells are reconstructed in 3D, custom software designed around individual needs can be used to perform qualitative and quantitative analyses directly in 3D, including in virtual reality to overcome view occlusion.
Serial sectioning and subsequent high-resolution imaging of biological tissue using electron microscopy (EM) allow for the segmentation and reconstruction of high-resolution image stacks to reveal ultrastructural patterns that cannot be resolved using 2D images. Indeed, the latter might lead to a misinterpretation of morphologies, as in the case of mitochondria; the use of 3D models is, therefore, increasingly common and is applied to the formulation of morphology-based functional hypotheses. To date, the use of 3D models generated from light or electron image stacks makes qualitative, visual assessments, as well as quantifications, more convenient to perform directly in 3D. As these models are often extremely complex, setting up a virtual reality environment is also important for overcoming occlusion and for taking full advantage of the 3D structure. Here, a step-by-step guide from image segmentation to reconstruction and analysis is described in detail.
The first proposed model for an electron microscopy setup allowing automated serial sectioning and imaging dates back to 19811; the diffusion of such automated, improved setups for imaging large samples using EM has increased over the last ten years2,3, and works showcasing impressive dense reconstructions or full morphologies immediately followed4,5,6,7,8,9,10.
The production of large datasets came with the need for improved pipelines for image segmentation. Software tools for the manual segmentation of serial sections, such as RECONSTRUCT and TrakEM211,12, were designed for transmission electron microscopy (TEM). As the whole process can be extremely time-consuming, these tools are not appropriate when dealing with the thousands of serial micrographs that can be automatically generated with state-of-the-art, automated EM serial section techniques (3DEM), such as serial block-face scanning electron microscopy (SBEM)3 or focused ion beam-scanning electron microscopy (FIB-SEM)2. For this reason, scientists have put effort into developing semi-automated as well as fully automated tools to improve segmentation efficiency. Fully automated tools, based on machine learning13 or state-of-the-art, untrained pixel classification algorithms14, are being improved for use by a larger community; nevertheless, segmentation is still far from being fully reliable, and many works are still based on manual labor, which is inefficient in terms of segmentation time but still provides complete reliability. Semi-automated tools, such as ilastik15, represent a better compromise, as they provide an immediate readout of the segmentation that can be corrected to some extent; although ilastik does not provide a real proofreading framework, it can be integrated with TrakEM2 in parallel16.
Large-scale segmentation is, to date, mostly limited to connectomics; therefore, computer scientists have been most interested in providing frameworks for the integrated visualization of large, annotated datasets and for the analysis of connectivity patterns inferred from the presence of synaptic contacts17,18. Nevertheless, accurate 3D reconstructions can be used for quantitative morphometric analyses, rather than merely qualitative assessments of the 3D structures. Tools like NeuroMorph19,20 and glycogen analysis10 have been developed to take measurements of lengths, surface areas, and volumes on the 3D reconstructions, as well as of the distribution of point clouds, completely discarding the original EM stack8,10. Astrocytes represent an interesting case study: the lack of visual cues or repetitive structural patterns that would hint at the function of individual structural units, and the consequent lack of an adequate ontology of astrocytic processes21, make it challenging to design analytical tools. One recent attempt was Abstractocyte22, which allows a visual exploration of astrocytic processes and the inference of qualitative relationships between astrocytic processes and neurites.
Nevertheless, the convenience of imaging sectioned tissue under EM comes from the fact that the amount of information hidden in intact brain samples is enormous, and interpreting single-section images helps overcome this issue. The density of structures in the brain is so high that 3D reconstructions of even a few objects rendered at once would be impossible to distinguish visually. For this reason, we recently proposed the use of virtual reality (VR) as an improved method for observing complex structures. We focused on astrocytes23; VR helps overcome occlusion (the blocking of the visibility of an object of interest by a second one in 3D space) and eases qualitative assessments of the reconstructions, including proofreading, as well as quantifications based on counting points in space. We recently combined VR visual exploration with GLAM (glycogen-derived lactate absorption model), a technique that visualizes a map of the lactate shuttling probability of neurites by treating glycogen granules as light-emitting bodies23; in particular, we used VR to quantify the light peaks produced by GLAM.
1. Image Processing Using Fiji
2. Segmentation (Semi-automated) and Reconstruction Using ilastik 1.3.2
3. Proofreading/Segmentation (Manual) in TrakEM2 (Fiji)
4. 3D Analysis
By using the procedure presented above, we show results on two image stacks of different sizes, to demonstrate how the flexibility of the tools makes it possible to scale the procedure up to larger datasets. In this instance, the two 3DEM datasets are (i) P14 rat, somatosensory cortex, layer VI, 100 µm × 100 µm × 76.4 µm4 and (ii) P60 rat, hippocampus CA1, 7.07 µm × 6.75 µm × 4.73 µm10.
The preprocessing steps (Figure 1) can be performed the same way for both datasets, simply taking into account that a larger dataset, such as the first stack (25 GB), requires more powerful hardware for visualization and processing. The second stack is only 1 GB, with a perfectly isotropic voxel size.
The size of the data is not directly related to the field of view (FOV) alone, but rather to the resolution of the stack itself, which depends on the maximum pixel size of the microscope's sensor and on the magnification used for the acquisition. In any case, larger FOVs are likely to occupy more physical space than smaller FOVs if acquired at the same resolution.
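As a back-of-the-envelope check of this relationship, the expected size of a stack can be estimated from the FOV, pixel size, and section thickness. The short Python sketch below does this; the pixel size and section thickness shown are illustrative assumptions, not the actual acquisition parameters of the datasets above.

```python
# Rough stack-size estimate from FOV, pixel size, and section thickness.
# All parameter values below are illustrative, not actual acquisition values.
def stack_size_gb(fov_um, pixel_nm, depth_um, thickness_nm, bytes_per_px=1):
    pixels_per_section = (fov_um * 1000.0 / pixel_nm) ** 2
    n_sections = depth_um * 1000.0 / thickness_nm
    return pixels_per_section * n_sections * bytes_per_px / 1e9

# e.g., a 100 x 100 x 76.4 um volume at 25 nm pixels and 50 nm sections,
# stored as 8-bit, comes to roughly 24 GB
print(stack_size_gb(fov_um=100, pixel_nm=25, depth_um=76.4, thickness_nm=50))
```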
Once the image stack is imported, as indicated in section 1 of the protocol, into the Fiji software (Figure 1A), a scientific release of ImageJ12, one important point is to make sure that the image format is 8-bit (Figure 1B). This is because the acquisition software of many microscopy manufacturers generates proprietary file formats in 16-bit, in order to store metadata about the acquisition process (i.e., pixel size, section thickness, current/voltage of the electron beam, chamber pressure) together with the image stack. Converting to 8-bit allows scientists to save memory, as the extra 8 bits containing metadata do not affect the images. The second important parameter to check is the voxel size, which allows the reconstruction following the segmentation to be performed at the correct scale (micrometers or nanometers; Figure 1B).
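For users who prefer to script these two checks, a minimal sketch for Fiji's Python (Jython) script editor is shown below; the calibration values are placeholders to be replaced with those of the actual acquisition.

```python
# Minimal Fiji (Jython) sketch: convert the open stack to 8-bit and set the
# voxel size. The calibration values below are placeholders.
from ij import IJ

imp = IJ.getImage()                 # the currently open image stack
IJ.run(imp, "8-bit", "")            # drop the extra bits carrying metadata
cal = imp.getCalibration()
cal.setUnit("nm")
cal.pixelWidth = 6.0                # pixel size in x (placeholder)
cal.pixelHeight = 6.0               # pixel size in y (placeholder)
cal.pixelDepth = 50.0               # section thickness (placeholder)
imp.updateAndDraw()
```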
Stacks might need realignment and/or stitching if they have been acquired using tiling; these operations can be performed within TrakEM2 (Figure 2A), although, regarding realignment, stacks from automated 3DEM techniques like FIB-SEM or 3View are usually already well aligned.
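TrakEM2 performs alignment through its GUI; as an illustrative scripted alternative, Fiji's Linear Stack Alignment with SIFT plugin can be invoked from Jython as sketched below. The options string mirrors the plugin's dialog defaults as commonly recorded by the macro recorder and should be verified against your Fiji version.

```python
# Hedged sketch: rigid realignment of a stack with Fiji's Linear Stack
# Alignment with SIFT plugin. The options mirror the dialog defaults;
# verify them against your own Fiji installation.
from ij import IJ

imp = IJ.getImage()
IJ.run(imp, "Linear Stack Alignment with SIFT",
       "initial_gaussian_blur=1.60 steps_per_scale_octave=3 "
       "minimum_image_size=64 maximum_image_size=1024 "
       "feature_descriptor_size=4 feature_descriptor_orientation_bins=8 "
       "closest/next_closest_ratio=0.92 maximal_alignment_error=25 "
       "inlier_ratio=0.05 expected_transformation=Rigid interpolate")
```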
One last step requires filtering and, possibly, downsampling of the stack, depending on which objects need to be reconstructed and on whether downsampling affects the recognition of the features to be reconstructed. For instance, for the larger stack (somatosensory cortex of a P14 rat), it was not possible to compromise on resolution for the benefit of reconstruction efficiency, while for the smaller stack (hippocampus CA1 of a P60 rat), it was, because the resolution was far above what was needed to recognize the smallest objects to be reconstructed. Finally, applying an unsharp mask enhances the difference between the membranes and the background, which favors segmentation in software like ilastik, which uses gradients to pre-evaluate borders.
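As an example of scripting this preprocessing, the Jython sketch below downsamples a stack to 50% in x and y and applies an unsharp mask; the scaling factor, filter radius, mask weight, and output path are illustrative assumptions, not values prescribed by the protocol.

```python
# Hedged preprocessing sketch (Fiji/Jython): downsample in x/y and apply an
# unsharp mask to accentuate membranes before exporting for ilastik.
# Scaling factor, filter parameters, and output path are illustrative.
from ij import IJ

imp = IJ.getImage()
w, h, d = imp.getWidth(), imp.getHeight(), imp.getNSlices()
IJ.run(imp, "Scale...",
       "x=0.5 y=0.5 z=1.0 width=%d height=%d depth=%d "
       "interpolation=Bilinear average process create" % (w // 2, h // 2, d))
scaled = IJ.getImage()              # the newly created, downsampled stack
IJ.run(scaled, "Unsharp Mask...", "radius=2 mask=0.60 stack")
IJ.saveAs(scaled, "Tiff", "/path/to/preprocessed_stack.tif")  # hypothetical path
```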
Following the image processing, reconstruction can be performed either manually, using TrakEM2, or semi-automatically, using ilastik (Figure 2C). A dataset like the smaller one listed here (ii), which can be downsampled to fit into memory, can be fully segmented using ilastik (Figure 2B) to produce a dense reconstruction. In the case of the first dataset listed here (i), we managed to load and preprocess the entire dataset on a Linux workstation with 500 GB of RAM. The sparse segmentation of 16 full cell morphologies was obtained with a hybrid pipeline, by extracting a rough segmentation that was then manually proofread using TrakEM2.
The 3D analysis of features like surface areas, volumes, or the distribution of intracellular glycogen can be performed within a Blender environment (Figure 3) using custom code, such as NeuroMorph19 or glycogen analysis10.
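As an illustration of the kind of measurement these add-ons perform, the Blender Python sketch below computes the surface area and (for watertight meshes) the volume of the active object; it is a minimal example of the technique, not the NeuroMorph implementation itself.

```python
# Minimal Blender Python sketch: surface area and volume of the active mesh.
# Illustrates what tools like NeuroMorph compute; this is not their code.
import bpy
import bmesh

obj = bpy.context.active_object
bm = bmesh.new()
bm.from_mesh(obj.data)
bm.transform(obj.matrix_world)          # take the object's transform into account
area = sum(f.calc_area() for f in bm.faces)
volume = bm.calc_volume(signed=False)   # meaningful only for closed meshes
bm.free()
print("area = %.3f, volume = %.3f (scene units)" % (area, volume))
```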
For datasets that also contain glycogen granules, their distribution can be analyzed using GLAM, a C++ code that generates colormaps of influence areas directly on the mesh.
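GLAM itself is a C++ code (see the table of materials); purely as a conceptual illustration of the influence-map idea, the Python sketch below accumulates the contribution of each granule on every mesh vertex with an assumed inverse-square falloff.

```python
# Conceptual, illustrative sketch of a GLAM-style influence map: each glycogen
# granule is treated as a point emitter whose contribution is accumulated on
# every mesh vertex with an inverse-square falloff (an assumption; the actual
# GLAM model is more elaborate).
import numpy as np

def influence_map(vertices, granules, epsilon=1e-6):
    """vertices: (N, 3) mesh vertex positions; granules: (M, 3) granule centers.
    Returns one normalized influence value per vertex, usable as a colormap."""
    values = np.zeros(len(vertices))
    for g in granules:
        d2 = np.sum((vertices - g) ** 2, axis=1)  # squared distances to granule
        values += 1.0 / (d2 + epsilon)
    return values / values.max()
```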
Finally, such complex datasets can be visualized and analyzed using VR, which has proven useful for the analysis of datasets with a particularly occluded view (Figure 4). For instance, peaks from GLAM maps were readily identified visually on dendrites in the second dataset discussed here.
Figure 1: Image processing and preparation for image segmentation. (a) Main Fiji GUI. (b) Example of stacked images from dataset (i) discussed in the representative results. The panel on the right shows the properties dialog allowing the user to set the voxel size. (c) Example of a filter and resize operation applied to a single image. The panels on the right show magnifications of the center of the image. Please click here to view a larger version of this figure.
Figure 2: Segmentation and reconstruction using TrakEM2 and ilastik. (a) TrakEM2 GUI with objects manually segmented (in red). (b) The exported mask from panel a can be used as input (seed) for (c) semi-automated segmentation (carving). From ilastik, masks (red) can be further exported to TrakEM2 for manual proofreading. (d) Masks can then be exported as 3D triangular meshes to reveal the reconstructed structures. In this example, four neurons, astrocytes, microglia, and pericytes from dataset (i) (discussed in the representative results) were reconstructed using this process. Please click here to view a larger version of this figure.
Figure 3: 3D analysis of reconstructed morphologies using customized tools. (a) Isotropic image volume from FIB-SEM dataset (ii) (as discussed in the representative results). (b) Dense reconstruction from panel a. Grey = axons; green = astrocytic processes; blue = dendrites. (c) Micrographs showing examples of targets for quantification, such as synapses (dataset (i)) and astrocytic glycogen granules (dataset (ii)), magnified in the panels on the right. (d) Mask from panel c showing the distribution of glycogen granules around synapses. (e) Quantification of the glycogen distribution from the dataset in panel c, using the glycogen analysis toolbox for Blender. The error bars indicate standard errors. N = 4,145 glycogen granules. (f) A graphic illustration of the input and output visualization of GLAM. Please click here to view a larger version of this figure.
Figure 4: Analysis in VR. (a) A user wearing a VR headset while working on (b) the dense reconstruction from FIB-SEM dataset (ii) (as discussed in the representative results). (c) Immersive VR scene of a subset of neurites from panel b. The green laser points to a GLAM peak. (d) Example of an analysis of GLAM peak counts in VR. N = 3 mice per bar. Analysis of FIB-SEM data from a previous publication28. The error bars indicate standard errors; *p < 0.1, one-way ANOVA. Please click here to view a larger version of this figure.
The method presented here is a useful step-by-step guide for the segmentation and 3D reconstruction of multiscale EM datasets, whether they come from high-resolution imaging techniques, like FIB-SEM, or from other automated serial sectioning and imaging techniques. While FIB-SEM has the advantage of potentially reaching perfect isotropy in voxel size by cutting sections as thin as 5 nm using a focused ion beam, its FOV might be limited to 15-20 µm because of side artifacts, possibly due to the redeposition of the cut tissue if the FOV exceeds this value. Such artifacts can be avoided by using other techniques, such as SBEM, which uses a diamond knife to cut serial sections inside the microscope chamber. In this latter case, the z resolution can be around 20 nm at best (usually, 50 nm), but the FOV can be larger, although the pixel resolution must be compromised to cover a vast region of interest. One solution to overcome such limitations (magnification vs. FOV) is to divide the region of interest into tiles and acquire each of them at a higher resolution. We have shown here results from both an SBEM stack (dataset (i) in the representative results) and a FIB-SEM stack (dataset (ii) in the representative results).
As the generation of larger and larger datasets becomes increasingly common, efforts to create tools for pixel classification and automated image segmentation are multiplying; nevertheless, to date, no software has proven reliability comparable to that of human proofreading, which is therefore still necessary, no matter how time-consuming it is. In general, smaller datasets that can be downsampled, as in the case of dataset (ii), can be densely reconstructed by a single expert user in a week, including proofreading time.
The protocol presented here involves the use of three software programs in particular: Fiji (version 2.0.0-rc-65/1.65b), ilastik (version 1.3.2 rc2), and Blender (2.79), all of which are open-source, multi-platform programs that can be downloaded for free. Fiji is a release of ImageJ, powered by plugins for biological image analysis. It has a robust software architecture and is suggested because it is a common platform for life scientists and includes TrakEM2, one of the first and most widely used plugins for image segmentation. One issue experienced by many users lately is the transition from Java 6 to Java 8, which is creating compatibility issues; therefore, we suggest refraining from updating to Java 8, if possible, to allow Fiji to work properly.

ilastik is a powerful piece of software providing a number of frameworks for pixel classification, each one documented and explained on its website. The carving module used for the semi-automated segmentation of EM stacks is convenient, as it saves much time, reducing the manual workload from months to days for an experienced user: with a single click, an entire neurite can be segmented in seconds. The preprocessing step is very demanding from a hardware point of view, and very large datasets, like the 25 GB SBEM stack presented here, require particular strategies to fit into memory, considering that such large datasets are usually acquired precisely because neither the field of view nor the resolution can be compromised; therefore, downsampling might not be an appropriate solution in this case. The latest release of the software can run the preprocessing in a few hours on a powerful Linux workstation, after which segmentation takes minutes, although scrolling through the stack remains relatively slow. We still use this method for a first, rough segmentation and proofread it using TrakEM2.

Finally, Blender is a 3D modeling software with a powerful 3D rendering engine, which can be customized with Python scripts that can be embedded in the main GUI as add-ons, such as NeuroMorph and glycogen analysis. The flexibility of this software comes with the drawback that, in contrast to Fiji, for instance, it is not designed for the online visualization of large datasets; therefore, visualizing and navigating through large meshes (exceeding 1 GB) might be slow and inefficient. Because of this, it is always advisable to reduce mesh complexity with techniques that take care not to disrupt the original morphology of the structure of interest. The remesh function comes in handy and is an embedded feature of the NeuroMorph batch import tool. An issue with this function is that the octree depth value, which is related to the final resolution, should be adjusted according to the number of vertices of the original mesh. Small objects can be remeshed with a small octree depth (e.g., 4), but the same value might disrupt the morphology of larger objects, which need larger values (6 at least, up to 8 or even 9 for a very big mesh, such as a full cell). It is advisable to make this process iterative and to test different octree depths if the appropriate value for the object is unclear.
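One way to make that octree-depth test iterative is to script it; the Blender 2.79 Python sketch below duplicates the active object and remeshes each copy at several depths for visual comparison (a minimal sketch; the smooth remesh mode and the candidate depths are assumptions, not settings taken from NeuroMorph).

```python
# Hedged sketch (Blender 2.79): duplicate the active mesh and remesh each copy
# at several octree depths, so the results can be compared visually before
# committing to one depth. Smooth mode is an assumption here.
import bpy

def remesh_copy(obj, octree_depth):
    """Duplicate obj and remesh the copy, leaving the original untouched."""
    dup = obj.copy()
    dup.data = obj.data.copy()
    bpy.context.scene.objects.link(dup)           # 2.79-era linking API
    mod = dup.modifiers.new("Remesh", type='REMESH')
    mod.mode = 'SMOOTH'
    mod.octree_depth = octree_depth
    return dup

for depth in (4, 6, 8):                           # candidate resolutions to compare
    remesh_copy(bpy.context.active_object, depth)
```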
As mentioned previously, one aspect that should be taken into account is the computational power dedicated to reconstruction and analysis, in relation to the software being used. All the operations shown in the representative results of this manuscript were performed on a MacPro equipped with an AMD FirePro D500 graphics card, 64 GB of RAM, and an Intel Xeon E5 CPU with 8 cores. Fiji has a good software architecture for handling large datasets; therefore, a laptop with good hardware performance, such as a MacBook Pro with a 2.5 GHz Intel i7 CPU and 16 GB of RAM, is sufficient for it. ilastik is more demanding in terms of hardware resources, in particular during the preprocessing step. Although downsampling the image stack is a good trick to limit the hardware requirements and allows the user to process a stack on a laptop (typically if the stack is below 500 pixels in x, y, and z), we suggest using a high-end computer to run this software smoothly. We use a workstation equipped with an Intel Xeon Gold 6150 CPU with 16 cores and 500 GB of RAM.
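As a quick illustration of that rule of thumb, the snippet below computes the integer downsampling factor that brings every axis of a stack below roughly 500 pixels; the 500-pixel target is the heuristic mentioned above, not a hard limit of the software.

```python
# Illustrative helper: integer downsampling factor that brings every axis of
# a stack below a target size (here, the ~500 px rule of thumb for running
# ilastik carving on a laptop).
def downsample_factor(shape_xyz, target=500):
    return max(1, *(-(-s // target) for s in shape_xyz))  # per-axis ceiling division

print(downsample_factor((2048, 1536, 700)))  # -> 5, i.e., downsample 5x
```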
When provided with an accurate 3D reconstruction, scientists can discard the original micrographs and work directly on the 3D models to extract useful morphometric data, in order to compare cells of the same type as well as cells of different types, and can take advantage of VR for qualitative and quantitative assessments of the morphologies. In particular, VR has proven beneficial for analyses of dense or complex morphologies that present visual occlusion (i.e., the blockage of the view of an object of interest by a second one placed between the observer and the first object), which makes them difficult to represent and analyze in 3D. In the example presented, it took an experienced user about 4 nonconsecutive hours to observe the datasets and count the objects. The time spent on VR analysis might vary, as aspects like VR sickness (which can, to some extent, be related to car sickness) can have a negative impact on the user experience; in that case, the user might prefer other analysis tools and limit the time dedicated to VR.
Finally, all these steps can be applied to other microscopy and non-EM techniques that generate image stacks. EM generates images that are, in general, challenging to handle and segment compared with, for instance, fluorescence microscopy, in which one often deals with something comparable to a binary mask (signal versus a black background) that, in principle, can be readily rendered in 3D for further processing.
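To illustrate that last point, the sketch below thresholds a fluorescence stack and extracts a surface with marching cubes, with no manual segmentation involved; the input file name is hypothetical, and numpy and scikit-image are assumed to be available.

```python
# Hedged illustration: a fluorescence stack is often close to a binary mask,
# so a global threshold plus marching cubes yields a 3D surface directly.
# The input file name is hypothetical.
import numpy as np
from skimage import io, filters, measure

stack = io.imread("fluorescence_stack.tif")        # z-stack, signal on dark background
mask = stack > filters.threshold_otsu(stack)       # automatic global threshold
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.uint8), level=0.5)              # triangulated isosurface
print("%d vertices, %d faces" % (len(verts), len(faces)))
```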
The authors have nothing to disclose.
This work was supported by the King Abdullah University of Science and Technology (KAUST) Competitive Research Grants (CRG) grant "KAUST-BBP Alliance for Integrative Modelling of Brain Energy Metabolism" to P.J.M.
Name | Company | Version | Comments
Fiji | Open Source | 2.0.0-rc-65/1.65b | Open-source image processing editor; www.fiji.sc
ilastik | Open Source | 1.3.2 rc2 | Image segmentation tool; www.ilastik.org
Blender | Blender Foundation | 2.79 | Open-source 3D modeling software; www.blender.org
HTC Vive Headset | HTC | Vive / Vive Pro | Virtual reality (VR) head-mounted display; www.vive.com
NeuroMorph | Open Source | — | Collection of Blender add-ons for 3D analysis; neuromorph.epfl.ch
Glycogen Analysis | Open Source | — | Blender add-on for the analysis of glycogen; https://github.com/daniJb/glyco-analysis
GLAM | Open Source | — | C++ code for generating GLAM maps; https://github.com/magus74/GLAM