Fluorescence microscopy

Description

While a quickly retrained Cellpose network (trained only on xy slices, with no need to train on xz or yz slices) gives good results in 2D, the anisotropy of the SIM image prevents its use in 3D. Here the workflow consists of applying 2D Cellpose segmentation and then using the CellStitch library to optimize the 3D labelling of objects from the independent 2D labels.

The provided notebook is fully compatible with Google Colab and can be run by uploading your own images to your Google Drive. A model is provided, which can be replaced by your own (created with Cellpose 2.0).
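For orientation, here is a minimal sketch of the same workflow outside the notebook, assuming a (Z, Y, X) stack and a custom Cellpose 2.0 model; the file name and model path are placeholders, and the cellstitch entry point follows the project README, so treat the exact import and signature as assumptions:

```python
import numpy as np
from cellpose import models
from cellstitch.pipeline import full_stitch  # assumed entry point; check the CellStitch docs

stack = np.load("sim_stack.npy")  # hypothetical (Z, Y, X) SIM volume
model = models.CellposeModel(gpu=True, pretrained_model="my_cellpose2_model")

# 2D segmentation of every xy slice; no xz/yz training or 3D inference needed
xy_masks = [model.eval(sl, channels=[0, 0])[0] for sl in stack]

# stitch the independent 2D labels into consistent 3D labels
labels_3d = full_stitch(np.stack(xy_masks))  # assumed signature
```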

Description

CellStitch provides a set of tools for 3D segmentation from 2D segmentations: it reassembles the 2D labels obtained from cells in individual slices into unique 3D labels across slices. It is particularly robust to anisotropy and is an ideal companion to Cellpose 2D models or other 2D deep-learning-based models. One could also use it for cell tracking by overlap (using time as the third dimension), as sketched below.
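To make the tracking-by-overlap idea concrete, here is a small, self-contained sketch (plain NumPy, not part of CellStitch) that links labels between two consecutive frames by maximum IoU, treating time exactly like a stacking dimension:

```python
import numpy as np

def link_by_overlap(prev: np.ndarray, curr: np.ndarray, min_iou: float = 0.1) -> dict:
    """Match each label in `curr` to the best-overlapping label in `prev`.

    Returns {current_label: previous_label}, with 0 meaning "new cell".
    """
    links = {}
    for lbl in np.unique(curr):
        if lbl == 0:
            continue  # skip background
        mask = curr == lbl
        best, best_iou = 0, min_iou
        for cand in np.unique(prev[mask]):
            if cand == 0:
                continue
            inter = np.logical_and(mask, prev == cand).sum()
            union = np.logical_or(mask, prev == cand).sum()
            if inter / union > best_iou:
                best, best_iou = int(cand), inter / union
        links[int(lbl)] = best
    return links
```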

cellstitch
Description

btrack is a Python library for multi-object tracking, used to reconstruct trajectories in crowded fields. btrack implements a residual U-Net model coupled with a classification CNN to allow accurate instance segmentation of cell nuclei. To track cells over time and through cell divisions, btrack uses a Bayesian cell tracking methodology that takes input features from the images to enable the retrieval of multi-generational lineage information from a corpus of thousands of hours of live-cell imaging data.
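A minimal usage sketch following the pattern in btrack's documentation; the configuration file and the (T, Y, X) label image are placeholders, and method names can vary slightly between versions:

```python
import numpy as np
import btrack

segmentation = np.load("labels_tyx.npy")  # hypothetical (T, Y, X) label image

# convert the label image into trackable objects with chosen features
objects = btrack.utils.segmentation_to_objects(segmentation, properties=("area",))

with btrack.BayesianTracker() as tracker:
    tracker.configure("cell_config.json")             # placeholder motion/hypothesis model
    tracker.append(objects)
    tracker.volume = ((0, 1200), (0, 1600), (-1e5, 1e5))  # x, y, z extents of the field
    tracker.track(step_size=100)                      # Bayesian frame-to-frame linking
    tracker.optimize()                                # global step, resolves divisions
    tracks = tracker.tracks                           # multi-generational lineages
```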

Description

The method proposed in this paper combines multi-task learning and unsupervised domain adaptation for segmenting amoeboid cells in microscopy images. This end-to-end framework provides a consolidated mechanism to harness multi-task learning to isolate and segment clustered cells from low-contrast brightfield images, and it simultaneously leverages deep domain adaptation to segment fluorescent cells without explicit pixel-level re-annotation of the data.

The entry point to the codebase is the main.py file. The user has the option to:

  • Train the network on their own dataset
  • Load a pre-trained model and use it for inference on their own data

Note: the provided pretrained model was trained on 256x256 images, so results at different resolutions could require fine-tuning (see the sketch after this note). The model is trained (supervised) on brightfield data and domain-adapted to fluorescence data. The results are saved as 'inference.png'.
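Because the pretrained weights assume 256x256 inputs, images at other resolutions may need resizing before inference. A generic sketch of that preprocessing step (plain scikit-image, not part of the DAMAN codebase; the function name is mine):

```python
import numpy as np
from skimage.transform import resize

def to_model_resolution(img: np.ndarray, shape=(256, 256)) -> np.ndarray:
    """Resize an image to the 256x256 resolution the pretrained model expects.

    preserve_range keeps the original intensity scale; normalize afterwards
    to whatever range the network was trained on.
    """
    out = resize(img, shape, order=1, preserve_range=True, anti_aliasing=True)
    return out.astype(np.float32)

# Predicted masks can be mapped back to the original size with order=0
# (nearest neighbour) to avoid interpolating label values.
```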
daman
Description

MiNA is a simplified workflow for analyzing mitochondrial morphology using fluorescence images or 3D stacks in Fiji. The workflow makes use of ImageJ Ops, 3D Viewer, Skeletonize (2D/3D), Analyze Skeleton, and Ridge Detection. In short, the tool estimates the mitochondrial footprint (or volume) from a binarized copy of the image, as well as the lengths of mitochondrial structures using a topological skeleton. The values are reported in a table, and overlays (or a 3D rendering) are generated to assess the accuracy of the analysis.

Example skeleton image (from https://imagej.net/plugins/mina#processing-pipeline-and-usage)
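MiNA itself runs in Fiji, but the core measurements are easy to sketch in Python with scikit-image: binarize, take the topological skeleton, and convert pixel counts to physical units. A rough 2D illustration of the idea (the function name is mine, not MiNA's, and skeleton pixel count is only a proxy for the per-branch lengths that Analyze Skeleton reports):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def mina_style_measures(img: np.ndarray, pixel_size_um: float = 1.0):
    """Rough 2D analogue of MiNA's measurements.

    Footprint: area of the binarized mitochondrial signal.
    Skeleton length: pixel count of the topological skeleton.
    """
    binary = img > threshold_otsu(img)   # binarized copy of the image
    skeleton = skeletonize(binary)       # topological skeleton
    footprint_um2 = binary.sum() * pixel_size_um ** 2
    skeleton_len_um = skeleton.sum() * pixel_size_um
    return footprint_um2, skeleton_len_um
```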