Cell segmentation

Description

Fiji plugin to segment oocyte and zona pellucida contours from transmitted light images and extract hundreds of morphological features to numerically describe the oocyte. Segmentation is based on neural networks (U-Net) trained on both mouse and human oocytes (in prophase and meiosis I) acquired under different conditions. The trained networks are freely available on the GitHub repository and can be retrained if necessary. Oocytor also has options to extract hundreds of morphological/intensity features to characterize the oocyte (e.g. perimeter, texture...). These features can also be used in machine learning pipelines for automatic phenotyping.

Description

While a quickly retrained Cellpose network (trained only on xy slices, with no need to train on xz or yz slices) gives good results in 2D, the anisotropy of the SIM image prevents its use in 3D. Here the workflow consists of applying 2D Cellpose segmentation and then using the CellStitch library to optimize the 3D labelling of objects from the independent 2D labels.

The provided notebook is fully compatible with Google Colab and can be run by uploading your own images to your Google Drive. A model is provided and can be replaced by your own (created with Cellpose 2.0).
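
Below is a minimal Python sketch of this 2D-then-stitch idea. The Cellpose calls follow the Cellpose 2.x API; the CellStitch import and the exact arguments of the stitching call are assumptions based on the CellStitch repository and should be checked against its README.

import numpy as np
from cellpose import models
# CellStitch import and signature assumed from the project README; verify locally.
from cellstitch.pipeline import full_stitch

def segment_3d(volume, model_path):
    # Run 2D Cellpose on every xy slice of a (z, y, x) volume.
    model = models.CellposeModel(pretrained_model=model_path)
    xy_masks = []
    for z in range(volume.shape[0]):
        masks, flows, styles = model.eval(volume[z], channels=[0, 0], diameter=None)
        xy_masks.append(masks)
    xy_masks = np.stack(xy_masks)
    # Stitch the independent per-slice labels into consistent 3D instance labels.
    # (Depending on the CellStitch version, additional yz/xz masks may be expected.)
    return full_stitch(xy_masks)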

has function
example of usage
Description

CellStitch proposes a set of tools for 3D segmentation from 2D segmentation: it reassembles 2D labels obtained from cells in individual slices into unique 3D labels across slices. It is particularly robust to anisotropy, and is the ideal companion to Cellpose 2D models or other 2D deep-learning-based models. One could also think about using it for cell tracking by overlap (using time as a third dimension).

cellstitch
Description

SuperDSM is a globally optimal segmentation method based on superadditivity and deformable shape models for cell nuclei in fluorescence microscopy images and beyond.

Description

btrack is a Python library for multi-object tracking, used to reconstruct trajectories in crowded fields. btrack implements a residual U-Net model coupled with a classification CNN to allow accurate instance segmentation of cell nuclei. To track the cells over time and through cell divisions, btrack uses a Bayesian cell tracking methodology that takes input features from the images to enable the retrieval of multi-generational lineage information from a corpus of thousands of hours of live-cell imaging data.
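
As a rough sketch of the tracking step in Python (the configuration file name, volume bounds and input array are placeholders; the API calls follow the btrack documentation and should be checked against the installed version):

import numpy as np
import btrack

segmentation = np.load('labels_t_y_x.npy')    # placeholder (T, Y, X) label stack
objects = btrack.utils.segmentation_to_objects(segmentation, properties=('area',))

with btrack.BayesianTracker() as tracker:
    tracker.configure('cell_config.json')               # motion/hypothesis model, placeholder name
    tracker.append(objects)
    tracker.volume = ((0, 1200), (0, 1600), (-1e5, 1e5))  # imaging volume in pixels, placeholder
    tracker.track(step_size=100)                         # Bayesian updates in batches of frames
    tracker.optimize()                                   # global optimization, including divisions
    tracks = tracker.tracks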

Description

The method proposed in this paper is a robust combination of multi-task learning and unsupervised domain adaptation for segmenting amoeboid cells in microscopy. This end-to-end framework provides a consolidated mechanism to harness the potential of multi-task learning to isolate and segment clustered cells from low-contrast brightfield images, and it simultaneously leverages deep domain adaptation to segment fluorescent cells without explicit pixel-level re-annotation of the data.

The entry-point to the codebase is the main.py file. The user has the option to

  • Train the network on their own dataset
  • Load a pre-trained model and use that for inference on their own data
Note: The provided pretrained model was trained on 256×256 images; results at other resolutions could require fine-tuning. The model is trained (supervised) on brightfield data and domain-adapted to fluorescence data. The results are saved as 'inference.png'.
has function
daman
Description

This workflow describes a deep-learning based pipeline for reliable single-organoid segmentation and tracking in 2D+t high-resolution brightfield microscopy of mouse mammary epithelial organoids. The pipeline involves a four-layer U-Net to infer semantic segmentation predictions, adaptive morphological filtering to establish candidate organoid instances, and a shape-similarity-constrained, instance-segmentation-correcting tracking step to associate the corresponding organoid instances in time.

It is particularly focused on automatically detecting an organoid located approximately in the center of the first frame and tracking all its subsequent instances in the remaining frames, with an emphasis on accurate organoid boundary delineation. Furthermore, the segmentation network was trained using plausible pix2pixHD-generated bioimage data. The synthetic image simulator code and data are also available here.

Adapted from https://cbia.fi.muni.cz/research/spatiotemporal/organoids.html
Description

OrganoSeg is open-source software that integrates segmentation, filtering, and analysis for breast-cancer spheroid and colon and colorectal-cancer organoid morphologies.

Figure 2 in OrganoSeg Scientific Reports publication
Description

OrganoID is an image analysis platform that automatically recognizes, labels, and tracks single organoids, pixel-by-pixel, in brightfield and phase-contrast microscopy experiments. The platform was trained on images of pancreatic cancer organoids and validated on separate images of pancreatic, lung, colon, and adenoid cystic carcinoma organoids.


Introduction to 3D Analysis with 3D ImageJ Suite

The 3D ImageJ Suite is a set of algorithms and tools (mostly ImageJ plugins) developed since 2010, originally for 3D analysis of fluorescence microscopy images. Since then, the plugins have been widely used and cited more than 200 times in biological journals. In this presentation we will give a general introduction to the tools available in the 3D ImageJ Suite: filtering, 3D segmentation for spots and nuclei, and 3D analysis. A graphical interface to manage 3D objects, the 3DManager, was also developed and will be presented.

GPU Accelerated Image Processing with CLIJ2

The NEUBIAS Academy at home session about CLIJ2 gives an introduction to accelerated image processing using Graphics Processing Units (GPUs) in ImageJ/Fiji. Core concepts are explained, as well as usage of the tools with the ImageJ Macro recorder and auto-completion in Fiji's script editor. Furthermore, an outlook is provided on how the CLIJ project will develop in the coming years to provide long-term maintained access to GPU acceleration in the bio-image analysis context.

Image Analysis of Biological Data using CellProfiler

After the session you will be able to build your own CellProfiler pipeline, including:

  • Image data import
  • Object segmentation (e.g. detect nuclei in an image) using the modules "IdentifyPrimaryObjects" and "IdentifySecondaryObjects"
  • Object feature measurements (e.g. measure size, shape and intensity of cells)
  • Measurements export to a spreadsheet
  • Creating and saving quality control images
Description

This workflow applies a StarDist pre-trained model (versatile_fluo or versatile_HE) depending on the input image, i.e. it uses both models for a dataset that includes both fluorescence images (grayscale, or RGB where all channels are equal) and H&E-stained images (RGB where channels are not equal).

This version uses the TensorFlow CPU version (see Dockerfile) to ensure compatibility with a larger number of computers. A GPU version should be possible by adapting the Dockerfile with tensorflow-gpu and/or nvidia-docker images.
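
A minimal Python sketch of this per-image model choice (the channel-equality test is illustrative; the pretrained model names are the ones registered in StarDist):

import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D

model_fluo = StarDist2D.from_pretrained('2D_versatile_fluo')
model_he = StarDist2D.from_pretrained('2D_versatile_he')

def segment(img):
    # Fluorescence: grayscale, or RGB with (near-)identical channels; otherwise treat as H&E.
    if img.ndim == 2 or (np.allclose(img[..., 0], img[..., 1]) and np.allclose(img[..., 1], img[..., 2])):
        gray = img if img.ndim == 2 else img[..., 0]
        labels, _ = model_fluo.predict_instances(normalize(gray))
    else:
        labels, _ = model_he.predict_instances(normalize(img))
    return labels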

has topic
has function
Description

This workflow processes a group of images containing cells with discernible nuclei, segments the nuclei, and outputs a binary mask showing where nuclei were detected. It performs 2D nuclei segmentation using the pre-trained nuclei segmentation models of Cellpose. It was developed as a test workflow for the Neubias BIAFLOWS benchmarking tool.
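
A minimal sketch of the core step using the Cellpose 2.x Python API (file names are placeholders):

from cellpose import models, io

img = io.imread('nuclei.tif')                  # placeholder input image
model = models.Cellpose(model_type='nuclei')   # pre-trained nuclei model
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])
binary = (masks > 0).astype('uint8') * 255     # collapse instance labels into a binary mask
io.imsave('nuclei_mask.tif', binary)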

has topic
has function
Description

The Incucyte® Base Analysis Software provides a guided interface and purpose-built tools that support the process of acquiring, viewing, analyzing, and sharing images of living cells.

Description

The authors present an ImageJ-based, semi-automated phagocytosis workflow to rapidly quantitate three distinct stages during the early engulfment of opsonized beads.

Description

SMLM is a mature but still growing field, which still lacks an efficient and user-friendly analysis and visualization software platform suited to both users and developers. Here we introduce PoCA, a powerful open-source software platform dedicated to the visualization and analysis of 2D and 3D point-cloud data. PoCA allows manipulating large datasets, and integrates a plugin architecture, a native batch analysis engine and a Python code interpreter, facilitating both the analysis of data and the integration of new methods.

Visualization, segmentation and exploration of 3D SMLM data
Description

The empanada-napari plugin is built to democratize deep learning image segmentation for researchers in electron microscopy (EM). It ships with MitoNet, a generalist model for the instance segmentation of mitochondria. There are also tools to quickly build and annotate training datasets, train generic panoptic segmentation models, finetune existing models, and scalably run inference on 2D or 3D data. To make segmentation model training faster and more robust, CEM pre-trained weights are used by default. These weights were trained using an unsupervised learning algorithm on over 1.5 million EM images from hundreds of unique EM datasets, making them remarkably general.

Empanada-napari
Description

ASTEC stands for Adaptive Segmentation and Tracking of Embryonic Cells. It proposes a full workflow for time-lapse light-sheet imaging analysis, including drift/motion compensation before the segmentation itself, and the capacity to correct the segmentation afterwards. It was used in particular to process 3D+t movies acquired with the MuViSPIM light-sheet microscope.

Astec embryon
Description

ClearMap is a toolbox for the analysis and registration of volumetric data from cleared tissues.

It was initially developed to map brain activity at cellular resolution in whole mouse brains using immediate early gene expression. It has since been extended as a tool for the quantification of whole mouse brain vasculature networks at capillary resolution.

It is composed of several specialized modules or scripts: TubeMap, CellMap, and WobblyStitcher.

ClearMap has been designed to analyze terabyte-scale 3D datasets obtained via light-sheet microscopy from iDISCO+ cleared tissue samples immunolabeled for proteins. The ClearMap tools may also be useful for data obtained with other types of microscopes, other types of markers, clearing techniques, as well as other species, organs, or samples.

ClearMap screenshot
Description

Machine Learning made easy

APEER ML provides an easy way to train your own machine learning models and segment your microscopy images. No expertise or coding required.

APEER

Description

The tool measures the area of the invading spheroid in a 3D cell invasion assay. It can also count and measure the area of the nuclei within the spheroid.

Description

Histology Topography Cytometry Analysis Toolbox (histoCAT) is a package to interactively visualize and analyse multiplexed image cytometry data. It can also export data in .fcs format for further analysis using specialized cytometry software such as FlowJo.

It can be run as a compiled standalone or from MATLAB.

Description

BioImage.IO -- a collaborative effort to bring AI models to the bioimaging community. 

  • Integrated with Fiji, ilastik, ImJoy and more
  • Try model instantly with BioEngine
  • Contribute your models via Github

This is a database of pretrained deep learning models.

Description

LOBSTER (Little Objects Segmentation and Tracking Environment) is an environment designed to help scientists design and customize image analysis workflows to accurately characterize biological objects from a broad range of fluorescence microscopy images, including large images (terabytes of data) that exceed workstation main memory.

  • 75 workflows available
  • no programming required, GUI-based
  • MATLAB-based
Description

This is the ImageJ/Fiji plugin for StarDist, a cell/nuclei detection method for microscopy images with star-convex shape priors (typically for DAPI-like staining of nuclei). The plugin can be used to apply already-trained models to new images.

Stardist
Description

Summary

Deep learning-based segmentation of cells in both fluorescence and bright-field images ("a generalist algorithm for cellular segmentation"). The tool can be used online, locally, or via notebooks (e.g. ZeroCostDL4Mic).

How to use it

Cellpose can be used online via ready-to-use Jupyter notebooks with very good documentation. These notebooks are listed here.

Local Installation

The general local installation procedure can be found here.

Installing on an Apple Silicon Mac (M1 processor) needs some tricks, and as of October 2021, the following sequence of commands works. numba should be conda-installed before pip-installing cellpose.


conda create --name cellpose python=3.8
conda activate cellpose
conda install numba
git clone https://github.com/MouseLand/cellpose.git
cd cellpose
pip install -e .

has topic
has function
Description

DeepImageJ is a user-friendly plugin that enables the use of a variety of pre-trained deep learning models in ImageJ and Fiji. The plugin bridges the gap between deep learning and standard life-science applications. DeepImageJ runs image-to-image operations on a standard CPU-based computer and does not require any deep learning expertise.

Training developers construct and upload trained models, making them available to users.

Models are available in a repository here.

It is macro-recordable. It is advised to use DeepImageJ on a computer with a GPU (a CPU will likely be about 20× slower).

has topic
deepImageJ
Description

VAST (Volume Annotation and Segmentation Tool) is a utility application for manual annotation of large EM stacks.

It is a general labeling tool, used for a large variety of 3D data sets (electron-microscopic, multi-channel light-microscopic, and micro-CT data sets, as well as videos) and for annotating arbitrary structures, regions, and locations, depending on the user's needs.

Description

The macro will segment nuclei and separate clustered nuclei in a 3D image using a 2D Gaussian blur, followed by thresholding, 2D hole filling, and a 2D watershed. As a result, an index-mask image is written for each input image.
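
The same slice-wise idea can be sketched in Python with scikit-image (a hedged re-statement, not the macro itself; the blur sigma and the minimum peak distance are placeholder parameters):

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_slice(plane, sigma=2.0):
    blurred = gaussian(plane, sigma=sigma)                  # 2D Gaussian blur
    mask = blurred > threshold_otsu(blurred)                # thresholding
    mask = ndi.binary_fill_holes(mask)                      # 2D hole filling
    dist = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(dist, labels=mask, min_distance=5)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=mask)             # 2D watershed splits clustered nuclei

volume = np.load('stack_z_y_x.npy')                         # placeholder 3D image
index_mask = np.stack([segment_slice(p) for p in volume])   # per-slice index-mask image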

Description

U-Net segmentation as presented in the Reference Publication. The model predicts three classes: background, edge and foreground. The model was trained with the Kaggle Data Science Bowl (DSB) 2018 training set.
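
To turn the three-class output into instance labels, one common post-processing step (a generic sketch, not code from the referenced publication; the class order is an assumption) is to seed connected foreground components and grow them over the edge class:

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def instances_from_three_class(prob):
    # prob: (H, W, 3) softmax output, assumed class order: background, edge, foreground
    classes = np.argmax(prob, axis=-1)
    seeds, _ = ndi.label(classes == 2)            # foreground seeds, edge pixels excluded
    mask = classes != 0                           # full nuclei = foreground + edge
    return watershed(np.zeros(classes.shape), markers=seeds, mask=mask)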

has topic
has function
Description

Nuclei Segmentation using Deep Learning for individual cell analysis (DeepCell).

has topic
has function
Description

The OligoMacro Toolset is an ImageJ macro toolset aimed at isolating oligodendrocytes from wide-field images, tracking the isolated cells, characterizing process morphology over time, outputting numerical data, and plotting them. It takes advantage of ImageJ built-in functions to process images and extract data, and relies on the R software to generate graphs.

Description

 

DeepCell is a neural network library for single-cell analysis, written in Python and built using TensorFlow and Keras.

DeepCell aids in biological analysis by automatically segmenting and classifying cells in optical microscopy images. This framework consumes raw images and provides uniquely annotated files as an output.

The Jupyter notebooks in the Read the Docs documentation are broken, but the ones from the GitHub repository are functional (see usage example).
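
A minimal sketch of running the pre-trained nuclear segmentation application (the application name and the (batch, y, x, channel) input layout follow the deepcell-tf documentation and should be verified against the installed version; the file name and pixel size are placeholders):

import numpy as np
from skimage.io import imread
from deepcell.applications import NuclearSegmentation

img = imread('nuclei.tif')                     # placeholder 2D image
x = img[np.newaxis, ..., np.newaxis]           # (batch, y, x, channel)
app = NuclearSegmentation()
labels = app.predict(x, image_mpp=0.5)         # instance label image; microns-per-pixel is a placeholder
print(labels.shape)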

deepcell
Description

Code to segment yeast cells using a pre-trained Mask R-CNN model. We've tested this with yeast cells imaged in fluorescence and brightfield images, and obtained good results with both modalities. This code implements a user-friendly script that hides all of the messy implementation details and parameters. Simply put all of your images to be segmented into the same directory, and then plug and go.

has function
Description

There are many methods in bio-imaging that can be parametrized. This gives more flexibility to the user, as long as tools provide easy support for tuning parameters. On the other hand, the datasets of interest constantly grow, which creates the need to process them in bulk. Again, this requires proper tool support if biologists are to be able to organize such bulk processing in an ad-hoc manner without the help of a programmer. Finally, new image analysis algorithms are constantly being created and updated, yet a lot of work is necessary to extend a prototype implementation into a product for users. Therefore, there is a growing need for software with a graphical user interface (GUI) that makes the process of image analysis easier to perform and at the same time allows for high-throughput analysis of raw data using batch processing and novel algorithms. The main programs in this area are written in Java, but Python is growing in bioinformatics, and it would be useful to allow algorithms written in this language to be wrapped easily.

Here we present PartSeg, a comprehensive software package implementing several image processing algorithms that can be used for the analysis of microscopic 3D images. Its user interface has been crafted to speed up the workflow of processing datasets in bulk and to allow easy modification of algorithm parameters. In PartSeg we also include the first public implementation of the Multi-scale Opening algorithm described in [1]. PartSeg allows for segmentation in 3D based on finding connected components. The segmentation results can be corrected manually to adjust for high noise in the data. Then, it is possible to calculate standard statistics like volume, mass, diameter, and their user-defined combinations for the results of the segmentation. Finally, it is possible to superimpose segmented structures using a weighted PCA method.

Conclusions: PartSeg is a comprehensive and flexible software package dedicated to helping biologists with the processing, segmentation, visualization and analysis of large microscopic 3D image data. PartSeg provides well-established algorithms in an easy-to-use, intuitive, user-friendly toolbox without sacrificing their power and flexibility.

 

Examples include Chromosome territory analysis.

PartSeg
Description

AssayScope is an intuitive application dedicated to large scale image processing and data analysis. It is meant for histology, cell culture (2D, 3D, 2D+t) and phenotypic analysis. 

Description

"The Microscope Image Analysis Toolbox MiToBo is an extension for the widely used image processing application ImageJ and its new release ImageJ 2.0.
MiToBo ships with a set of operators ready to be used as plugins in ImageJ. They focus on the analysis of biomedical images acquired by various types of microscopes."

Description

Nessys: Nuclear Envelope Segmentation System

 

Nessys is a software written in Java for the automated identification of cell nuclei in biological images (3D + time). It is designed to perform well in complex samples, i.e. when cells are particularly crowded and heterogeneous such as in embryos or in 3D cell cultures. Nessys is also fast and will work on large images which do not fit in memory.


Nessys also offers an interactive user interface for the curation and validation of segmentation results. Think of this as a 3D painter / editor. This editor can also be used to generate manually segmented images to use as ground truth for testing the accuracy of the automated segmentation method.


Finally, Nessys contains a utility for assessing the accuracy of the automated segmentation method. It works by comparing the result of the automated method to a manually generated ground truth. This utility provides two types of output: a table with a number of metrics about the accuracy, and an image representing a map of the mismatch between the result of the automated method and the ground truth.

has function
Description

The interactive Watershed Fiji plugin provides an interactive way to explore local maxima and threshold values while a resulting label map is updated on the fly.

After the user has found a reliable parameter configuration, it is possible to apply the same parameters to other images in a headless mode, for example via ImageJ macro scripting.

Description

ZEN and APEER – Open Ecosystem for integrated Machine-Learning Workflows

Open ecosystem for integrated machine-learning workflows to train and use machine-learning models for image processing and image analysis inside the ZEN software or on the APEER cloud-based platform

Highlights ZEN

  • Simple User Interface for Labeling and Training
  • Engineered Feature Sets and Deep Feature Extraction + Random Forest for Semantic Segmentation
  • Object Classification workflows
  • Probability Thresholds and Conditional Random Fields
  • Import your own trained models as *.czann files (see: czmodel · PyPI)
  • Import "AIModel Containes" from arivis AI for advanced Instance Segmentation
  • Integration into ZEN Measurement Framework
  • Support for Multi-dimensional Datasets and Tile Images
  • open and standardized format to store trained models
ZEN Intellesis Segmentation

ZEN Intellesis Segmentation - Training UI

ZEN Intellesis - Pretrained Networks

ZEN Intellesis Segmentation - Use Deep Neural Networks

Intellesis Object Classification

ZEN Object Classification

Highlights arivis AI

  • Web-based tool to label datasets to train Deep Neural Networks
  • Fully automated hyper-parameter tuning
  • Export of trained models for semantic segmentation and AIModelContainer for Instance Segmentation
Annotation Tool

APEER Annotation Tool

Description

This is one example workflow from the CellProfiler (CP) Examples. CP is commonly used to count cells or other objects as well as percent-positives, by measuring the per-cell staining intensity. This pipeline shows how to do both of these tasks, and demonstrates how various modules may be used to accomplish the same result.

In a few words, it uses the IdentifyPrimaryObjects module of CellProfiler to detect nuclei from one channel (e.g. DAPI), then the same module again on another channel to detect another probe (e.g. some particular histone).

The objects (nuclei) are then related to the second objects (histone) to create a parent-child relationship, where nuclei can have histone objects as children. Nuclei are then filtered according to whether they have histone children (positive) or not (negative). If needed, nuclei can be expanded in order to include touching objects rather than only objects strictly inside them.

The percentage of positive nuclei vs total number of nuclei can then be computed using the CalculateMath Module.
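
Outside CellProfiler, the same parent-child counting can be re-stated in a few lines of Python (a generic sketch operating on two label images, not CellProfiler code; the array names are placeholders):

import numpy as np

def percent_positive(nuclei_labels, histone_labels):
    # A nucleus is "positive" if at least one histone object overlaps it.
    ids = np.unique(nuclei_labels)
    ids = ids[ids > 0]
    positive = sum(1 for i in ids if np.any(histone_labels[nuclei_labels == i] > 0))
    return 100.0 * positive / max(len(ids), 1)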

Positivepercentcell
Description

 

The phase contrast microscopy segmentation toolbox (PHANTAST) is a collection of open-source algorithms and tools for the processing of phase contrast microscopy (PCM) images. It was developed at University College London's department of Biochemical Engineering and CoMPLEX.

has function