Image segmentation

Image segmentation is one of the few concepts that sits on the border between image (pre)processing (Image -> Image) and image analysis (Image -> Data).

Description

AnyLabeling is an effortless, AI-assisted data labeling tool with support from Segment Anything and YOLO models.

AnyLabeling = LabelImg + Labelme + Improved UI + Auto-labeling

Installation

Standalone (executable)

Links to the executable files are provided in the Assets section of the GitHub releases page.

Install from source

git clone https://github.com/vietanhdev/anylabeling
cd anylabeling
pip install .

Install from PyPI

pip install anylabeling

With GPU support:

pip install anylabeling-gpu
Description

PlantSeg is a tool for cell instance-aware segmentation in densely packed 3D volumetric images. The pipeline uses a two-stage segmentation strategy (neural network + segmentation) and is tuned for plant cell tissue acquired with confocal and light-sheet microscopy. Pre-trained models are provided.

Description

NODeJ is an ImageJ plugin for 3D segmentation of nuclear objects.

"The three-dimensional nuclear arrangement of chromatin impacts many cellular processes operating at the DNA level in animal and plant systems. Chromatin organization is a dynamic process that can be affected by biotic and abiotic stresses. Three-dimensional imaging technology allows to follow these dynamic changes, but only a few semi-automated processing methods currently exist for quantitative analysis of the 3D chromatin organization. We present an automated method, Nuclear Object DetectionJ (NODeJ), developed as an imageJ plugin. This program segments and analyzes high intensity domains in nuclei from 3D images. NODeJ performs a Laplacian convolution on the mask of a nucleus to enhance the contrast of intra-nuclear objects and allow their detection. We reanalyzed public datasets and determined that NODeJ is able to accurately identify heterochromatin domains from a diverse set of Arabidopsis thaliana nuclei stained with DAPI or Hoechst. NODeJ is also able to detect signals in nuclei from DNA FISH experiments, allowing for the analysis of specific targets of interest. NODeJ allows for efficient automated analysis of subnuclear structures by avoiding the semi-automated steps, resulting in reduced processing time and analytical bias. NODeJ is written in Java and provided as an ImageJ plugin with a command line option to perform more high-throughput analyses. NODeJ can be downloaded from https://gitlab.com/axpoulet/image2danalysis/-/releases with source code, documentation and further information avaliable at https://gitlab.com/axpoulet/image2danalysis . The images used in this study are publicly available at https://www.brookes.ac.uk/indepth/images/ and https://doi-org.osaka-u.idm.oclc.org/10.15454/1HSOIE ."

A DAPI-stained nucleus at left, followed by a white segmentation mask, a false-color heatmap, and segmented heterochromatin blocks.
Description

This workflow is the integration of YOLO (You Only Look Once) machine learning models, image pre-processing scripts and labeling tools within the Galaxy platform. Galaxy is an open, web-based platform used primarily for data analysis in computational biology, but it also has applications in image processing and other fields. 

How the Galaxy YOLO image segmentation tool works

The combination of Galaxy and YOLO allows researchers to perform object detection and image analysis without requiring extensive programming knowledge. Here's how it generally works: 

  • Web-based interface: Galaxy provides a graphical, user-friendly interface to access powerful analysis tools. Users can simply upload their image data, select the YOLO tool, and run the analysis.
  • YOLO model execution: The Galaxy tool executes a pre-trained YOLO model, often from the Ultralytics framework, on the input images. These models can perform tasks like object detection (drawing bounding boxes) or instance segmentation (creating pixel-level masks).
  • Training and prediction: Some tools allow for both model training and prediction. Users can train a custom YOLO model on their own labeled datasets to detect specific objects of interest. For example, bioimage analysis may involve detecting cells or other structures.
  • Other integrations: Other machine-learning tools can be integrated with YOLO in Galaxy. For instance, the AnyLabeling tool supports YOLO for semi-automated and active learning-based data annotation. 
Description

MetaXpress, or by its full name "MetaXpress® High-Content Image Acquisition and Analysis Software", is a commercially available, closed-source software for high-content analysis from Molecular Devices, LLC. The program is essentially a visually guided workflow-programming environment. A programming module called the Custom Module Editor (CME) lets one set up integrated workflows for bioimage analysis with visual feedback. It is designed for high throughput in connection with an included database that stores the experimental data.

It has several toolboxes for semiautomated processing of various tasks:

3D Analysis (requires Custom Module Editor), Curve fitting, Transmitted light segmentation (requires Custom Module Editor), Angiogenesis tube formation, Cell cycle, Cell health, Cell scoring, Count nuclei, Granularity, Live/dead, Mitotic index, Micronuclei, Monopole detection, Multi-Wavelength cell scoring, Multi-wavelength translocation, Neurite outgrowth, Transfluor® Assay, Translocation (includes Translocation-Enhanced), Transfluor HT Assay, Nuclear translocation HAT, Cell proliferation HT

After the workflow is set up, it can be applied automatically to a stack of stored images. The derived data from those analyses is stored in the MetaXpress database and can be exported from there.

The use of each toolbox requires a separate license.

Description
# Install the ultralytics package from PyPI
pip install ultralytics

You can also install ultralytics directly from the Ultralytics GitHub repository. This can be useful if you want the latest development version. Ensure you have the Git command-line tool installed, and then run:

# Install the ultralytics package from GitHub
pip install git+https://github.com/ultralytics/ultralytics.git@main
Description

Tools for segmentation and tracking in microscopy built on top of Segment Anything. Segment and track objects in microscopy images interactively with a few clicks.

uSAM logo
Description

Ultralytics creates cutting-edge, state-of-the-art (SOTA) YOLO models built on years of foundational research in computer vision and AI. Constantly updated for performance and flexibility, our models are fast, accurate, and easy to use. They excel at object detection, tracking, instance segmentation, image classification, and pose estimation tasks.
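As a rough illustration of how these models are typically called from Python, the sketch below runs a pretrained segmentation model on a single image; the model file "yolov8n-seg.pt" and the image name "cells.png" are placeholders, not files provided here.

# Minimal sketch: run a pretrained Ultralytics YOLO segmentation model on one image
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")       # pretrained nano segmentation model (downloaded on first use)
results = model("cells.png")         # run inference on a single image
for r in results:
    print(r.boxes.xyxy)              # bounding boxes as (x1, y1, x2, y2)
    if r.masks is not None:
        print(r.masks.data.shape)    # per-instance segmentation masks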

SNT

Description

SNT is ImageJ’s framework for tracing, visualization, quantitative analyses and modeling of neuronal morphology. For tracing, SNT supports modern multidimensional microscopy data, semi-automated and automated routines, and options for editing traces. For data analysis, SNT features advanced visualization tools, access to all major morphology databases, and support for whole-brain circuitry data.

Schematic Overview of SNT components and SNT functionality
Description

Big-FISH is a Python package for the analysis of smFISH images (2D/3D). It includes various methods to analyze microscopy images, such as spot detection and segmentation of cells and nuclei.
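A minimal sketch of the spot-detection step with Big-FISH is shown below; the file name, voxel size and spot radius are illustrative and must be adapted to the actual acquisition.

# Spot detection sketch with Big-FISH (illustrative file name and parameters)
import bigfish.stack as stack
import bigfish.detection as detection

rna = stack.read_image("smfish_channel.tif")     # 3D (z, y, x) smFISH image
spots, threshold = detection.detect_spots(
    rna,
    return_threshold=True,
    voxel_size=(300, 103, 103),                  # nm per dimension, illustrative
    spot_radius=(350, 150, 150))                 # nm per dimension, illustrative
print(spots.shape, threshold)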

Description

Fiji plugin to segment oocyte and zona pellucida contours from transmitted light images and to extract hundreds of morphological features to describe the oocyte numerically. Segmentation is based on neural networks (U-Net) trained on both mouse and human oocytes (in prophase and meiosis I) acquired in different conditions. The trained networks are freely available on the GitHub repository and can be retrained if necessary. Oocytor also has options to extract hundreds of morphological/intensity features to characterize the oocyte (e.g. perimeter, texture, ...). These features can also be used in machine learning pipelines for automatic phenotyping.

Description

EPySeg is a package for segmenting 2D epithelial tissues. EPySeg also ships with a graphical user interface that allows for building, training and running deep learning models.

Training can be done with or without data augmentation (2D-xy and 3D-xyz data augmentation are supported). EPySeg relies on the segmentation_models library. EPySeg source code is available here. Cloud version available here.
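For context, the sketch below shows how a 2D U-Net is typically assembled with the underlying segmentation_models library; this illustrates that dependency rather than EPySeg's own interface, and the backbone and input shape are arbitrary choices.

# Building a U-Net with segmentation_models (illustration of the underlying library, not EPySeg's API)
import segmentation_models as sm

model = sm.Unet("resnet34", input_shape=(256, 256, 3),
                classes=1, activation="sigmoid",
                encoder_weights="imagenet")
model.compile(optimizer="adam",
              loss=sm.losses.bce_jaccard_loss,
              metrics=[sm.metrics.iou_score])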

Description

ZeroCostDL4Mic: exploiting Google Colab to develop a free and open-source toolbox for Deep-Learning in microscopy

ZeroCostDL4Mic is a collection of self-explanatory Jupyter Notebooks for Google Colab that features an easy-to-use graphical user interface. They are meant to quickly get you started on learning to use deep learning for microscopy.

Description

Fractal is a framework to process high-content imaging data at scale and prepare it for interactive visualization. Fractal provides distributed workflows that convert TBs of image data into OME-Zarr files. The platform then processes the 3D image data by applying tasks like illumination correction, maximum intensity projection, 3D segmentation using Cellpose, and measurements using napari workflows. The pyramidal OME-Zarr files enable interactive visualization in the napari viewer.
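A minimal sketch of inspecting such a pyramidal OME-Zarr with the zarr package is shown below; the path is illustrative and the standard OME-NGFF multiscale layout is assumed.

# Inspect a pyramidal OME-Zarr (illustrative path, standard OME-NGFF layout assumed)
import zarr

root = zarr.open("plate.zarr/B/03/0", mode="r")   # one well / field of view
print(root.attrs.get("multiscales"))              # OME-NGFF multiscale metadata
level0 = root["0"]                                # highest-resolution pyramid level
print(level0.shape, level0.dtype)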

Description

 

Relate is a correlative software package optimised to work with EM, EDS, EBSD, & AFM data and images.  It provides the tools you need to correlate data from different microscopes, visualise multi-layered data in 2D and 3D, and conduct correlative analyses.

  • Combining data from different imaging modalities (e.g. AFM, EDS & EBSD)

  • Interactive display of multi-layer correlated data

  • Analytical tools for metadata interrogation

  • Documented workflows and processes

Correlate

  • Import data from AZtec using the H5oina file format
  • Import AFM data
  • Correlate both sets of data using intuitive image overlays and image matching tools
  • Produce combined multimodal datasets

Visualise

  • 2D display of multi-layered data
  • 3D visualisation of topography combined with AFM material properties, EM images, and EDS & EBSD map overlays
  • Customisation of colour palettes, data overlays, image rendering options, and document display
  • Export images and animations

Analyse

  • Generate profile (cross section) views of multimodal data
  • Measure and quantify data across multiple layers
  • Analyse areas via data thresholding using X-ray counts, phase maps, height, or other material properties
  • Select an extensive range of measurement parameters
  • Export analytical data to text or CSV files
Relate analysis workflow example
Description

The empanada-napari plugin is built to democratize deep learning image segmentation for researchers in electron microscopy (EM). It ships with MitoNet, a generalist model for the instance segmentation of mitochondria. There are also tools to quickly build and annotate training datasets, train generic panoptic segmentation models, fine-tune existing models, and scalably run inference on 2D or 3D data. To make segmentation model training faster and more robust, CEM pre-trained weights are used by default. These weights were trained using an unsupervised learning algorithm on over 1.5 million EM images from hundreds of unique EM datasets, making them remarkably general.

Empanada-napari

MIA

Description

ModularImageAnalysis (MIA) is an ImageJ plugin which provides a modular framework for assembling image and object analysis workflows. Detected objects can be transformed, filtered, measured and related. Analysis workflows are batch-enabled by default, allowing easy processing of high-content datasets.

MIA is designed for “out-of-the-box” compatibility with spatially-calibrated 5D images, yielding measurements in both pixel and physical units.  Functionality can be extended both internally, via integration with SciJava’s scripting interface, and externally, with Java modules that extend the MIA framework. Both have full access to all objects and images in the analysis workspace.

Workflows are, by default, compatible with batch processing multiple files within a single folder. Thanks to Bio-Formats, MIA has native support for multi-series image formats such as Leica .lif and Nikon .nd2.

Workflows can be automated from initial image loading through processing, object detection, measurement extraction, visualisation, and data exporting. MIA includes nearly 200 modules integrated with key ImageJ plugins such as Bio-Formats, TrackMate and Weka Trainable Segmentation.

Module(s) can be turned on/off dynamically in response to factors such as availability of images and objects, user inputs and measurement-based filters. Switches can also be added to “processing view” for easy workflow control.

MIA is developed in the Wolfson Bioimaging Facility at the University of Bristol.

Description

Machine Learning made easy

APEER ML provides an easy way to train your own machine learning
models and segment your microscopy images. No expertise or coding required.

APEER

Image Analysis Training Resources


This is a resource for image analysis training material, with a focus on research in the life sciences.

Currently, this resource is mainly meant to serve image analysis trainers, helping them to design courses. However, we might add more text (or videos) to the material such that it could also be used by students for self-directed study.

Description

This tool allows the analysis of morphological characteristics of complex roots. While the root system architecture of young roots can be analyzed automatically, this is often not possible for more developed roots. The tool is inspired by the Sholl analysis used in neuronal studies. It creates a binary mask and the Euclidean distance transform from the input image. It then allows the user to draw concentric circles around a base point and to extract measures on or within the circles. Instead of circles, which represent the distance from the base point, horizontal lines can be used, which represent the depth in the soil from the base line. The following features are currently implemented:

  • The area of the root per distance/depth.
  • The number of border pixels per distance/depth, giving an idea of the surface in contact with the soil.
  • The maximum radius per distance/depth of a root, measured at the crossing points with the circles or lines.
  • The number of crossings of roots with the circles or lines.
  • The maximum distance to the left and the right from the vertical axis at crossing points with the circles or lines.
Concentric circles on the mask of a root, created by the Analyze Complex Roots Tool
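Conceptually, these measurements amount to binning root-mask pixels by their distance from the base point. The sketch below illustrates that idea with NumPy; it is not the plugin's own code, and the mask and base point are toy values.

# Conceptual sketch: root area per distance band around a base point (not the plugin's code)
import numpy as np

def area_per_distance(mask, base_yx, step=50):
    # mask: 2D boolean root mask; base_yx: (y, x) base point; step: band width in pixels
    yy, xx = np.indices(mask.shape)
    dist = np.hypot(yy - base_yx[0], xx - base_yx[1])
    bands = (dist // step).astype(int)
    # pixel count of the root falling into each distance band
    return np.bincount(bands[mask], minlength=bands.max() + 1)

mask = np.zeros((200, 200), dtype=bool)
mask[100:105, 20:180] = True                      # toy horizontal "root"
print(area_per_distance(mask, base_yx=(100, 20)))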
Description

webKnossos is an open-source data sharing and annotation platform for tera-scale 2D and 3D image datasets.

The core features of webKnossos are:

  • fast 3D data streaming
  • share links to specific locations in the data
  • uniquely fast skeleton annotation (flight mode)
  • efficient volume annotation
  • mesh rendering
  • collaboration and sharing tools

webKnossos facilitates image analysis workflows on multi-terabyte datasets, including visualization of raw and multi-modal microscopy data, distributed training data generation and proof-reading of automatic segmentation.

As a scientific resource, webknossos.org serves as a database for published image datasets including their annotations.

 

 

Description

The ImageM application provides an integrated user interface that facilitates the processing and analysis of multi-dimensional images within the MATLAB environment. It offers user-friendly visualization of multi-dimensional images, a collection of image processing algorithms and methods for image analysis, management of spatial calibration, and facilities for the analysis of multi-variate images. Its graphical user interface is largely inspired by the open-source software ImageJ. ImageM can also be run on Octave, the open-source alternative to MATLAB.

ImageM is freely distributed on GitHub: https://github.com/mattools/ImageM.

Processing of a 3D image with the ImageM software
Description

This is the ImageJ/Fiji plugin for StarDist, a cell/nuclei detection method for microscopy images with star-convex shape priors (typically for DAPI-like staining of nuclei). The plugin can be used to apply already trained models to new images.
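The same pretrained models can also be applied from the StarDist Python package; a minimal sketch is shown below (this is the Python route, not the Fiji plugin itself, and "nuclei.tif" is a placeholder file name).

# Applying a pretrained StarDist 2D model from Python (sketch)
from csbdeep.utils import normalize
from stardist.models import StarDist2D
from tifffile import imread

model = StarDist2D.from_pretrained("2D_versatile_fluo")    # pretrained fluorescent-nuclei model
img = imread("nuclei.tif")
labels, details = model.predict_instances(normalize(img))  # label image + polygon details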

Stardist
Description

VAST (Volume Annotation and Segmentation Tool) is a utility application for manual annotation of large EM stacks.

It is a general labeling tool, used for a large variety of 3D data sets (electron-microscopic, multi-channel light-microscopic, and micro-CT data sets as well as videos) and for annotating arbitrary structures, regions and locations, depending on the user’s needs.

Description

Blood vessel tracing in a 3D image using Tubeness filtering (user-defined scale), 3D opening (radius set to 2), thresholding (user-defined level) and 3D skeletonization.

Description

Starting from image stacks, the nuclear boundary as well as nuclear bodies are segmented. As output, NucleusJ automatically measures 15 parameters quantifying shape and size of nuclei as well as intra-nuclear objects and the positioning of the objects within the nuclear volume.

Description

LBADSA is based on the fitting of the Young-Laplace equation to the image data to measure drops.

Description

Dragonfly is a software platform for the intuitive inspection of multi-scale multi-modality image data. Its user-friendly experience translates into powerful quantitative findings with high-impact visuals, driven by nuanced easy-to-learn controls.

For segmentation, it provides a machine learning engine, watershed and superpixel methods, and supports histological data.

It offers a 3D viewer and Python scripting capabilities.

It is free for research use, but not for commercial usage.

DragonFly
Description

This is an implementation of Mask R-CNN on Python 3, Keras, and TensorFlow. The model generates bounding boxes and segmentation masks for each instance of an object in the image. It's based on Feature Pyramid Network (FPN) and a ResNet101 backbone.
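A hedged sketch of running inference with this implementation is given below; the class count, weight file and input image are placeholders and must match the actual trained model.

# Inference sketch with the Matterport Mask R-CNN implementation (placeholder names)
import skimage.io
from mrcnn.config import Config
from mrcnn import model as modellib

class InferenceConfig(Config):
    NAME = "demo"
    NUM_CLASSES = 1 + 80        # background + object classes of the trained model (placeholder)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
model.load_weights("mask_rcnn_weights.h5", by_name=True)    # placeholder weight file
image = skimage.io.imread("input.png")
r = model.detect([image], verbose=0)[0]
print(r["rois"].shape, r["masks"].shape, r["class_ids"])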

Description

NEUBIAS-WG5 workflow for nuclei segmentation using Mask R-CNN. The workflow uses the Matterport Mask R-CNN Keras implementation. The model was trained with Kaggle 2018 Data Science Bowl images.

Description
superpixels - ROI
Description

Collection of several basic standard image segmentation methods focusing on medical imaging. In particular, the key blocks/applications are (un)supervised image segmentation using superpixels, object centre detection and region growing with a shape prior. Besides the open-source code, there are also a few sample images.
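As an illustration of the superpixel step (using scikit-image rather than this package's own API), the sketch below computes SLIC superpixels that could then feed a supervised or unsupervised classification.

# Superpixel illustration with scikit-image SLIC (not this package's own API)
from skimage import data, segmentation, color

img = data.immunohistochemistry()                  # bundled sample histology-like RGB image
superpixels = segmentation.slic(img, n_segments=400, compactness=10, start_label=1)
mean_color = color.label2rgb(superpixels, img, kind="avg")   # superpixel-averaged image
print(superpixels.max(), "superpixels")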

 

Description

This workflow processes a group of images containing cells with discernible nuclei, segments the nuclei, and outputs a binary mask that shows where nuclei were detected. It was developed as a test workflow for the NEUBIAS BIAFLOWS benchmarking tool.

Description

The macro will segment nuclei and separate clustered nuclei in a 3D image using a distance transform watershed. As a result an index-mask image is written for each input image.
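For readers outside ImageJ, a conceptual Python equivalent of a distance-transform watershed (not the macro itself) could look like the sketch below, assuming a boolean nuclei mask.

# Conceptual Python equivalent of a distance-transform watershed (not the ImageJ macro)
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_nuclei(binary_mask, min_distance=10):
    distance = ndi.distance_transform_edt(binary_mask)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    markers = np.zeros(binary_mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_mask)   # index-mask of separated nuclei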

Description

This suite provides plugins to enhance 3D capabilities of ImageJ.

  • 3D Filters (mean, median, max, min, tophat, max local, …) and edge and symmetry filter
  • 3D Segmentation (iterative thresholding, spots segmentation, watershed, …)
    • 3D hysteresis thresholding with two thresholds (see 2D hysteresis for explanation).
    • 3D simple segmentation with thresholding to label 3D objects (similar to 3D objects counter).
    • 3D iterative thresholding (find optimal threshold for each object).
    • 3D spot segmentation with various local threshold estimations.
    • 3D Maxima Finder (with noise parameter)
    • 3D seeds-based watershed with automatic local maxima detection for seeds.
  • 3D Mathematical Morphology tools (fill holes, binary closing, distance map, …)
  • 3D RoiManager (3D display and analysis of 3D objects)
  • 3D Analysis (Geometrical measurements, Mesh measurements, Convex hull, …)
    • 3D Geometrical measurements (volume, surface, …) for each labelled object.
    • 3D Centroid, to compute centroids of labelled objects.
    • 3D Intensity measurements (mean, integrated density, …) in an opened image for each labelled object.
    • 3D Shape measurements (compactness, elongation, …) for each labelled object.
    • 3D Mesh Measurements after triangulation (see 3D Viewer for surface mesh computation).
    • 3D fitting by an ellipsoid and main direction computation (details here).
    • 3D convex hull (see http://rsbweb.nih.gov/ij/plugins/3d-convex-hull/index.html).
    • 3D Radial Distance Area Ratio (RDAR)
    • 3D Density, to compute density of dots, based on closest distance analysis (details here).
  • 3D MereoTopology (Relationship between objects)
  • 3D Tools (Drawing ellipsoids and lines, cropping, …)
    • Drawing 3D line
    • Drawing 3D ellipsoids in any direction
    • Drawing in stacks as volumes
    • Drawing in 3D viewer as surfaces
Description

The macro will segment nuclei and separate clustered nuclei using a binary watershed. As a result an index-mask image is written for each input image.


video tutorial on 3D vessel segmentation of synchrotron phase contrast tomography


In this tutorial video, a coronary arterial tree is used as the demo example to show in detail how the semi-automatic segmentation workflow Carving, from the open-source image analysis software ilastik, can be used. Tips on how and why preprocessing is done, as well as parameter settings, are provided.

Description

TEM ExosomeAnalyzer is a program for automatic and semi-automatic detection of extracellular vesicles (EVs), such as exosomes, or similar objects in 2D images from transmission electron microscopy (TEM). The program detects the EVs, finds their boundaries, and reports information about their size and shape.

The software was developed within project MUNI/M/1050/2013 and supported by the Grant Agency of Masaryk University.

The EVs are detected based on the shape and edge contrast criteria. The exact shapes of the EVs are then segmented using a watershed-based approach.

With proper parameter settings, even images with EVs both lighter and darker than the background, or containing artifacts or precipitated stain, can be processed. If the fully automatic processing fails to produce correct results, the program can be used semi-automatically, letting the user adjust the detection seeds during the intermediate steps, or even draw the whole segmentation manually.

screen capture from exosomeAnalyzer
Description

There are many methods in bio-imaging that can be parametrized. This gives more flexibility to the user, as long as tools provide easy support for tuning parameters. On the other hand, the datasets of interest constantly grow, which creates the need to process them in bulk. Again, this requires proper tool support if biologists are to be able to organize such bulk processing in an ad-hoc manner without the help of a programmer. Finally, new image analysis algorithms are constantly being created and updated, yet a lot of work is necessary to turn a prototype implementation into a product for users. Therefore, there is a growing need for software with a graphical user interface (GUI) that makes the process of image analysis easier to perform and at the same time allows for high-throughput analysis of raw data using batch processing and novel algorithms. The main programs in this area are written in Java, but Python is growing in bioinformatics, and it would be convenient to be able to easily wrap algorithms written in this language.

Here we present PartSeg, a comprehensive software package implementing several image processing algorithms that can be used for the analysis of microscopic 3D images. Its user interface has been crafted to speed up the workflow of processing datasets in bulk and to allow for easy modification of algorithm parameters. In PartSeg we also include the first public implementation of the Multi-scale Opening algorithm described in [1]. PartSeg allows for segmentation in 3D based on finding connected components. The segmentation results can be corrected manually to adjust for high noise in the data. It is then possible to calculate standard statistics like volume, mass, diameter and their user-defined combinations for the results of the segmentation. Finally, it is possible to superimpose segmented structures using a weighted PCA method. Conclusions: PartSeg is a comprehensive and flexible software package dedicated to helping biologists in the processing, segmentation, visualization and analysis of large microscopic 3D image data. PartSeg provides well-established algorithms in an easy-to-use, intuitive, user-friendly toolbox without sacrificing their power and flexibility.

 

Examples include Chromosome territory analysis.

PartSeg
Description

AssayScope is an intuitive application dedicated to large scale image processing and data analysis. It is meant for histology, cell culture (2D, 3D, 2D+t) and phenotypic analysis. 

Description

The Allen Cell Structure Segmenter is a Python-based open source toolkit developed at the Allen Institute for Cell Science for 3D segmentation of intracellular structures in fluorescence microscope images.

It consists of two complementary elements:

  1. Classic image segmentation workflows for 20 distinct intracellular structure localization patterns. A visual “lookup table” outlines the modular algorithmic steps for each segmentation workflow, providing an intuitive guide for selecting or constructing new segmentation workflows for a user’s particular segmentation task.
  2. A human-in-the-loop iterative deep learning segmentation workflow trained on manually curated ground-truth data from images segmented with the classic workflows. Importantly, this module has not been released yet.

 

The Allen Cell Structure Segmenter Overview
Description

Labkit is an open-source tool to segment truly large image data using sparse training data. It has an intuitive and responsive user interface based on Big Data Viewer, allowing users to conveniently browse and annotate even terabyte sized image volumes.

Update site: Labkit

Description

SimpleITK provides a simplified interface to ITK in a variety of languages. A user can either download pre-built binaries, if they are available for the desired platform and language, or SimpleITK can be built from the source code. Currently, Python binaries are available on Microsoft Windows, GNU Linux and Mac OS X. C# and Java binaries are available for Windows. We are also working towards supporting R packaging.
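A minimal sketch of the Python interface is shown below; the file names are placeholders and Otsu thresholding is used purely as an example filter.

# Minimal SimpleITK sketch: read, smooth and threshold a 3D image (placeholder file names)
import SimpleITK as sitk

image = sitk.ReadImage("volume.nii.gz", sitk.sitkFloat32)    # read and cast to float
smoothed = sitk.CurvatureFlow(image, timeStep=0.125, numberOfIterations=5)
segmentation = sitk.OtsuThreshold(smoothed, 0, 1)            # bright objects labelled 1
sitk.WriteImage(segmentation, "volume_otsu.nii.gz")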

Description

ZEN and APEER – Open Ecosystem for integrated Machine-Learning Workflows

Open ecosystem for integrated machine-learning workflows to train and use machine-learning models for image processing and image analysis inside the ZEN software or on the APEER cloud-based platform

Highlights ZEN

  • Simple User Interface for Labeling and Training
  • Engineered Feature Sets and Deep Feature Extraction + Random Forest for Semantic Segmentation
  • Object Classification workflows
  • Probability Thresholds and Conditional Random Fields
  • Import your own trained models as *.czann files (see: czmodel · PyPI)
  • Import "AIModel Containes" from arivis AI for advanced Instance Segmentation
  • Integration into ZEN Measurement Framework
  • Support for Multi-dimensional Datasets and Tile Images
  • open and standardized format to store trained models
ZEN Intellesis Segmentation

ZEN Intellesis Segmentation - Training UI

ZEN Intellesis - Pretrained Networks

ZEN Intellesis Segmentation - Use Deep Neural Networks

Intellesis Object Classification

ZEN Object Classification

Highlights arivis AI

  • Web-based tool to label datasets to train Deep Neural Networks
  • Fully automated hyper-parameter tuning
  • Export of trained models for semantic segmentation and AIModelContainer for Instance Segmentation
Annotation Tool

APEER Annotation Tool

Description

Quantitative Criterion Acquisition Network (QCA Net) performs instance segmentation of 3D fluorescence microscopy images. QCA Net consists of a Nuclear Segmentation Network (NSN), which learned the nuclear segmentation task, and a Nuclear Detection Network (NDN), which learned the nuclear identification task. QCA Net performs instance segmentation of time-series 3D fluorescence microscopy images at each time point, and quantitative criteria for mouse development are extracted from the acquired time-series segmentation images. Detailed information on this program is described in the authors' manuscript posted on bioRxiv.
