Python

Description

EPySeg is a package for segmenting 2D epithelial tissues. EPySeg also ships with a graphical user interface that allows for building, training and running deep learning models.

Training can be done with or without data augmentation (2D-xy and 3D-xyz data augmentation are supported). EPySeg relies on the segmentation_models library. EPySeg source code is available here. Cloud version available here.

has function
need a thumbnail
Description

VTK is an open-source software system for image processing, 3D graphics, volume rendering and visualization. VTK includes many advanced algorithms (e.g., surface reconstruction, implicit modeling, decimation) and rendering techniques (e.g., hardware-accelerated volume rendering, LOD control).

VTK is used by academicians for teaching and research; by government research institutions such as Los Alamos National Lab in the US or CINECA in Italy; and by many commercial firms who use VTK to build or extend products.

The origin of VTK is with the textbook "The Visualization Toolkit, an Object-Oriented Approach to 3D Graphics" originally published by Prentice Hall and now published by Kitware, Inc. (Third Edition ISBN 1-930934-07-6). VTK has grown (since its initial release in 1994) to a world-wide user base in the commercial, academic, and research communities.

Description

A collection of Image Processing and Analysis (IPA) functions used at the Facility for Advanced Imaging and Microscopy (FAIM).

has function
need a thumbnail
Description

DeXtrusion is a machine-learning-based Python pipeline to detect cell extrusions in epithelial tissue movies. It can also detect cell divisions and SOPs, and can easily be trained to detect other dynamic events.

DeXtrusion takes as input a movie of an epithelium and outputs the spatio-temporal location of cell extrusion events or other events such as cell divisions. The movie is discretized into small overlapping rolling windows which are individually classified for event detection by a trained neural network. Results are then combined into an event probability map for the whole movie, or into spatio-temporal points indicating each event.

DeXtrusion probability map
Description

While a quickly retrained Cellpose network (trained only on xy slices, with no need to train on xz or yz slices) gives good results in 2D, the anisotropy of the SIM image prevents its use in 3D. Here the workflow consists of applying 2D Cellpose segmentation and then using the CellStitch library to optimize the 3D labelling of objects from the independent 2D labels.

The provided notebook is fully compatible with Google Colab and can be run by uploading your own images to your Google Drive. A model is provided and can be replaced by your own (created with Cellpose 2.0).

has function
example of usage
Description

CellStitch provides a set of tools for 3D segmentation from 2D segmentations: it reassembles 2D labels obtained from cells in individual slices into unique 3D labels across slices. It is particularly robust to anisotropy, and is the ideal companion to Cellpose 2D models or other 2D deep-learning-based models. One could also think about using it for cell tracking by overlap (using time as a third dimension).

cellstitch
Description

Image segmentation and object detection performance measures

The goal of this package is to provide easy-to-use tools for evaluating the performance of segmentation methods in biomedical image analysis and beyond, and to facilitate the comparison of different methods by providing standardized implementations. This package currently only supports 2-D image data.

has function
Description

SuperDSM is a globally optimal segmentation method based on superadditivity and deformable shape models for cell nuclei in fluorescence microscopy images and beyond.

Description

Open-source deep-learning-based framework for multi-animal pose tracking. It can track any number of animals and has a labeling/training GUI for learning and proofreading.

has topic
has function
Description

Algorithm and software created to extract animal trajectories from videos of collections of up to 100 animals. idtracker.ai uses two convolutional networks: one for animal identification and another to detect when animals touch or cross each other.

has topic
has function
Description

The method proposed in this paper is a robust combination of multi-task learning and unsupervised domain adaptation for segmenting amoeboid cells in microscopy. This end-to-end framework provides a consolidated mechanism to harness the potential of multi-task learning to isolate and segment clustered cells from low-contrast brightfield images, and it simultaneously leverages deep domain adaptation to segment fluorescent cells without explicit pixel-level re-annotation of the data.

The entry-point to the codebase is the main.py file. The user has the option to

  • Train the network on their own dataset
  • Load a pre-trained model and use that for inference on their own data
Note: the provided pretrained model was trained on 256x256 images, so results on other resolutions could require fine-tuning. The model was trained (supervised) on brightfield data and domain-adapted to fluorescence data. The results are saved as 'inference.png'.
has function
daman
Description

This workflow describes a deep-learning based pipeline for reliable single-organoid segmentation and tracking in 2D+t high-resolution brightfield microscopy of mouse mammary epithelial organoids. The pipeline involves a four-layer U-Net to infer semantic segmentation predictions, adaptive morphological filtering to establish candidate organoid instances, and a shape-similarity-constrained, instance-segmentation-correcting tracking step to associate the corresponding organoid instances in time.

It is particularly focused on automatically detecting an organoid located approximately in the center of the first frame and tracking all its subsequent instances in the remaining frames, with emphasis on accurate organoid boundary delineation. Furthermore, the segmentation network was trained using plausible pix2pixHD-generated bioimage data. The synthetic image simulator code and data are also available here.

Adapted from https://cbia.fi.muni.cz/research/spatiotemporal/organoids.html
Description

OrganoID is an image analysis platform that automatically recognizes, labels, and tracks single organoids, pixel-by-pixel, in brightfield and phase-contrast microscopy experiments. The platform was trained on images of pancreatic cancer organoids and validated on separate images of pancreatic, lung, colon, and adenoid cystic carcinoma organoids.

need a thumbnail
Description

FluoGAN is a fluorescence image deconvolution software combining knowledge of the physical acquisition model with a GAN. It takes as input a fluctuating sequence of blurred, undersampled and noisy widefield or confocal images of a fixed sample of interest, and returns a super-resolved image.

FluoGan
Description

Orthanc aims at providing a simple, yet powerful standalone DICOM server. It is designed to improve the DICOM flows in hospitals and to support research about the automated analysis of medical images. Orthanc lets its users focus on the content of the DICOM files, hiding the complexity of the DICOM format and of the DICOM protocol.

Orthanc can turn any computer running Windows, Linux or OS X into a DICOM store (in other words, a mini-PACS system). Its architecture is lightweight and standalone, meaning that no complex database administration is required, nor the installation of third-party dependencies.

What makes Orthanc unique is the fact that it provides a RESTful API. Thanks to this major feature, it is possible to drive Orthanc from any computer language. The DICOM tags of the stored medical images can be downloaded in the JSON file format. Furthermore, standard PNG images can be generated on-the-fly from the DICOM instances by Orthanc.

Orthanc also features a plugin mechanism to add new modules that extend the core capabilities of its REST API. A Web viewer, a PostgreSQL database back-end, a MySQL database back-end, and a reference implementation of DICOMweb are currently freely available as plugins.

orthanc
Description

The empanada-napari plugin is built to democratize deep learning image segmentation for researchers in electron microscopy (EM). It ships with MitoNet, a generalist model for the instance segmentation of mitochondria. There are also tools to quickly build and annotate training datasets, train generic panoptic segmentation models, finetune existing models, and scalably run inference on 2D or 3D data. To make segmentation model training faster and more robust, CEM pre-trained weights are used by default. These weights were trained using an unsupervised learning algorithm on over 1.5 million EM images from hundreds of unique EM datasets making them remarkably general.

Empanada-napari
Description

ASTEC stands for Adaptive Segmentation and Tracking of Embryonic Cells. It proposes a full workflow for time lapse light sheet imaging analysis, including drift/motion compensation before the segmentation itself, and the capacity to correct for it.  It was used to process 3D+t movies acquired by the MuViSPIM light-sheet microscope in particular.

ASTEC embryo
Description

ClearMap is a toolbox for the analysis and registration of volumetric data from cleared tissues.

It was initially developed to map brain activity at cellular resolution in whole mouse brains using immediate early gene expression. It has since been extended as a tool for the quantification of whole mouse brain vascular networks at capillary resolution.

It is composed of several specialized modules or scripts: TubeMap, CellMap, WobblyStitcher.

ClearMap has been designed to analyze O(TB) 3d datasets obtained via light sheet microscopy from iDISCO+ cleared tissue samples immunolabeled for proteins. The ClearMap tools may also be useful for data obtained with other types of microscopes, types of markers, clearing techniques, as well as other species, organs, or samples.

ClearMap screenshot
Description

Quote:

pyTFM is a Python package that allows you to analyze force generation and stresses in cells, cell colonies, and confluent cell layers growing on a 2-dimensional surface. This package implements the procedures of Traction Force Microscopy and Monolayer Stress Microscopy. In addition to the standard measures for stress and force generation, it also includes the line tension, a measure of the force transfer exclusively across cell-cell boundaries. pyTFM includes an addon for the image annotation tool clickpoints, allowing you to quickly analyze and visualize large datasets.

https://pytfm.readthedocs.io/en/latest/_images/mask_force_measures.png
Description

This Fiji plugin is a Python script for CLEM registration using deep learning, but it could in principle be applied to other modalities. The pretrained model was learned on chromatin SEM images and fluorescent staining, and a script is also provided to train a new model, based on CSBDeep. The registration is then performed as a feature-based registration using the Register Virtual Stack Slices plugin (which extracts features and then performs RANSAC). Editing the Python script gives access to more options (such as the transformation model to be used; similarity by default). Images need to be prepared so that they contain only one channel, but additional channels of interest (to be transformed with the same transformation) can be given as input, and the Transform Virtual Stack plugin can be used as well.

F1000R Figure 1 DeepCLEM
Description

QuantiFish is a quantification program intended for measuring fluorescence in images of zebrafish, although use with images of other specimens is possible. This package is geared towards analysis of fluorescent infection models. The software is designed to automate processing of images of single fish, and outputs results as a .csv file. Alongside measures of total fluorescence above a threshold, this package also introduces several measures for dissemination and distribution of fluorescence throughout the specimen.

QuantiFish User Interface
Description

The library contains several helper functions to generate MoBIE project folders and add data to them. It is a Python library to generate data in the MoBIE data storage layout.

For further information, look to http://biii.eu/mobie-fiji-viewer

has function
need a thumbnail
Description

Deep-learning-based image restoration methods have recently been made available to restore images from under-exposed imaging conditions, increase spatio-temporal resolution (CARE), or perform self-supervised image denoising (Noise2Void). These powerful methods outperform conventional state-of-the-art methods and significantly aid downstream analyses such as segmentation and quantification.

To bring these new tools to a broader platform in the image analysis community, we developed a simple Jupyter based graphical user interface for CARE and Noise2Void, which lowers the burden for non-programmers and biologists to access these powerful methods in their daily routine.  CARE-less supports temporal, multi-channel image and volumetric data and many file formats by using the bioformats library. The user is guided through the different computation steps via inline documentation. For standard use cases, the graphical user interface exposes the most relevant parameters such as patch size and number of training iterations, while expert users still have access to advanced parameters such as U-net depth and kernel sizes. In addition, CARE-less provides visual outputs for training convergence and restoration quality. Any project settings can be stored and reused from command line for processing on compute clusters. The generated output files preserve important meta-data such as pixel sizes, axial spacing and time intervals.

need a thumbnail
Description

Yapic (Yet Another Pixel Classifier) is a deep learning tool to:

  • train your own filter to enhance the structure of your choice
  • train multiple filters at once

It is based on the U-Net convolutional network.

To train it, annotations can come, for example, from the Ilastik software; labelled TIF files can be transferred to Yapic.

Training takes hours to days; prediction takes seconds once trained.

It can be run from the command line.

Note that only 10 to 20 images with sparse labeling are required for efficient training.

has function
need a thumbnail
Description

Summary

Deep learning-based segmentation of cells in both fluorescence and bright-field images ("a generalist algorithm for cellular segmentation"). The tool can be used online, locally, or via notebooks (e.g. ZeroCostDL4Mic).

How to use it

cellpose can be used online via ready-to-use Jupyter notebooks with very good documentation. These notebooks are listed here.

Local Installation

The general local installation procedure can be found here.

Installing on an Apple Silicon Mac (M1 processor) needs some tricks, and as of October 2021, the following sequence of commands works. numba should be conda-installed before pip-installing cellpose.


conda create --name cellpose python=3.8   # create a clean environment with Python 3.8
conda activate cellpose
conda install numba                       # conda-install numba first (see note above)
git clone https://github.com/MouseLand/cellpose.git
cd cellpose
pip install -e .                          # editable install of cellpose from source
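
A minimal usage sketch in Python (the image file name is hypothetical; model names and channel settings may vary between cellpose versions):

from skimage import io
from cellpose import models

img = io.imread("example_cells.tif")                    # hypothetical 2D grayscale image
model = models.Cellpose(gpu=False, model_type="cyto")   # generalist cytoplasm model
# channels=[0, 0] means grayscale; diameter=None lets cellpose estimate the cell size
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])
print(f"detected {masks.max()} cells")                  # masks is a labelled integer array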

has topic
has function
Description

The MorphoNet Python API provides an easy interface to interact directly with the MorphoNet server. It is very useful to upload and download your dataset and to superimpose on it any qualitative or quantitative information.

Description

Vaa3d BJUT Fast Marching Spanning Tree algorithm dockerised workflow for BIAFLOWS

need a thumbnail
Description

3D Neuron Tracing with a Dockerized version of Vaa3D MOST Raytracer.

need a thumbnail
Description

3D Neuron Tracing using Dockerized version of Vaa3D Minimum Spanning Tree (MST).

need a thumbnail
Description

Rivuletpy dockerised workflow for BIAFLOWS.

has topic
need a thumbnail
Description

Vaa3d All-Path-Pruning 2.0 (APP2) dockerised workflow for BIAFLOWS.

need a thumbnail
Description

Cell tracking using MU-Lux-CZ algorithm. Dockerized Workflow for BIAFLOWS implemented by Martin Maska (Masaryk University).

has topic
has function
need a thumbnail
Description

pyimagej provides a set of wrapper functions for integration between ImageJ and Python.

It also provides a high-level entry point for invoking ImageJ server APIs.
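
A minimal sketch of the typical entry point (the version string, macro contents and image path are illustrative assumptions, not part of the entry above):

import imagej

# Initialize an ImageJ gateway; the requested version is downloaded on first use.
ij = imagej.init("2.5.0")
print(ij.getVersion())

# Run a small ImageJ macro from Python (the image path is hypothetical).
macro = """
open("/path/to/image.tif");
run("Gaussian Blur...", "sigma=2");
"""
ij.py.run_macro(macro)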

has function
need a thumbnail
Description

Track non-dividing particles in 2D time-lapse image.

has topic
has function
need a thumbnail
Description

Execute Nuclei Segmentation in 3D images using pixel classification with ilastik.

has topic
has function
need a thumbnail
Description

U-Net segmentation as presented in the Reference Publication. The model predicts three classes: background, edge and foreground. The model was trained with the Kaggle Data Science Bowl (DSB) 2018 training set.

has topic
has function
need a thumbnail
Description

Nuclei Segmentation using Deep Learning for individual cell analysis (DeepCell).

has topic
has function
need a thumbnail
Description

Summary

napari is a fast, interactive, multi-dimensional image viewer for Python. It’s designed for browsing, annotating, and analyzing large multi-dimensional images. It’s built on top of Qt (for the GUI), vispy (for performant GPU-based rendering), and the scientific Python stack (e.g. NumPy, SciPy). It includes critical viewer features out-of-the-box, such as support for large multi-dimensional data, and layering and annotation. By integrating closely with the Python ecosystem, napari can be easily coupled to leading machine learning and image analysis tools (e.g. scikit-image, scikit-learn, TensorFlow, PyTorch), enabling more user-friendly automated analysis.
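
A minimal sketch of opening the viewer from a script (synthetic data used as a placeholder):

import numpy as np
import napari

# Show a random 3D volume as an image layer plus a points layer.
volume = np.random.random((64, 256, 256))
viewer = napari.Viewer()
viewer.add_image(volume, name="random volume", colormap="magma")
viewer.add_points(np.array([[32, 128, 128]]), name="a point", size=10)

napari.run()  # start the Qt event loop when running as a plain script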

Installation

  • The installation procedure for Apple Silicon Macs (M1 processor, arm64) requires some tricks. As of Oct 2021, this procedure by Peter Sobolewski works, but:
    • for installing PyQt5, use a slightly different command: `brew install PyQt@5`.

 

Description

 

DeepCell is a neural network library for single-cell analysis, written in Python and built using TensorFlow and Keras.

DeepCell aids in biological analysis by automatically segmenting and classifying cells in optical microscopy images. This framework consumes raw images and provides uniquely annotated files as an output.

The Jupyter sessions in the Read the Docs documentation are broken, but the ones from GitHub are functional (see usage example).

deepcell
Description

The PYthon Microscopy Environment is an open-source package providing image acquisition and data analysis functionality for a number of microscopy applications, but with a particular emphasis on single molecule localisation microscopy (PALM/STORM/PAINT etc ...). The package is multi platform, running on Windows, Linux, and OSX.

It comes with 3 main modules:

  • PYMEAcquire - Instrument control and simulation
  • dh5view - Image Data Analysis and Viewing
  • VisGUI - Visualising Localization Data Sets
Description

CellProfiler Analyst (CPA) allows interactive exploration and analysis of data, particularly from high-throughput, image-based experiments. Included is a supervised machine learning system which can be trained to recognize complicated and subtle phenotypes, for automatic scoring of millions of cells. CPA provides tools for exploring and analyzing multidimensional data, particularly data from high-throughput, image-based experiments analyzed by its companion image analysis software, CellProfiler.

CPA
Description

A Python-based workflow management software that allows the creation of workflows that seamlessly scale from a single workstation to a high-performance computing cluster or cloud environment.

Description

Code to segment yeast cells using a pre-trained Mask R-CNN model. We've tested this with yeast cells imaged in fluorescence and brightfield images, and gotten good results with both modalities. This code implements a user-friendly script that hides all of the messy implementation details and parameters. Simply put all of your images to be segmented into the same directory, and then plug and go.

has function
Description

This python toolbox performs registration between 2-D microscopy images from the same tissue section or serial sections in several ways to achieve imaging mass spectrometry (IMS) experimental goals.

This code supports the following works and enables others to perform the workflows outlined in the following works, please cite them if you use this toolbox:

  • Advanced Registration and Analysis of MALDI Imaging Mass Spectrometry Measurements through Autofluorescence Microscopy (doi: 10.1021/acs.analchem.8b02884)

  • Next Generation Histology-directed Imaging Mass Spectrometry Driven by Autofluorescence Microscopy (doi: 10.1021/acs.analchem.8b02885)

need a thumbnail
Description

NEUBIAS-WG5 workflow for nuclei segmentation using ilastik v1.3.2 and Python post-processing.

has topic
has function
need a thumbnail
Description

This is an implementation of Mask R-CNN on Python 3, Keras, and TensorFlow. The model generates bounding boxes and segmentation masks for each instance of an object in the image. It's based on Feature Pyramid Network (FPN) and a ResNet101 backbone.

Description

NEUBIAS-WG5 workflow for nuclei segmentation using Mask-RCNN. The workflow uses the Matterport Mask-RCNN Keras implementation. The model was trained with Kaggle 2018 Data Science Bowl images.

has topic
need a thumbnail
Description

This workflow predicts landmark positions on images by using DMBL landmark detection models.

has topic
has function
need a thumbnail
Description

An implementation of Belief Propagation for factor graphs, also known as the sum-product algorithm.

has topic
need a thumbnail
Description

This workflow trains DMBL landmark detection models from a dataset of annotated images.

has function
need a thumbnail
Description

This workflow predicts landmark positions on images by using LC landmark detection models.

has topic
has function
need a thumbnail
Description

This workflow trains LC landmark detection models from a dataset of annotated images.

has topic
has function
need a thumbnail
Description

This workflow predicts landmark positions on images by using MSET landmark detection models.

has topic
has function
need a thumbnail
Description

This workflow trains MSET landmark detection models from a dataset of annotated images.

has topic
has function
need a thumbnail
Description

This is a (Cython-based) Python wrapper for Philipp Krähenbühl's Fully-Connected CRFs (version 2).

need a thumbnail
Description

PyTorch is an open-source machine learning library for Python, based on Torch, used for applications such as natural language processing.
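
A minimal sketch of defining a small network and taking one optimization step on random data (illustrative only):

import torch
import torch.nn as nn

# A tiny fully-connected network, a loss, and one SGD step.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 10)            # batch of 8 random feature vectors
y = torch.randint(0, 2, (8,))     # random class labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                   # autograd computes the gradients
optimizer.step()
print(float(loss))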

has topic
has function
Description

This workflow segments glands from H&E-stained histopathological images from the Gland Segmentation Challenge (GlaS 2015) using deep learning (U-Net). The U-Net implementation is largely inspired by PyTorch-UNet by Milesial.

need a thumbnail
Description

Collection of several basic standard image segmentation methods focusing on medical imaging. In particular, the key blocks/applications are (un)supervised image segmentation using superpixels, object centre detection, and region growing with a shape prior. Besides the open-source code, there are also a few sample images.

 

has topic
has function
Description

This workflow processes images of cells with discernible nuclei and outputs a binary mask indicating where nuclei are detected.

need a thumbnail
Description

Spimagine is a Python package to interactively visualize and process time-lapsed volumetric data as generated by modern light-sheet microscopes (hence the SPIM part). The package provides a generic 3D+t data viewer and makes use of GPU acceleration via OpenCL. It further provides an image processor interface for the GPU-accelerated denoising and deconvolution methods of gputools.

It is only for display (no analysis). The only drawback: it does not handle multichannel time lapse 3D data (only one channel at a time).

has function
Spimagine
Description

It is an interactive front-end visualization for registration software based on Elastix (VTK/ITK).

has topic
need a thumbnail
Description

There are many methods in bio-imaging that can be parametrized, which gives more flexibility to the user as long as tools provide easy support for tuning parameters. On the other hand, the datasets of interest constantly grow, which creates the need to process them in bulk. Again, this requires proper tool support if biologists are to be able to organize such bulk processing in an ad-hoc manner without the help of a programmer. Finally, new image analysis algorithms are constantly being created and updated, yet a lot of work is necessary to extend a prototype implementation into a product for users. Therefore, there is a growing need for software with a graphical user interface (GUI) that makes the process of image analysis easier to perform and at the same time allows for high-throughput analysis of raw data using batch processing and novel algorithms. Most programs in this area are written in Java, but Python is growing in bioinformatics, and it is convenient to be able to easily wrap algorithms written in this language.

Here we present PartSeg, a comprehensive software package implementing several image processing algorithms that can be used for the analysis of microscopic 3D images. Its user interface has been crafted to speed up the workflow of processing datasets in bulk and to allow easy modification of algorithm parameters. PartSeg also includes the first public implementation of the Multi-scale Opening algorithm described in [1]. PartSeg allows for segmentation in 3D based on finding connected components. The segmentation results can be corrected manually to adjust for high noise in the data. It is then possible to calculate standard statistics such as volume, mass, diameter, and user-defined combinations of them for the results of the segmentation. Finally, it is possible to superimpose segmented structures using a weighted PCA method.

Conclusions: PartSeg is a comprehensive and flexible software package dedicated to helping biologists with the processing, segmentation, visualization and analysis of large microscopic 3D image data. PartSeg provides well-established algorithms in an easy-to-use, intuitive, user-friendly toolbox without sacrificing their power and flexibility.

 

Examples include Chromosome territory analysis.

PartSeg
Description

The Allen Cell Structure Segmenter is a Python-based open source toolkit developed at the Allen Institute for Cell Science for 3D segmentation of intracellular structures in fluorescence microscope images.

It consists of two complementary elements:

  1. Classic image segmentation workflows for 20 distinct intracellular structure localization patterns. A visual “lookup table” outlines the modular algorithmic steps for each segmentation workflow. This provides an intuitive guide for the selection or construction of new segmentation workflows for a user’s particular segmentation task. 
  2. Human-in-the-loop iterative deep learning segmentation workflow trained on manually curated ground-truth data from the images segmented with the classic segmentation workflows. Importantly, this module has not been released yet.

 

The Allen Cell Structure Segmenter Overview
Description

Scikit-learn (sklearn) is a Python library used for machine learning. sklearn contains simple and efficient tools for data mining and data analysis. Modules and functions include those for classification, regression, clustering, dimensionality reduction, model selection and data preprocessing. Many people have contributed to sklearn (see the list of authors).
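
A minimal sketch of a typical sklearn workflow (train/test split, a random forest classifier, and an accuracy score on the built-in iris dataset):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))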

has topic
scikit-learn logo.
Description

The 3-D density kernel estimation (DKE-3-D) method utilises an ensemble of random decision trees for counting objects in 3D images. DKE-3-D avoids the problem of discrete object identification and segmentation, common to many existing 3-D counting techniques, and outperforms other methods when quantification of densely packed and heterogeneous objects is desired.

Description

The Jupyter Notebook is the original web application for creating and sharing computational documents. It offers a simple, streamlined, document-centric experience.

Try Jupyter (https://try.jupyter.org) is a site for trying out the Jupyter Notebook, equipped with kernels for several different languages (Julia, R, C++, Scheme, Ruby), without installing anything.

need a thumbnail
Description

Maxima-finding algorithm implemented in Python, recreated from the implementation in Fiji (ImageJ).

This is a re-implementation of the Java plugin written by Michael Schmid and Wayne Rasband for ImageJ. The original Java source code can be found at: https://imagej.nih.gov/ij/developer/source/ij/plugin/filter/MaximumFinder.java.html

This implementation remains faithful to the original but is not 100% optimised. The Java version is faster, but this could be alleviated by compiling C code for parts of the code. This script simply provides the functionality of the ImageJ find-maxima algorithm to individuals writing pure Python scripts.

The algorithm works as follows:

The first stage in the maxima-finding algorithm is to find the local maxima. This involves processing the image with a 3x3 neighbourhood maximum filter. Once filtered, this image is compared back to the original; the pixels with the same value mark the locations of the local maxima. Typically there are far too many local maxima to be meaningful, so the goal is then to merge and prune these maxima using some measure of quality. In the case of this algorithm, a single parameter is used: the noise tolerance (prominence). If a maximum is close to another, the maxima will be merged or removed based on the criteria below.

Starting with the brightest maxima and working down the intensities:

  • Expand out (‘flood fill’) from each maximum location. Neighbouring pixels within a noise tolerance (notl) of the maximum are scanned until the region within tolerance is exhausted.
    • If the pixels are equal to the maximum, mark them as equal.
    • If a greater maximum is met, ignore the active maximum.
    • If the pixels are less than the maximum, but greater than the maximum minus the noise tolerance, mark them as listed.
    • Mark all ‘listed’ pixels as ‘processed’ if they are included within a valid peak region, otherwise reset them.
    • From the regions containing a peak, calculate the best pixel to be considered the maximum, based on a minimum distance calculation among all maxima considered equal.
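
A short Python sketch of the first stage only (candidate local maxima via a 3x3 maximum filter), written here with scipy.ndimage rather than this package's own code; the merging and pruning by noise tolerance described above is not shown:

import numpy as np
from scipy import ndimage as ndi

def local_maxima(image):
    """Return a boolean mask of pixels equal to the maximum of their 3x3 neighbourhood."""
    filtered = ndi.maximum_filter(image, size=3)
    return image == filtered

img = np.random.random((128, 128))        # placeholder data
peaks = np.argwhere(local_maxima(img))
print(f"{len(peaks)} candidate maxima before merging/pruning")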
       

For a video detailing how this algorithm works please see:

https://youtu.be/f9vXOMKOlaY

Or for examples of it being used in practice, please see:

https://youtu.be/9wvPsEzRWzI

 

find maxima comparison.
Description

FoCuS-point is stand-alone software for TCSPC correlation and analysis. FoCuS-point utilizes advanced time-correlated single-photon counting (TCSPC) correlation algorithms along with time-gated filtering and innovative data visualization. The software has been designed to be highly user-friendly and is tailored to handle batches of data with tools designed to process files in bulk. FoCuS-point also includes advanced diffusion curve fitting algorithms which allow the parameters of the correlation functions and thus the kinetics of diffusion to be established quickly and efficiently.

Description

FoCuS-scan is software for processing and analysis of large-scale scanning fluorescence correlation spectroscopy (FCS) data. FoCuS-scan can correlate data acquired on conventional turn-key confocal systems and in the form of xt image carpets.

Description

Stochastic optical reconstruction microscopy (STORM) and related methods achieve sub-diffraction-limit image resolution through sequential activation and localization of individual fluorophores. The analysis of image data from these methods has typically been confined to the sparse activation regime, where the density of activated fluorophores is sufficiently low that there is minimal overlap between the images of adjacent emitters. Recently, several methods have been reported for analyzing higher-density data, allowing partial overlap between adjacent emitters. However, these methods have so far been limited to two-dimensional imaging, in which the point spread function (PSF) of each emitter is assumed to be identical.

In this work, we present a method to analyze high-density super-resolution data in three dimensions, where the images of individual fluorophores not only overlap, but also have varying PSFs that depend on the z positions of the fluorophores.

 

need a thumbnail
Description

SimpleITK provides a simplified interface to ITK in a variety of languages. A user can either download pre-built binaries, if they are available for the desired platform and language, or SimpleITK can be built from the source code. Currently, Python binaries are available on Microsoft Windows, GNU Linux and Mac OS X. C# and Java binaries are available for Windows. We are also working towards supporting R packaging.
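
A minimal Python sketch (the file names are hypothetical):

import SimpleITK as sitk

# Read an image, smooth it, and convert it to a NumPy array.
image = sitk.ReadImage("input.nrrd")
smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=2.0)
array = sitk.GetArrayFromImage(smoothed)   # z, y, x ordering for 3D images
print(image.GetSize(), array.shape)
sitk.WriteImage(smoothed, "smoothed.nrrd")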

need a thumbnail
Description

ZEN and APEER – Open Ecosystem for integrated Machine-Learning Workflows

Open ecosystem for integrated machine-learning workflows to train and use machine-learning models for image processing and image analysis inside the ZEN software or on the APEER cloud-based platform

Highlights ZEN

  • Simple User Interface for Labeling and Training
  • Engineered Feature Sets and Deep Feature Extraction + Random Forest for Semantic Segmentation
  • Object Classification workflows
  • Probability Thresholds and Conditional Random Fields
  • Import your own trained models as *.czann files (see: czmodel · PyPI)
  • Import "AIModel Containes" from arivis AI for advanced Instance Segmentation
  • Integration into ZEN Measurement Framework
  • Support for Multi-dimensional Datasets and Tile Images
  • open and standardized format to store trained models
ZEN Intellesis Segmentation

ZEN Intellesis Segmentation - Training UI

ZEN Intellesis - Pretrained Networks

ZEN Intellesis Segmentation - Use Deep Neural Networks

Intellesis Object Classification

ZEN Object Classification

Highlights arivis AI

  • Web-based tool to label datasets to train Deep Neural Networks
  • Fully automated hyper-parameter tuning
  • Export of trained models for semantic segmentation and AIModelContainer for Instance Segmentation
Annotation Tool

APEER Annotation Tool

Description

This is a Jupyter notebook demonstrating how to run code on IDR data sets by loading a CellProfiler pipeline.

The example here is applied to a real data set, but does not correspond to a biological question. It aims to demonstrate how to create a Jupyter notebook to process online plates hosted in the IDR.

It reads the plate images from the IDR.

It loads the CellProfiler pipeline and replaces the reading modules, which in the default pipeline read local files, with modules that read remotely accessible data.

It creates a CSV file and displays it in the notebook.

It makes some plots with Matplotlib.

 

jupyter
Description

Python/C++ port of the ImageJ extension TurboReg/StackReg written by Philippe Thevenaz/EPFL.

A Python extension for the automatic alignment of a source image or a stack (movie) to a target image/reference frame.
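
Assuming this port is the pystackreg package (an assumption, not stated above), a minimal registration sketch looks like this:

import numpy as np
from pystackreg import StackReg

ref = np.random.random((256, 256))   # reference frame (placeholder data)
mov = np.roll(ref, 5, axis=0)        # a shifted copy to register back

sr = StackReg(StackReg.RIGID_BODY)   # translation + rotation
registered = sr.register_transform(ref, mov)

# For a whole stack (t, y, x), register each frame to the previous one:
# registered_stack = sr.register_transform_stack(stack, reference="previous")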

need a thumbnail
Description

Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach (a.k.a. dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It also supports CUDA/cuDNN using CuPy for high performance training and inference. For more details of Chainer, see the documents and resources listed above and join the community in Forum, Slack, and Twitter.

has topic
Description

Python is a programming language.

Python 2.7.0 was released on July 3rd, 2010.

Python 2.7 is scheduled to be the last major version in the 2.x series before it moves into an extended maintenance period. This release contains many of the features that were first released in Python 3.1.

 A bugfix release, 2.7.16, is currently available. Its use is recommended.

need a thumbnail
Description

Quantitative Criterion Acquisition Network (QCA Net) performs instance segmentation of 3D fluorescence microscopic images. QCA Net consists of Nuclear Segmentation Network (NSN) that learned nuclear segmentation task and Nuclear Detection Network (NDN) that learned nuclear identification task. QCA Net performs instance segmentation of the time-series 3D fluorescence microscopic images at each time point, and the quantitative criteria for mouse development are extracted from the acquired time-series segmentation image. The detailed information on this program is described in our manuscript posted on bioRxiv.

has function
Description

Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python.
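
A minimal plotting sketch:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("amplitude")
ax.legend()
plt.show()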

need a thumbnail
Description

This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution concentrates on facilitating the extensibility and interoperability of the software by decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis.

Description

NumPy (Numerical Python) is an open source Python library that’s used in almost every field of science and engineering. It’s the universal standard for working with numerical data in Python, and it’s at the core of the scientific Python and PyData ecosystems. The NumPy library contains multidimensional array and matrix data structures. It provides ndarray, a homogeneous n-dimensional array object, with methods to efficiently operate on it. NumPy can be used to perform a wide variety of mathematical operations on arrays.

NumPy users include everyone from beginning coders to experienced researchers doing state-of-the-art scientific and industrial research and development. The NumPy API is used extensively in Pandas, SciPy, Matplotlib, scikit-learn, scikit-image and most other data science and scientific Python packages. 

Learn more about NumPy here!
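
A minimal sketch of working with an ndarray:

import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)  # a 3x4 ndarray
print(a.shape, a.dtype)
print(a.mean(axis=0))          # column means
print(a @ a.T)                 # matrix product
mask = a > 5
print(a[mask])                 # boolean indexing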

has function
need a thumbnail
Description

SciPy is a collection of mathematical algorithms and convenience functions built on the NumPy extension of Python. It adds significant power to the interactive Python session by providing the user with high-level commands and classes for manipulating and visualizing data. With SciPy, an interactive Python session becomes a data-processing and system-prototyping environment. Find more about SciPy here!
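
A minimal sketch using scipy.ndimage, the subpackage most relevant to image analysis (synthetic data):

import numpy as np
from scipy import ndimage as ndi

# Smooth a noisy 2D array and label connected bright regions.
img = np.random.random((128, 128))
smoothed = ndi.gaussian_filter(img, sigma=3)
labels, n = ndi.label(smoothed > smoothed.mean())
print(f"{n} connected regions")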

need a thumbnail
Description

CompuCell3D is a flexible scriptable modeling environment which allows the rapid construction of sharable Virtual Tissue in-silico simulations of a wide variety of multi-scale, multi-cellular problems, including angiogenesis, bacterial colonies, cancer, developmental biology, evolution, the immune system, tissue engineering, toxicology and even non-cellular soft materials. CompuCell3D models have been used to solve basic biological problems, to develop medical therapies, to assess modes of action of toxicants and to design engineered tissues. CompuCell3D is intuitive and makes Virtual Tissue modeling accessible to users without extensive software development or programming experience.

It uses Cellular Potts Model to model cell behavior.

Description

NiftyNet is a TensorFlow-based open-source convolutional neural networks (CNNs) platform for research in medical image analysis and image-guided therapy. NiftyNet’s modular structure is designed for sharing networks and pre-trained models. Using this modular structure you can:

  • Get started with established pre-trained networks using built-in tools;
  • Adapt existing networks to your imaging data;
  • Quickly build new solutions to your own image analysis problems.
Description

Orbit Image Analysis is a free open-source software with a focus on quantifying big images such as whole slide scans.

It can connect to image servers, e.g. Omero.
Analysis can be done on your local computer or via the scaleout functionality in a distributed computing environment like a Spark cluster.

Sophisticated image analysis algorithms, including tissue quantification using machine learning and object segmentation and classification, are built in. In addition, a versatile API allows you to enhance Orbit and to run your own scripts.

Orbit
Description

Automated open-source image acquisition and on-the-fly analysis pipeline (initially developed for the analysis of mitotic defects in fission yeast).

maars workflow from publication

 

maars
Description

"An open source machine learning framework for everyone "

TensorFlow™ is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains.
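
A minimal sketch of building and training a small Keras model on random data (illustrative only):

import tensorflow as tf

# A tiny binary classifier trained on random tensors.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = tf.random.normal((64, 10))
y = tf.cast(tf.random.uniform((64, 1)) > 0.5, tf.float32)
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
print(model.evaluate(x, y, verbose=0))   # [loss, accuracy]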

has topic
TensorFlow
Description

Super-resolve anisotropic EM data along low-res axis with deep learning.

 

has function
Description

Multicut workflow for large connectomics data. It uses Luigi for pipelining and caching processing steps. Most of the computations are done out-of-core using HDF5 as a backend and implementations from nifty.

Description

Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.

The purpose of Luigi is to address all the plumbing typically associated with long-running batch processes. You want to chain many tasks, automate them, and failures will happen. These tasks can be anything, but are typically long running things like Hadoop jobs, dumping data to/from databases, running machine learning algorithms, or anything else.
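
A minimal sketch of a Luigi task (the task name, parameter and output file are illustrative):

import luigi

class MakeReport(luigi.Task):
    """Toy task: write a small text file (illustrative only)."""

    name = luigi.Parameter(default="demo")

    def output(self):
        # The target whose existence tells Luigi the task is complete.
        return luigi.LocalTarget(f"report_{self.name}.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("done\n")

if __name__ == "__main__":
    luigi.build([MakeReport()], local_scheduler=True)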

has function
Description

vmtk is a collection of libraries and tools for 3D reconstruction, geometric analysis, mesh generation and surface data analysis for image-based modeling of blood vessels.

vmtk is composed of

  • C++ classes (VTK and ITK -based algorithms)
  • Python classes (high-level functionality - each class is a script)
  • PypeS - Python pipeable scripts, a framework which enables vmtk scripts to interact with each other

 

Description

SuRVoS: Super-Region Volume Segmentation workbench

A volume is first partitioned into Super-Regions (superpixels or supervoxels) and then interactively segmented by the user providing training annotations. SuRVoS can then learn from and extend the annotations to the whole volume.

User interface of SuRVoS showing example annotation on soft x-ray tomography data
Description

The ultimate goal of the NET framework is to make images of networks processable by computers. Therefore, we want a pixel-based image as input; as output we want a representation of the network visible in the image that retains as much information about the original network as possible. NET achieves this by first segmenting the image, then vectorizing the network, and then extracting information. The information we extract is:

  • First and foremost the graph of the network. We find the crossings (nodes) and connections between crossings (edges) and therefore extract information about the neighborhood relations, the topology of the network.
  • We also extract the coordinates of all nodes which enables us to embed them into space. We therefore extract information about the geometry of the network.
  • Last but not least we track the radii of the edges in the extraction process. Therefore every edge has a radius which can be identified with its conductivity.

In the following we will first provide detailed instructions on how to install NET on several platforms. Then we describe the functionality and options of each of the four scripts that make up the NET framework.

has topic
need a thumbnail
Description

WND-CHARM is a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provides classification accuracy comparable to state-of-the-art task-specific image classifiers. WND-CHARM can extract up to ~3,000 generic image descriptors (features) including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are derived from the raw image, transforms of the image, and compound transforms of the image (transforms of transforms). The features are filtered and weighted depending on their effectiveness in discriminating between a set of predefined image classes (the training set). These features are then used to classify test images based on their similarity to the training classes. This classifier was tested on a wide variety of imaging problems including biological and medical image classification using several imaging modalities, face recognition, and other pattern recognition tasks. WND-CHARM is an acronym that stands for "Weighted Neighbor Distance using Compound Hierarchy of Algorithms Representing Morphology."

Generated features
Description

Scipion is an image processing framework for obtaining 3D models of macromolecular complexes using Electron Microscopy (3DEM). It integrates several software packages and presents a unified interface for both biologists and developers. Scipion allows you to execute workflows combining different software tools, while taking care of formats and conversions. Additionally, all steps are tracked and can be reproduced later on.

http://scipion.cnb.csic.es/m/home/
Description

CellProfiler is free, open-source software for quantitative analysis of biological images.

CellProfiler is designed to enable biologists without training in computer vision or programming to quantitatively measure cell or whole-organism phenotypes from thousands of images automatically. The researcher creates an analysis pipeline from modules that find cells and cell compartments and measure features of those cells to form a rich, quantitative dataset that characterizes the imaged site in all of its heterogeneity. CellProfiler is structured so that the most general and successful methods and strategies are the ones that are automatically suggested, but the user can override these defaults and pull from many of the basic algorithms and techniques of image analysis to solve harder problems. CellProfiler is extensible through plugins written in Python or for ImageJ.

Strengths: cells, neurons, C. elegans, 2D fluorescent images, high-throughput screening, phenotype classification, multiple stains/site, interoperability, extensibility, machine learning, segmentation.

Limitations: largely limited to 2D, not well suited to manually-guided analysis or content review, image size limitations.

Description

PopulationProfiler is a light-weight, cross-platform, open-source tool for data analysis in image-based screening experiments. The main idea is to reduce per-cell measurements to per-well distributions, each represented by a histogram. These can optionally be further reduced to sub-type counts based on gating (setting bin ranges) of known control distributions and local adjustments to histogram shape. Such analysis is necessary in a wide variety of applications, e.g. DNA damage assessment using foci intensity distributions, assessment of cell type specific markers, and cell cycle analysis.

has topic
PopulationProfiler screenshot
Description

BioImageXD is a free open source software package for analyzing, processing and visualizing multi-dimensional microscopy images. It's a collaborative project, designed and developed by microscopists, cell biologists and software engineers from the Universities of Jyväskylä and Turku in Finland, Max Planck Institute CBG in Dresden, Germany and collaborators worldwide. BioImageXD was published in the July 2012 issue of Nature Methods.

Screen capture of BioImageXD
Description

MyTardis is free and open-source data management software. It facilitates annotation, sharing and archiving of data and metadata collected from different modalities. It focuses on integration with scientific instruments, instrument facilities and research storage and computing infrastructure; to address the challenges of data storage, data access, collaboration and data publication. It is currently being used to capture data from areas such as optical microscopy, electron microscopy, medical imaging, protein crystallography, neutron and X-ray scattering, flow cytometry, genomics and proteomics.

Key features:

  • Easy instrument integration.
  • Discipline specific: MX, Imaging, Microscopy, Genomics ...
  • Wide range of data formats & supported instruments.
  • Secure cloud data storage & access.
  • Simple data sharing.
  • Researcher controlled data publishing.
  • APIs for programmatic access to data and metadata.
has topic
has function
need a thumbnail
Description

A workflow in Python to measure muscle fibers, corresponding to the method used in Keefe, A.C. et al. Muscle stem cells contribute to myofibres in sedentary adult mice. Nat. Commun. 6:7087 doi: 10.1038/ncomms8087 (2015).

 

Example image:

 

muscleQNT/15536_2032_0.tif ...

Description

This is a learnable segmentation algorithm based on ground-truth images and segmentation masks. It learns a multiple-output pixel classification algorithm. It downloads annotation images and alpha masks from Cytomine-Core project(s) and builds a segmentation (pixel classifier) model which is saved locally. Typical application: tumor detection in tissues in histology slides. It is based on "Fast Multi-Class Image Annotation with Random Subwindows and Multiple Output Randomized Trees" http://orbi.ulg.ac.be/handle/2268/12205 and was used in "A hybrid human-computer approach for large-scale image-based measurements using web services and machine learning" http://orbi.ulg.ac.be/handle/2268/162084?locale=en

Segmentation illustration
Description

This module is for applying classification models to objects. It downloads annotation images and coordinates of annotated objects from Cytomine-Core project(s) and builds an annotation classification model which is saved locally. It downloads from Cytomine-Core the annotation images from an image (e.g. detected by an object finder), applies a classification model (previously saved locally), and uploads annotation terms to Cytomine-Core (in a userjob layer).

has topic
has function
need a thumbnail
Description

This module is for learning classification models from ground-truth data (supervised learning). It downloads annotation images and coordinates of annotated objects from Cytomine-Core project(s) and builds an annotation classification model which is saved locally.

It is used by Cytomine DataMining applications: classification_validation, classification_model_builder, classification_prediction, segmentation_model_builder and segmentation_prediction. But it can be run without Cytomine on local data (using dir_ls and dir_ts arguments).

has topic
need a thumbnail
Description

SLDC is an open-source Python workflow. SLDC stands for Segment Locate Dispatch Classify. This framework aims at facilitating the development of algorithms for detecting objects in multi-gigapixel images. In particular, it provides algorithm developers with a structure to define the problem-dependent components of their processing workflow (i.e. segmentation and classification) in a concise way. Every other concern, such as parallelization and large image handling, is encapsulated by the framework. It also features a powerful and customizable logging system and some components to apply several workflows one after another on the same image. SLDC can work on local images or interact with Cytomine.

Example image:

Toy image data

has topic
Description

The jicbioimage Python package makes it easy to explore microscopy data in a programmatic fashion (python).

Exploring images via coding means that the exploratory work becomes recorded and reproducible.

Furthermore, it makes it easier to convert the exploratory work into (semi) automated analysis work flows.

Features:

  • Built in functionality for working with microscopy data
  • Automatic generation of audit trails
  • Python integration: works with Python 2.7, 3.3 and 3.4
Description
IceImarisConnector is a simple commodity class that eases communication between Bitplane Imaris and MATLAB or Python using the Imaris XT interface.
need a thumbnail
Description

The Huygens Remote Manager is an open-source, efficient, multi-user web-based interface to the Huygens software by Scientific Volume Imaging for parallel batch deconvolutions.

has function
need a thumbnail
Description

  • 2D stabilization in each slice of the stacks in time
  • 3D stabilization of intravital imaging across all the stacks (including the Z dimension)
  • creation of the videos and the stabilized images in a new folder

has function
need a thumbnail
Description

It is a tool to visualize and annotate volume image data from electron microscopy. Users can annotate objects (e.g. neurons) and skeleton structures. It provides the ability to overlay the image data with user annotations, to represent the spatial structure and connectivity of labeled objects, and to display a three-dimensional model of them. It can be extended with plugins written in Python. A similar, web-based implementation is being developed at webknossos.info. Example datasets are also available.

Annotation in Knossos
Description

LocAlization Microscopy Analyzer (LAMA) is a software tool that contains several well-established data post processing algorithms for single-molecule localization microscopy (SMLM) data. LAMA has implemented algorithms for cluster analysis, colocalization analysis, localization precision estimation and image registration. LAMA works with a graphical user interface (GUI), and accepts simple input data formats as supported by various single- molecule localization software tools.

Description

Cytomine is a rich internet application using modern web and distributed technologies (Grails, HTML/CSS/Javascript, Docker), databases (spatial SQL and NoSQL), and machine learning (tree-based approaches with random subwindows) to foster active and distributed collaboration and ease large-scale image exploitation.

It provides remote and collaborative principles, relies on data models that allow imaging datasets to be easily organized and semantically annotated in a standardized way (using user-defined ontologies associated with regions of interest), efficiently supports high-resolution multi-gigapixel images (incl. major digital scanner image formats), and provides mechanisms to readily proofread and share image quantifications produced by any image recognition algorithm.

By emphasizing collaborative principles, the aim of Cytomine is to accelerate scientific progress and to significantly promote image data accessibility and reusability. Cytomine allows to break common practices in this domain where imaging datasets, quantification results, and associated knowledge are still often stored and analyzed within the restricted circle of a specific laboratory.

This software is being used, for example, by life scientists to help them better evaluate drug treatments or understand biological processes directly from whole-slide tissue images (digital histology), by pathologists to share and ease their diagnoses, and by teachers and students for pathology training purposes. It is also used in various microscopy applications.

Cytomine can be used as a stand-alone application (e.g. on a laptop) or on larger servers for collaborative works.

Cytomine implements object classification, image segmentation, content-based image retrieval, object counting, and interest point detection algorithms using machine learning.

cytomine logo
Description

A collection for tracking microtubule dynamics, written in Python.

has function
Description

Image processing library for Python. The scikit-image SciKit (toolkit for SciPy) extends scipy.ndimage to provide a versatile set of image processing routines. It is written in the Python language. This SciKit is developed by the SciPy community. All contributions are most welcome!
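
A minimal sketch of a typical scikit-image workflow (thresholding and labelling a built-in example image):

from skimage import data, filters, measure

image = data.coins()                          # built-in example image
threshold = filters.threshold_otsu(image)
labels = measure.label(image > threshold)     # connected-component labelling
props = measure.regionprops(labels)
print(f"{labels.max()} objects, first area = {props[0].area}")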

has function
Scikit logo
Description
QuickPALM is a set of programs to aid in the acquisition and analysis of data in “photoactivated localization microscopy” (PALM) and “stochastic optical reconstruction microscopy” (STORM). It includes the QuickPALM ImageJ plugin, which enables PALM/STORM 2D/3D/4D particle detection and image reconstruction in ImageJ.
need a thumbnail
Description

This workflow classifies, or segments, the pixels of an image given user annotations. It is especially suited when the objects of interest are visually (brightness, color, texture) distinct from their surroundings. Users can iteratively select pixel features and provide pixel annotations through a live visualization of the selected feature values and the current prediction responses. Once the user is satisfied, the workflow predicts the remaining unprocessed image regions or new images (as batch processing). Selected features, annotations, predicted classification probabilities and simple segmentations can be exported as images in various formats. This workflow often serves as a first step for other workflows offered by ilastik, such as object classification and automatic tracking.

Description
Well maintained and documented project that includes the core tracker (incl. GUI) as well as Matlab toolboxes to (1) correct tracking results and (2) analyze fly behavior.

> Ctrax is an open-source, freely available, machine vision program for estimating the positions and orientations of many walking flies, maintaining their individual identities over long periods of time. It was designed to allow high-throughput, quantitative analysis of behavior in freely moving flies. Our primary goal in this project is to provide quantitative behavior analysis tools to the neuroethology community, thus we've endeavored to make the system adaptable to other labs' setups. We have assessed the quality of the tracking results for our setup, and found that it can maintain fly identities indefinitely with minimal supervision, and on average for 1.5 fly-hours automatically.
need a thumbnail
Description

This workflow classifies objects based on object-level features (e.g. intensity-based, morphology-based) and user annotations. It needs segmentation images in addition to the raw image data. Segmentation images can be obtained from ilastik pixel classification, or as binary segmentation images from other tools. Within object classification, one can pre-filter objects through thresholds (on the pixel probability image) or object sizes (on the segmentation image). Outputs are the predicted classification label images; the selected features can also be exported. Advanced users can add customized (object) features for classification via a simple Python-script plugin mechanism.

Description
This workflow estimates (densely distributed) object counts from the density of objects in the image, without performing segmentation or object detection. The current version only works for 2D images of roundish objects with similar sizes on a relatively homogeneous background. Users provide a few labels for background and objects (especially on clustered objects), and the tool predicts the density of objects over the entire image. The count is then estimated by integrating the density values over the whole image or over specified rectangular regions of interest.
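The final integration step can be illustrated with a short numpy sketch (the exported density map and ROI coordinates are hypothetical, not produced by this entry):

```python
# Illustration of the counting-by-integration idea (not the workflow's own code):
# given a per-pixel density map, the object count in a region is the sum
# (integral) of the density values over that region.
import numpy as np

density = np.load("density_map.npy")      # hypothetical exported 2D density map
total_count = density.sum()               # estimate over the whole image

y0, y1, x0, x1 = 100, 300, 50, 250        # an arbitrary rectangular ROI
roi_count = density[y0:y1, x0:x1].sum()   # estimate within the ROI
print(f"whole image ~ {total_count:.1f} objects, ROI ~ {roi_count:.1f}")
```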
need a thumbnail
Description

A commercial image analysis software. Its interface allows measurements and image analysis to be performed easily. Actions can be recorded and turned into a macro (in a basic scripting language), so almost no programming knowledge is needed; Python can also be used. An SDK is available to develop stand-alone applications in C++. Additional modules provide specific operations (e.g. 3D operators). Examples of available categories of operators: filtering, edge detection, mathematical morphology, segmentation, frequency-domain operations, mathematical/logical operations, measurements...

need a thumbnail
Description
Python-bioformats is a Python wrapper for Bio-Formats, a standalone Java library for reading and writing life sciences image file formats. Bio-Formats is capable of parsing both pixels and metadata for a large number of formats, as well as writing to several formats. Python-bioformats uses python-javabridge to start a Java virtual machine from Python and interact with it. Python-bioformats was developed for and is used by the cell image analysis software CellProfiler (cellprofiler.org).

- PyPI record: https://pypi.python.org/pypi/python-bioformats
- Documentation: http://pythonhosted.org/python-bioformats/
- GitHub repository: https://github.com/CellProfiler/python-bioformats
- Report bugs here: https://github.com/CellProfiler/python-bioformats/issues

python-bioformats is licensed under the GPL license to be compatible with the copy of Bio-Formats that is distributed with the package, but is compatible with a BSD license if loci_tools.jar is replaced with SCIFIO jars. See the accompanying file LICENSE for details.
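A minimal, hedged usage sketch (the file name is a placeholder; a JVM must be started through javabridge first):

```python
# Sketch of reading pixel data and OME-XML metadata with python-bioformats.
import javabridge
import bioformats

javabridge.start_vm(class_path=bioformats.JARS)
try:
    image = bioformats.load_image("example.lif")            # numpy array of one plane
    xml = bioformats.get_omexml_metadata("example.lif")     # OME-XML metadata string
    print(image.shape)
finally:
    javabridge.kill_vm()
```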
need a thumbnail
Description
The javabridge Python package makes it easy to start a Java virtual machine (JVM) from Python and interact with it. Python code can interact with the JVM using a low-level API or a more convenient high-level API.

- PyPI record: https://pypi.python.org/pypi/javabridge
- Documentation: http://pythonhosted.org/javabridge/
- GitHub repository: https://github.com/CellProfiler/python-javabridge
- Report bugs here: https://github.com/CellProfiler/python-javabridge/issues

python-javabridge is licensed under the BSD license. See the accompanying file LICENSE for details.
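A small sketch in the style of the project's documentation (hedged; it only echoes a string through the JVM via the high-level API):

```python
# Start a JVM, run a short Java snippet through javabridge, shut the JVM down.
import javabridge

javabridge.start_vm(run_headless=True)
try:
    greeting = javabridge.run_script(
        'java.lang.String.format("Hello, %s!", name);',
        dict(name="world"))
    print(greeting)   # -> Hello, world!
finally:
    javabridge.kill_vm()
```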
need a thumbnail
Description

**Python(x,y)** is a free scientific and engineering development software for numerical computations, data analysis and data visualization based on the Python programming language, Qt graphical user interfaces and the Spyder interactive scientific development environment. Many Python libraries related to numerical computation are included, so you do not need to search for and install them individually. Included libraries are listed **[here](https://code.google.com/p/pythonxy/wiki/StandardPlugins)**.

has function
need a thumbnail
Description

ilastik is a simple, user-friendly tool for interactive image classification, segmentation and analysis. It is built as a modular software framework, which currently has workflows for automated (supervised) pixel- and object-level classification, automated and semi-automated object tracking, semi-automated segmentation and object counting without detection. Most analysis operations are performed lazily, which enables targeted interactive processing of data subvolumes, followed by complete volume analysis in offline batch mode. Using it requires no experience in image processing.

ilastik (the image learning, analysis, and segmentation toolkit) provides non-experts with a menu of pre-built image analysis workflows. ilastik handles data of up to five dimensions (time, 3D space, and spectral dimension). Its workflows provide an interactive experience to give the user immediate feedback on the quality of the results yielded by her chosen parameters and/or labelings.

The most commonly used workflow is pixel classification, which requires very little parameter tuning and instead offers a machine learning technique for segmenting an image based on local image features computed for each pixel.

Other workflows include:

- Object classification: Similar to pixel classification, but classifies previously segmented objects by object characteristics in a subsequent step
- Autocontext: Improves the pixel classification workflow by running it in multiple stages and showing each pixel the results of the previous stage
- Carving: Semi-automated segmentation of 3D objects (e.g. neurons) based on user-provided seeds
- Manual Tracking: Semi-automated cell tracking of 2D+time or 3D+time images based on manual annotations
- Automated tracking: Fully-automated cell tracking of 2D+time or 3D+time images with some parameter tuning
- Density Counting: Learned cell population counting based on interactively provided user annotation

Strengths: interactive, simple interface (for non-experts), few parameters, larger-than-RAM data, multi-dimensional data (time, 3D space, channel), headless operation, batch mode, parallelized computation, open source

Weaknesses: Pre-built workflows (not reconfigurable), no plugin system, visualization sometimes buggy, must import 3D data to HDF5, tracking requires an external CPLEX installation

Supported Formats: hdf5, tiff, jpeg, png, bmp, pnm, gif, hdr, exr, sif

Description

This library gives the numpy-based infrastructure functions for image processing with a focus on bioimage informatics. It provides image filtering and morphological processing as well as feature computation (both image-level features such as Haralick texture features and SURF local features). These can be used with other Python-based libraries for machine learning to build a complete analysis pipeline.

Mahotas is appropriate for users comfortable with programming or builders of end-user tools.

==== Strengths

The major strengths are speed and the quality of the documentation. Almost all of the functionality is implemented for multiple dimensions. It can be used with other Python packages which provide additional functionality.

Mahotas and all packages on which it relies are open-source.
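As a rough sketch of how the library is typically used from Python (the file name is illustrative, and reading images assumes the imread/freeimage backend is installed), Haralick texture features can be computed in a couple of lines:

```python
# Hedged sketch of a typical mahotas call: Haralick texture features of a
# greyscale image, averaged over the four 2D directions.
import mahotas as mh

image = mh.imread("tissue.png", as_grey=True).astype("uint8")  # illustrative path
features = mh.features.haralick(image).mean(axis=0)            # 13-element vector
print(features.shape)
```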

Description

OMERO is a free, open-source image management software. It is a client-server based system which supports 5D images, including big images and high-content screening data. Data are stored on a server using a relational database. They are accessed using three main clients: a desktop client, a web client and a command-line tool. There are bindings from OMERO to other image analysis packages, like FLIMfit and OMERO.searcher. The data in OMERO are organized in groups. A user can be a member of one or more groups. These groups can be collaborative or private; there are four levels of permissions to access/edit/annotate/delete the data of other users.

The package is supported not only by community forums, but also by a dedicated team which helps users solve their problems and deals with the bugs submitted via the error submission system.

### Strengths

- Open source, scalable software
- Supports diverse sets of imaging applications and domains (EM, LM, HCS, DigPath)
- Cross-platform, Java-based application
- API support for Java, Python, C++, Django
- On-line forums
- Automatic QA and upload of software errors
- Multi-dimensional images
- Web access
- Free demo-server accounts

### Limitations

- Enterprise-scale software, so complex installation requiring expertise
- Actively developing API
- Python scripts and functions still developing
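Since the strengths above mention a Python API, here is a hedged sketch of connecting with omero-py's BlitzGateway (host and credentials are placeholders, not part of this entry):

```python
# Connect to an OMERO server and list the projects visible to the user.
from omero.gateway import BlitzGateway

conn = BlitzGateway("username", "password", host="omero.example.org", port=4064)
conn.connect()
try:
    for project in conn.getObjects("Project"):
        print(project.getId(), project.getName())
finally:
    conn.close()
```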

Omero
Description

Labeled images are integer images where the values correspond to different regions. I.e., region 1 is all of the pixels which have value 1, region 2 is the pixels with value 2, and so on. By convention, region 0 is the background and is often handled differently.
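A tiny numpy example of this convention (purely illustrative):

```python
# Labeled-image convention: 0 is background, all pixels with value k form region k.
import numpy as np

labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 0, 0],
                   [2, 2, 0, 3]])

region_1 = labels == 1                 # boolean mask of region 1
sizes = np.bincount(labels.ravel())    # pixel count per label (index = label value)
print(sizes)                           # [7 4 4 1] -> background, regions 1, 2, 3
```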

 

has function
Description

mahotas.convolve(f, weights, mode='reflect', cval=0.0, out={new array})

Convolution of f and weights

Convolution is performed in doubles to avoid over/underflow, but the result is then cast to f.dtype. This conversion may result in over/underflow when using small integer types or unsigned types (if the output is negative). Converting to a floating point representation avoids this issue:

c = convolve(f.astype(float), kernel)
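A self-contained, hedged variant of the snippet above (the image and kernel are made up for the example):

```python
import numpy as np
import mahotas as mh

f = np.arange(100, dtype=np.uint8).reshape(10, 10)  # small example image
kernel = np.ones((3, 3)) / 9.0                      # 3x3 mean filter

# Convolving in float avoids the integer over/underflow discussed above
c = mh.convolve(f.astype(float), kernel)
print(c.dtype, c.shape)
```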
has function
need a thumbnail
Description

Align aligns images relative to each other, for example, to correct shifts in the optical path of a microscope in each channel of a multi-channel set of images.

For two or more input images, this module determines the optimal alignment among them. Aligning images is useful to obtain proper measurements of the intensities in one channel based on objects identified in another channel, for example. Alignment is often needed when the microscope is not perfectly calibrated. It can also be useful to align images in a time-lapse series of images. The module stores the amount of shift between images as a measurement, which can be useful for quality control purposes.

Note that the second image (and any subsequent images) is always aligned with respect to the first image; the X/Y offsets indicate how much the second image needs to be shifted to match the first. This module does not perform warping or rotation; it simply shifts images in X and Y. For more complex registration tasks, you might preprocess images using a plugin for that purpose in FIJI/ImageJ.

| Supports 2D? | Supports 3D? | Respects masks? |
|--------------|--------------|-----------------|
| Yes          | No           | Yes             |

Measurements made by this module

  • Xshift, Yshift: The pixel shift in X and Y of the aligned image with respect to the original image.

References

  • Lewis JP. (1995) “Fast normalized cross-correlation.” Vision Interface, 1-7.
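The shift-by-cross-correlation idea behind this module (see the reference above) can be sketched outside CellProfiler, for example with scikit-image; this is not the module's own code:

```python
# Estimate an X/Y shift between two channels by cross-correlation and undo it.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

reference = np.random.rand(256, 256)          # stand-ins for two channels
moving = nd_shift(reference, (3.0, -2.0))     # apply a known offset for the demo

estimated_shift, error, _ = phase_cross_correlation(reference, moving)
aligned = nd_shift(moving, estimated_shift)   # shift needed to match the reference
print(estimated_shift)                        # approximately (-3., 2.)
```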
has function
Description

## Introduction

CellCognition is a computational framework dedicated to the automatic analysis of live cell imaging data in the context of High-Content Screening (HCS). It contains algorithms for segmentation of cells and cellular compartments based on various fluorescent markers, features to describe cellular morphology by both texture and shape, tools for visualizing and annotating the phenotypes, classification, tracking and error correction. Events such as mitosis can be automatically identified and aligned to study the temporal kinetics of various cellular processes during the cell cycle. CellCognition can be used by novices in the field of image analysis and is applicable to hundreds of thousands of images by parallelization on compute clusters with minimal effort. The tool has been successfully applied to quantitative phenotypic profiling of cell division, yet machine learning enables CellCognition to be used for the analysis of other dynamic processes.

## Backends

The following libraries are used:

* numpy
* VIGRA
* PyQt
* hdf5
* matplotlib
* sklearn (Machine Learning in Python)

Cell Cognition logo
Description

VIGRA is a free C++ and Python library that provides fundamental image processing and analysis algorithms. Its generic architecture allows it to be used in many different application contexts and ecosystems. It is designed as an intelligent library (using the C++ template mechanism) which allows users to write code at a fairly high level of abstraction and optimizes away the abstraction overhead upon compilation. It can therefore work efficiently on very large data and forms the basis of ilastik and CellCognition.

Strengths: open source, high quality algorithms, unlimited array dimension, arbitrary pixel types and number of channels, high speed, well tested, very flexible, easy-to-use Python bindings, support for many common file formats (including HDF5)

Limitations: no GUI, C++ not suitable for everyone, BioFormats not supported, parallelization requires external control

Images and Multi-dimensional Arrays:
- templated image data structures for arbitrary pixel types, fixed-size vectors
- multi-dimensional arrays for arbitrarily high dimensions
- pre-instantiated images with many different scalar and vector valued pixel types (byte, short, int, float, double, complex, RGB, RGBA etc.)
- 2-dimensional image iterators, multi-dimensional iterators for arbitrarily high dimensions, adapters for various image and array subsets
- input/output of many image file formats: Windows BMP, GIF, JPEG, PNG, PNM, Sun Raster, TIFF (including 32-bit integer, float, and double pixel types and multi-page TIFF), Khoros VIFF, HDR (high dynamic range), Andor SIF, OpenEXR
- input/output of images with transparency (alpha channel) into suitable file formats
- comprehensive support for HDF5 (input/output of arrays in arbitrary dimensions)
- continuous reconstruction of discrete images using splines: just create a SplineImageView of the desired order and access interpolated values and derivatives at any real-valued coordinate

Image Processing:
- STL-style image processing algorithms with functors (e.g. arithmetic and algebraic operations, gamma correction, contrast adaptation, thresholding), arbitrary regions of interest using mask images
- image resizing using resampling, linear interpolation, spline interpolation etc.
- geometric transformations: rotation, mirroring, arbitrary affine transformations
- automated functor creation using expression templates
- color space conversions: RGB, sRGB, R'G'B', XYZ, L*a*b*, L*u*v*, Y'PbPr, Y'CbCr, Y'IQ, and Y'UV
- real and complex Fourier transforms in arbitrary dimensions, cosine and sine transforms (via fftw)
- noise normalization according to Förstner
- computation of the camera magnitude transfer function (MTF) via the slanted edge technique (ISO standard 12233)

Filters:
- 2-dimensional and separable convolution, Gaussian filters and their derivatives, Laplacian of Gaussian, sharpening etc.
- separable convolution and FFT-based convolution for arbitrary dimensional data
- resampling convolution (input and output image have different size)
- recursive filters (1st and 2nd order), exponential filters
- non-linear diffusion (adaptive filters), hourglass filter
- total-variation filtering and denoising (standard, higher-order, and adaptive methods)
- tensor image processing: structure tensor, boundary tensor, gradient energy tensor, linear and non-linear tensor smoothing, eigenvalue calculation etc. (2D and 3D)
- distance transform (Manhattan, Euclidean, Checker Board norms, 2D and 3D)
- morphological filters and median (2D and 3D)
- Loy/Zelinsky symmetry transform
- Gabor filters

Segmentation:
- edge detectors: Canny, zero crossings, Shen-Castan, boundary tensor
- corner detectors: corner response function, Beaudet, Rohr and Förstner corner detectors
- tensor based corner and junction operators
- region growing: seeded region growing, watershed algorithm

Image Analysis:
- connected components labeling (2D and 3D)
- detection of local minima/maxima (including plateaus, 2D and 3D)
- tensor-based image analysis (2D and 3D)
- powerful incremental computation of region and object statistics

3-dimensional Image Processing and Analysis:
- point-wise transformations, projections and expansions in arbitrarily high dimensions
- all functors (e.g. region statistics) readily apply to higher dimensional data as well
- separable convolution and FFT-based convolution filters, resizing, morphology, and Euclidean distance transform for arbitrary dimensional arrays (not just 3D)
- connected components labeling, seeded region growing, watershed algorithm for volume data

Machine Learning:
- random forest classifier with various tree building strategies
- variable importance, feature selection (based on random forest)
- unsupervised decomposition: PCA (principal component analysis) and pLSA (probabilistic latent semantic analysis)

Mathematical Tools:
- special functions (error function, splines of arbitrary order, integer square root, chi square distribution, elliptic integrals)
- random number generation
- rational and fixed point numbers
- quaternions
- polynomials and polynomial root finding
- matrix classes, linear algebra, solution of linear systems, eigensystem computation, singular value decomposition
- optimization: linear least squares, ridge regression, L1-constrained least squares (LASSO, non-negative LASSO, least angle regression), quadratic programming

Inter-language support:
- Python bindings in both directions (use Python arrays in C++, call VIGRA functions from Python)
- Matlab bindings of some functions
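A hedged sketch of the vigranumpy Python bindings mentioned above (assuming vigranumpy is installed; the file name and parameters are illustrative):

```python
# Gaussian smoothing followed by connected-components labelling with vigranumpy.
import numpy as np
import vigra

image = vigra.readImage("cells.png")[..., 0]               # first channel only
smoothed = vigra.filters.gaussianSmoothing(image, sigma=2.0)
binary = (smoothed > smoothed.mean()).astype(np.uint8)
labels = vigra.analysis.labelImageWithBackground(binary)   # 0 stays background
print(int(labels.max()), "connected components")
```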

Description

EnhanceEdges enhances or identifies edges in an image, which can improve object identification or other downstream image processing.

need a thumbnail
Description

This originally came from this module.

Currently it is available as the ilastik CellProfiler plugin (see this image.sc post for details).

need a thumbnail
Description

IdentifySecondaryObjects identifies objects (e.g., cells) using objects identified by another module (e.g., nuclei) as a starting point.

has topic
Description

DisplayScatterPlot plots the values for two measurements.

has function
need a thumbnail
Description

DisplayPlatemap displays a desired measurement in a plate map view.

Description

CorrectIlluminationCalculate calculates an illumination function that is used to correct uneven illumination/lighting/shading or to reduce uneven background in images.

has topic
Description

Crop crops or masks an image.

This module crops images into a rectangle, ellipse, an arbitrary shape provided by you, the shape of object(s) identified by an Identify module, or a shape created using a previous Crop module in the pipeline.

has function
need a thumbnail
Description

OverlayOutlines is a module from CellProfiler to place outlines of objects over a desired image.

need a thumbnail
Description

**RescaleIntensity** changes the intensity range of an image to your desired specifications.

This module lets you rescale the intensity of the input images by any of several methods. You should use caution when interpreting intensity and texture measurements derived from images that have been rescaled, because certain options for this module do not preserve the relative intensities from image to image.

has function
need a thumbnail
Description

This module can perform addition, subtraction, multiplication, division, or averaging of two or more image intensities, as well as inversion, log transform, or scaling by a constant for individual image intensities.

Description

Select a portion of a sequence in all 5 dimensions.

has function
Description

The CellProfiler FlagImage module allows you to assign a flag if an image meets certain measurement criteria that you specify (for example, if the image fails a quality control measurement). The value of the flag is 1 if the image meets the selected criteria (for example, if it fails QC), and 0 if it does not (if it passes QC).

has function
Description

Converts an image with multiple color channels to one or more grayscale images.

Description

Plots a histogram of the desired measurement.

Description

NuclearQuant is a QuantCenter module designed for cell nuclei detection and quantification in IHC-stained samples. NuclearQuant measures several morphological features besides stain intensity. The cell nuclei classification and the final score are calculated from the intensity score and the proportion score.

has topic
NuclearQuant
Description

Converts objects you have identified into an image.

Description

This module classifies objects into a number of different bins according to the value of a measurement (e.g., by size, intensity, shape). It reports how many objects fall into each class as well as the percentage of objects that fall into each class. The module asks you to select the measurement feature to be used to classify your objects and to specify the bins to use. It also requires you to have run a measurement or CalculateMath module prior to this module in the pipeline so that the measurement values can be used to classify the objects.
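The binning logic can be illustrated outside CellProfiler with a few lines of numpy (example values only; this is not the module's implementation):

```python
# Bin objects by one measurement (here: area) and report counts and percentages.
import numpy as np

areas = np.array([52, 130, 88, 300, 41, 220, 95])    # example per-object values
bin_edges = [0, 100, 200, np.inf]                    # three user-chosen bins

bin_index = np.digitize(areas, bin_edges) - 1        # 0-based bin per object
counts = np.bincount(bin_index, minlength=len(bin_edges) - 1)
percent = 100.0 * counts / counts.sum()
for i, (n, p) in enumerate(zip(counts, percent)):
    print(f"bin {i}: {n} objects ({p:.1f}%)")
```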

Description

Produces files that allow individual batches of images to be processed separately on a cluster of computers.

has topic
has function
Description

ExpandOrShrinkObjects expands or shrinks objects by a defined distance.

has function
need a thumbnail
Description

DisplayDensityPlot plots measurements as a two-dimensional density plot.

has function
need a thumbnail
Description

The interface will show the image that you selected as the guiding image, overlaid with colored outlines of the selected objects (or filled objects if you choose). This module allows you to remove or edit specific objects by pointing and clicking to select objects for removal or editing. Once editing is complete, the module displays the objects as originally identified (left) and the objects that remain after this module (right). More detailed Help is provided in the editing window via the ‘?’ button. The pipeline pauses once per processed image when it reaches this module. You must press the Done button to accept the selected objects and continue the pipeline.

Description

FilterObjects eliminates objects based on their measurements (e.g., area, shape, texture, intensity).

has function
need a thumbnail
Description

Produces a grid of desired specifications either manually, or automatically based on previously identified objects.

Description

EnhanceOrSuppressFeatures enhances or suppresses certain image features (such as speckles, ring shapes, and neurites), which can improve subsequent identification of objects.

need a thumbnail