ZEN Intellesis Trainable Segmentation

Description

Perform Advanced Image Segmentation and Processing across Microscopy Methods
 

Overcome the bottleneck of segmenting your Materials Science images and use ZEISS ZEN Intellesis, a module of the digital imaging software ZEISS ZEN.
Independent of the microscope you used to acquire your image data, the algorithm of ZEN Intellesis will provide you with a model for automated segmentation after training. Reuse the model on the same kind of data and benefit from consistent and repeatable segmentation that is not influenced by the operator.
ZEN Intellesis offers a straightforward, easy-to-use workflow that enables every microscope user to perform advanced segmentation tasks rapidly.

Highlights

  • Simple User Interface for Labelling and Training
  • Integration into ZEN Measurement Framework
  • Support for Multi-dimensional Datasets
  • Use powerful machine learning algorithms for pixel-based classification
  • Real Multi-Channel Feature Extraction
  • Engineered Feature Set and Deep Feature Extraction on GPU
  • IP function for creating masks and OAD-enabled for advanced automation
  • Powered by ZEN and Python3 using Anaconda Python Distribution
  • Just label objects, train your model and segment your images – there is no need for expert image analysis skills
  • Segment any kind of image data in 2D or 3D. Use data from light, electron, ion or x-ray microscopy, or your mobile phone
  • Speed up your segmentation task by built-in parallelization and GPU (graphics processing unit) acceleration
  • Increase tolerance to low signal-to-noise and artifact-ridden data
  • Seamless integration in ZEN framework and image analysis wizard
  • Data agnostic
  • Compatibility with 2D, 3D and up to 6D datasets
  • Export of multi-channel or labeled images
  • Exchange and sharing of models
  • GPU computing
  • Large data handling
  • Common and well-established machine learning algorithms
  • SW Trial License available

CEM

Description

Computer-assisted Evaluation of Myelin formation (CEM) is a collection of tools designed to automate myelin quantification. It requires user input to choose the best threshold values. Myelin is calculated as the overlap between the neuronal signal and the oligodendrocyte signal. Results are given as pixel counts and percentages.

CEM runs as an ImageJ plugin with an optional MATLAB extension to remove cell bodies. More details are published in Kerman et al. 2015, Development; the Supplemental Material includes a detailed user manual and the download link.
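The underlying computation can be sketched in a few lines. The following Python/numpy fragment is a conceptual illustration only, not the CEM plugin itself; the function name is invented here and the thresholds stand in for the user-chosen values.

```python
import numpy as np

def myelin_overlap(neuron_img, oligo_img, neuron_thresh, oligo_thresh):
    """Estimate myelin as the overlap of thresholded neuronal and
    oligodendrocyte channels; report pixel counts and percentages."""
    neuron_mask = neuron_img >= neuron_thresh      # neuronal signal
    oligo_mask = oligo_img >= oligo_thresh         # oligodendrocyte signal
    myelin_mask = neuron_mask & oligo_mask         # overlap = putative myelin
    neuron_px = int(neuron_mask.sum())
    myelin_px = int(myelin_mask.sum())
    return {
        "neuron_px": neuron_px,
        "oligo_px": int(oligo_mask.sum()),
        "myelin_px": myelin_px,
        "myelin_pct_of_neuron": 100.0 * myelin_px / max(neuron_px, 1),
    }
```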

Myelin

Tensorflow

Description

"An open source machine learning framework for everyone "

TensorFlow™ is an open source software library for high-performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning, and its flexible numerical computation core is used across many other scientific domains.
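As a rough illustration of TensorFlow used as a general numerical computation library, the sketch below (assuming TensorFlow 2.x with eager execution) runs a matrix multiplication and an automatic-differentiation step; the same code executes on CPU or GPU without changes.

```python
import tensorflow as tf

# dense linear algebra on whatever device is available (CPU/GPU)
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0], [6.0]])
product = tf.matmul(a, b)

# automatic differentiation of y = x**2 at x = 3
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
grad = tape.gradient(y, x)

print(product.numpy(), grad.numpy())   # [[17.], [39.]] and 6.0
```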

has topic
TensorFlow

MIPAV

Description

The MIPAV (Medical Image Processing, Analysis, and Visualization) application enables quantitative analysis and visualization of medical images of numerous modalities such as PET, MRI, CT, or microscopy. Using MIPAV's standard user-interface and analysis tools, researchers at remote sites (via the internet) can easily share research data and analyses, thereby enhancing their ability to research, diagnose, monitor, and treat medical disorders.

wnd-charm

Description

WND-CHARM is a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provides classification accuracy comparable to state-of-the-art task-specific image classifiers. WND-CHARM can extract up to ~3,000 generic image descriptors (features) including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are derived from the raw image, transforms of the image, and compound transforms of the image (transforms of transforms). The features are filtered and weighted depending on their effectiveness in discriminating between a set of predefined image classes (the training set). These features are then used to classify test images based on their similarity to the training classes. This classifier was tested on a wide variety of imaging problems including biological and medical image classification using several imaging modalities, face recognition, and other pattern recognition tasks. WND-CHARM is an acronym that stands for "Weighted Neighbor Distance using Compound Hierarchy of Algorithms Representing Morphology."
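The weighted-neighbor-distance idea behind the name can be sketched conceptually in a few lines of Python. This is an illustration of the principle only (a Fisher-score-like feature weighting and a nearest-neighbor rule), not the actual WND-CHARM implementation or its feature bank.

```python
import numpy as np

def fisher_weights(X, y):
    """Weight each feature by between-class variance / within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - overall_mean) ** 2 for c in classes)
    within = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    return between / within

def classify(X_train, y_train, x_test, weights):
    """Assign the class of the nearest training sample under the weighted distance."""
    d = np.sqrt((weights * (X_train - x_test) ** 2).sum(axis=1))
    return y_train[np.argmin(d)]
```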

Generated features

Analyze Particles

Description

An object detection function in ImageJ [Analyze > Analyze Particles...]. It can simply be used for counting the number of cells, but can also handle more complex tasks.

Jython Snippet
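The following is a minimal sketch of driving Particle Analysis from a Jython script; it assumes an already thresholded (binary) image is open, and the size range and option flags are illustrative values only.

```python
from ij import IJ
from ij.measure import ResultsTable, Measurements
from ij.plugin.filter import ParticleAnalyzer

imp = IJ.getImage()                                   # currently active, thresholded image
rt = ResultsTable()
options = ParticleAnalyzer.SHOW_OUTLINES | ParticleAnalyzer.EXCLUDE_EDGE_PARTICLES
measurements = Measurements.AREA | Measurements.CENTROID
pa = ParticleAnalyzer(options, measurements, rt, 50, 10000)  # min/max particle size (px^2)
pa.analyze(imp)
IJ.log("Detected particles: %d" % rt.getCounter())
```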


Pixel Classification using ilastik

Description

This workflow classifies, or segments, the pixels of an image given user annotations. It is especially suited when the objects of interest are visually distinct (in brightness, color, or texture) from their surroundings. Users can iteratively select pixel features and provide pixel annotations through a live visualization of the selected feature values and the current prediction. Once users are satisfied, the workflow predicts the remaining unprocessed image regions or new images (as batch processing). Users can export the selected features, annotations, predicted classification probabilities, a simple segmentation, and more, as images in various formats. This workflow often serves as one of the first steps for other workflows offered by ilastik, such as object classification and automatic tracking.
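For the batch-processing step, ilastik can also be driven headlessly from the command line. The Python fragment below is a sketch only, built on ilastik's documented --headless interface; the install path, project name, and input file names are placeholders.

```python
import subprocess

subprocess.run([
    "/opt/ilastik/run_ilastik.sh",            # placeholder install location
    "--headless",
    "--project=pixel_classification.ilp",     # project trained interactively beforehand
    "--export_source=Simple Segmentation",    # or "Probabilities"
    "--output_format=tiff",
    "new_image_01.tif",                       # unseen images to predict in batch
    "new_image_02.tif",
], check=True)
```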

Mahotas

Description

This library provides numpy-based infrastructure and functions for image processing with a focus on bioimage informatics. It offers image filtering and morphological processing as well as feature computation (both image-level features such as Haralick texture features and local features such as SURF). These can be used with other Python-based libraries for machine learning to build a complete analysis pipeline.

Mahotas is appropriate for users comfortable with programming or builders of end-user tools.
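A short, self-contained sketch of a typical Mahotas pipeline follows (synthetic data is used so the example runs without an input file): smooth, threshold, label connected components, and compute image-level Haralick texture features.

```python
import numpy as np
import mahotas as mh

# synthetic test image: a few bright squares on a noisy dark background
rng = np.random.RandomState(0)
img = np.zeros((128, 128))
for y, x in rng.randint(16, 112, size=(5, 2)):
    img[y - 5:y + 5, x - 5:x + 5] = 200.0
img += rng.normal(0, 10, img.shape)

smoothed = mh.gaussian_filter(img, 2.0)        # Gaussian smoothing
binary = smoothed > smoothed.mean()            # simple global threshold
labeled, n_objects = mh.label(binary)          # connected-component labelling
print("found", n_objects, "objects")

# image-level texture descriptors: 13 Haralick features, averaged over directions
texture = mh.features.haralick(np.clip(img, 0, 255).astype(np.uint8)).mean(axis=0)
```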

Strengths

The major strengths are speed and the quality of the documentation. Almost all of the functionality is implemented in C++ and works for multiple dimensions. It can be used with other Python packages, which provide additional functionality.

Mahotas and all packages on which it relies are open-source.

3D Objects Counter

Description
  • Counts the number of 3D objects in a stack.
  • Quantifies for each found object the following parameters:
    • 3D intensity-related measurements (with possible redirection to an image holding the actual intensity values to be measured, for example for two-channel measurements)
    • Volume, surface and shape factor measurements, etc.
  • Generates result representations such as:
    • Objects' map;
    • Surface voxels' map;
    • Centroids' map;
    • Centres of masses' map.

Like ImageJ's “Analyze Particles” function, 3D-OC has a “redirect to” option, allowing one image to be used as a mask to quantify intensity-related parameters on a second image. Unlike Analyze Particles, however, it includes a thresholding option, meaning that you can start from a grey-level stack rather than a binary mask.

To use it, first set the list of measurements by editing 3D OC Options. Both 3D Objects Counter and 3D OC Options are now in the default Fiji "Analyze" menu.
