TEM ExosomeAnalyzer

Description

TEM ExosomeAnalyzer is a program for automatic and semi-automatic detection of extracellular vesicles (EVs), such as exosomes, or similar objects in 2D images from transmission electron microscopy (TEM). The program detects the EVs, finds their boundaries, and reports information about their size and shape.

The software was developed as part of project MUNI/M/1050/2013 and supported by the Grant Agency of Masaryk University.

The EVs are detected based on shape and edge-contrast criteria. The exact shapes of the EVs are then segmented using a watershed-based approach.
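The seed-plus-watershed idea described above can be sketched with scikit-image. This is only a hypothetical, simplified analogue of the general technique, not TEM ExosomeAnalyzer's actual code:

    # Minimal sketch of seeded watershed segmentation of roughly round vesicles
    # in a 2D grayscale TEM image (scikit-image stand-in, not the program's code).
    from scipy import ndimage as ndi
    from skimage import filters, measure, morphology, segmentation

    def segment_vesicles(image):
        """Detect dark, roughly round objects and refine their outlines by watershed."""
        smoothed = filters.gaussian(image, sigma=2)
        # Rough foreground mask (assumes EVs darker than the background).
        mask = smoothed < filters.threshold_otsu(smoothed)
        mask = morphology.remove_small_objects(mask, min_size=100)
        # Seeds: one marker per vesicle candidate, from distance-transform maxima.
        distance = ndi.distance_transform_edt(mask)
        seeds, _ = ndi.label(morphology.h_maxima(distance, h=2))
        # Watershed on the gradient image recovers the exact boundaries.
        labels = segmentation.watershed(filters.sobel(smoothed), markers=seeds, mask=mask)
        # Report size and shape descriptors for each detected vesicle.
        return labels, measure.regionprops_table(labels, properties=("area", "eccentricity"))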

With proper parameter settings, the program can process even images with EVs both lighter and darker than the background, or images containing artifacts or precipitated stain. If the fully automatic processing fails to produce correct results, the program can be used semi-automatically, letting the user adjust the detection seeds during the intermediate steps, or even draw the whole segmentation manually.

screen capture from exosomeAnalyzer

Paintera

Description

Paintera is a general visualization tool for 3D volumetric data and proof-reading in segmentation/reconstruction, with a primary focus on neuron reconstruction from electron micrographs in connectomics. It supports:

  •  Views of orthogonal 2D cross-sections of the data at arbitrary angles and zoom levels
  •  Mipmaps for efficient display of arbitrarily large data at arbitrary scale levels
  •  Label data
    •  Painting
    •  Manual agglomeration
    •  3D visualization as polygon meshes
      •  Meshes for each mipmap level
      •  Mesh generation on the fly via marching cubes to incorporate painted labels and agglomerations in the 3D visualization. Marching cubes is parallelized over small blocks, and only the relevant blocks are considered (a huge speed-up for sparse label data).

Paintera is implemented in Java and makes extensive use of the JavaFX UI framework.
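Paintera's blockwise, on-the-fly marching-cubes strategy can be illustrated with a small conceptual sketch. Since Paintera itself is written in Java, the Python version below (using scikit-image's marching_cubes as an assumed stand-in) only shows the sparse per-block idea:

    # Conceptual sketch of blockwise marching cubes for sparse label data
    # (Python/scikit-image stand-in; not Paintera's Java implementation).
    import numpy as np
    from skimage.measure import marching_cubes

    def label_mesh_blockwise(labels, label_id, block=(64, 64, 64)):
        """Mesh one label by running marching cubes only on blocks that contain it."""
        pieces = []
        for z in range(0, labels.shape[0], block[0]):
            for y in range(0, labels.shape[1], block[1]):
                for x in range(0, labels.shape[2], block[2]):
                    sub = labels[z:z + block[0], y:y + block[1], x:x + block[2]]
                    mask = sub == label_id
                    # Skip blocks without a surface crossing: the big win for sparse labels.
                    if not mask.any() or mask.all():
                        continue
                    verts, faces, _, _ = marching_cubes(mask.astype(np.uint8), level=0.5)
                    verts += np.array([z, y, x], dtype=verts.dtype)  # block -> global coordinates
                    pieces.append((verts, faces))
        return pieces  # per-block meshes; seams between blocks are ignored in this sketch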

Paintera screenshot

ZEN Intellesis Trainable Segmentation

Description

Perform Advanced Image Segmentation and Processing across Microscopy Methods
 

Overcome the bottleneck of segmenting your Materials Science images and use ZEISS ZEN Intellesis, a module of the digital imaging software ZEISS ZEN.
Independent of the microscope you used to acquire your image data, the algorithm of ZEN Intellesis will provide you with a model for automated segmentation after training. Reuse the model on the same kind of data and benefit from consistent and repeatable segmentation that is not influenced by the operator.
ZEN Intellesis offers a straightforward, easy-to-use workflow that enables every microscope user to perform advanced segmentation tasks rapidly.

Highlights

  • Simple User Interface for Labelling and Training
  • Integration into ZEN Measurement Framework
  • Support for Multi-dimensional Datasets
  • Use powerful machine learning algorithms for pixel-based classification
  • Real Multi-Channel Feature Extraction
  • Engineered Feature Set and Deep Feature Extraction on GPU
  • IP function for creating masks and OAD-enabled for advanced automation
  • Powered by ZEN and Python3 using Anaconda Python Distribution
  • Just label objects, train your model and segment your images – there is no need for expert image analysis skills
  • Segment any kind of image data in 2D or 3D. Use data from light, electron, ion or x-ray microscopy, or your mobile phone
  • Speed up your segmentation task by built-in parallelization and GPU (graphics processing unit) acceleration
  • Increase tolerance to low signal-to-noise and artifact-ridden data
  • Seamless integration in ZEN framework and image analysis wizard
  • Data agnostic
  • Compatibility with 2D, 3D and up to 6D datasets
  • Export of multi-channel or labeled images
  • Exchange and sharing of models
  • GPU computing
  • Large data handling
  • Common and well-established machine learning algorithms
  • SW Trial License available
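ZEN Intellesis itself is proprietary, but the label-train-segment workflow described above (pixel-wise classification on extracted features) can be sketched roughly with scikit-image and scikit-learn. The feature set and classifier below are illustrative assumptions, not the ZEISS pipeline:

    # Rough sketch of trainable pixel classification (illustrative only;
    # not the ZEN Intellesis implementation).
    import numpy as np
    from skimage import filters
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(image):
        """Stack simple per-pixel features: raw intensity, edges, smoothed intensity."""
        return np.stack([image,
                         filters.sobel(image),
                         filters.gaussian(image, sigma=2)], axis=-1)

    def train(image, labels):
        """labels: 0 = unlabeled, 1..N = classes scribbled by the user."""
        feats = pixel_features(image)
        annotated = labels > 0
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(feats[annotated], labels[annotated])
        return clf

    def segment(clf, image):
        """Reuse the trained model on new images of the same kind."""
        feats = pixel_features(image)
        return clf.predict(feats.reshape(-1, feats.shape[-1])).reshape(image.shape)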

pystackreg

Description

Python/C++ port of the ImageJ extension TurboReg/StackReg written by Philippe Thevenaz/EPFL.

A Python extension for the automatic alignment of a source image or a stack (movie) to a target image/reference frame.
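Typical usage, following the examples in the pystackreg documentation (the file names and the choice of a rigid-body transform with a 'previous'-frame reference are just one possible configuration):

    # Align every frame of a stack to the previous frame with a rigid-body transform.
    import tifffile
    from pystackreg import StackReg

    stack = tifffile.imread('movie.tif')     # shape: (frames, height, width)
    sr = StackReg(StackReg.RIGID_BODY)       # TRANSLATION, AFFINE, BILINEAR, ... also available
    registered = sr.register_transform_stack(stack, reference='previous')
    tifffile.imwrite('movie_registered.tif', registered)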


cvMatch_Template

Description

It implements the template matching function from the OpenCV library; the Java interface to OpenCV is provided through the javacv library. It is quite similar to the existing template matching plugin but runs much faster, and users can choose among six matching methods:

1. Squared difference
2. Normalized squared difference
3. Cross-correlation
4. Normalized cross-correlation
5. Correlation coefficient
6. Normalized correlation coefficient

The detailed algorithms can be found in the OpenCV documentation.

cvMatch_Template searches for a specific object (image pattern) in an image of interest using the user-specified method.
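The same six methods are exposed by OpenCV's own Python bindings, so a minimal plugin-independent equivalent (file names here are placeholders) looks like this:

    # Template matching with OpenCV; the six methods mirror the plugin's options.
    import cv2

    image = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
    template = cv2.imread('pattern.png', cv2.IMREAD_GRAYSCALE)

    method = cv2.TM_CCOEFF_NORMED  # or TM_SQDIFF, TM_SQDIFF_NORMED, TM_CCORR,
                                   #    TM_CCORR_NORMED, TM_CCOEFF
    result = cv2.matchTemplate(image, template, method)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

    # For the squared-difference methods the best match is the minimum; otherwise the maximum.
    top_left = min_loc if method in (cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED) else max_loc
    height, width = template.shape
    print('best match at', top_left, 'template size', (width, height))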

Drishti

Description

Drishti (from the Sanskrit word for "vision" or "insight") is a multi-platform, open-source volume exploration and presentation tool, written for visualizing tomography data, electron microscopy data, and the like.

Drishti

Z-spacing correction for Fiji

Description

Estimate the positions and spacing between sections (or at local points) of three-dimensional image data. This method may be applied to any imaging modality that acquires three-dimensional data as a stack of two-dimensional sections. We provide plugins for both Fiji and TrakEM2.
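The input to such a method is essentially a matrix of pairwise section similarities; positions are then estimated so that similarity decays smoothly with the inferred distance. A much-simplified sketch of the similarity-matrix step (not the plugin's actual estimation code, which also fits a similarity-versus-distance model) could look like:

    # Simplified sketch: pairwise section similarity as the input to z-position estimation.
    import numpy as np

    def similarity_matrix(stack, max_offset=5):
        """Normalized cross-correlation between sections up to max_offset apart."""
        n = stack.shape[0]
        sim = np.full((n, n), np.nan)
        standardized = [(s - s.mean()) / s.std() for s in stack.astype(float)]
        for i in range(n):
            for j in range(i, min(i + max_offset + 1, n)):
                sim[i, j] = sim[j, i] = np.mean(standardized[i] * standardized[j])
        return sim  # deviations from a smooth decay indicate incorrect spacing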


Isotropic Super-Resolution for EM

Description

Super-resolve anisotropic EM data along the low-resolution axis with deep learning.
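A common scheme in this line of work generates training pairs from the data itself by downsampling a high-resolution lateral axis to mimic the anisotropy. The sketch below shows only that pair-generation step and is an assumption about the general approach, not necessarily this tool's exact pipeline:

    # Sketch of training-pair generation for isotropic super-resolution:
    # downsample a high-resolution lateral axis by the anisotropy factor so that a
    # network can learn to restore it (assumed scheme, not this tool's exact code).
    import numpy as np

    def make_training_pairs(volume, factor=4):
        """volume: (z, y, x) EM stack with x/y at high resolution and z at low resolution."""
        pairs = []
        for z in range(volume.shape[0]):
            target = volume[z]               # high-resolution 2D section (y, x)
            lowres = target[:, ::factor]     # simulate the anisotropy along x
            # Network input: the low-resolution section upsampled back to the target size.
            upsampled = np.repeat(lowres, factor, axis=1)[:, :target.shape[1]]
            pairs.append((upsampled, target))
        return pairs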

 


McLuigi

Description

A multicut workflow for large connectomics data. It uses luigi for pipelining and caching processing steps. Most of the computations are done out of core, using HDF5 as the backend and implementations from nifty.
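The luigi pipelining and caching pattern referred to above looks roughly like this; the task names, file names, and placeholder computations are made up for illustration and are not McLuigi's actual code:

    # Minimal luigi pattern: each step declares its dependency and caches its result
    # as a file target, so already-completed steps are skipped on re-runs.
    # (Illustrative only; not McLuigi's actual tasks.)
    import h5py
    import luigi
    import numpy as np

    class ComputeEdgeFeatures(luigi.Task):        # hypothetical step name
        def output(self):
            return luigi.LocalTarget('edge_features.h5')

        def run(self):
            features = np.random.rand(1000, 10)   # placeholder computation
            with h5py.File(self.output().path, 'w') as f:
                f.create_dataset('features', data=features)

    class SolveMulticut(luigi.Task):              # hypothetical step name
        def requires(self):
            return ComputeEdgeFeatures()

        def output(self):
            return luigi.LocalTarget('multicut_labels.h5')

        def run(self):
            with h5py.File(self.input().path, 'r') as f:
                features = f['features'][:]
            # ... the real workflow would solve the multicut problem here ...
            with h5py.File(self.output().path, 'w') as f:
                f.create_dataset('labels', data=features.argmax(axis=1))

    if __name__ == '__main__':
        luigi.build([SolveMulticut()], local_scheduler=True)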

SuRVoS

Description

SuRVoS: Super-Region Volume Segmentation workbench

A volume is first partitioned into Super-Regions (superpixels or supervoxels) and then segmented interactively by the user, who provides training annotations. SuRVoS can then learn from these annotations and extend them to the whole volume.
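The super-region idea (classify whole superpixels/supervoxels from sparse user annotations rather than individual voxels) can be sketched as follows. SLIC supervoxels and a random forest are assumed stand-ins here, not SuRVoS's exact components:

    # Sketch of super-region segmentation: partition into supervoxels, train on the few
    # regions the user annotated, then label every region (stand-in components only).
    import numpy as np
    from skimage.segmentation import slic
    from sklearn.ensemble import RandomForestClassifier

    def segment_with_superregions(volume, annotations, n_regions=5000):
        """annotations: same shape as volume; 0 = unlabeled, 1..N = user classes."""
        volume = volume.astype(float)
        regions = slic(volume, n_segments=n_regions, compactness=0.1, channel_axis=None)
        n = regions.max() + 1
        # Simple per-region features: mean and standard deviation of intensity.
        counts = np.maximum(np.bincount(regions.ravel(), minlength=n), 1)
        means = np.bincount(regions.ravel(), weights=volume.ravel(), minlength=n) / counts
        sq = np.bincount(regions.ravel(), weights=volume.ravel() ** 2, minlength=n) / counts
        feats = np.stack([means, np.sqrt(np.maximum(sq - means ** 2, 0))], axis=1)
        # A region inherits the class of any user scribble that falls inside it.
        region_labels = np.zeros(n, dtype=int)
        region_labels[regions[annotations > 0]] = annotations[annotations > 0]
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(feats[region_labels > 0], region_labels[region_labels > 0])
        # Extend the per-region predictions back to every voxel of the volume.
        return clf.predict(feats)[regions]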

User interface of SuRVoS showing example annotation on soft x-ray tomography data