Component

A Component is an implementation of one or more image processing / analysis algorithms. A component alone does not solve a Bioimage Analysis problem; such problems are addressed by combining components into workflows.

Metrics Reloaded: how to select and use your metrics


The mission of Metrics Reloaded is to guide researchers in the selection of appropriate performance metrics for biomedical image analysis problems, and to provide a comprehensive online resource for metric-related information and pitfalls. The website offers resources such as a tool for selecting the most suitable metric, as well as tutorials on how to use and interpret metrics in image analysis.

Description

CellStitch provides a set of tools for obtaining a 3D segmentation from 2D segmentations: it reassembles the 2D labels obtained from cells in individual slices into unique 3D labels across slices. It is particularly robust to anisotropy and is an ideal companion to Cellpose 2D models or other 2D deep learning based models. One could also consider using it for cell tracking by overlap (using time as the third dimension).
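
The core idea, matching labels between consecutive slices by their overlap, can be illustrated with a short NumPy sketch. This is a generic greedy illustration of overlap-based stitching, not the CellStitch implementation itself; the function name and the min_overlap threshold are assumptions made for the example.

    import numpy as np

    def stitch_by_overlap(slices, min_overlap=0.25):
        # Relabel a stack of 2D label images (list of HxW integer arrays) so that
        # objects overlapping between consecutive slices share one 3D label.
        stack = [slices[0].copy()]
        next_label = stack[0].max() + 1
        for z in range(1, len(slices)):
            prev, cur = stack[-1], slices[z]
            out = np.zeros_like(cur)
            for lab in np.unique(cur):
                if lab == 0:          # 0 is background
                    continue
                mask = cur == lab
                # candidate labels in the previous (already stitched) slice
                cand, counts = np.unique(prev[mask], return_counts=True)
                counts = counts[cand != 0]
                cand = cand[cand != 0]
                if cand.size and counts.max() / mask.sum() >= min_overlap:
                    out[mask] = cand[counts.argmax()]   # continue the existing object
                else:
                    out[mask] = next_label              # start a new 3D object
                    next_label += 1
            stack.append(out)
        return np.stack(stack)        # Z x H x W labeled volume

With time as the third axis, the same loop would act as a simple overlap-based tracker.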

Description

Image segmentation and object detection performance measures

The goal of this package is to provide easy-to-use tools for evaluating the performance of segmentation methods in biomedical image analysis and beyond, and to facilitate the comparison of different methods by providing standardized implementations. The package currently supports only 2-D image data.
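
Two of the most common region-based measures of this kind, the Dice coefficient and the IoU (Jaccard index), can be computed directly from a pair of binary masks. The sketch below is a plain NumPy illustration of these measures, not the package's own API.

    import numpy as np

    def dice_and_iou(seg, ref):
        # Dice coefficient and IoU (Jaccard index) for two binary 2-D masks.
        seg, ref = seg.astype(bool), ref.astype(bool)
        inter = np.logical_and(seg, ref).sum()
        union = np.logical_or(seg, ref).sum()
        total = seg.sum() + ref.sum()
        dice = 2.0 * inter / total if total else 1.0   # empty masks count as a perfect match
        iou = inter / union if union else 1.0
        return dice, iou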

Description

MATLAB app to characterize nanoparticles imaged with super-resolution microscopy. nanoFeatures reads text and CSV files from the NIKON and ONI microscopes and from the ThunderSTORM Fiji plugin, clusters the localizations, filters them by size and sphericity, and finally outputs nanoparticle features such as size, aspect ratio, and number of localizations per cluster (total and for each channel).

GUI first tab to browse and input files, select input type and check extra filters if needed.
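
A rough Python analogue of this workflow is sketched below (the app itself is a MATLAB GUI): read a localization table, cluster the localizations with DBSCAN, and derive per-cluster features. The file name, column names, and clustering parameters are assumptions for the example, not values taken from nanoFeatures.

    import numpy as np
    import pandas as pd
    from sklearn.cluster import DBSCAN

    locs = pd.read_csv("localizations.csv")            # hypothetical input file
    xy = locs[["x [nm]", "y [nm]"]].to_numpy()         # ThunderSTORM-style column names assumed

    labels = DBSCAN(eps=50, min_samples=10).fit_predict(xy)   # cluster the localizations

    features = []
    for lab in np.unique(labels):
        if lab == -1:                                  # -1 marks noise points
            continue
        pts = xy[labels == lab]
        # approximate size and aspect ratio from the cluster's principal axes
        eigvals = np.sort(np.linalg.eigvalsh(np.cov(pts, rowvar=False)))[::-1]
        size = 4 * np.sqrt(eigvals[0])                 # rough diameter estimate (nm)
        aspect = np.sqrt(eigvals[0] / eigvals[1]) if eigvals[1] > 0 else np.inf
        features.append({"cluster": lab, "n_locs": len(pts),
                         "size_nm": size, "aspect_ratio": aspect})

    print(pd.DataFrame(features))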
Description

The method proposed in this paper is a robust combination of multi-task learning and unsupervised domain adaptation for segmenting amoeboid cells in microscopy images. This end-to-end framework provides a consolidated mechanism to harness the potential of multi-task learning to isolate and segment clustered cells from low-contrast brightfield images, and it simultaneously leverages deep domain adaptation to segment fluorescent cells without explicit pixel-level re-annotation of the data.

The entry point to the codebase is the main.py file. The user has the option to

  • Train the network on their own dataset
  • Load a pre-trained model and use it for inference on their own data (a rough inference sketch follows at the end of this entry)
  • Note: the provided pretrained model was trained on 256x256 images; results at other resolutions may require fine-tuning. This model is trained (supervised) on brightfield data and domain-adapted to fluorescence data. The results are saved as 'inference.png'.
daman
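
For the second option, a minimal PyTorch sketch of loading a pretrained model and saving the predicted mask is given below. The checkpoint file name, input image, single-channel sigmoid output, and pre-processing are assumptions made for the illustration and do not reflect the actual main.py interface of the daman repository.

    import numpy as np
    import torch
    from PIL import Image
    from torchvision import transforms

    # Assumed checkpoint containing the full pretrained model object.
    model = torch.load("pretrained_daman.pth", map_location="cpu")
    model.eval()

    # The pretrained model expects 256x256 inputs (see the note above).
    img = Image.open("cells.png").convert("L").resize((256, 256))   # hypothetical input image
    x = transforms.ToTensor()(img).unsqueeze(0)                     # 1 x 1 x 256 x 256 tensor

    with torch.no_grad():
        mask = torch.sigmoid(model(x))[0, 0]                        # assumed single-channel output

    # Threshold and save the predicted mask, mirroring the 'inference.png' output.
    Image.fromarray(((mask.numpy() > 0.5) * 255).astype(np.uint8)).save("inference.png")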