Description
The method proposed in this paper combines multi-task learning and unsupervised domain adaptation for segmenting amoeboid cells in microscopy images. This end-to-end framework uses multi-task learning to isolate and segment clustered cells in low-contrast brightfield images, and simultaneously leverages deep domain adaptation to segment fluorescent cells without explicit pixel-level re-annotation of the data.
The entry point to the codebase is the main.py file. The user has the option to
- Train the network on their own dataset
- Load a pre-trained model and use that for inference on their own data
- Note: The provided pretrained model was trained (supervised) on 256x256 brightfield images and domain adapted to fluorescence data. Applying it to images of a different resolution may require fine-tuning; alternatively, inputs can be resized to 256x256 beforehand (see the sketch below). Inference results are saved as 'inference.png'.
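Since the pretrained model expects 256x256 inputs, one option is to resize images before running inference. Below is a minimal preprocessing sketch using Pillow; it is not part of the repository, and the directory names are placeholders to be adapted to your own data layout.

```python
# Minimal sketch (not part of this codebase): resize raw microscopy images to
# the 256x256 resolution expected by the provided pretrained model.
# INPUT_DIR and OUTPUT_DIR are hypothetical paths; adjust them as needed.
from pathlib import Path
from PIL import Image

INPUT_DIR = Path("my_images")        # folder of raw brightfield/fluorescence images
OUTPUT_DIR = Path("my_images_256")   # folder for the resized copies
OUTPUT_DIR.mkdir(exist_ok=True)

for img_path in INPUT_DIR.glob("*.png"):
    img = Image.open(img_path)
    # Bilinear resampling keeps intensity profiles reasonably smooth;
    # other resampling filters may also work.
    resized = img.resize((256, 256), Image.BILINEAR)
    resized.save(OUTPUT_DIR / img_path.name)
```

The resized images can then be passed to main.py for inference; for images whose aspect ratio or scale differs substantially from the training data, fine-tuning the model may still give better results than resizing alone.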