Publication detail

Self-supervised pretraining for transferable quantitative phase image cell segmentation

VIČAR, T.; CHMELÍK, J.; JAKUBÍČEK, R.; CHMELÍKOVÁ, L.; GUMULEC, J.; BALVAN, J.; PROVAZNÍK, I.; KOLÁŘ, R.

Original title

Self-supervised pretraining for transferable quantitative phase image cell segmentation

English title

Self-supervised pretraining for transferable quantitative phase image cell segmentation

Language

en

Original abstract

In this paper, a novel U-Net-based method for robust adherent cell segmentation in quantitative phase microscopy images is designed and optimised. We designed and evaluated four specific post-processing pipelines. To increase transferability to different cell types, a non-deep-learning transfer step with adjustable parameters is used in post-processing. Additionally, we propose a self-supervised pretraining technique using unlabelled data, in which the network is trained to reconstruct images degraded by multiple distortions; this improved the segmentation performance from 0.67 to 0.70 object-wise intersection over union. Moreover, we publish a new dataset of manually labelled images suitable for this task, together with the unlabelled data for self-supervised pretraining.

English abstract

In this paper, a novel U-Net-based method for robust adherent cell segmentation in quantitative phase microscopy images is designed and optimised. We designed and evaluated four specific post-processing pipelines. To increase transferability to different cell types, a non-deep-learning transfer step with adjustable parameters is used in post-processing. Additionally, we propose a self-supervised pretraining technique using unlabelled data, in which the network is trained to reconstruct images degraded by multiple distortions; this improved the segmentation performance from 0.67 to 0.70 object-wise intersection over union. Moreover, we publish a new dataset of manually labelled images suitable for this task, together with the unlabelled data for self-supervised pretraining.
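To illustrate the reconstruction-based self-supervised pretraining described in the abstract, the sketch below trains a small encoder-decoder in PyTorch to recover clean images from randomly distorted versions of unlabelled data. The distortions, network, and hyperparameters are illustrative assumptions only and do not reproduce the authors' exact setup; in the paper, the pretrained weights would initialise the segmentation U-Net before supervised fine-tuning.

# Minimal sketch of reconstruction-based self-supervised pretraining.
# Illustrative assumptions only: distortions, model, and hyperparameters
# are placeholders, not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_distortion(img: torch.Tensor) -> torch.Tensor:
    """Apply one randomly chosen distortion to a batch of images (B, 1, H, W)."""
    choice = torch.randint(0, 3, (1,)).item()
    if choice == 0:                                   # additive Gaussian noise
        return img + 0.1 * torch.randn_like(img)
    if choice == 1:                                   # coarse random masking
        mask = (torch.rand_like(img[..., ::8, ::8]) > 0.3).float()
        mask = F.interpolate(mask, size=img.shape[-2:], mode="nearest")
        return img * mask
    return F.avg_pool2d(img, 5, stride=1, padding=2)  # simple blurring

class TinyEncoderDecoder(nn.Module):
    """Stand-in encoder-decoder; in practice this would be the segmentation U-Net."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

model = TinyEncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Unlabelled quantitative phase images would be loaded here; random data
# keeps the sketch self-contained.
unlabelled = torch.rand(8, 1, 64, 64)

for step in range(10):                       # pretraining loop
    distorted = random_distortion(unlabelled)
    recon = model(distorted)
    loss = F.mse_loss(recon, unlabelled)     # reconstruct the undistorted image
    opt.zero_grad()
    loss.backward()
    opt.step()

# The pretrained encoder-decoder weights would then initialise the
# segmentation network before fine-tuning on the labelled dataset.

The reconstruction objective is what lets the plentiful unlabelled phase images contribute to training; only the final fine-tuning step needs the manually labelled dataset.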

Full text in the Digital Library

Documents

BibTeX


@article{BUT172596,
  author="Tomáš {Vičar} and Jiří {Chmelík} and Roman {Jakubíček} and Larisa {Chmelíková} and Jaromír {Gumulec} and Jan {Balvan} and Valentine {Provazník} and Radim {Kolář}",
  title="Self-supervised pretraining for transferable quantitative phase image cell segmentation",
  annote="In this paper, a novel U-Net-based method for robust adherent cell segmentation for quantitative phase microscopy image is designed and optimised. We designed and evaluated four specific post-processing pipelines. To increase the transferability to different cell types, non-deep learning transfer with adjustable parameters is used in the post-processing step. Additionally, we proposed a self-supervised pretraining technique using nonlabelled data, which is trained to reconstruct multiple image distortions and improved the segmentation performance from 0.67 to 0.70 of object-wise intersection over union. Moreover, we publish a new dataset of manually labelled images suitable for this task together with the unlabelled data for self-supervised pretraining.",
  address="Optica Publishing Group",
  chapter="172596",
  doi="10.1364/BOE.433212",
  howpublished="online",
  institution="Optica Publishing Group",
  number="10",
  volume="12",
  year="2021",
  month="september",
  pages="6514--6528",
  publisher="Optica Publishing Group",
  type="journal article in Web of Science"
}