Guidelines while working with partial annotations in deep learning

U-Net has proven to be an effective architecture for semantic segmentation. Many variations of U-Net are available in the public domain, but most (if not all) require densely labeled masks for training. This means you need to annotate (label) every pixel in every training image. Dense labeling is feasible for simple binary problems, such as segmenting large objects against a background, but the process is very laborious and may not even be practical for multiclass problems.

What if you could work with partial labels, focusing your annotation effort on under-represented regions across multiple images? This way you use your annotation time efficiently, working only on regions that add new information to the model.

The deep learning tools on APEER can now handle partial labels for semantic segmentation. This lets you focus on labeling diverse areas from many images rather than annotating entire images. Please note that APEER is a cloud platform for image analysis that is free for academics.
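For reference, here is a minimal sketch (not APEER's own implementation) of one common way to train a segmentation network on partial labels in PyTorch: pixels that were never annotated are given a reserved label value and are excluded from the loss, so only the annotated regions contribute to the gradients. The reserved value 255 and the tensor shapes are assumptions for illustration.

import torch
import torch.nn as nn

UNLABELED = 255  # reserved value for pixels with no annotation (assumption)

# CrossEntropyLoss's ignore_index skips the reserved pixels, so only the
# partially annotated regions drive the gradient updates.
criterion = nn.CrossEntropyLoss(ignore_index=UNLABELED)

# logits: (batch, num_classes, H, W) from any segmentation network, e.g. a U-Net
# targets: (batch, H, W) with class indices, and UNLABELED where not annotated
logits = torch.randn(2, 3, 64, 64, requires_grad=True)
targets = torch.full((2, 64, 64), UNLABELED, dtype=torch.long)
targets[:, 10:30, 10:30] = 1  # a small annotated patch

loss = criterion(logits, targets)  # computed only over the annotated pixels
loss.backward()
print(float(loss))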

This video examines two datasets and walks through the process of iteratively annotating regions while reviewing the resulting segmentations, providing insights into annotation strategies for partial labeling.

You can sign up for your APEER account at:
https://www.apeer.com/
