[SIGGRAPH2022] Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning (CAST)

In this work, we tackle the challenging problem of arbitrary image style transfer using a novel style feature representation learning method. A suitable style representation, as a key component in image stylization tasks, is essential to achieving satisfactory results. Existing deep neural network-based approaches achieve reasonable results with guidance from second-order statistics, such as the Gram matrix of content features. However, they do not leverage sufficient style information, which results in artifacts such as local distortions and style inconsistency. To address these issues, we propose to learn style representation directly from image features instead of their second-order statistics, by analyzing the similarities and differences between multiple styles and considering the style distribution.
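
The contrastive idea described above can be illustrated with a short sketch: style embeddings of images sharing a style are pulled together, while embeddings from different styles are pushed apart with an InfoNCE-style objective. The function name, tensor shapes, and temperature below are illustrative assumptions, not the exact loss used in CAST; see the official repository linked below for the authors' implementation.

# Minimal sketch of a contrastive (InfoNCE-style) loss over style embeddings.
# Assumes `anchor`, `positive`, and `negatives` come from some style encoder;
# this is an illustrative example, not the CAST training code.
import torch
import torch.nn.functional as F

def contrastive_style_loss(anchor, positive, negatives, temperature=0.07):
    """Pull the anchor toward the positive (same style), push it from negatives.

    anchor:    (D,)   style embedding of a stylized image
    positive:  (D,)   style embedding of the reference style image
    negatives: (N, D) style embeddings of images in other styles
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Cosine similarities scaled by temperature.
    pos_logit = (anchor @ positive) / temperature             # scalar
    neg_logits = (negatives @ anchor) / temperature           # (N,)

    logits = torch.cat([pos_logit.unsqueeze(0), neg_logits])  # (1 + N,)
    target = torch.zeros(1, dtype=torch.long)                 # positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)

# Example usage with random 128-dimensional embeddings and 8 negatives.
if __name__ == "__main__":
    d = 128
    loss = contrastive_style_loss(torch.randn(d), torch.randn(d), torch.randn(8, d))
    print(loss.item())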

Paper: http://arxiv.org/abs/2205.09542
Code: https://github.com/zyxElsa/CAST_pytorch

Yuxin Zhang, Fan Tang, Weiming Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu
ACM Transactions on Graphics (Proc. of SIGGRAPH 2022).
