Shikun Liu | Vision-Language Reasoning with Multi-Modal Experts
Sponsored by Evolution AI: https://www.evolution.ai
Abstract: Recent vision-language models have shown impressive multi-modal generation capabilities. However, they typically require training huge models on massive datasets. As a more scalable alternative, we introduce Prismer, a data- and parameter-efficient vision-language model that leverages an ensemble of domain experts. Prismer requires training only a small number of components, with the majority of network weights inherited from readily available, pre-trained domain experts and kept frozen during training. By leveraging experts from a wide range of domains, we show that Prismer can efficiently pool this expert knowledge and adapt it to various vision-language reasoning tasks. In our experiments, we show that Prismer achieves fine-tuned and few-shot learning performance competitive with current state-of-the-art models, whilst requiring up to two orders of magnitude less training data.
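
To make the training setup concrete, here is a minimal sketch of the general idea, not the actual Prismer code: all class, method, and parameter names below are hypothetical, and it assumes each expert maps an image to a fixed-size feature vector. It shows how frozen, pre-trained experts can feed a small set of trainable components in PyTorch:

```python
import torch
import torch.nn as nn

class ExpertEnsembleVLM(nn.Module):
    """Prismer-style setup (illustrative only): frozen pre-trained domain
    experts feed a small set of trainable adaptors and a fusion layer."""

    def __init__(self, experts: dict, hidden_dim: int = 768):
        super().__init__()
        # Each expert is assumed to map an image to a (batch, hidden_dim) feature.
        self.experts = nn.ModuleDict(experts)
        # Inherit the expert weights as-is and keep them frozen during training.
        for expert in self.experts.values():
            for param in expert.parameters():
                param.requires_grad = False
        # The only trainable components: one small adaptor per expert
        # plus a single fusion layer that pools the expert features.
        self.adaptors = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, hidden_dim) for name in experts}
        )
        self.fusion = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=8, batch_first=True
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Expert outputs need no gradients; only the adaptors and the
        # fusion layer receive updates.
        with torch.no_grad():
            expert_feats = {name: e(image) for name, e in self.experts.items()}
        # Treat each expert's projected output as one token and fuse them.
        tokens = torch.stack(
            [self.adaptors[name](f) for name, f in expert_feats.items()], dim=1
        )  # (batch, num_experts, hidden_dim)
        return self.fusion(tokens)
```

Only the adaptor and fusion parameters would then be handed to the optimiser, e.g. torch.optim.AdamW(p for p in model.parameters() if p.requires_grad), which is what makes this kind of design parameter-efficient.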
Speaker bio: Shikun Liu is a fourth-year PhD student at the Dyson Robotics Lab at Imperial College London, co-advised by Prof. Andrew Davison and Prof. Edward Johns. Shikun's main research goal is to develop general-purpose multi-task and multi-modal learning systems. To that end, his work has broadly concerned the study of multi-task relationships, the design of multi-task and auxiliary learning methods, and self- and semi-supervised learning frameworks.
