Rising Stars #7: Zhi Huang (Stanford) - A Vision-Language Model for Pathology using Medical Twitter

  • Alaa Lab
  • 2023-10-15
  • 510

Video description

Abstract:

The lack of annotated, publicly available medical images is a major barrier to computational research and educational innovation. At the same time, many de-identified images and much knowledge are shared by clinicians on public forums such as medical Twitter. Here we harness these crowd platforms to curate OpenPath, a large dataset of 208,414 pathology images paired with natural language descriptions. We demonstrate the value of this resource by developing pathology language–image pretraining (PLIP), a multimodal artificial intelligence with both image and text understanding, trained on OpenPath. PLIP achieves state-of-the-art performance in classifying new pathology images across four external datasets: for zero-shot classification, PLIP achieves F1 scores of 0.565–0.832, compared with 0.030–0.481 for the previous contrastive language–image pretrained model. Training a simple supervised classifier on top of PLIP embeddings also yields a 2.5% improvement in F1 scores compared with using other supervised model embeddings. Moreover, PLIP enables users to retrieve similar cases by either image or natural-language search, greatly facilitating knowledge sharing. Our approach demonstrates that publicly shared medical information is a tremendous resource that can be harnessed to develop medical artificial intelligence for enhancing diagnosis, knowledge sharing and education.
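
Since the abstract describes PLIP as a CLIP-style contrastive language–image model, zero-shot classification amounts to scoring an image against a set of candidate text prompts. Below is a minimal sketch of that idea using the Hugging Face transformers CLIP classes; the checkpoint identifier "vinid/plip", the image path, and the label prompts are illustrative assumptions, not details taken from the talk.

# Minimal zero-shot classification sketch with a CLIP-style pathology model.
# The checkpoint id below is an assumed Hugging Face identifier for PLIP
# weights; the image path and label prompts are placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")        # assumed checkpoint id
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("tissue_patch.png")                 # any pathology image patch
prompts = [
    "an H&E image of benign tissue",
    "an H&E image of malignant tumor",
]

# Encode the image and the candidate prompts together; the image-text
# similarity scores act as zero-shot class logits.
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)[0]

for label, p in zip(prompts, probs.tolist()):
    print(f"{p:.3f}  {label}")

The supervised comparison mentioned in the abstract (a simple classifier on PLIP embeddings) would follow the same pattern: embed each patch with model.get_image_features(...) and fit, for example, a logistic-regression classifier on those vectors.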

Bio:

Zhi Huang is a postdoctoral fellow at Stanford University. He received his Ph.D. in Electrical and Computer Engineering (ECE) from Purdue University in August 2021. Prior to that, he received his Bachelor of Science degree in Automation (BS-MS direct-entry class) from the Xi'an Jiaotong University School of Electronic and Information Engineering. His background is in the areas of Artificial Intelligence, Digital Pathology, and Computational Biology. From May 2019 to August 2019, he was a Research Intern at Philips Research North America.

Related publication:

Huang, Z.*, Bianchi, F.*, Yuksekgonul, M., Montine, T. J., & Zou, J. (2023). A visual–language foundation model for pathology image analysis using medical Twitter. Nature Medicine, 1-10. (Nature Medicine September cover story) (*: equal contribution)
