Data Parallelism Using PyTorch DDP | NVAITC Webinar

Learn how to do distributed data-parallel training using PyTorch DDP

Distributed Data Parallel (DDP) implements data parallelism at the module level and can run across multiple machines. Applications using DDP should spawn multiple processes and create a single DDP instance per process.
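As a rough sketch of that per-process setup (assuming a single node with one GPU per process, launched via torchrun; these details are not specified in this description):

```python
# Minimal per-process DDP setup sketch; launch with:
#   torchrun --nproc_per_node=<num_gpus> train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; each process holds its own replica.
    model = torch.nn.Linear(10, 10).cuda(local_rank)

    # Exactly one DDP instance per process, wrapping the local replica.
    ddp_model = DDP(model, device_ids=[local_rank])

    # ... training loop goes here ...

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```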

DDP uses collective communications from the torch.distributed package to synchronize gradients and buffers across processes. This video shows how to integrate PyTorch DDP with torchvision and NVIDIA DALI data pipelines.
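As a hedged illustration of the torchvision side (the dataset, model, and hyperparameters below are illustrative assumptions, not the webinar's exact code), a DistributedSampler shards the data so each process trains on a distinct subset, and loss.backward() triggers the NCCL all-reduce of gradients:

```python
# Sketch of a DDP training loop with a torchvision dataset; assumes
# init_process_group and local_rank from the setup sketch above.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
import torchvision
from torchvision import transforms


def train(local_rank):
    model = torchvision.models.resnet18(num_classes=10).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])

    dataset = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True,
        transform=transforms.ToTensor())
    # DistributedSampler gives each rank a disjoint shard of the dataset.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    loss_fn = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for images, labels in loader:
            images = images.cuda(local_rank)
            labels = labels.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(ddp_model(images), labels)
            # backward() fires DDP's gradient all-reduce across ranks.
            loss.backward()
            optimizer.step()
```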

Join the NVIDIA Developer Program: https://nvda.ws/3OhiXfl
Read and subscribe to the NVIDIA Technical Blog: https://nvda.ws/3XHae9F

#ddp #pytorch #dataparallelism #nvaitc #nccl #deeplearning #multigpu
