Intro to Transformers with self attention and positional encoding || Transformers Series

In this first video of the series, we cover the basics of transformers, focusing on self-attention and positional encoding. Using TensorFlow, we implement a self-attention layer and a classifier for sentiment analysis. The model architecture includes embedding, positional encoding, self-attention, global average pooling, dense layers, and dropout. We train the model, evaluate its accuracy, and visualize the attention weights to gain insight into the model's behavior. Join us for the next video in the series, where we build a full-fledged transformer for language translation using multi-head attention. Don't forget to leave your questions, suggestions, and comments below, and stay tuned for updates on Instagram.
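
For anyone who wants a preview before watching: below is a minimal sketch of the pipeline described above, assuming TensorFlow 2.x. The hyperparameters (vocab_size, max_len, d_model), the single-head scaled dot-product attention formulation, and all layer names are illustrative assumptions, not the exact code from the video; see the linked GitHub repo for the real implementation.

```python
# Minimal sketch, not the video's exact code: sinusoidal positional encoding,
# a single-head self-attention layer, and a small sentiment classifier.
import numpy as np
import tensorflow as tf

def positional_encoding(length, depth):
    """Sinusoidal positional encoding (sin on even dims, cos on odd dims)."""
    positions = np.arange(length)[:, np.newaxis]            # (length, 1)
    dims = np.arange(depth)[np.newaxis, :]                  # (1, depth)
    angle_rates = 1 / np.power(10000.0, (2 * (dims // 2)) / np.float32(depth))
    angles = positions * angle_rates                        # (length, depth)
    angles[:, 0::2] = np.sin(angles[:, 0::2])
    angles[:, 1::2] = np.cos(angles[:, 1::2])
    return tf.cast(angles[np.newaxis, ...], tf.float32)     # (1, length, depth)

class SelfAttention(tf.keras.layers.Layer):
    """Single-head scaled dot-product self-attention."""
    def __init__(self, d_model):
        super().__init__()
        self.wq = tf.keras.layers.Dense(d_model)
        self.wk = tf.keras.layers.Dense(d_model)
        self.wv = tf.keras.layers.Dense(d_model)

    def call(self, x):
        q, k, v = self.wq(x), self.wk(x), self.wv(x)
        scores = tf.matmul(q, k, transpose_b=True)           # (batch, len, len)
        scores /= tf.math.sqrt(tf.cast(tf.shape(k)[-1], tf.float32))
        weights = tf.nn.softmax(scores, axis=-1)             # attention weights
        return tf.matmul(weights, v), weights

# Illustrative hyperparameters (assumptions, not the video's values).
vocab_size, max_len, d_model = 10000, 64, 128

inputs = tf.keras.Input(shape=(max_len,))
x = tf.keras.layers.Embedding(vocab_size, d_model)(inputs)
x = x + positional_encoding(max_len, d_model)                # inject token order
x, attn_weights = SelfAttention(d_model)(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dropout(0.1)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # binary sentiment

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# A second model that exposes the attention weights, so they can be
# plotted as a (len x len) heatmap after training, as done in the video.
attn_model = tf.keras.Model(inputs, attn_weights)
```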
Thanks for watching! ❤️

Dataset link: https://drive.google.com/file/d/140xs...
Link to code: https://github.com/developershutt/Tra...

For your queries:
Instagram: @developershutt
Email: [email protected]
