Run Google Gemma 2B 7B Locally on the CPU & GPU

How to run Google's Gemma 2B- and 7B-parameter models locally on the CPU and the GPU of Apple Silicon Macs. Uses the Gemma Instruct models with the Hugging Face CLI, PyTorch, and Hugging Face's Transformers and Accelerate Python packages.
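The download workflow covered in the video can be sketched with the Hugging Face CLI. This assumes `google/gemma-2b-it` as the instruction-tuned 2B checkpoint; Gemma is a gated model, so you must accept Google's terms on the model page and authenticate before downloading:

```shell
# Install the Hugging Face CLI (ships with the huggingface_hub package).
pip install -U "huggingface_hub[cli]"

# Authenticate with a Hugging Face access token
# (create one at huggingface.co/settings/tokens).
huggingface-cli login

# Download a single file from the model repository...
huggingface-cli download google/gemma-2b-it config.json

# ...or download the entire repository. Files land in the Hugging Face
# cache (~/.cache/huggingface/hub by default) and are reused on later runs.
huggingface-cli download google/gemma-2b-it
```

Swap in `google/gemma-7b-it` for the 7B Instruct variant.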

==================
🎥 VIDEO CHAPTERS
==================

00:00 Introduction
01:05 Find Models in Hugging Face
01:28 Terms
01:57 Install the Hugging Face CLI
02:21 Login
02:55 Download Models
03:51 Download a Single File
04:50 Download a Single File as a Symlink
05:25 Download All Files
06:32 Hugging Face Cache
07:00 Recap
07:29 Using Gemma
08:02 Python Environment
08:47 Run Gemma 2B on the CPU
12:13 Run Gemma 7B on the CPU
13:07 CPU Usage and Generating Code
17:24 List Apple Silicon GPU Devices with PyTorch
18:59 Run Gemma on Apple Silicon GPUs
23:52 Recap
24:25 Outro
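The CPU and GPU runs from the chapters above can be sketched in Python. This is a minimal sketch, assuming Transformers, Accelerate, and PyTorch are installed and that you have been granted access to the gated `google/gemma-2b-it` repository:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the Apple Silicon GPU (PyTorch's MPS backend) when available,
# otherwise fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model_id = "google/gemma-2b-it"  # gated: accept the license and log in first
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# Gemma Instruct models expect the chat template bundled with the tokenizer.
chat = [{"role": "user", "content": "Write a haiku about the ocean."}]
inputs = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(device)

outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same script runs the 7B model by changing `model_id` to `google/gemma-7b-it`, though the 7B weights need considerably more memory.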

==================

💬 Join the conversation on Discord: /discord
🧠 Machine Intelligence Playlist: 🧠 Machine Intelligence
🔴 Live Playlist: 🧠 Live Streams
🕸 Web Development Playlist: 🚀 Web

🍃 Getting Simple: https://gettingsimple.com
🎙 Podcast: https://gettingsimple.com/podcast
🗣 Ask Questions: https://gettingsimple.com/ask
💬 Discord: /discord
👨🏻‍🎨 Sketches: https://sketch.nono.ma
✍🏻 Blog: https://nono.ma
🐦 Twitter: /nonoesp
📸 Instagram: /nonoesp