Run Llama-2 13B, Very Fast, Locally on Low-Cost Intel ARC GPU, iGPU, and CPU


In this video, I will compile the llama.cpp library from source and run Llama-2 models on Intel's ARC GPU, iGPU, and CPU.

00:00 Introduction
01:17 Compiling llama.cpp with CLBlast
11:45 Run Llama-2 13B on ARC GPU
14:07 Run Llama-2 on iGPU
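
The build walked through in the video can be sketched roughly as follows. This is a hedged outline, not the exact commands from the video: the `LLAMA_CLBLAST` flag matches older llama.cpp releases (the backend options have changed over time), and the model filename and `-ngl` layer count are placeholders you would adjust for your own setup.

```shell
# Install CLBlast and the OpenCL headers first (e.g. via your package manager).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build with the CLBlast backend enabled (flag name from older llama.cpp releases).
cmake -B build -DLLAMA_CLBLAST=ON
cmake --build build --config Release

# Run a Llama-2 13B model, offloading layers to the GPU with -ngl.
# The model path is a placeholder; download a GGUF-converted model first.
./build/bin/main -m models/llama-2-13b.Q4_K_M.gguf -p "Hello" -ngl 40
```

On an iGPU or CPU-only run, lowering `-ngl` (or omitting it) keeps more layers on the CPU.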
