Local LLM with Ollama, LLAMA3 and LM Studio // Private AI Server

Forget using public generative AI services like ChatGPT. Instead, run local large language models (LLMs) and create your own private AI server. In the video, we step through setting up a private AI server in Windows Subsystem for Linux (WSL), along with a few hacks needed to make the Ollama API reachable from the open source frontend, Open WebUI. The core commands are sketched below.
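
If you want to follow along, the core Ollama commands look roughly like this (a minimal sketch based on the Ollama documentation; the exact model tag used in the video may differ):

    # Install Ollama inside the WSL distribution (official install script)
    curl -fsSL https://ollama.com/install.sh | sh

    # Download the LLAMA3 model and start an interactive chat in the terminal
    ollama run llama3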

Written post on the steps: https://www.virtualizationhowto.com/2...

Ollama download: https://ollama.com/download
LM Studio: https://lmstudio.ai/

★ Subscribe to the channel: / @virtualizationhowto
★ My blog: https://www.virtualizationhowto.com
★ Twitter: / vspinmaster
★ LinkedIn: / brandon-lee-vht
★ Github: https://github.com/brandonleegit
★ Facebook: / 100092747277326
★ Discord: / discord
★ Pinterest: / brandonleevht

Introduction - 0:00
What are Large Language Models (LLMs) - 0:45
Advantages of hosting LLMs locally - 1:30
Hardware requirements for running LLMs locally - 2:30
Setting up Ollama - 3:18
Looking at the Linux script for Ollama - 3:40
Downloading and running popular LLM models - 4:18
Command to download the LLAMA3 language model - 4:31
Initiating a chat session from the WSL terminal - 5:16
Looking at Hugging Face open source models - 5:46
Open WebUI web frontend for private AI servers - 6:20
Looking at the Docker run command for Open WebUI (sketched after the chapter list) - 6:50
Accessing, signing up, and tweaking settings in Open WebUI - 7:22
Reviewing the architecture of the private AI solution - 7:38
Talking about a hack to allow traffic from outside WSL to connect - 7:54
Looking at the netsh command for the port proxy (sketched after the chapter list) - 8:24
Chatting with the LLM using Open WebUI - 9:00
Writing an Ansible Playbook - 9:20
PowerCLI scripts - 9:29
Overview of LM Studio - 9:42
Business use cases for local LLMs - 10:16
Wrapping up and final thoughts - 10:51
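
Before wiring up a frontend, you can verify the Ollama API is answering from the WSL terminal. A minimal sketch using Ollama's documented REST endpoint; the prompt is just an illustration:

    # Ollama listens on localhost:11434 by default
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3",
      "prompt": "Why is the sky blue?"
    }'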
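
The Docker run command for Open WebUI covered at 6:50 generally follows the project's README. A sketch, assuming Ollama runs on the same host and that port 3000 is free (both assumptions, not necessarily the exact values from the video):

    # Run Open WebUI in Docker; host.docker.internal lets the container
    # reach the Ollama API running on the host machine
    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui \
      --restart always \
      ghcr.io/open-webui/open-webui:main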
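
The netsh port proxy hack from 8:24 forwards traffic hitting the Windows host into WSL so other machines on the network can reach the web UI. A minimal sketch, assuming Open WebUI listens on port 3000; replace <WSL_IP> with the address reported by "wsl hostname -I":

    # Run from an elevated (Administrator) prompt on the Windows host
    netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=3000 connectaddress=<WSL_IP> connectport=3000

    # Verify the proxy rule was created
    netsh interface portproxy show v4tov4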
