Run Llama-2 Locally within Text Generation WebUI - Oobabooga


In this video, I will show you how to run the Llama-2 13B model locally within the Oobabooga Text Generation WebUI, using a quantized model provided by TheBloke. You don't need to request the model weights and code from Meta.
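The setup described above can be sketched as a few shell commands. The repository URL is Oobabooga's official one; the exact model identifier (`TheBloke/Llama-2-13B-chat-GPTQ`) is an assumption used to illustrate the pattern, so check TheBloke's Hugging Face page for the current quantized builds:

```shell
# Minimal setup sketch (assumes git and a working Python environment).

# 1. Get the Text Generation WebUI
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# 2. Download a quantized Llama-2 13B model from TheBloke
#    (model name is an assumed example; GPTQ builds need a compatible GPU loader)
python download-model.py TheBloke/Llama-2-13B-chat-GPTQ

# 3. Launch the web UI, then select the model from the Model tab in the browser
python server.py
```

With a quantized 13B model, a consumer GPU with enough VRAM (or CPU offloading) is typically sufficient, which is the point of using TheBloke's builds instead of the full-precision weights.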

▬▬▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord:   / discord  
▶️️ Subscribe: https://www.youtube.com/@engineerprom...
📧 Business Contact: [email protected]
💼Consulting: https://calendly.com/engineerprompt/c...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
LINKS:
Llama-2 Announcement - https://ai.meta.com/llama/
Llama-2 Playground - https://www.llama2.ai/
HuggingFace Link: https://huggingface.co/TheBloke/Llama...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
All Interesting Videos:
Everything LangChain:    • LangChain  

Everything LLM:    • Large Language Models  

Everything Midjourney:    • MidJourney Tutorials  

AI Image Generation:    • AI Image Generation Tutorials  
