This is how I run my OWN Custom-Models using OLLAMA

In this video, we push our own custom models to Ollama. Specifically, you will learn how to run Ollama models, run models that are not available in the model library, and host your own models in the model library, all on a cloud GPU service.

Run a custom model on Ollama
Import Hugging Face models into Ollama
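The workflow shown in the video (download a model from Hugging Face, create a Modelfile, build and push it) can be sketched as a minimal Ollama Modelfile. The GGUF file name below is a placeholder for whatever weights you download, not the specific model used in the video:

```
# Modelfile — point Ollama at a locally downloaded GGUF weights file
FROM ./my-model.Q4_K_M.gguf

# Optional: sampling parameters and a system prompt
PARAMETER temperature 0.7
SYSTEM You are a helpful assistant.
```

You then build, test, and publish with the standard CLI: `ollama create <username>/<model> -f Modelfile`, `ollama run <username>/<model>`, and, after adding your Ollama key to your account, `ollama push <username>/<model>` (the username/model name is a placeholder for your own Ollama account and model).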

Access Codes here: https://github.com/PromptEngineer48/O...
Websites:
https://ollama.com/


Let’s do this!
Join the AI Revolution!

#custom_models #ollama #milestone #AGI #openai #autogen #windows #ai #llm_selector #auto_llm_selector #localllms #github #streamlit #langchain #qstar #webui #python #llm #largelanguagemodels


CHANNEL LINKS:
🕵️‍♀️ Join my Patreon:   / promptengineer975  
☕ Buy me a coffee: https://ko-fi.com/promptengineer
📞 Get on a Call with me - Calendly: https://calendly.com/prompt-engineer4...
❤️ Subscribe:    / @promptengineer48  
💀 GitHub Profile: https://github.com/PromptEngineer48
🔖 Twitter Profile:   / prompt48  


TIME STAMPS:
0:00 - Intro
0:58 - Objectives
2:29 - Usefulness
2:48 - What is Ollama?
3:30 - RunPod Intro and Use
4:36 - How to use Ollama?
6:38 - Connect to Jupyter Notebooks in Ollama
7:10 - Log in to Ollama Account
8:21 - Main Section for Code
9:23 - Download models from Huggingface
11:52 - Create a Modelfile in Ollama
14:36 - Creating Custom Ollama Model
17:04 - Ollama Keys
18:22 - Pushing the Models
20:12 - Testing on Local Systems
21:00 - Conclusion


If you have any questions, comments or suggestions, feel free to comment below.
🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!
