Ollama as Container | Offline GPT with Open Source LLMs in Local | Secure GPT Interaction

In this video, we'll explore how to run Ollama as a container, load multiple open-source models, and chat with them.

Ollama can run inside Docker containers with GPU acceleration on Nvidia GPUs, or on the CPU alone. In this demo, it runs on the CPU.
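As a quick sketch, the official `ollama/ollama` image can be started either way; the container name and volume name below are arbitrary choices, and the GPU variant assumes the NVIDIA Container Toolkit is installed on the host:

```shell
# CPU-only: start Ollama, persisting downloaded models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Nvidia GPU variant: same command plus --gpus=all
# (requires the NVIDIA Container Toolkit on the host)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model and chat with it inside the running container
docker exec -it ollama ollama run llama2
```

Port 11434 is Ollama's default API port; the web UI (linked below) talks to it over this port.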

Links:

Docker Compose 🗞️ : https://github.com/JinnaBalu/infinite...

Ollama - https://ollama.ai/
ChatGPT-Style Web Interface for Ollama 🦙 - https://github.com/ollama-webui/ollam...
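A minimal compose sketch wiring Ollama to the web UI might look like the following; the service names, image tag, and port mappings here are assumptions — see the linked repository for the exact file used in the video:

```yaml
# docker-compose.yml sketch: Ollama + ChatGPT-style web UI
# (image tags and ports are assumptions, not the video's exact config)
version: "3.8"
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama        # persist downloaded models
    ports:
      - "11434:11434"               # Ollama API
  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    ports:
      - "3000:8080"                 # browse the UI at http://localhost:3000
    environment:
      - OLLAMA_API_BASE_URL=http://ollama:11434/api
    depends_on:
      - ollama
volumes:
  ollama:
```

With both services up (`docker compose up -d`), models pulled through either the UI or `ollama run` stay entirely on the local machine.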
