I Tried Running Offline AI Chat LLM Apps on Android


What to expect in this video:

🚀 Intro: Scaling issues with big frontier AI and the need for small models
🚀 Tutorial: How to run open-source AI language models on an Android phone
🚀 Tests: LLMs of around 3 billion parameters or fewer!
🚀 Reviews: The pros and cons of different local AI chat apps

Ollama on Android Tutorial:    • How to Run Ollama on Android Phone us...  

Follow me on X for Extras: https://x.com/vectro


Summary:

The race to scale AI is hitting its limits, with tech giants like Meta and OpenAI building massive data centers and even considering their own power plants. Meanwhile, the rise of smaller, more efficient on-device models is gaining momentum. In this video, I explore the two diverging paths of AI development—massive frontier models and compact local models—and show how you can run open-source AI locally on your phone, fully private and offline.

I'll review apps, test their performance with various models, and share pros and cons. Whether it's running a 3.8B parameter model or testing Alibaba's faster 1.5B model, I break it all down for you. Plus, I'll touch on emerging tools like Ollama, running TinyLlama, and even some unexpected players in the field like JP Morgan.

If you're interested in practical ways to use local AI, stay tuned for detailed tutorials, performance tests, and tips on getting started. Don’t forget to like, comment, and subscribe for more open-source AI!
