🤩 You'd better keep Termux with the Ollama server running in the background and use a GUI app on top of it. Maid (https://github.com/Mobile-Artificial-Intelligence/maid) is my personal favorite because it can also run models directly with llama.cpp and use hardware acceleration via Vulkan, while Ollama App (https://github.com/JHubi1/ollama-app) has a cleaner interface and just works out of the box.

Note: once you start the server in Termux with "ollama serve", keep Termux in the background, open Maid or Ollama App, and go to its settings. There you will find a server host/URL field; if it is not already filled in, paste either http://localhost:11434 or http://127.0.0.1:11434, save it, and start talking with your favourite LLM.
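If the app cannot reach the server, a quick sanity check from a second Termux session (or the same one, after backgrounding "ollama serve") confirms Ollama is actually listening before you touch the app settings. This is only a sketch; it assumes the default port 11434 from the URLs above and that curl is installed (pkg install curl):

# Check that the Ollama server answers on the default port
curl http://127.0.0.1:11434
# Expected output is something like "Ollama is running"

# List the models the GUI app will be able to select
curl http://127.0.0.1:11434/api/tags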
@noasky5 I got notified about your comment because I was watching this page, and I was going to test it in my Termux, but I noticed that I no longer had Ollama set up this way. I also noticed, however, that I could just do
pkg install ollama
and it works:
pkg install ollama
ollama serve &
ollama run smollm:135m
Just mentioning this for readers who want to get it running without building it from source.
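If it helps, here is that shortcut written out as one copy-pasteable Termux session, with the server kept in the background so a GUI app can connect to it afterwards. Treat it as a sketch: the log file name is arbitrary, and smollm:135m is just the small model used above.

pkg install -y ollama
# optional: termux-wake-lock helps keep the session alive while Termux sits in the background
ollama serve > ollama.log 2>&1 &
sleep 3                                # give the server a moment to start listening
curl http://127.0.0.1:11434/api/tags   # confirm the API answers before opening a GUI app
ollama run smollm:135m                 # or chat right here in the terminal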