  • Lucas Matuszewski @lucas.matuszewski ·

    Thanks a lot for this, especially the tips about the go folder cleanup and moving to /bin!

  • Thanks, but I have a question: if I want to update Ollama, should I delete the ollama folder, re-clone it, and build it from scratch again?

  • 🤩 You're better off keeping Termux with the Ollama server running in the background and using a GUI app. My personal favorite is Maid (https://github.com/Mobile-Artificial-Intelligence/maid) because it can run models with llama.cpp and use hardware acceleration via Vulkan, while Ollama App (https://github.com/JHubi1/ollama-app) has a cleaner interface and just works out of the box.

    Note: once you start the Ollama server in Termux with "ollama serve", keep Termux in the background, open Maid or Ollama App, and go to the app's settings. If the server host/URL field isn't already filled in, paste either http://localhost:11434 or http://127.0.0.1:11434, save it, and start talking with your favourite LLM (see the sketch below). 🤗

    Edited by Sunil Rai
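
    A minimal sketch of the Termux side of this setup, assuming the ollama binary is already on your PATH and curl is installed (pkg install curl); the model tag is just the small example used elsewhere in this thread:

    # start the Ollama server in the background and keep Termux open
    ollama serve &

    # pull a small model to test with
    ollama pull smollm:135m

    # check that the server answers; this is the same URL you paste
    # into Maid or Ollama App as the server host
    curl http://127.0.0.1:11434/api/tags

    If curl returns a JSON list of your local models, the GUI app should be able to connect with either http://localhost:11434 or http://127.0.0.1:11434.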
  • Houston @sasseenh ·

    Did this stop working for anyone else within the last month? I'm getting an error about the "i8mm" flag during the go generate step.

  • Do you guys get "Error: llama runner process has terminated: signal: broken pipe" when trying to load the model?

  • @noasky5 I got notified about your comment because I was watching this page and was going to test it on my Termux, but I noticed that I no longer have Ollama set up this way. I also noticed, however, that I could just do pkg install ollama and it works:

    pkg install ollama
    ollama serve &
    ollama run smollm:135m

    Just saying for readers that just want to get it running without building it from source.
