llm-util
Personal CLI utility for running my LLM experiments against locally running open-source models such as Llama or Mistral via Ollama.
Requirements
- Ollama - for running local models such as Llama 2, Mistral, etc.
- A reasonably fast computer with a capable GPU.
I use it on an M4 Max MacBook Pro with 128 GB of RAM, and it runs quite fast. Your hardware doesn't need to be as powerful or expensive as that, but the further below it you are, the slower and less useful the tool will be.
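
For context on what "running on local Ollama models" means in practice, below is a minimal TypeScript sketch of the kind of request a utility like this sends to a local Ollama instance. It is not llm-util's actual code: the `generate` helper and the choice of the `mistral` model are illustrative assumptions, while `http://localhost:11434/api/generate` is Ollama's default local endpoint.

```ts
// Minimal sketch (not part of llm-util itself): calling a locally running
// Ollama instance the way a CLI utility like this typically does.
// Assumes Ollama is serving on its default port 11434 and that the
// "mistral" model has already been pulled (`ollama pull mistral`).

interface GenerateResponse {
  response: string;
  done: boolean;
}

async function generate(prompt: string, model = "mistral"): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false returns one JSON object instead of line-delimited chunks
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) {
    throw new Error(`Ollama request failed: ${res.status} ${res.statusText}`);
  }
  const data = (await res.json()) as GenerateResponse;
  return data.response;
}

// Example usage:
generate("Summarise the benefits of running LLMs locally in one sentence.")
  .then(console.log)
  .catch(console.error);
```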
