# smollm2-1.7b-runner

A Node.js package to run SmolLM locally using [`node-llama-cpp`](https://www.npmjs.com/package/node-llama-cpp). This package enables text generation using a lightweight LLM model.
## Model

This package uses the SmolLM2-1.7B-Instruct model from Hugging Face.
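For context, this is roughly what loading a GGUF build of the model with `node-llama-cpp` directly looks like. This is a minimal sketch assuming node-llama-cpp v3's ESM API (`getLlama`/`LlamaChatSession`) and a hypothetical local model path; it is independent of this package's internals, which handle these steps for you.

```js
// Sketch: driving SmolLM2 with node-llama-cpp directly (v3, ESM).
// The model path below is hypothetical.
import { getLlama, LlamaChatSession } from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
  modelPath: "./models/smollm2-1.7b-instruct.gguf", // hypothetical path
});
const context = await model.createContext();
const session = new LlamaChatSession({
  contextSequence: context.getSequence(),
});

const answer = await session.prompt("Hello, how are you?");
console.log(answer);
```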
## Installation

```bash
npm install smollm2-1.7b-runner
```

## Usage
```js
const smol = require("smollm2-1.7b-runner");

async function chat() {
  const response = await smol.prompt("Hello, how are you?");
  console.log(response);
}

chat();
```

## Functions
### prompt()
```js
smol.prompt(userPrompt, options);
```

- `userPrompt` (string): The input text the model generates a response from.
- `options` (object, optional):
  - `maxTokens` (number): Maximum number of tokens to generate (default: 200). Range: 1-2048.
  - `temperature` (number): Controls randomness in the output (default: 0.8). Range: 0.1-1.0.
- Returns: The generated text response from the model (see the example below).
- Throws: An `Error` if initialization failed or invalid options are provided.
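For example, a call that overrides both options and handles the documented error case (the option values here are illustrative, chosen within the documented ranges):

```js
const smol = require("smollm2-1.7b-runner");

async function run() {
  try {
    // Illustrative option values within the documented ranges.
    const response = await smol.prompt("Write a haiku about the sea.", {
      maxTokens: 64,    // 1-2048
      temperature: 0.5, // 0.1-1.0
    });
    console.log(response);
  } catch (err) {
    // Thrown if initialization failed or the options are invalid.
    console.error("prompt() failed:", err.message);
  }
}

run();
```

Lower `temperature` values make the output more deterministic; higher values make it more varied.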
