# @hardlydifficult/ai

v1.0.176

Opinionated AI helpers for local tools and automations.
The package is optimized for the simple path:
- object config instead of positional setup
- `ask()` for text
- `askFor()` for structured output
- string shorthands for streaming and agents
- optional logger, required usage tracking
## Installation

```sh
npm install @hardlydifficult/ai
```

## Quick Start
```ts
import { createAI, claude } from "@hardlydifficult/ai";
import type { AITracker } from "@hardlydifficult/ai";
import { z } from "zod";

const tracker: AITracker = {
  record(usage) {
    console.log(usage.inputTokens + usage.outputTokens);
  },
};

const ai = createAI({
  model: claude("sonnet"),
  tracker,
  systemPrompt: "Be concise. Prefer direct answers.",
});

const summary = await ai.ask("Summarize this diff");

const labels = await ai.askFor(
  "Classify this pull request",
  z.object({
    type: z.enum(["bugfix", "feature", "refactor"]),
    confidence: z.number(),
  })
);

await ai.stream("Draft the release note", (chunk) => {
  process.stdout.write(chunk);
});
```

## createAI
Preferred form:
```ts
const ai = createAI({
  model: claude("sonnet"),
  tracker,
  logger,
  systemPrompt: "You are a careful coding assistant.",
  maxTokens: 8192,
  temperature: 0.2,
});
```

Config:
- `model`: AI SDK language model
- `tracker`: required usage tracker
- `logger`: optional, silent by default
- `systemPrompt`: default system prompt for `ask`, `askFor`, `stream`, and `agent`
- `maxTokens`: defaults to `4096`
- `temperature`: optional
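The required tracker only has to expose `record(usage)`. For a long-running tool you may want running totals rather than per-call logging; a minimal sketch (the `inputTokens`/`outputTokens` field names follow the Quick Start example, and the `totals()` helper is our own addition, not part of the package):

```ts
// Illustrative cumulative tracker. Field names match the Quick Start
// example; totals() is an extra helper for reporting, not a package API.
type Usage = { inputTokens: number; outputTokens: number };

function createTotalsTracker() {
  let input = 0;
  let output = 0;
  return {
    // Same shape as the Quick Start tracker: record(usage) per call.
    record(usage: Usage) {
      input += usage.inputTokens;
      output += usage.outputTokens;
    },
    // Read back running totals, e.g. to print a summary at exit.
    totals: () => ({ input, output }),
  };
}

const tracker = createTotalsTracker();
tracker.record({ inputTokens: 120, outputTokens: 40 });
tracker.record({ inputTokens: 80, outputTokens: 60 });
console.log(tracker.totals()); // { input: 200, output: 100 }
```

Passing this object as `tracker` to `createAI` should work unchanged: the extra `totals()` method does not interfere with the `record` contract.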
The older positional form still works:
```ts
const ai = createAI(claude("sonnet"), tracker, logger, {
  maxTokens: 8192,
  temperature: 0.2,
});
```

## AI
### `ask(prompt, options?)`
Use this for the common case.
```ts
const answer = await ai.ask("What changed in this commit?");
```

### `askFor(prompt, schema, options?)`
Use this when you want validated structured output.
```ts
const result = await ai.askFor(
  "Extract the repo name and branch",
  z.object({
    repo: z.string(),
    branch: z.string(),
  })
);
```

### `withSystemPrompt(systemPrompt)`
Create a scoped client without rebuilding the whole config.
```ts
const reviewer = ai.withSystemPrompt("Review code for bugs and regressions.");
const review = await reviewer.ask("Review this patch");
```

### `chat(prompt, systemPrompt?)`
Use `chat()` when you want follow-up turns.
```ts
const first = await ai.chat("Summarize the bug");
const second = await first.reply("Now propose a fix");
```

### `stream(input, onText, options?)`
`input` can be a plain string or a full `Message[]`.
```ts
await ai.stream("Write the commit message", (chunk) => {
  process.stdout.write(chunk);
});
```

## Agents
Agents inherit the AI client's defaults and accept plain text prompts.
```ts
const agent = ai.agent({
  readFile: {
    description: "Read a file from disk",
    inputSchema: z.object({ path: z.string() }),
    execute: async ({ path }) => {
      return {
        path,
        contents: "file contents here",
      };
    },
  },
  listFiles: {
    description: "List files in a directory",
    inputSchema: z.object({ directory: z.string() }),
    execute: async ({ directory }) => ({ directory, files: ["src/index.ts"] }),
  },
});

const result = await agent.run("Inspect src/index.ts and explain it");
```

### `agent.run(input, options?)`
```ts
const result = await agent.run("Find the bug");
console.log(result.text);
```

### `agent.stream(input, handler, options?)`
```ts
await agent.stream("Refactor this module", {
  onText: (chunk) => process.stdout.write(chunk),
  onToolCall: (name, input) => console.log("tool", name, input),
  onToolResult: (name, result) => console.log("result", name, result),
});
```

Tool results can be strings or structured values. You do not need to stringify objects yourself.
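Concretely, both of the following (hypothetical) tool `execute` implementations are valid result shapes, per the note above; the agent runtime handles serialization either way:

```ts
// Hypothetical execute functions for two tools. One returns a plain
// string, the other a structured object; neither needs JSON.stringify.
const gitStatusExecute = async () => "clean working tree"; // string result

const diskUsageExecute = async ({ path }: { path: string }) => ({
  path, // structured result, returned as a plain object
  bytes: 4096,
});
```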
## Prompt Loader
```ts
import { createPromptLoader } from "@hardlydifficult/ai";

const loadReviewPrompt = createPromptLoader("prompts", "review.md");
const reviewPrompt = loadReviewPrompt();
```

## Providers
### `claude(variant)`
```ts
const model = claude("sonnet");
```

Supported variants:

- `sonnet`
- `haiku`
- `opus`
### `ollama(model)`
```ts
import { ollama } from "@hardlydifficult/ai";

const model = ollama("qwen3-coder-next:15b");
```

The Ollama helper keeps models warm and uses long HTTP timeouts, so local models can take time to load without breaking requests.
## Extraction Utilities
```ts
import {
  extractCodeBlock,
  extractJson,
  extractTag,
  extractTyped,
} from "@hardlydifficult/ai";
```

These are useful when you already have model output and want to recover structured data after the fact.
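To illustrate the idea (this is a minimal stand-in, not the package's implementation, and the real `extractJson` signature may differ): model output often wraps JSON in a fenced code block, and "recovering structured data after the fact" means pulling that block out and parsing it.

```ts
// Minimal stand-in, NOT the package implementation: find the first
// fenced JSON block in raw model output and parse it, falling back to
// parsing the whole string if no fence is present.
function extractJsonFallback(text: string): unknown {
  const match = text.match(/`{3}json\s*([\s\S]*?)`{3}/);
  return JSON.parse(match ? match[1] : text);
}

const raw = 'Sure:\n```json\n{"type":"bugfix","confidence":0.9}\n```';
console.log(extractJsonFallback(raw)); // { type: 'bugfix', confidence: 0.9 }
```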
