# llmatch-js
A JavaScript library for invoking LLMs and matching their responses with patterns. This is a JavaScript port of the Python llmatch package.
## Installation

```
npm install llmatch-js
```

## Features
- Pattern-matching on LLM responses using regular expressions
- Automatic retries with exponential backoff
- Support for any LLM client with a compatible interface
- Detailed verbose logging option
## Usage

### Basic Example
```javascript
const { llmatch } = require('llmatch-js');
const { ChatLLM7 } = require('langchain-llm7');

// Initialize your LLM client
const chat = new ChatLLM7();

// Create a wrapper for the LLM that matches the expected interface
const llm = {
  invoke: async (messages, options = {}) => {
    const response = await chat.invoke({
      messages: messages,
      ...options
    });
    return {
      content: response.content,
      raw: response
    };
  }
};

// Use llmatch to extract structured data
async function extractJson() {
  const result = await llmatch({
    llm,
    query: "Generate a JSON object with information about a random book.",
    pattern: /```json\n([\s\S]*?)```/,
    verbose: true
  });

  if (result.success) {
    // Parse the extracted JSON
    const bookData = JSON.parse(result.extractedData[0]);
    console.log("Book data:", bookData);
  } else {
    console.error("Failed to extract JSON:", result.errorMessage);
  }
}

extractJson().catch(console.error);
```

## API Reference
### llmatch(options)
Main function that invokes an LLM and matches the response against a pattern.
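Conceptually, the call is a retry-and-match loop: invoke the LLM, test the response against the pattern, and back off before retrying on a miss. The sketch below is an illustrative assumption based on the documented parameters, not the library's actual internals:

```javascript
// Simplified sketch of llmatch's retry-and-match loop. Names and control
// flow are illustrative assumptions, not the library's internal code.
async function llmatchSketch({ llm, query, pattern,
                               maxRetries = 15, initialDelay = 1.0, backoffFactor = 1.5 }) {
  let delay = initialDelay;
  let lastResponse = null;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await llm.invoke([{ role: 'user', content: query }]);
    lastResponse = response;
    const match = response.content.match(pattern);
    if (match) {
      // Prefer capturing groups; fall back to the whole match
      const groups = match.length > 1 ? match.slice(1) : [match[0]];
      return { success: true, extractedData: groups,
               finalContent: response.content, retriesAttempted: attempt,
               errorMessage: null, rawResponse: response };
    }
    if (attempt < maxRetries - 1) {
      // Exponential backoff: wait, then grow the delay (seconds -> ms)
      await new Promise(resolve => setTimeout(resolve, delay * 1000));
      delay *= backoffFactor;
    }
  }
  return { success: false, extractedData: null,
           finalContent: lastResponse ? lastResponse.content : null,
           retriesAttempted: maxRetries - 1,
           errorMessage: 'No match after retries', rawResponse: lastResponse };
}
```

The parameters this sketch consumes are documented in the table below.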
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| llm | Object | null | An already initialized instance of the LLM client |
| query | string | "Extract relevant data." | The main query or instruction for the LLM |
| pattern | string \| RegExp | /(.*?)/s | A regex string or compiled RegExp object |
| context | string | null | Optional additional text to provide context to the LLM |
| promptTemplate | string | "{query}\n\n{context}\nWrite the answer in the next format: {format}" | Template string |
| maxRetries | number | 15 | Maximum number of attempts to make |
| initialDelay | number | 1.0 | Seconds to wait before the first retry |
| backoffFactor | number | 1.5 | Multiplier for the delay between retries |
| verbose | boolean | false | If true, prints detailed logs of the process |
| passThrough | Object | {} | Additional parameters to pass to the LLM invocation |
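With the defaults above (initialDelay of 1.0 and backoffFactor of 1.5), the wait before each retry grows geometrically. A small helper (hypothetical, mirroring the documented parameters rather than library code) makes the schedule concrete:

```javascript
// Hypothetical helper showing the retry schedule implied by initialDelay
// and backoffFactor; not part of the llmatch-js API.
function retryDelays(maxRetries, initialDelay = 1.0, backoffFactor = 1.5) {
  const delays = [];
  let delay = initialDelay;
  for (let i = 0; i < maxRetries; i++) {
    delays.push(delay);       // seconds to wait before this retry
    delay *= backoffFactor;   // grow the delay geometrically
  }
  return delays;
}

retryDelays(3); // → [1.0, 1.5, 2.25]
```

With the default maxRetries of 15, the final wait would be 1.0 × 1.5^14, roughly 292 seconds, so consider lowering maxRetries for latency-sensitive use.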
### Return Value
The function returns a Promise that resolves to an object with the following properties:
| Property | Type | Description |
|----------|------|-------------|
| success | boolean | True if a valid response was found |
| extractedData | Array \| null | Array of strings matching the pattern (capturing groups), or null if no pattern was provided or matched |
| finalContent | string \| null | The content of the last successful or final failed LLM response |
| retriesAttempted | number | Number of retries made (0 means success on first try) |
| errorMessage | string \| null | Description of the error if success is false |
| rawResponse | any | The raw response object from the last LLM call |
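The extractedData field holds capturing groups when the pattern defines any, and the whole match otherwise. A standalone sketch of that semantics (an assumption based on the table above, not library code):

```javascript
// Sketch of how extractedData could be derived from a pattern match.
// Assumption: capturing groups win; a group-less pattern yields the full match.
function extractGroups(content, pattern) {
  const match = content.match(pattern);
  if (!match) return null; // mirrors extractedData being null on no match
  return match.length > 1 ? match.slice(1) : [match[0]];
}

const data = extractGroups(
  'Answer: <json>{"title": "Dune"}</json>',
  /<json>([\s\S]*?)<\/json>/
);
// data[0] is '{"title": "Dune"}'
```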
### LLM Interface Requirements
Your LLM client should implement the following interface:
```typescript
interface LLMClient {
  invoke(messages: Array<{role: string, content: string}>, options?: any): Promise<{
    content: string;
    [key: string]: any;
  }>;
}
```

## License
MIT
