# langchain-llm7
LangChain integration for the LLM7 Chat API.
This package provides a `ChatLLM7` class that implements the LangChain `SimpleChatModel` interface, allowing seamless integration with the LangChain JS/TS ecosystem for both standard invocation and streaming responses.
## Installation

```bash
npm install langchain-llm7 @langchain/core
```

or

```bash
yarn add langchain-llm7 @langchain/core
```

Note: `@langchain/core` is a peer dependency.
## Usage

Here's how to use the `ChatLLM7` model:
```typescript
import { ChatLLM7 } from "langchain-llm7";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

// Initialize the model (defaults or provide specific options)
const chat = new ChatLLM7({
  // modelName: "gpt-4.1-nano", // Default
  // temperature: 0.8,
  // maxTokens: 150,
});

const messages = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What is the capital of France?"),
];

// --- Basic Invocation (Non-streaming) ---
console.log("--- Testing invoke() ---");
try {
  const response = await chat.invoke(messages);
  console.log("Response:", response.content);
  // Example Output: Response: Paris
} catch (error) {
  console.error("Invoke Error:", error);
}

// --- Streaming ---
console.log("\n--- Testing stream() ---");
try {
  const stream = await chat.stream(messages);
  let fullResponse = "";
  process.stdout.write("Streamed Response: ");
  for await (const chunk of stream) {
    process.stdout.write(chunk.text); // Use .text for the string content of the chunk
    fullResponse += chunk.text; // Accumulate the text part
  }
  process.stdout.write("\n");
  console.log("(Full streamed content length:", fullResponse.length, ")");
  // Example Output: Streamed Response: Paris
  // (Full streamed content length: 5)
} catch (error) {
  console.error("Stream Error:", error);
}
```
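Because `ChatLLM7` implements the standard LangChain chat model interface, it can also be composed with other runnables. Here is a minimal sketch using standard `@langchain/core` prompt and output-parser APIs (this composition is generic LangChain usage, not a `langchain-llm7`-specific feature):

```typescript
import { ChatLLM7 } from "langchain-llm7";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Build a simple prompt -> model -> string pipeline.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "What is the capital of {country}?"],
]);

const chain = prompt.pipe(new ChatLLM7()).pipe(new StringOutputParser());

const answer = await chain.invoke({ country: "France" });
console.log(answer); // e.g. "Paris"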
## Configuration

You can configure the `ChatLLM7` model by passing parameters to its constructor:
| Parameter     | Type       | Default                    | Description                                                                          |
|---------------|------------|----------------------------|--------------------------------------------------------------------------------------|
| `baseUrl`     | `string`   | `"https://api.llm7.io/v1"` | Base URL for the LLM7 API.                                                           |
| `modelName`   | `string`   | `"gpt-4.1-nano"`           | The specific LLM7 model to use.                                                      |
| `temperature` | `number`   | `1.0`                      | Sampling temperature (usually between 0 and 2). Higher values mean more randomness.  |
| `maxTokens`   | `number`   | `undefined`                | Maximum number of tokens to generate in the completion.                              |
| `timeout`     | `number`   | `120`                      | Request timeout in seconds.                                                          |
| `maxRetries`  | `number`   | `3`                        | Maximum number of retries for failed API requests (network errors, 5xx, 429).        |
| `stop`        | `string[]` | `undefined`                | Optional list of sequences at which the API should stop generating tokens.           |
All standard `BaseChatModelParams` options, such as `callbacks` and `verbose`, are also accepted.
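For illustration, here is a fully configured instance based on the table above; the parameter values are arbitrary examples, not recommendations:

```typescript
import { ChatLLM7 } from "langchain-llm7";

const chat = new ChatLLM7({
  baseUrl: "https://api.llm7.io/v1", // default, shown for completeness
  modelName: "gpt-4.1-nano",
  temperature: 0.2, // lower = more deterministic output
  maxTokens: 256,   // cap the completion length
  timeout: 60,      // request timeout in seconds
  maxRetries: 5,    // retry network errors, 5xx, 429
  stop: ["\n\n"],   // stop generating at a blank line
  verbose: true,    // standard BaseChatModelParams option
});
```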
## Development

- Clone the repository: `git clone https://github.com/chigwell/npm-langchain-llm7.git`
- Install dependencies: `cd npm-langchain-llm7 && npm install`
- Build the package: `npm run build`
- Run tests (using the example): `npx ts-node test.ts`
## Contributing
Contributions are welcome! Please feel free to open an issue or submit a pull request on the GitHub repository.
## License
This package is licensed under the Apache License 2.0.
