# LaunchDarkly AI SDK LangChain Provider for Server-Side JavaScript
> [!CAUTION]
> This SDK is in pre-release and not subject to backwards compatibility guarantees. The API may change based on feedback. Pin to a specific minor version and review the changelog before upgrading.
## LaunchDarkly overview
LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!
## Quick Setup

This package provides LangChain integration for the LaunchDarkly AI SDK. The simplest way to use it is with the LaunchDarkly AI SDK's `createModel` method:
- Install the required packages:

  ```shell
  npm install @launchdarkly/server-sdk-ai @launchdarkly/server-sdk-ai-langchain --save
  # or
  yarn add @launchdarkly/server-sdk-ai @launchdarkly/server-sdk-ai-langchain
  ```

- Create a managed model and run it:
  ```typescript
  import { init } from '@launchdarkly/node-server-sdk';
  import { initAi } from '@launchdarkly/server-sdk-ai';

  // Initialize the LaunchDarkly client
  const ldClient = init(sdkKey);
  const aiClient = initAi(ldClient);

  // Create a managed model
  const defaultConfig = {
    enabled: true,
    model: { name: 'gpt-4' },
    provider: { name: 'openai' },
  };
  const model = await aiClient.createModel('my-chat-config', context, defaultConfig);

  if (model) {
    const result = await model.run('What is the capital of France?');
    console.log(result.content);
  }
  ```

For more information about using the LaunchDarkly AI SDK, see the LaunchDarkly AI SDK documentation.
## LangChain Provider Installation
**Important:** LangChain requires a separate package for each AI model provider, so you will need to install additional provider packages for the specific models you want to use.

When creating a new LangChain model, LaunchDarkly uses your AI Config and LangChain's `initChatModel` to create a model instance. Install the LangChain provider package for every provider referenced in your AI Configs so the models can be properly instantiated.
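To illustrate that instantiation path, here is a hedged sketch of how an AI Config's provider and model names could map onto the arguments `initChatModel` accepts. The `AIConfigShape` interface and `toInitChatModelArgs` helper are hypothetical names invented for this example, not part of the SDK's API:

```typescript
// Hypothetical illustration only: the provider's actual mapping logic may
// differ. AIConfigShape and toInitChatModelArgs are invented names.
interface AIConfigShape {
  model: { name: string };
  provider: { name: string };
}

// Map the AI Config's names onto the shape initChatModel expects:
// a model name plus a modelProvider hint selecting the LangChain package.
function toInitChatModelArgs(config: AIConfigShape): { model: string; modelProvider: string } {
  return { model: config.model.name, modelProvider: config.provider.name };
}

const args = toInitChatModelArgs({
  model: { name: 'gpt-4' },
  provider: { name: 'openai' },
});
console.log(args); // { model: 'gpt-4', modelProvider: 'openai' }
```

This is why each referenced provider package must be installed up front: the `modelProvider` value determines which LangChain package is loaded at runtime.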
### Installing a LangChain Provider
To use specific AI models, install the corresponding provider package:
```shell
# For OpenAI models
npm install @langchain/openai --save
# or
yarn add @langchain/openai
```

For a complete list of available providers and installation instructions, see the LangChain JavaScript Integrations documentation.
## Advanced Usage
For more control, you can use the LangChain provider package directly with LaunchDarkly configurations:
```typescript
import { LangChainProvider } from '@launchdarkly/server-sdk-ai-langchain';
import { HumanMessage } from '@langchain/core/messages';

// Create a LangChain model from the LaunchDarkly configuration
const llm = await LangChainProvider.createLangChainModel(aiConfig);

// Convert LaunchDarkly messages to LangChain format and add the user message
const configMessages = aiConfig.messages || [];
const userMessage = new HumanMessage('What is the capital of France?');
const allMessages = [...LangChainProvider.convertMessagesToLangChain(configMessages), userMessage];

// Track the model call with LaunchDarkly tracking
const tracker = aiConfig.createTracker();
const response = await tracker.trackMetricsOf(
  LangChainProvider.getAIMetricsFromResponse,
  () => llm.invoke(allMessages),
);

console.log('AI Response:', response.content);
```

## Observability
This provider automatically instruments LangChain calls for OpenTelemetry tracing when the optional @traceloop/instrumentation-langchain package is installed. No additional configuration is required — instrumentation is applied the first time the provider is used.
To enable automatic tracing, install the instrumentation packages:

```shell
npm install @traceloop/instrumentation-langchain @opentelemetry/api --save
```

When these packages are available, the provider patches the LangChain ESM modules so that model invocations produce OpenTelemetry spans. These spans are emitted through whatever TracerProvider is active in your application (for example, one configured by @launchdarkly/observability-node or any other OpenTelemetry setup).
If the instrumentation packages are not installed, the provider operates normally without tracing.
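As a minimal sketch of one such OpenTelemetry setup (assuming `@opentelemetry/sdk-node` and `@opentelemetry/sdk-trace-node` are installed; the console exporter is an illustrative choice for local inspection, not a production recommendation), a TracerProvider can be registered before the LaunchDarkly provider is first used:

```typescript
// Illustrative OpenTelemetry bootstrap: once started, the active
// TracerProvider picks up spans produced by the LangChain instrumentation.
// Run this before the first model invocation so those spans are captured.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';

const otelSdk = new NodeSDK({
  // Prints finished spans to stdout; swap in an OTLP exporter for real use.
  traceExporter: new ConsoleSpanExporter(),
});
otelSdk.start();
```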
## Contributing
We encourage pull requests and other contributions from the community. Check out our contributing guidelines for instructions on how to contribute to this SDK.
## About LaunchDarkly
- LaunchDarkly is a continuous delivery platform that provides feature flags as a service and allows developers to iterate quickly and safely. We allow you to easily flag your features and manage them from the LaunchDarkly dashboard. With LaunchDarkly, you can:
  - Roll out a new feature to a subset of your users (like a group of users who opt in to a beta tester group), gathering feedback and bug reports from real-world use cases.
  - Gradually roll out a feature to an increasing percentage of users, and track the effect that the feature has on key metrics (for instance, how likely is a user to complete a purchase if they have feature A versus feature B?).
  - Turn off a feature that you realize is causing performance problems in production, without needing to re-deploy or even restart the application with a changed configuration file.
  - Grant access to certain features based on user attributes, like payment plan (e.g., users on the 'gold' plan get access to more features than users on the 'silver' plan).
  - Disable parts of your application to facilitate maintenance, without taking everything offline.
- LaunchDarkly provides feature flag SDKs for a wide variety of languages and technologies. Check out our documentation for a complete list.
- Explore LaunchDarkly
  - launchdarkly.com for more information
  - docs.launchdarkly.com for our documentation and SDK reference guides
  - apidocs.launchdarkly.com for our API documentation
  - blog.launchdarkly.com for the latest product updates
