# @launchdarkly/server-sdk-ai
v0.20.0
LaunchDarkly AI SDK for Server-Side JavaScript
> [!CAUTION]
> This SDK is in pre-release and not subject to backwards compatibility guarantees. The API may change based on feedback.
> Pin to a specific minor version and review the changelog before upgrading.
## LaunchDarkly overview
LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!
## Quick Setup
This assumes that you have already installed the LaunchDarkly Node.js (server-side) SDK, or a compatible edge SDK.
- Install this package with `npm` or `yarn`:

  ```shell
  npm install @launchdarkly/server-sdk-ai --save
  # or
  yarn add @launchdarkly/server-sdk-ai
  ```

- Create an AI SDK instance:

  ```typescript
  import { initAi } from '@launchdarkly/server-sdk-ai';

  // The ldClient instance should be created based on the instructions in the relevant SDK.
  const aiClient = initAi(ldClient);
  ```

## Setting Default AI Configurations
When retrieving AI configurations, you can provide default values to be used when the configuration is not available from LaunchDarkly:
### Fully Configured Default
```typescript
const defaultConfig = {
  enabled: true,
  model: {
    name: 'gpt-4',
    parameters: { temperature: 0.7, maxTokens: 1000 },
  },
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
  ],
};
```

### Default value
The `defaultValue` parameter is optional. When it is omitted, a disabled default is used.
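To illustrate, the disabled-fallback behavior can be sketched with a hypothetical stub standing in for a real client (the stub, the `describeConfig` helper, and the context values here are invented for this example):

```typescript
// Hypothetical stub mimicking the documented fallback: when no defaultValue is
// supplied and the configuration is unavailable, a disabled config is returned.
type AIConfig = { enabled: boolean; messages?: { role: string; content: string }[] };

const stubAiClient = {
  async completionConfig(_key: string, _context: object, defaultValue?: AIConfig): Promise<AIConfig> {
    // Pretend the configuration could not be fetched from LaunchDarkly:
    return defaultValue ?? { enabled: false };
  },
};

async function describeConfig(): Promise<string> {
  // No defaultValue passed, so the disabled default applies.
  const aiConfig = await stubAiClient.completionConfig('my-ai-config', { kind: 'user', key: 'user-1' });
  return aiConfig.enabled ? 'use AI path' : 'fall back to non-AI behavior';
}
```

Guarding on `enabled` this way keeps calling code safe whether or not a default was supplied.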
## Retrieving AI Configurations
The `completionConfig` method retrieves AI configurations from LaunchDarkly, with support for dynamic variables and fallback values:
```typescript
const aiConfig = await aiClient.completionConfig(
  aiConfigKey,
  context,
  defaultConfig,
  { myVariable: 'My User Defined Variable' }, // Variables for template interpolation
);

// Ensure the configuration is enabled
if (aiConfig.enabled) {
  const { messages, model, tracker } = aiConfig;
  // Use with your AI provider
}
```

## ManagedModel for Tracked Model Invocations
`ManagedModel` provides a high-level interface for invoking AI models with automatic metrics tracking and judge evaluation:
- Automatically configures models based on AI configuration
- Automatically tracks token usage, latency, and success rates
- Runs configured judges asynchronously and reports their results
- Works with any supported AI provider (see AI Providers for available packages)
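As a rough mental model (not the SDK's actual implementation), the latency and success tracking that `run()` performs automatically looks something like this hand-rolled wrapper; the `timed` helper is invented for illustration:

```typescript
// Illustrative only: ManagedModel.run() does this kind of bookkeeping for you,
// alongside token-usage tracking and asynchronous judge evaluation.
async function timed<T>(fn: () => Promise<T>) {
  const start = Date.now();
  try {
    const result = await fn();
    return { result, durationMs: Date.now() - start, success: true };
  } catch (error) {
    // A real tracker would record the failed generation before surfacing the error.
    return { result: undefined, durationMs: Date.now() - start, success: false, error };
  }
}
```

With `ManagedModel`, none of this wrapping is needed; metrics are reported to LaunchDarkly as part of `run()`.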
### Using ManagedModel
```typescript
// Use the same defaultConfig from the retrieval section above
const model = await aiClient.createModel(
  'customer-support-chat',
  context,
  defaultConfig,
  { customerName: 'John' },
);

if (model) {
  // Metrics are automatically tracked by run()
  const result = await model.run('I need help with my order');
  console.log(result.content);

  // Judge evaluations run asynchronously; await them if you need their results
  const evals = await result.evaluations;
  console.log('Judge results:', evals);
}
```

## Advanced Usage with Providers
For more control, you can use the configuration directly with AI providers. We recommend using LaunchDarkly AI Provider packages when available:
### Using AI Provider Packages
```typescript
import { LangChainProvider } from '@launchdarkly/server-sdk-ai-langchain';

const aiConfig = await aiClient.completionConfig(aiConfigKey, context, defaultValue);

// Create a LangChain model from the configuration
const llm = await LangChainProvider.createLangChainModel(aiConfig);

// Use with tracking
const messages = aiConfig.messages || [];
const response = await aiConfig.tracker.trackMetricsOf(
  LangChainProvider.getAIMetricsFromResponse,
  () => llm.invoke(messages),
);

console.log('AI Response:', response.content);
```

### Using Custom Providers
```typescript
import { LDAIMetrics } from '@launchdarkly/server-sdk-ai';

const aiConfig = await aiClient.completionConfig(aiConfigKey, context, defaultValue);

// Define a custom metrics mapping for your provider
const mapCustomProviderMetrics = (response: any): LDAIMetrics => ({
  success: true,
  usage: {
    total: response.usage?.total_tokens || 0,
    input: response.usage?.prompt_tokens || 0,
    output: response.usage?.completion_tokens || 0,
  },
});

// Use with a custom provider and tracking
const result = await aiConfig.tracker.trackMetricsOf(
  mapCustomProviderMetrics,
  () => customProvider.generate({
    messages: aiConfig.messages || [],
    model: aiConfig.model?.name || 'custom-model',
    temperature: aiConfig.model?.parameters?.temperature ?? 0.5,
  }),
);

console.log('AI Response:', result.content);
```

## Contributing
We encourage pull requests and other contributions from the community. Check out our contributing guidelines for instructions on how to contribute to this SDK.
## About LaunchDarkly
LaunchDarkly is a continuous delivery platform that provides feature flags as a service and allows developers to iterate quickly and safely. We allow you to easily flag your features and manage them from the LaunchDarkly dashboard. With LaunchDarkly, you can:
- Roll out a new feature to a subset of your users (like a group of users who opt-in to a beta tester group), gathering feedback and bug reports from real-world use cases.
- Gradually roll out a feature to an increasing percentage of users, and track the effect that the feature has on key metrics (for instance, how likely is a user to complete a purchase if they have feature A versus feature B?).
- Turn off a feature that you realize is causing performance problems in production, without needing to re-deploy, or even restart the application with a changed configuration file.
- Grant access to certain features based on user attributes, like payment plan (e.g., users on the 'gold' plan get access to more features than users on the 'silver' plan).
- Disable parts of your application to facilitate maintenance, without taking everything offline.
LaunchDarkly provides feature flag SDKs for a wide variety of languages and technologies. Check out our documentation for a complete list.
### Explore LaunchDarkly
- launchdarkly.com for more information
- docs.launchdarkly.com for our documentation and SDK reference guides
- apidocs.launchdarkly.com for our API documentation
- blog.launchdarkly.com for the latest product updates
