@launchdarkly/server-sdk-ai
v0.15.2
LaunchDarkly AI SDK for Server-Side JavaScript
[!CAUTION] This library is an alpha version and should not be considered ready for production use while this message is visible.
LaunchDarkly overview
LaunchDarkly is a feature management platform that serves over 100 billion feature flags daily to help teams build better software, faster. Get started using LaunchDarkly today!
Quick Setup
This assumes that you have already installed the LaunchDarkly Node.js (server-side) SDK, or a compatible edge SDK.
- Install this package with npm or yarn:

```shell
npm install @launchdarkly/server-sdk-ai --save
# or
yarn add @launchdarkly/server-sdk-ai
```

- Create an AI SDK instance:

```javascript
import { initAi } from '@launchdarkly/server-sdk-ai';

// The ldClient instance should be created based on the instructions in the relevant SDK.
const aiClient = initAi(ldClient);
```

Setting Default AI Configurations
When retrieving AI configurations, you need to provide default values that will be used if the configuration is not available from LaunchDarkly:
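The enabled flag on the default controls whether your application takes the AI code path at all when no configuration is available. As a minimal, self-contained sketch of that guard (the object shape below mirrors the defaults shown in this README; it is an illustration, not SDK code):

```typescript
interface DefaultAIConfig {
  enabled: boolean;
  model?: { name: string; parameters?: Record<string, unknown> };
  messages?: { role: string; content: string }[];
}

// Returns the messages to send, or null when the AI path is disabled.
function messagesIfEnabled(
  config: DefaultAIConfig,
): { role: string; content: string }[] | null {
  if (!config.enabled) return null;
  return config.messages ?? [];
}

const fullDefault: DefaultAIConfig = {
  enabled: true,
  messages: [{ role: 'system', content: 'You are a helpful assistant.' }],
};
const disabledDefault: DefaultAIConfig = { enabled: false };

console.log(messagesIfEnabled(fullDefault)?.length); // 1
console.log(messagesIfEnabled(disabledDefault)); // null
```

A disabled default is useful when you want a flag to gate the entire AI feature off by default until it is explicitly enabled in LaunchDarkly.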
Fully Configured Default
```javascript
const defaultConfig = {
  enabled: true,
  model: {
    name: 'gpt-4',
    parameters: { temperature: 0.7, maxTokens: 1000 },
  },
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
  ],
};
```

Disabled Default
```javascript
const defaultConfig = {
  enabled: false,
};
```

Retrieving AI Configurations
The config method retrieves AI configurations from LaunchDarkly with support for dynamic variables and fallback values:
```javascript
const aiConfig = await aiClient.config(
  aiConfigKey,
  context,
  defaultConfig,
  { myVariable: 'My User Defined Variable' }, // Variables for template interpolation
);

// Ensure configuration is enabled
if (aiConfig.enabled) {
  const { messages, model, tracker } = aiConfig;
  // Use with your AI provider
}
```

TrackedChat for Conversational AI
TrackedChat provides a high-level interface for conversational AI with automatic conversation management and metrics tracking:
- Automatically configures models based on AI configuration
- Maintains conversation history across multiple interactions
- Automatically tracks token usage, latency, and success rates
- Works with any supported AI provider (see AI Providers for available packages)
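The variables passed as the final argument to config and createChat are substituted into message templates. As a rough, self-contained illustration of the idea (a simplified stand-in, not the SDK's implementation; treat the Mustache-style {{placeholder}} syntax here as an assumption for the sketch):

```typescript
// Replace {{name}} placeholders in a template with values from vars.
function interpolate(template: string, vars: Record<string, string>): string {
  return template.replace(
    /\{\{\s*(\w+)\s*\}\}/g,
    (match, name) => (name in vars ? vars[name] : match), // leave unknown placeholders untouched
  );
}

console.log(interpolate('Hello {{customerName}}, how can I help?', { customerName: 'John' }));
// → Hello John, how can I help?
```

In the SDK itself this substitution happens server-side when the configuration is evaluated, so your application only supplies the variable values.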
Using TrackedChat
```javascript
// Use the same defaultConfig from the retrieval section above
const chat = await aiClient.createChat(
  'customer-support-chat',
  context,
  defaultConfig,
  { customerName: 'John' },
);

if (chat) {
  // Simple conversation flow - metrics are automatically tracked by invoke()
  const response1 = await chat.invoke('I need help with my order');
  console.log(response1.message.content);

  const response2 = await chat.invoke("What's the status?");
  console.log(response2.message.content);

  // Access conversation history
  const messages = chat.getMessages();
  console.log(`Conversation has ${messages.length} messages`);
}
```

Advanced Usage with Providers
For more control, you can use the configuration directly with AI providers. We recommend using LaunchDarkly AI Provider packages when available:
Using AI Provider Packages
```javascript
import { LangChainProvider } from '@launchdarkly/server-sdk-ai-langchain';

const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);

// Create LangChain model from configuration
const llm = await LangChainProvider.createLangChainModel(aiConfig);

// Use with tracking
const response = await aiConfig.tracker.trackMetricsOf(
  LangChainProvider.getAIMetricsFromResponse,
  () => llm.invoke(messages),
);

console.log('AI Response:', response.content);
```

Using Custom Providers
```typescript
import { LDAIMetrics } from '@launchdarkly/server-sdk-ai';

const aiConfig = await aiClient.config(aiConfigKey, context, defaultValue);

// Define custom metrics mapping for your provider
const mapCustomProviderMetrics = (response: any): LDAIMetrics => ({
  success: true,
  usage: {
    total: response.usage?.total_tokens || 0,
    input: response.usage?.prompt_tokens || 0,
    output: response.usage?.completion_tokens || 0,
  },
});

// Use with custom provider and tracking
const result = await aiConfig.tracker.trackMetricsOf(
  mapCustomProviderMetrics,
  () => customProvider.generate({
    messages: aiConfig.messages || [],
    model: aiConfig.model?.name || 'custom-model',
    temperature: aiConfig.model?.parameters?.temperature ?? 0.5,
  }),
);

console.log('AI Response:', result.content);
```

Contributing
We encourage pull requests and other contributions from the community. Check out our contributing guidelines for instructions on how to contribute to this SDK.
About LaunchDarkly
LaunchDarkly is a continuous delivery platform that provides feature flags as a service and allows developers to iterate quickly and safely. We allow you to easily flag your features and manage them from the LaunchDarkly dashboard. With LaunchDarkly, you can:

- Roll out a new feature to a subset of your users (like a group of users who opt in to a beta tester group), gathering feedback and bug reports from real-world use cases.
- Gradually roll out a feature to an increasing percentage of users, and track the effect that the feature has on key metrics (for instance, how likely is a user to complete a purchase if they have feature A versus feature B?).
- Turn off a feature that you realize is causing performance problems in production, without needing to re-deploy or even restart the application with a changed configuration file.
- Grant access to certain features based on user attributes, like payment plan (e.g., users on the 'gold' plan get access to more features than users on the 'silver' plan).
- Disable parts of your application to facilitate maintenance, without taking everything offline.

LaunchDarkly provides feature flag SDKs for a wide variety of languages and technologies. Check out our documentation for a complete list.
Explore LaunchDarkly

- launchdarkly.com for more information
- docs.launchdarkly.com for our documentation and SDK reference guides
- apidocs.launchdarkly.com for our API documentation
- blog.launchdarkly.com for the latest product updates
