# Spyglass TypeScript SDK

TypeScript SDK for shipping telemetry data to the Spyglass AI platform.

## Installation
```bash
npm install @spyglass-ai/sdk
```

or

```bash
yarn add @spyglass-ai/sdk
```

## Configuration
Set the following environment variables:

### Required

- `SPYGLASS_API_KEY`: Your Spyglass API key
- `SPYGLASS_DEPLOYMENT_ID`: Unique identifier for your deployment

### Optional

- `SPYGLASS_OTEL_EXPORTER_OTLP_ENDPOINT`: Custom endpoint for development
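During local development you may want telemetry sent to a local collector instead of the hosted platform. A minimal sketch, assuming an OTLP/HTTP collector on its conventional port (the URL below is illustrative, not a Spyglass default):

```shell
# Optional: point the exporter at a local OTLP collector during development.
# http://localhost:4318 is the conventional OTLP/HTTP port; adjust as needed.
export SPYGLASS_OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```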
### Example Configuration

```bash
export SPYGLASS_API_KEY="your-api-key"
export SPYGLASS_DEPLOYMENT_ID="user-service-v1.2.0"
```

## Usage
### Basic Function Tracing

Use `spyglassTrace` to wrap functions:

```typescript
import { spyglassTrace } from '@spyglass-ai/sdk';

const calculateTotal = spyglassTrace()((price: number, taxRate: number) => {
  return price * (1 + taxRate);
});

const result = calculateTotal(100, 0.08);
```

With a custom span name:
```typescript
const processPayment = spyglassTrace({ name: 'payment_processing' })(
  (amount: number, cardInfo: any) => {
    return { status: 'success', transactionId: 'tx_123' };
  }
);
```

### Async Function Tracing
Async functions work the same way:

```typescript
const fetchUser = spyglassTrace()(async (userId: string) => {
  const response = await fetch(`/api/users/${userId}`);
  return response.json();
});
```

### OpenAI Integration
Wrap your OpenAI client to trace API calls:

```typescript
import OpenAI from 'openai';
import { spyglassOpenai } from '@spyglass-ai/sdk';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const tracedClient = spyglassOpenai(client);

const response = await tracedClient.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }],
  max_tokens: 100,
});
```

## Complete Example
```typescript
import OpenAI from 'openai';
import { spyglassTrace, spyglassOpenai } from '@spyglass-ai/sdk';

const client = spyglassOpenai(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  })
);

const haveConversation = spyglassTrace({ name: 'ai_conversation' })(
  async (userMessage: string) => {
    const response = await client.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: userMessage }],
      max_tokens: 150,
    });
    return response.choices[0].message.content;
  }
);

const answer = await haveConversation('What is the capital of France?');
console.log(answer);
```

## What Gets Traced
### Function Tracing (`spyglassTrace`)

- Function name and qualified name
- Input arguments
- Return values
- Execution time
- Exceptions
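As a mental model, the data above can be pictured with a simplified stand-in for the wrapper. This is hypothetical illustration code, not the SDK's implementation (the real wrapper exports spans rather than keeping them in an array):

```typescript
// Simplified record of what a traced call captures, mirroring the list above.
type SpanRecord = {
  name: string;
  args: unknown[];
  result?: unknown;
  error?: string;
  durationMs: number;
};

const spans: SpanRecord[] = [];

// Hypothetical stand-in for spyglassTrace: records args, result or
// exception, and execution time, then passes the outcome through unchanged.
function traceSketch<A extends unknown[], R>(name: string, fn: (...args: A) => R) {
  return (...args: A): R => {
    const start = Date.now();
    try {
      const result = fn(...args);
      spans.push({ name, args, result, durationMs: Date.now() - start });
      return result;
    } catch (err) {
      spans.push({ name, args, error: String(err), durationMs: Date.now() - start });
      throw err; // tracing never swallows the exception
    }
  };
}

const add = traceSketch('add', (a: number, b: number) => a + b);
add(2, 3);
console.log(spans[0].name, spans[0].result); // add 5
```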
### OpenAI Integration (`spyglassOpenai`)

- Model used
- Number of messages
- Request parameters (`max_tokens`, `temperature`, etc.)
- Token usage (prompt, completion, total)
- Response model
- API errors
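For orientation, the token-usage figures come straight off the chat completion response object (`prompt_tokens`, `completion_tokens`, and `total_tokens` are the OpenAI API's field names). The attribute keys below are purely illustrative; the names `spyglassOpenai` actually records may differ:

```typescript
// Mock of the relevant parts of a chat completion response.
// Field names match the OpenAI API; the values are made up.
const mockResponse = {
  model: 'gpt-3.5-turbo-0125',
  usage: { prompt_tokens: 9, completion_tokens: 12, total_tokens: 21 },
};

// Hypothetical span attributes mirroring the list above -- not the
// SDK's actual attribute names.
const spanAttributes = {
  'llm.response.model': mockResponse.model,
  'llm.usage.prompt_tokens': mockResponse.usage.prompt_tokens,
  'llm.usage.completion_tokens': mockResponse.usage.completion_tokens,
  'llm.usage.total_tokens': mockResponse.usage.total_tokens,
};

console.log(spanAttributes['llm.usage.total_tokens']); // 21
```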
## Development

### Install Dependencies

```bash
npm install
```

### Build

```bash
npm run build
```

### Run Tests

```bash
npm test
```

### Run Tests with Coverage

```bash
npm run test:coverage
```

### Lint

```bash
npm run lint
```

### Format

```bash
npm run format
```

## TypeScript Support
Full TypeScript definitions included.

## License

MIT
