@trustgateai/sdk · v1.4.0
# trustgate-node
TrustGate Node.js SDK with Shadow AI detection: workflow context, middleware for Vercel AI SDK and LangChain, n8n header helper, and SSE streaming.
## Install

```bash
npm install trustgate-node
```

Optional peer dependencies for middleware:

```bash
npm install ai @ai-sdk/openai   # Vercel AI SDK
npm install @langchain/openai   # LangChain
```

## Core client
```ts
import { TrustGate, getWorkflowContextFromEnv } from 'trustgate-node';

const client = new TrustGate({
  baseUrl: process.env.TRUSTGATE_BASE_URL!,
  apiKey: process.env.TRUSTGATE_API_KEY,
  // workflowContext is optional; otherwise it is read from process.env (n8n, Vercel, GitHub)
});

// Non-streaming
const res = await client.fetch('/v1/chat/completions', {
  method: 'POST',
  body: JSON.stringify({ model: 'gpt-4', messages: [...] }),
});

// SSE streaming (workflow_id is sent in the request for background attribution)
const streamRes = await client.fetchStream('/v1/chat/completions', {
  method: 'POST',
  body: JSON.stringify({ model: 'gpt-4', messages: [...], stream: true }),
});
// Consume streamRes.body (a ReadableStream) as usual for SSE
```

## Context provider
Workflow metadata is read from process.env so TrustGate can attribute traffic (Shadow AI detection):
| Source | Example env vars |
|--------|------------------|
| n8n    | `N8N_WORKFLOW_ID`, `N8N_EXECUTION_ID`, `N8N_WORKFLOW_NAME` |
| Vercel | `VERCEL`, `VERCEL_ENV`, `VERCEL_URL`, `VERCEL_GIT_*` |
| GitHub | `GITHUB_ACTION`, `GITHUB_WORKFLOW`, `GITHUB_RUN_ID`, `GITHUB_REPOSITORY` |
```ts
import { getWorkflowContextFromEnv } from 'trustgate-node';

const ctx = getWorkflowContextFromEnv();
// { source: 'n8n', workflow_id: '...', execution_id: '...', metadata: {...} }
```
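For intuition, the detection can be sketched roughly as below. This is a simplified approximation: `detectWorkflowContext`, its precedence order, and its return shape are hypothetical, not the SDK's actual internals.

```ts
// Hypothetical sketch of env-based source detection -- the real SDK's
// getWorkflowContextFromEnv may differ in shape and precedence.
type WorkflowContext = {
  source: 'n8n' | 'vercel' | 'github' | 'unknown';
  workflow_id?: string;
  execution_id?: string;
};

function detectWorkflowContext(env: Record<string, string | undefined>): WorkflowContext {
  if (env.N8N_WORKFLOW_ID) {
    // n8n injects workflow/execution IDs into the node process environment
    return {
      source: 'n8n',
      workflow_id: env.N8N_WORKFLOW_ID,
      execution_id: env.N8N_EXECUTION_ID,
    };
  }
  if (env.VERCEL) {
    return { source: 'vercel', workflow_id: env.VERCEL_URL };
  }
  if (env.GITHUB_ACTION) {
    return { source: 'github', workflow_id: env.GITHUB_WORKFLOW, execution_id: env.GITHUB_RUN_ID };
  }
  return { source: 'unknown' };
}

// Example: an n8n-like environment
const detected = detectWorkflowContext({ N8N_WORKFLOW_ID: 'wf-1', N8N_EXECUTION_ID: 'ex-9' });
console.log(detected.source); // 'n8n'
```

The point is that no per-call wiring is needed: whichever platform is hosting the process leaves enough in `process.env` for attribution.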
## Middleware: Vercel AI SDK

Point the OpenAI provider at TrustGate so all calls go through the gateway with workflow context:
```ts
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createTrustGateOpenAIOptions } from 'trustgate-node/middleware/vercel-ai';

const openai = createOpenAI(
  createTrustGateOpenAIOptions({
    baseUrl: process.env.TRUSTGATE_BASE_URL!,
    apiKey: process.env.TRUSTGATE_API_KEY,
  })
);

// Use with streamText / generateText as usual
const result = streamText({
  model: openai('gpt-4'),
  messages: [{ role: 'user', content: 'Hello' }],
});
```
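Conceptually, pointing a provider at a gateway boils down to overriding the base URL and attaching context headers. A hypothetical sketch of what such an options builder produces (`buildGatewayOptions` and the header name below are illustrative assumptions, not the SDK's actual code):

```ts
// Hypothetical sketch of a gateway-pointing provider config; the real
// createTrustGateOpenAIOptions may attach different or additional fields.
type GatewayOptions = {
  baseURL: string;                  // gateway endpoint instead of api.openai.com
  apiKey?: string;
  headers: Record<string, string>;  // extra headers sent on every request
};

function buildGatewayOptions(opts: {
  baseUrl: string;
  apiKey?: string;
  workflowId?: string;
}): GatewayOptions {
  const headers: Record<string, string> = {};
  // Assumed header name, mirroring the n8n snippet elsewhere in this README
  if (opts.workflowId) headers['X-Trustgate-Workflow-Id'] = opts.workflowId;
  return { baseURL: opts.baseUrl, apiKey: opts.apiKey, headers };
}

const options = buildGatewayOptions({
  baseUrl: 'https://gateway.example.com/v1', // placeholder URL
  workflowId: 'wf-1',
});
console.log(options.baseURL); // 'https://gateway.example.com/v1'
```

Because the result is plain provider settings, the same pattern works with any client that accepts a custom base URL and headers.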
## Middleware: LangChain

Use TrustGate as the OpenAI endpoint for LangChain with a single config object:
```ts
import { ChatOpenAI } from '@langchain/openai';
import { createTrustGateLangChainConfig } from 'trustgate-node/middleware/langchain';

const config = createTrustGateLangChainConfig({
  baseUrl: process.env.TRUSTGATE_BASE_URL!,
  apiKey: process.env.TRUSTGATE_API_KEY,
});

const llm = new ChatOpenAI({
  ...config,
  modelName: 'gpt-4',
  temperature: 0,
});
```

## n8n HTTP Request node: header snippet
Get pre-formatted headers (as a name/value array and a JSON string) for n8n “HTTP Request” nodes, so each request includes TrustGate auth and workflow context:
```ts
import { getN8nHeaderSnippet } from 'trustgate-node';

const { headers, json } = getN8nHeaderSnippet();
// headers: [ { name: 'X-Trustgate-Source', value: 'n8n' }, ... ]
// json: '{"X-Trustgate-Source":"n8n","X-Trustgate-Workflow-Id":"...", ...}'
```

In n8n, set Send Headers (or the equivalent option) to the `headers` array, or paste the `json` string into your node configuration. You can pass `apiKey` and `baseUrl` explicitly or rely on `TRUSTGATE_API_KEY` and `TRUSTGATE_BASE_URL` in the environment.
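As a rough mental model, the helper is a small transformation from a workflow context to n8n's name/value header format. The sketch below is illustrative only: `toN8nHeaders` is hypothetical, and apart from `X-Trustgate-Source` and `X-Trustgate-Workflow-Id` (which appear in the example output above) the header names are assumptions.

```ts
// Illustrative sketch: turn a workflow context into n8n-style header
// entries plus a JSON string, mirroring getN8nHeaderSnippet's output shape.
type HeaderEntry = { name: string; value: string };

function toN8nHeaders(
  ctx: { source: string; workflow_id?: string },
  apiKey?: string
): { headers: HeaderEntry[]; json: string } {
  const entries: HeaderEntry[] = [{ name: 'X-Trustgate-Source', value: ctx.source }];
  if (ctx.workflow_id) {
    entries.push({ name: 'X-Trustgate-Workflow-Id', value: ctx.workflow_id });
  }
  if (apiKey) {
    // Assumed auth scheme; the real helper may use a different header
    entries.push({ name: 'Authorization', value: `Bearer ${apiKey}` });
  }
  // Same data as a single JSON object, for pasting into a node config
  const json = JSON.stringify(Object.fromEntries(entries.map((h) => [h.name, h.value])));
  return { headers: entries, json };
}

const { json } = toN8nHeaders({ source: 'n8n', workflow_id: 'wf-1' });
console.log(json); // '{"X-Trustgate-Source":"n8n","X-Trustgate-Workflow-Id":"wf-1"}'
```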
## Streaming (SSE)

- Use `client.fetchStream(path, init, options)` for SSE endpoints. The client sets `Accept: text/event-stream` and sends workflow context (e.g. `workflow_id`) in the initial request, so the gateway can attribute the background stream.
- The returned `Response` body is the raw stream; consume it with `response.body.getReader()` or pass it through to your framework’s SSE handling.
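Consuming the stream with `getReader()` amounts to decoding byte chunks and extracting the `data:` line of each SSE event. A minimal, framework-free sketch using only standard Web Streams APIs (the `sseData` helper is illustrative, not part of the SDK):

```ts
// Minimal SSE consumer: read a text/event-stream body and yield each
// `data:` payload as a string. Ignores comments, event names, and ids.
async function* sseData(body: ReadableStream<Uint8Array>): AsyncGenerator<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    buffer += decoder.decode(value, { stream: true });
    let idx: number;
    while ((idx = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, idx).trimEnd();
      buffer = buffer.slice(idx + 1);
      if (line.startsWith('data:')) yield line.slice(5).trim();
    }
  }
}

// Usage with a fetchStream response (OpenAI-style endpoints end with [DONE]):
// for await (const data of sseData(streamRes.body!)) {
//   if (data === '[DONE]') break;
//   console.log(JSON.parse(data));
// }
```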
## License
MIT
