
Backboard JavaScript SDK

A developer-friendly JavaScript SDK for the Backboard API. Build conversational AI applications with persistent memory and intelligent document processing.

New to Backboard? We include $5 in free credits to get you started and support 1,800+ LLMs across major providers.

New in v1.5.14

  • Primary messaging API: Use sendMessage() — calls POST /threads/messages. Omit threadId to start a new conversation; pass threadId to continue; pass assistantId to pin new threads to an existing assistant. See the sketch after this list.
  • Tool outputs without runId: Use submitToolOutputsSimple() — calls POST /threads/tool-outputs with threadId and toolOutputs only. The server resolves the active run.
  • Legacy unchanged: addMessage() / submitToolOutputs() still use the thread-scoped message and run-scoped submit endpoints.
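
A minimal sketch of the two new calls. The exact argument shapes (a single options object for each, and a threadId on the response) are assumptions based on the descriptions above; check your installed version's typings:

// Start a new conversation: omit threadId; optionally pin it to an assistant.
const first = await client.sendMessage({
  assistantId: assistant.assistantId, // optional: pin the new thread to this assistant
  content: 'Hello!'
});

// Continue the same conversation by passing the threadId back.
const followUp = await client.sendMessage({
  threadId: first.threadId,
  content: 'One more question...'
});

// Submit tool outputs without tracking a runId; the server resolves the active run.
await client.submitToolOutputsSimple({
  threadId: first.threadId,
  toolOutputs: [{ tool_call_id: 'call_123', output: '{"ok": true}' }]
});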

Earlier (v1.5.13)

  • Thinking (Reasoning Models): Pass a thinking object to addMessage / sendMessage on supported models. Access reasoning via response.reasoning or reasoning_streaming stream events.
  • JSON Output: Pass json_output: true (or jsonOutput: true) on addMessage / sendMessage when RAG, web search, and custom tools are not active. See the sketch below.
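
A hedged sketch of requesting JSON output; it assumes the model's JSON arrives as a string in response.content:

const response = await client.addMessage(threadId, {
  content: 'Reply with a JSON object like {"colors": [...]} listing three colors.',
  llm_provider: 'openai',
  model_name: 'gpt-4o',
  json_output: true, // only valid while RAG, web search, and custom tools are inactive
  stream: false
});

// Assumption: response.content holds the raw JSON string.
const data = JSON.parse(response.content);
console.log(data.colors);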

Installation

npm install backboard-sdk

Or with yarn:

yarn add backboard-sdk

TypeScript

This package ships first-class TypeScript types. Install and import as usual; typings are bundled with the published package, so no separate @types install is required.

Quick Start

import { BackboardClient } from 'backboard-sdk';

// Initialize the client
const client = new BackboardClient({
  apiKey: 'your_api_key_here'
});

// Create an assistant
const assistant = await client.createAssistant({
  name: 'Support Bot',
  system_prompt: 'You are a helpful customer support assistant'
});

// Create a conversation thread
const thread = await client.createThread(assistant.assistantId);

// Send a message
const response = await client.addMessage(thread.threadId, {
  content: 'Hello! Can you help me with my account?',
  llm_provider: 'openai',
  model_name: 'gpt-4o'
});

console.log(response.content);

TypeScript Usage

Types are bundled; you can rely on autocompletion and type checking:

import { BackboardClient, MessageRole } from 'backboard-sdk';

const client = new BackboardClient({ apiKey: process.env.BACKBOARD_API_KEY! });
const thread = await client.createThread('assistant-id');
const resp = await client.addMessage(thread.threadId, { content: 'Ping?', stream: false });

if (resp.role === MessageRole.ASSISTANT) {
  console.log(resp.content);
}

Features

Memory (NEW in v1.4.0)

  • Persistent Memory: Store and retrieve information across conversations
  • Automatic Context: Enable memory to automatically search and use relevant context
  • Manual Management: Full control with add, update, delete, and list operations
  • Memory Modes: Auto (search + write), Readonly (search only), or off

Assistants

  • Create, list, get, update, and delete assistants
  • Configure custom tools and capabilities
  • Upload documents for assistant-level context

Threads

  • Create conversation threads under assistants
  • Maintain persistent conversation history
  • Support for message attachments

Documents

  • Upload documents to assistants or threads
  • Automatic processing and indexing for RAG
  • Support for PDF, Office files, text, and more
  • Real-time processing status tracking

Messages

  • Send messages with optional file attachments
  • Streaming and non-streaming responses
  • Tool calling support
  • Custom LLM provider and model selection

API Reference

Client Initialization

import { BackboardClient } from 'backboard-sdk';

const client = new BackboardClient({
  apiKey: 'your_api_key'
});

Assistants

// Create assistant
const assistant = await client.createAssistant({
  name: 'My Assistant',
  system_prompt: 'System prompt that guides your assistant',
  tools: [toolDefinition], // Optional
  top_k: 15, // Optional: document chunks retrieved per query (1-100, default 10)
  custom_fact_extraction_prompt: 'Extract only preferences.', // Optional
  custom_update_memory_prompt: 'Only update on corrections.', // Optional
  // Embedding configuration (optional - defaults to OpenAI text-embedding-3-large with 3072 dims)
  embedding_provider: 'cohere', // Optional: openai, google, cohere, etc.
  embedding_model_name: 'embed-english-v3.0', // Optional
  embedding_dims: 1024 // Optional
});

// List assistants (limit: 1–200, skip: 0–10,000)
const assistants = await client.listAssistants({ skip: 0, limit: 100 });

// Get assistant
const assistant = await client.getAssistant(assistantId);

// Update assistant
const assistant = await client.updateAssistant(assistantId, {
  name: 'New Name',
  system_prompt: 'Updated system prompt'
});

// Delete assistant
const result = await client.deleteAssistant(assistantId);

Threads

// Create thread
const thread = await client.createThread(assistantId);

// List threads for a specific assistant (limit: 1–200, skip: 0–10,000)
const assistantThreads = await client.listThreadsForAssistant(assistantId, { skip: 0, limit: 100 });

// List all threads (limit: 1–200, skip: 0–10,000)
const threads = await client.listThreads({ skip: 0, limit: 100 });

// Get thread with messages
const thread = await client.getThread(threadId);

// Delete thread
const result = await client.deleteThread(threadId);

Messages

// Send message
const response = await client.addMessage(threadId, {
  content: 'Your message here',
  files: ['path/to/file.pdf'], // Optional attachments
  llm_provider: 'openai', // Optional
  model_name: 'gpt-4o', // Optional
  stream: false, // Set to true for streaming
  memory: 'Auto', // Optional: "Auto", "Readonly", or "off" (default)
  // memory_pro: 'Auto' // Optional: Memory Pro — higher accuracy (cannot combine with memory)
  json_output: true, // Optional: request JSON object output from the model
});

// Streaming messages
const stream = await client.addMessage(threadId, {
  content: 'Hello',
  stream: true
});

for await (const chunk of stream) {
  if (chunk.type === 'content_streaming') {
    process.stdout.write(chunk.content || '');
  }
}

Thinking (Reasoning Models)

Pass a thinking object to addMessage to activate extended reasoning on supported models (e.g. claude-3-7-sonnet, o3, o4-mini). Control reasoning depth with effort ("low" / "medium" / "high") or a token budget via budget_tokens.

Non-streaming – read response.reasoning

const response = await client.addMessage(threadId, {
  content: 'Solve: if 2x + 3 = 11, what is x?',
  llm_provider: 'anthropic',
  model_name: 'claude-3-7-sonnet-20250219',
  stream: false,
  thinking: { effort: 'high' },
});

// response.content   → final answer
// response.reasoning → full reasoning/thinking text (null if model produced none)
console.log('Answer:', response.content);
if (response.reasoning) {
  console.log('Reasoning:', response.reasoning);
}

Streaming – handle reasoning_streaming events

const stream = await client.addMessage(threadId, {
  content: 'Solve: if 2x + 3 = 11, what is x?',
  llm_provider: 'anthropic',
  model_name: 'claude-3-7-sonnet-20250219',
  stream: true,
  thinking: { effort: 'high' },
});

for await (const chunk of stream) {
  if (chunk.type === 'reasoning_streaming') {
    // `chunk.content` is the incremental reasoning delta for this chunk.
    // `chunk.accumulated_reasoning` is the full reasoning text received so far —
    // use this instead of manually concatenating deltas.
    process.stdout.write(chunk.content || '');
  } else if (chunk.type === 'content_streaming') {
    process.stdout.write(chunk.content || '');
  }
}

accumulated_reasoning grows monotonically with each reasoning_streaming event, so you can always hand the latest value to a UI without keeping your own buffer.
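
For example, a small sketch that drives a UI straight from accumulated_reasoning (renderReasoning is a hypothetical render function, not part of the SDK):

for await (const chunk of stream) {
  if (chunk.type === 'reasoning_streaming') {
    // Take the server-accumulated text as-is; no manual concatenation needed.
    renderReasoning(chunk.accumulated_reasoning || ''); // hypothetical UI update
  }
}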

TypeScript Types

import type { MessageResponse, ToolOutputsResponse } from 'backboard-sdk';

// Both interfaces include:
//   reasoning?: string | null

Memory

// Add a memory
await client.addMemory(assistantId, {
  content: 'User prefers JavaScript programming',
  metadata: { category: 'preference' }
});

// Get all memories (supports page/pageSize pagination)
const memories = await client.getMemories(assistantId, { page: 1, pageSize: 25 });
for (const memory of memories.memories) {
  console.log(`${memory.id}: ${memory.content}`);
}

// Get specific memory
const memory = await client.getMemory(assistantId, memoryId);

// Update memory
await client.updateMemory(assistantId, memoryId, {
  content: 'Updated content'
});

// Delete memory
await client.deleteMemory(assistantId, memoryId);

// Get memory stats
const stats = await client.getMemoryStats(assistantId);
console.log(`Total memories: ${stats.totalMemories}`);

// Use memory in conversation
const response = await client.addMessage(threadId, {
  content: 'What do you know about me?',
  memory: 'Auto' // Enable memory search and automatic updates
});

Tool Integration (Simplified in v1.3.3)

Tool Definitions

// Use plain JSON objects (no verbose SDK classes needed!)
const tools = [
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get current weather",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
      }
    }
  }
];

const assistant = await client.createAssistant({
  name: 'Weather Assistant',
  system_prompt: 'You are a helpful weather assistant',
  tools: tools
});

Tool Call Handling

// Enhanced object-oriented access with automatic JSON parsing
const response = await client.addMessage(threadId, {
  content: "What's the weather in San Francisco?"
});

if (response.status === 'REQUIRES_ACTION' && response.toolCalls) {
  const toolOutputs = [];
  
  // Process each tool call
  for (const tc of response.toolCalls) {
    if (tc.function.name === 'get_weather') {
      // Get parsed arguments (required parameters are guaranteed by the API)
      const args = tc.function.parsedArguments;
      const location = args.location;
      
      // Execute your function and format the output
      const weatherData = {
        temperature: '68°F',
        condition: 'Sunny',
        location: location
      };
      
      toolOutputs.push({
        tool_call_id: tc.id,
        output: JSON.stringify(weatherData)
      });
    }
  }
  
  // Submit the tool outputs back to continue the conversation
  const finalResponse = await client.submitToolOutputs(
    threadId,
    response.runId,
    toolOutputs
  );
  
  console.log(finalResponse.content);
}

Documents

// Upload document to assistant
const document = await client.uploadDocumentToAssistant(
  assistantId,
  'path/to/document.pdf'
);

// Upload document to thread
const document = await client.uploadDocumentToThread(
  threadId,
  'path/to/document.pdf'
);

// List assistant documents
const documents = await client.listAssistantDocuments(assistantId);

// List thread documents
const documents = await client.listThreadDocuments(threadId);

// Get document status
const document = await client.getDocumentStatus(documentId);

// Delete document
const result = await client.deleteDocument(documentId);
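
Document processing is asynchronous, so you may want to poll getDocumentStatus until it finishes. A sketch under assumed status values ('COMPLETED' / 'FAILED' are guesses; substitute the values your API actually returns):

// Poll until a document finishes processing.
async function waitForDocument(documentId, { intervalMs = 2000, timeoutMs = 120000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const doc = await client.getDocumentStatus(documentId);
    if (doc.status === 'COMPLETED') return doc; // assumed terminal status
    if (doc.status === 'FAILED') throw new Error('Document processing failed'); // assumed
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for document processing');
}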

Error Handling

The SDK includes comprehensive error handling:

import {
  BackboardAPIError,
  BackboardValidationError,
  BackboardNotFoundError,
  BackboardRateLimitError,
  BackboardServerError
} from 'backboard-sdk';

try {
  const assistant = await client.getAssistant('invalid_id');
} catch (error) {
  if (error instanceof BackboardNotFoundError) {
    console.log('Assistant not found');
  } else if (error instanceof BackboardValidationError) {
    console.log(`Validation error: ${error.message}`);
  } else if (error instanceof BackboardAPIError) {
    console.log(`API error: ${error.message}`);
  }
}

Advanced Tool Example

Here's a more comprehensive tool definition example:

const weatherTool = {
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: {
          type: 'string',
          description: 'The city and state, e.g. San Francisco, CA'
        },
        unit: {
          type: 'string',
          enum: ['celsius', 'fahrenheit'],
          description: 'Temperature unit'
        }
      },
      required: ['location']
    }
  }
};

const assistant = await client.createAssistant({
  name: 'Weather Assistant',
  system_prompt: 'You are an AI assistant that can check weather information',
  tools: [weatherTool]
});

// Handle tool calls with the new simplified approach
const response = await client.addMessage(threadId, {
  content: "What's the weather in San Francisco?"
});

if (response.toolCalls) {
  for (const tc of response.toolCalls) {
    if (tc.function.name === 'get_weather') {
      const { location, unit } = tc.function.parsedArguments;
      const weather = await getWeatherData(location, unit);
      
      await client.submitToolOutputs(threadId, response.runId, [{
        tool_call_id: tc.id,
        output: JSON.stringify(weather)
      }]);
    }
  }
}

Supported File Types

The SDK supports uploading the following file types:

  • Documents: .pdf, .doc, .docx, .ppt, .pptx, .xls, .xlsx
  • Text / Data: .txt, .csv, .md, .markdown, .json, .jsonl, .xml
  • Code: .py, .js, .ts, .jsx, .tsx, .html, .css, .cpp, .c, .h, .java, .go, .rs, .rb, .php, .sql
  • Images (with embedded-image RAG support): .png, .jpg, .jpeg, .webp, .gif, .bmp, .tiff, .tif

Requirements

  • Node.js 16.0.0 or higher
  • ES modules support
  • TypeScript users: type declarations (.d.ts) are included, emitted to dist/ by npm run build

Local Development

# install deps
npm install

# build TypeScript -> dist/ (runs on publish via npm prepare)
npm run build

# lint/tests (if present)
npm run lint
npm test

License

MIT License - see LICENSE file for details.

Support