@mothership/rebill-sdk

TypeScript SDK for querying Rebill model predictions from Databricks.

Installation

pnpm add @mothership/rebill-sdk

Or with npm:

npm install @mothership/rebill-sdk

Prerequisites

1. Create a Service Principal in Databricks

You'll need OAuth M2M credentials (service principal) to authenticate with Databricks:

  1. Go to your Databricks workspace
  2. Navigate to Settings → Identity and Access → Service Principals
  3. Click Add Service Principal
  4. Note the Application ID (this is your clientId)
  5. Generate an OAuth Secret (this is your clientSecret)
  6. Grant the service principal appropriate permissions:
    • CAN USE on the SQL Warehouse
    • SELECT permission on rebill_model.v1.predictions table

2. Get Your SQL Warehouse ID

  1. Navigate to SQL Warehouses in Databricks
  2. Select the rebill_model warehouse
  3. Copy the Warehouse ID from the URL or settings
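
Once you have the credentials and warehouse ID, a quick sanity check is to connect and immediately close. This is a minimal sketch using only the SDK's documented client; connect() should fail with a CONNECTION_ERROR if anything above is misconfigured:

import { RebillClient } from '@mothership/rebill-sdk';

const client = new RebillClient({
  workspaceUrl: 'https://mothership-production.cloud.databricks.com',
  clientId: process.env.DATABRICKS_CLIENT_ID!,
  clientSecret: process.env.DATABRICKS_CLIENT_SECRET!,
  warehouseId: process.env.DATABRICKS_WAREHOUSE_ID!,
});

// connect() authenticates the service principal and resolves the warehouse,
// so a successful connect/close round trip confirms the setup above.
await client.connect();
console.log('Connected to Databricks');
await client.close();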

Usage

Basic Example

import { RebillClient, getRecordByEdiInvoiceInboundId } from '@mothership/rebill-sdk';

async function main() {
  // Create client with OAuth M2M credentials
  const client = new RebillClient({
    workspaceUrl: 'https://mothership-production.cloud.databricks.com',
    clientId: process.env.DATABRICKS_CLIENT_ID!,
    clientSecret: process.env.DATABRICKS_CLIENT_SECRET!,
    warehouseId: process.env.DATABRICKS_WAREHOUSE_ID!,
  });

  try {
    // Connect to Databricks
    await client.connect();

    // Query a prediction by EDI invoice ID
    const record = await getRecordByEdiInvoiceInboundId(client, '12345');
    
    if (record) {
      console.log('Model:', record.model_name, 'v' + record.model_version);
      console.log('Customer Summary:', record.prediction.customer_summary);
      console.log('Line Items:', record.prediction.rebill_line_items.length);
      
      // Show details of each rebill line item
      record.prediction.rebill_line_items.forEach((item, index) => {
        console.log(`\nLine Item ${index + 1}:`);
        console.log('  Type:', item.type);
        console.log('  Amount: $', item.amount);
        console.log('  Description:', item.line_item_description);
        console.log('  Root Cause:', item.root_cause);
      });
      
      // Calculate total amount from line items
      const totalAmount = record.prediction.rebill_line_items.reduce(
        (sum, item) => sum + item.amount, 
        0
      );
      console.log('\nTotal Amount: $', totalAmount.toFixed(2));
    } else {
      console.log('No prediction found');
    }
  } finally {
    // Always close the connection
    await client.close();
  }
}

main().catch(console.error);

Query Multiple Records

The SDK batches multiple IDs into a single Databricks query rather than issuing one round trip per ID:

import { RebillClient, getRecordsByEdiInvoiceInboundIds } from '@mothership/rebill-sdk';

const client = new RebillClient({ /* config */ });
await client.connect();

const ids = ['12345', '67890', '11111'];
// Makes a single batched query to Databricks
const records = await getRecordsByEdiInvoiceInboundIds(client, ids);

console.log(`Found ${records.length} predictions`);
records.forEach(record => {
  const types = record.prediction.rebill_line_items.map(item => item.type).join(', ');
  const total = record.prediction.rebill_line_items.reduce((sum, item) => sum + item.amount, 0);
  console.log(`${record.edi_invoice_inbound_id}: ${types} ($${total.toFixed(2)})`);
});

await client.close();

Query Options

// Return the prediction JSON unparsed (as a raw string)
const rawRecord = await getRecordByEdiInvoiceInboundId(client, '12345', { 
  parsePrediction: false 
});

// Use a custom query timeout
const record = await getRecordByEdiInvoiceInboundId(client, '12345', { 
  timeout: 60000 // 60 seconds
});
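
With parsePrediction: false, the returned value presumably matches the PredictionRecord shape in the Types section below, with the model output left as a raw JSON string in prediction_json. A sketch, under that assumption, of deferring the parse:

if (rawRecord) {
  // prediction_json holds the unparsed model output; parse it on demand.
  const prediction = JSON.parse(rawRecord.prediction_json) as RebillPrediction;
  console.log('Line items:', prediction.rebill_line_items.length);
}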

API Reference

RebillClient

Main client class for interacting with Databricks.

Constructor

new RebillClient(config: OAuthConfig)

Methods

  • connect(): Promise<void> - Initialize connection to Databricks
  • close(): Promise<void> - Close the connection
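
Since every session pairs connect() with close(), it can be convenient to wrap that pattern. The helper below is not part of the SDK, just a sketch built on the two methods above (and assuming OAuthConfig is exported, as the Types section suggests):

import { RebillClient } from '@mothership/rebill-sdk';
import type { OAuthConfig } from '@mothership/rebill-sdk';

// Hypothetical helper: run a callback with a connected client and
// guarantee close() runs even if the callback throws.
async function withRebillClient<T>(
  config: OAuthConfig,
  fn: (client: RebillClient) => Promise<T>
): Promise<T> {
  const client = new RebillClient(config);
  await client.connect();
  try {
    return await fn(client);
  } finally {
    await client.close();
  }
}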

Query Functions

  • getRecordByEdiInvoiceInboundId(client, ediInvoiceInboundId, options?) - Query a single prediction by EDI invoice inbound ID
  • getRecordsByEdiInvoiceInboundIds(client, ediInvoiceInboundIds, options?) - Query multiple records in a single batched query

Types

interface OAuthConfig {
  workspaceUrl: string;
  clientId: string;
  clientSecret: string;
  warehouseId: string;
}

interface PredictionRecord {
  inference_id: string;
  edi_invoice_inbound_id: string;
  model_name: string;
  model_version: string;
  input_hash: string;
  prediction_json: string;
  processed_at: Date;
}

interface ParsedPrediction extends Omit<PredictionRecord, 'prediction_json'> {
  prediction: RebillPrediction;
}

interface RebillLineItem {
  type: string;
  amount: number;
  line_item_description: string;
  root_cause: string;
  supporting_documentation: string;
}

interface RebillPrediction {
  rebill_line_items: RebillLineItem[];
  customer_summary: string;
}

interface QueryOptions {
  parsePrediction?: boolean;
  timeout?: number;
}
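
As a small illustration of these types, here is one way to group line items by their type field (a sketch; assumes RebillLineItem is exported):

import type { RebillLineItem } from '@mothership/rebill-sdk';

// Bucket line items by type, e.g. all items of one type together.
function groupByType(items: RebillLineItem[]): Record<string, RebillLineItem[]> {
  return items.reduce<Record<string, RebillLineItem[]>>((acc, item) => {
    (acc[item.type] ??= []).push(item);
    return acc;
  }, {});
}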

Error Handling

The SDK throws RebillSDKError for all errors:

import { RebillClient, getRecordByEdiInvoiceInboundId, RebillSDKError } from '@mothership/rebill-sdk';

try {
  const record = await getRecordByEdiInvoiceInboundId(client, '12345');
} catch (error) {
  if (error instanceof RebillSDKError) {
    console.error('SDK Error:', error.code, error.message);
    console.error('Original Error:', error.originalError);
  }
}

Error codes:

  • CONNECTION_ERROR - Failed to connect to Databricks
  • NOT_CONNECTED - Attempted to query without connecting
  • QUERY_ERROR - Query execution failed
  • PARSE_ERROR - Failed to parse prediction JSON
  • CLOSE_ERROR - Failed to close connection
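
Of these, CONNECTION_ERROR is the most likely to be transient. A sketch of a retry wrapper (not part of the SDK) that retries only that code:

import { RebillSDKError } from '@mothership/rebill-sdk';

// Hypothetical helper: retry an operation on transient connection failures
// with linear backoff; rethrow everything else immediately.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (error) {
      const transient =
        error instanceof RebillSDKError && error.code === 'CONNECTION_ERROR';
      if (!transient || i >= attempts - 1) throw error;
      await new Promise((resolve) => setTimeout(resolve, 1000 * (i + 1)));
    }
  }
}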

Environment Variables

It's recommended to store credentials in environment variables:

# .env
DATABRICKS_CLIENT_ID=your-service-principal-id
DATABRICKS_CLIENT_SECRET=your-service-principal-secret
DATABRICKS_WAREHOUSE_ID=your-warehouse-id
DATABRICKS_WORKSPACE_URL=https://mothership-production.cloud.databricks.com

Then use with dotenv:

import 'dotenv/config';
import { RebillClient } from '@mothership/rebill-sdk';

const client = new RebillClient({
  workspaceUrl: process.env.DATABRICKS_WORKSPACE_URL!,
  clientId: process.env.DATABRICKS_CLIENT_ID!,
  clientSecret: process.env.DATABRICKS_CLIENT_SECRET!,
  warehouseId: process.env.DATABRICKS_WAREHOUSE_ID!,
});

Development

Setup

# Install dependencies
pnpm install

# Set up git hooks (husky)
pnpm prepare

Building

# Build the SDK
pnpm build

# Type check without emitting files
pnpm build:check

# Watch mode for development
pnpm watch

Testing

# Run tests once
pnpm test

# Run tests in watch mode
pnpm test:watch

# Run tests with coverage report
pnpm test:coverage

Linting & Formatting

The SDK uses Biome for linting and formatting:

# Check formatting
pnpm format:check

# Fix formatting issues
pnpm format:fix

# Check for linting issues
pnpm lint:check

# Fix linting issues automatically
pnpm lint:fix

# Run both linting and formatting checks
pnpm lint

Pre-commit Hooks

The project uses Husky to run linting automatically before commits. The pre-commit hook will:

  • Run pnpm lint:fix to automatically fix linting and formatting issues
  • Prevent the commit if there are unfixable issues

Clean

# Remove build artifacts and coverage reports
pnpm clean