
@lti-tool/postgresql v1.0.2 (Published)

PostgreSQL storage for LTI 1.3 @lti-tool

@lti-tool/postgresql

Installation

npm install @lti-tool/postgresql

Quick Start

import { PostgresStorage } from '@lti-tool/postgresql';
import { LTITool } from '@lti-tool/core';

const storage = new PostgresStorage({
  connectionUrl: process.env.DATABASE_URL!,
});

const ltiTool = new LTITool({
  storage,
  // ... other config
});

Features

  • Production Ready - Handles high-scale LTI deployments
  • Built-in Caching - LRU cache for frequently accessed data
  • Type-Safe - Uses Drizzle ORM for database operations
  • Transaction Support - Ensures data integrity on deletes
  • Tuned Connection Pools - Pool defaults chosen automatically for your hosting environment

API Reference

Configuration

Using Drizzle Kit Push (Recommended for Development)

# Set your DATABASE_URL
export DATABASE_URL="postgresql://user:password@host:port/database"

# Push schema to database
npx drizzle-kit push
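Both drizzle-kit commands read a `drizzle.config.ts` in your project root. A minimal sketch is shown below; the `schema` path is an assumption for illustration and depends on where the package exposes its Drizzle schema:

```typescript
// drizzle.config.ts - minimal sketch for drizzle-kit push/migrate.
// The schema path below is hypothetical; adjust it to where the
// @lti-tool/postgresql schema is actually exposed in your install.
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  dialect: 'postgresql',
  schema: './node_modules/@lti-tool/postgresql/dist/schema.js', // hypothetical path
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
});
```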

Using Migrations (Recommended for Production)

# Apply migrations
npx drizzle-kit migrate

PostgresStorageConfig

  • connectionUrl (required): PostgreSQL connection URL, in the format postgresql://user:password@host:port/database

  • poolOptions (optional): postgres.js connection options

    • max: Max connections (auto: 1 for serverless, 10 for servers)
    • idleTimeout: Idle timeout in seconds before connection is closed (default: 20)
  • nonceExpirationSeconds (optional): Nonce TTL in seconds (default: 600)

  • logger (optional): Pino logger for debugging
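Putting these options together, a full configuration object might look like the following sketch (all values are illustrative; only connectionUrl is required):

```typescript
// Illustrative PostgresStorageConfig; field names follow the reference
// above, values are examples only.
const config = {
  // Required: PostgreSQL connection URL
  connectionUrl: 'postgresql://user:password@localhost:5432/database',
  // Optional: postgres.js connection options
  poolOptions: {
    max: 10,         // max connections (auto-detected when omitted)
    idleTimeout: 20, // seconds an idle connection lives (default: 20)
  },
  // Optional: nonce TTL in seconds (default: 600)
  nonceExpirationSeconds: 600,
};
```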

Database Schema

The adapter uses these tables:

  • clients: LTI platform clients. Unique constraint: (iss, clientId)
  • deployments: Platform deployments (many-to-one with clients). Unique constraint: (clientId, deploymentId)
  • sessions: LTI sessions with expiration. Indexed: expiresAt
  • nonces: One-time use nonces. Primary key: nonce. Indexed: expiresAt
  • registration_sessions: Dynamic registration sessions. Indexed: expiresAt

All tables use native PostgreSQL UUIDs for primary keys and include indexes for performance.

clients

| Column   | Type         | Constraints           | Description                    |
| -------- | ------------ | --------------------- | ------------------------------ |
| id       | UUID         | PRIMARY KEY, NOT NULL | Internal UUID for the client   |
| name     | VARCHAR(255) | NOT NULL              | Human-readable platform name   |
| iss      | VARCHAR(255) | NOT NULL              | Issuer URL (LMS platform)      |
| clientId | VARCHAR(255) | NOT NULL              | LMS-provided client identifier |
| authUrl  | TEXT         | NOT NULL              | OAuth2 authorization endpoint  |
| tokenUrl | TEXT         | NOT NULL              | OAuth2 token endpoint          |
| jwksUrl  | TEXT         | NOT NULL              | JWKS endpoint for public keys  |

Indexes:

  • issuer_client_idx: (clientId, iss) - For fast client lookups
  • iss_client_id_unique: (iss, clientId) - Unique constraint preventing duplicate clients

deployments

| Column       | Type         | Constraints           | Description                        |
| ------------ | ------------ | --------------------- | ---------------------------------- |
| id           | UUID         | PRIMARY KEY, NOT NULL | Internal UUID for the deployment   |
| deploymentId | VARCHAR(255) | NOT NULL              | LMS-provided deployment identifier |
| name         | VARCHAR(255) | NULL                  | Optional human-readable name       |
| description  | TEXT         | NULL                  | Optional description               |
| clientId     | UUID         | NOT NULL, FOREIGN KEY | References clients.id              |

Indexes:

  • deployment_id_idx: (deploymentId) - For fast deployment lookups
  • client_deployment_unique: (clientId, deploymentId) - Unique constraint per client

sessions

| Column    | Type                     | Constraints           | Description                  |
| --------- | ------------------------ | --------------------- | ---------------------------- |
| id        | UUID                     | PRIMARY KEY, NOT NULL | Session UUID                 |
| data      | JSONB                    | NOT NULL              | Complete LTI session data    |
| expiresAt | TIMESTAMP WITH TIME ZONE | NOT NULL              | Session expiration timestamp |

Indexes:

  • sessions_expires_at_idx: (expiresAt) - For cleanup queries and expiration checks

nonces

| Column    | Type                     | Constraints           | Description                |
| --------- | ------------------------ | --------------------- | -------------------------- |
| nonce     | VARCHAR(255)             | PRIMARY KEY, NOT NULL | One-time use nonce value   |
| expiresAt | TIMESTAMP WITH TIME ZONE | NOT NULL              | Nonce expiration timestamp |

registration_sessions

| Column    | Type                     | Constraints           | Description                       |
| --------- | ------------------------ | --------------------- | --------------------------------- |
| id        | UUID                     | PRIMARY KEY, NOT NULL | Registration session UUID         |
| data      | JSONB                    | NOT NULL              | Dynamic registration session data |
| expiresAt | TIMESTAMP WITH TIME ZONE | NOT NULL              | Session expiration timestamp      |

Indexes:

  • reg_sessions_expires_at_idx: (expiresAt) - For cleanup queries and expiration checks

Connection Pool Behavior

The adapter automatically detects your deployment environment:

  • Serverless (Lambda, Cloud Functions, Vercel, Netlify): max: 1
  • Traditional Servers (EC2, containers, VMs): max: 10

How Connection Pooling Works

  • Starts with 0 connections
  • Creates connections on-demand when queries execute
  • Reuses idle connections before creating new ones
  • Increases to max based on concurrent load
  • Closes idle connections after idleTimeout seconds (default: 20)

Manual Override

const storage = new PostgresStorage({
  connectionUrl: process.env.DATABASE_URL!,
  poolOptions: {
    max: 20, // Override auto-detection
    idleTimeout: 30, // Keep connections alive longer
  },
});

Deployment Patterns

Long-Running Servers

import { PostgresStorage } from '@lti-tool/postgresql';

export const storage = new PostgresStorage({
  connectionUrl: process.env.DATABASE_URL!,
});

// Optional: Graceful shutdown
const shutdown = async () => {
  await storage.close();
  process.exit(0);
};

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);

Connection Limits:

  • Low traffic: 5-10 connections
  • Medium traffic: 10-20 connections
  • High traffic: 20-50 connections
  • Never exceed PostgreSQL max_connections

AWS Lambda / Serverless

import { PostgresStorage } from '@lti-tool/postgresql';

let storage: PostgresStorage | undefined;

export const handler = async (event) => {
  if (!storage) {
    storage = new PostgresStorage({
      connectionUrl: process.env.DATABASE_URL!,
      // Auto-detects Lambda, uses max: 1
    });
  }

  // Use storage...
};

Why max: 1? Lambda containers handle one request at a time. The connection is reused across warm invocations.

Do I need close()? No. Lambda freezes idle containers rather than destroying them, so calling close() would discard a connection that could be reused on the next warm invocation.

Edge Runtime Warning

⚠️ This adapter is not supported in edge runtimes.

Periodic Cleanup

The adapter requires periodic cleanup of expired nonces, sessions, and registration sessions.

// Example - AWS Lambda with EventBridge (every 30 minutes)
// `storage` is the PostgresStorage instance created at cold start
export const handler = async () => {
  const result = await storage.cleanup();
  console.log('Cleanup:', result);
  // { noncesDeleted: 42, sessionsDeleted: 15, registrationSessionsDeleted: 3 }
};
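On a long-running server, the same call can be wrapped in a small helper and scheduled with setInterval. The sketch below types the result shape shown above; CleanupCapable and runCleanup are illustrative names, not part of the package's API:

```typescript
// Result shape returned by cleanup(), as shown in the example above.
type CleanupResult = {
  noncesDeleted: number;
  sessionsDeleted: number;
  registrationSessionsDeleted: number;
};

// Stand-in interface; the real PostgresStorage instance satisfies it.
interface CleanupCapable {
  cleanup(): Promise<CleanupResult>;
}

// Run one cleanup pass and return the total number of rows removed.
async function runCleanup(storage: CleanupCapable): Promise<number> {
  const { noncesDeleted, sessionsDeleted, registrationSessionsDeleted } =
    await storage.cleanup();
  return noncesDeleted + sessionsDeleted + registrationSessionsDeleted;
}

// Schedule it every 30 minutes on a long-running server:
// setInterval(() => { runCleanup(storage).catch(console.error); }, 30 * 60 * 1000);
```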

Development & Testing

Start Local PostgreSQL

# Using Docker
docker-compose up -d

# Using Podman
podman-compose up -d

# Or Podman directly
podman run -d \
  --name lti-postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=lti_test \
  -e POSTGRES_USER=lti_user \
  -p 5432:5432 \
  postgres:16

Run Tests

DATABASE_URL="postgresql://lti_user:postgres@localhost:5432/lti_test" npm test

Important: Always close the pool after tests:

afterAll(async () => {
  await storage.close();
});

Environment Detection

Auto-detects serverless by checking:

  • AWS Lambda: AWS_LAMBDA_FUNCTION_NAME, AWS_EXECUTION_ENV
  • Google Cloud: FUNCTION_NAME, K_SERVICE
  • Azure: FUNCTIONS_WORKER_RUNTIME
  • Vercel: VERCEL
  • Netlify: NETLIFY
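A simplified sketch of this check, based on the environment variables listed above (the adapter's actual detection logic may differ, and the function names here are illustrative):

```typescript
// Env vars that indicate a serverless platform, per the list above.
const SERVERLESS_ENV_VARS = [
  'AWS_LAMBDA_FUNCTION_NAME', // AWS Lambda
  'AWS_EXECUTION_ENV',        // AWS Lambda
  'FUNCTION_NAME',            // Google Cloud Functions
  'K_SERVICE',                // Google Cloud Functions / Cloud Run
  'FUNCTIONS_WORKER_RUNTIME', // Azure Functions
  'VERCEL',                   // Vercel
  'NETLIFY',                  // Netlify
];

// True when any serverless marker is present in the environment.
function isServerless(env: Record<string, string | undefined> = process.env): boolean {
  return SERVERLESS_ENV_VARS.some((name) => Boolean(env[name]));
}

// Pool size default: 1 for serverless, 10 for traditional servers.
function defaultMaxConnections(env: Record<string, string | undefined> = process.env): number {
  return isServerless(env) ? 1 : 10;
}
```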