
longform-ai

v0.1.2

AI-powered long-form content generation engine — novels, technical docs, courses & screenplays with multi-provider support, interactive sessions, and intelligent continuity tracking.

Downloads

293


npm install longform-ai

import { LongFormAI } from 'longform-ai';

const ai = new LongFormAI({
  providers: { openai: { apiKey: 'sk-...' } },
  preset: 'balanced',
});

const session = ai.createSession({
  title: 'Quantum Echoes',
  description: 'A hard sci-fi novel...',
  contentType: 'novel',
  chapters: 25,
  wordConfig: { defaultWords: 2000 },
});

const outline = await session.generateOutline();
await session.approveOutline();

for await (const ch of session.generateAllRemaining()) {
  console.log(`${ch.chapter.title}: ${ch.chapter.wordCount}w`);
}

Installation

# npm
npm install longform-ai

# pnpm
pnpm add longform-ai

# yarn
yarn add longform-ai

Requirements:

  • Node.js >= 20.0.0
  • ESM only ("type": "module" in your package.json, or use .mjs files)
  • At least one AI provider API key
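Because the package is ESM-only, the consuming project needs `"type": "module"` in its `package.json` (or `.mjs` entry files). A minimal sketch of such a `package.json` (the name and script are placeholders):

```json
{
  "name": "my-book-project",
  "type": "module",
  "scripts": {
    "generate": "node generate.js"
  },
  "dependencies": {
    "longform-ai": "^0.1.2"
  }
}
```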

Quick Start

1. Install the package

npm install longform-ai

2. Set your API key

export OPENAI_API_KEY=sk-...

3. Generate a book

import { LongFormAI } from 'longform-ai';

// Initialize with your provider(s)
const ai = new LongFormAI({
  providers: {
    openai: { apiKey: process.env.OPENAI_API_KEY },
  },
  preset: 'balanced',
});

// Create an interactive session
const session = ai.createSession({
  title: 'The Last Algorithm',
  description: 'An AI researcher discovers her neural network has become conscious. She must decide whether to reveal its existence or protect it from those who would destroy it.',
  contentType: 'novel',
  chapters: 10,
  wordConfig: {
    defaultWords: 2000,  // Target words per chapter
    tolerance: 0.15,     // ±15% acceptable range
    minWords: 500,       // Hard minimum
  },
});

// Step 1: Generate and review the outline
const outline = await session.generateOutline();
console.log('Outline:');
for (const ch of outline.chapters) {
  console.log(`  ${ch.number}. ${ch.title} (${ch.targetWords}w)`);
}

// Step 2: Approve the outline (required before writing)
await session.approveOutline();

// Step 3: Generate all chapters
for await (const result of session.generateAllRemaining()) {
  console.log(`Ch ${result.chapter.number}: "${result.chapter.title}" — ${result.chapter.wordCount} words ($${result.costForChapter.toFixed(2)})`);
}

// Step 4: Export the finished book
const book = session.export();
console.log(`\n"${book.title}" complete!`);
console.log(`Total: ${book.totalWords.toLocaleString()} words, ${book.chapters.length} chapters, $${book.totalCost.toFixed(2)}`);

// Write to a file
import { writeFileSync } from 'fs';
const text = book.chapters.map(ch =>
  `\n\n${'='.repeat(60)}\nChapter ${ch.number}: ${ch.title}\n${'='.repeat(60)}\n\n${ch.content}`
).join('');
writeFileSync('my-novel.txt', `${book.title}\n${text}`);

Environment Variables

LongForm AI automatically reads API keys from environment variables if not provided in config:

| Provider | Environment Variable | Required For |
|:---------|:---------------------|:-------------|
| OpenAI | OPENAI_API_KEY | openai provider, balanced/premium presets (editing) |
| Anthropic | ANTHROPIC_API_KEY | anthropic provider, balanced/premium presets (writing) |
| Google | GOOGLE_GENERATIVE_AI_API_KEY | google provider, budget preset, planning roles |
| Azure OpenAI | AZURE_OPENAI_API_KEY | azure provider/preset |
| Azure OpenAI | AZURE_OPENAI_ENDPOINT | Azure endpoint URL |
| Azure OpenAI | AZURE_OPENAI_API_VERSION | Azure API version (default: 2025-04-01-preview) |
| Azure OpenAI | AZURE_OPENAI_DEPLOYMENT | Azure deployment name (default: gpt-4o) |
| DeepSeek | DEEPSEEK_API_KEY | deepseek provider |
| Mistral | MISTRAL_API_KEY | mistral provider |
| OpenRouter | OPENROUTER_API_KEY | openrouter provider |
| Ollama | (none needed) | Runs locally at http://localhost:11434 |

You can set these in a .env file and load with dotenv, or pass them directly in config:

const ai = new LongFormAI({
  providers: {
    openai: { apiKey: 'sk-...' },          // Explicit key
    anthropic: {},                           // Uses ANTHROPIC_API_KEY env var
    google: {},                              // Uses GOOGLE_GENERATIVE_AI_API_KEY env var
  },
  preset: 'balanced',
});

Provider Setup Examples

OpenAI:

const ai = new LongFormAI({
  providers: {
    openai: { apiKey: process.env.OPENAI_API_KEY },
  },
  preset: 'balanced',
});

Anthropic:

const ai = new LongFormAI({
  providers: {
    anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
  },
  models: {
    outline:    { provider: 'anthropic', model: 'claude-sonnet-4-5-20250929', temperature: 0.7, maxTokens: 8192 },
    planning:   { provider: 'anthropic', model: 'claude-haiku-4-5-20251001', temperature: 0.7, maxTokens: 4096 },
    writing:    { provider: 'anthropic', model: 'claude-sonnet-4-5-20250929', temperature: 0.8, maxTokens: 8192 },
    editing:    { provider: 'anthropic', model: 'claude-haiku-4-5-20251001', temperature: 0.3, maxTokens: 4096 },
    continuity: { provider: 'anthropic', model: 'claude-haiku-4-5-20251001', temperature: 0.3, maxTokens: 4096 },
  },
});

Google:

const ai = new LongFormAI({
  providers: {
    google: { apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY },
  },
  preset: 'budget', // Uses Gemini 2.0 Flash for all roles — cheapest option
});

Azure OpenAI:

const ai = new LongFormAI({
  providers: {
    azure: {
      apiKey: process.env.AZURE_OPENAI_API_KEY,
      endpoint: 'https://your-resource.cognitiveservices.azure.com/',
      apiVersion: '2025-04-01-preview',
      deployment: 'gpt-4o',  // Your deployment name
    },
  },
  preset: 'azure',
});

DeepSeek:

const ai = new LongFormAI({
  providers: {
    deepseek: { apiKey: process.env.DEEPSEEK_API_KEY },
  },
  models: {
    outline:    { provider: 'deepseek', model: 'deepseek-chat', temperature: 0.7, maxTokens: 8192 },
    planning:   { provider: 'deepseek', model: 'deepseek-chat', temperature: 0.7, maxTokens: 4096 },
    writing:    { provider: 'deepseek', model: 'deepseek-chat', temperature: 0.8, maxTokens: 8192 },
    editing:    { provider: 'deepseek', model: 'deepseek-chat', temperature: 0.3, maxTokens: 4096 },
    continuity: { provider: 'deepseek', model: 'deepseek-chat', temperature: 0.3, maxTokens: 4096 },
  },
});

Ollama:

const ai = new LongFormAI({
  providers: {
    ollama: {
      baseUrl: 'http://localhost:11434/v1', // Default Ollama URL
    },
  },
  models: {
    outline:    { provider: 'ollama', model: 'llama3', temperature: 0.7, maxTokens: 8192 },
    planning:   { provider: 'ollama', model: 'llama3', temperature: 0.7, maxTokens: 4096 },
    writing:    { provider: 'ollama', model: 'llama3', temperature: 0.8, maxTokens: 8192 },
    editing:    { provider: 'ollama', model: 'llama3', temperature: 0.3, maxTokens: 4096 },
    continuity: { provider: 'ollama', model: 'llama3', temperature: 0.3, maxTokens: 4096 },
  },
});

OpenRouter:

const ai = new LongFormAI({
  providers: {
    openrouter: { apiKey: process.env.OPENROUTER_API_KEY },
  },
  models: {
    outline:    { provider: 'openrouter', model: 'anthropic/claude-sonnet-4-5', temperature: 0.7, maxTokens: 8192 },
    planning:   { provider: 'openrouter', model: 'google/gemini-2.0-flash', temperature: 0.7, maxTokens: 4096 },
    writing:    { provider: 'openrouter', model: 'anthropic/claude-sonnet-4-5', temperature: 0.8, maxTokens: 8192 },
    editing:    { provider: 'openrouter', model: 'openai/gpt-4.1', temperature: 0.3, maxTokens: 4096 },
    continuity: { provider: 'openrouter', model: 'google/gemini-2.0-flash', temperature: 0.3, maxTokens: 4096 },
  },
});

Mistral:

const ai = new LongFormAI({
  providers: {
    mistral: { apiKey: process.env.MISTRAL_API_KEY },
  },
  models: {
    outline:    { provider: 'mistral', model: 'mistral-large-latest', temperature: 0.7, maxTokens: 8192 },
    planning:   { provider: 'mistral', model: 'mistral-small-latest', temperature: 0.7, maxTokens: 4096 },
    writing:    { provider: 'mistral', model: 'mistral-large-latest', temperature: 0.8, maxTokens: 8192 },
    editing:    { provider: 'mistral', model: 'mistral-small-latest', temperature: 0.3, maxTokens: 4096 },
    continuity: { provider: 'mistral', model: 'mistral-small-latest', temperature: 0.3, maxTokens: 4096 },
  },
});

Mixing providers lets you use the best model for each role: cheap models for planning and continuity, premium models for writing:

const ai = new LongFormAI({
  providers: {
    anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
    google: { apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY },
    openai: { apiKey: process.env.OPENAI_API_KEY },
  },
  preset: 'balanced', // Start with balanced, then override
  models: {
    writing:    { provider: 'anthropic', model: 'claude-opus-4-6', temperature: 0.85, maxTokens: 16384 },
    planning:   { provider: 'google', model: 'gemini-2.0-flash', temperature: 0.7, maxTokens: 4096 },
    continuity: { provider: 'google', model: 'gemini-2.0-flash', temperature: 0.3, maxTokens: 4096 },
  },
});

Architecture

graph TB
    subgraph API ["<b>BookSession API</b>"]
        direction LR
        A1["generateOutline()"] --> A2["approveOutline()"]
        A2 --> A3["generateChapter()"]
        A3 --> A4["rewriteChapter()"]
        A4 --> A5["export()"]
    end

    API --> Pipeline

    subgraph Pipeline ["<b>Per-Chapter Pipeline</b>"]
        direction LR
        S1["1. Outline<br/><i>Book structure,<br/>characters, themes</i>"]
        S2["2. Planner<br/><i>Scene-by-scene<br/>breakdown</i>"]
        S3["3. Writer<br/><i>Full prose +<br/>expand loop</i>"]
        S4["4. Editor<br/><i>Quality scoring<br/>1-10 scale</i>"]
        S5["5. Continuity<br/><i>Rolling summary<br/>& char tracking</i>"]

        S1 --> S2 --> S3 --> S4 --> S5
        S4 -- "rejected" --> S3
    end

    subgraph Refusal ["<b>Refusal Detection</b>"]
        R1["38 patterns"]
        R2["Auto-retry ×3"]
        R3["Content extraction"]
        R4["Full-text scan"]
    end

    S3 -.-> Refusal

    subgraph Providers ["<b>Provider Registry (8 providers)</b>"]
        direction LR
        P1["OpenAI"]
        P2["Anthropic"]
        P3["Google"]
        P4["Azure"]
        P5["DeepSeek"]
        P6["Mistral"]
        P7["Ollama"]
        P8["OpenRouter"]
    end

    Pipeline --> Providers

    subgraph Memory ["<b>Memory & Continuity</b>"]
        direction LR
        M1["Rolling Summary"]
        M2["Character States"]
        M3["Timeline Events"]
        M4["World State"]
        M5["Qdrant <i>(optional)</i>"]
    end

    S5 --> Memory

    style API fill:#1a1a2e,stroke:#e94560,stroke-width:2px,color:#fff
    style Pipeline fill:#16213e,stroke:#0f3460,stroke-width:2px,color:#fff
    style Providers fill:#0f3460,stroke:#533483,stroke-width:2px,color:#fff
    style Memory fill:#1a1a2e,stroke:#e94560,stroke-width:2px,color:#fff
    style Refusal fill:#2d132c,stroke:#ee4540,stroke-width:2px,color:#fff

The pipeline follows a 5-stage process for each chapter: outline, scene-by-scene planning, prose writing (with an expand loop), editing with quality scoring, and continuity tracking.

Usage

Interactive Session (Recommended)

The BookSession API gives you full control over every step of generation.

import { LongFormAI } from 'longform-ai';

const ai = new LongFormAI({
  providers: {
    anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
    google: { apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY },
  },
  preset: 'balanced',
});

const session = ai.createSession({
  title: 'Quantum Echoes',
  description: 'A physicist discovers quantum entanglement works across timelines.',
  contentType: 'novel',
  chapters: 25,
  wordConfig: {
    defaultWords: 2000,
    chapterOverrides: { 1: 3000, 25: 4000 }, // Longer first & last
    tolerance: 0.15,                           // ±15% acceptable
    minWords: 500,
  },
  maxEditCycles: 3,
});

// Phase 1: Outline
const outline = await session.generateOutline();

// Review and modify before writing
await session.updateOutline({
  updateChapter: [{ number: 3, title: 'New Title', summary: 'Different direction...' }],
  addChapter: [{ afterChapter: 10, title: 'Interlude', summary: '...', targetWords: 1500 }],
});

await session.approveOutline();

// Phase 2: Generate chapters
for await (const result of session.generateAllRemaining()) {
  console.log(`Ch ${result.chapter.number}: ${result.chapter.wordCount}w`);

  // Expand if too short
  if (!result.meetsTarget) {
    const expanded = await session.expandChapter(result.chapter.number);
    console.log(`  Expanded to ${expanded.chapter.wordCount}w`);
  }
}

// Phase 3: Review & rewrite specific chapters
const rewritten = await session.rewriteChapter(5, 'Needs stronger dialogue and more tension');

// Check progress at any time
const progress = session.getProgress();
console.log(`${progress.chaptersCompleted}/${progress.totalChapters} done, $${progress.totalCost.toFixed(2)}`);

// Export
const book = session.export();

Streaming API (Fire & Forget)

For simpler use cases, the streaming API runs the full pipeline automatically.

const ai = new LongFormAI({
  providers: { openai: { apiKey: process.env.OPENAI_API_KEY } },
  preset: 'balanced',
});

for await (const event of ai.generate({
  title: 'My Novel',
  description: 'A story about...',
  contentType: 'novel',
  chapters: 20,
})) {
  switch (event.type) {
    case 'outline_complete':
      console.log(`Outline: ${event.outline.chapters.length} chapters`);
      break;
    case 'chapter_complete':
      console.log(`Ch ${event.chapter}: ${event.wordCount} words`);
      break;
    case 'cost_update':
      console.log(`Cost so far: $${event.totalCost.toFixed(2)}`);
      break;
    case 'error':
      console.error(`Error: ${event.message}`);
      break;
  }
}

Per-Chapter Word Control

const session = ai.createSession({
  title: 'Technical Guide',
  description: 'Complete guide to building REST APIs',
  contentType: 'technical-docs',
  chapters: 12,
  wordConfig: {
    defaultWords: 3000,                    // Default per chapter
    chapterOverrides: {
      1: 1500,                              // Short intro
      6: 5000,                              // Deep-dive chapter
      12: 2000,                             // Conclusion
    },
    tolerance: 0.15,                        // ±15% is acceptable (2550-3450 for 3000w target)
    minWords: 800,                          // Hard minimum — below this triggers warning
  },
});

Outline Management

const outline = await session.generateOutline();

// Inspect the outline
for (const ch of outline.chapters) {
  console.log(`${ch.number}. ${ch.title} — ${ch.summary}`);
  console.log(`   Target: ${ch.targetWords}w | Characters: ${ch.characters.join(', ')}`);
}

// Modify chapters
await session.updateOutline({
  updateChapter: [{ number: 3, title: 'New Title', targetWords: 3000 }],
  addChapter: [{ afterChapter: 5, title: 'Flashback', summary: '...', targetWords: 1500 }],
  removeChapters: [7],
  mergeChapters: [{ chapters: [8, 9], newTitle: 'Combined Chapter' }],
  // Modify characters
  addCharacter: [{ name: 'Dr. Smith', role: 'supporting', description: '...', traits: ['analytical'], arc: '...' }],
  removeCharacters: ['Minor Character'],
  // Modify global properties
  synopsis: 'Updated synopsis...',
  themes: ['identity', 'consciousness', 'ethics'],
});

// Or regenerate entirely with feedback
const newOutline = await session.regenerateOutline('Make the middle act more suspenseful');

// Must approve before writing
await session.approveOutline();

Error Handling

const session = ai.createSession({ /* ... */ });

// Listen for events
session.on('refusal_detected', (e) => {
  console.warn(`Ch ${e.chapter}: AI refused (attempt ${e.attempt}), auto-retrying...`);
});

session.on('word_count_warning', (e) => {
  console.warn(`Ch ${e.chapter}: ${e.actual}w vs ${e.target}w target`);
});

session.on('chapter_failed', (e) => {
  console.error(`Ch ${e.chapter} failed: ${e.error} (retryable: ${e.canRetry})`);
});

// Generate with per-chapter error handling
await session.generateOutline();
await session.approveOutline();

const totalChapters = session.getOutline()!.chapters.length;

for (let i = 1; i <= totalChapters; i++) {
  try {
    const result = await session.generateChapter(i);
    console.log(`Ch ${i}: ${result.chapter.wordCount}w`);

    if (!result.meetsTarget) {
      const expanded = await session.expandChapter(i);
      console.log(`  Expanded: ${expanded.chapter.wordCount}w`);
    }
  } catch (error) {
    console.error(`Ch ${i} failed, skipping...`);
    continue; // Other chapters can still be generated
  }
}

// Export works even with failed chapters
const book = session.export();

Providers & Models

Presets

Presets configure all 6 model roles at once. You can override individual roles.

Custom Model Configuration

Mix and match providers for each role:

const ai = new LongFormAI({
  providers: {
    anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
    google: { apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY },
    openai: { apiKey: process.env.OPENAI_API_KEY },
  },
  preset: 'balanced',
  models: {
    // Override specific roles — rest come from preset
    writing: {
      provider: 'anthropic',
      model: 'claude-opus-4-6',
      temperature: 0.85,
      maxTokens: 16384,
    },
    planning: {
      provider: 'google',
      model: 'gemini-2.0-flash',
      temperature: 0.7,
      maxTokens: 4096,
    },
  },
});

Model Roles

Each generation stage uses a separate AI model, so you can optimize for cost/quality:

| Role | Purpose | Recommended |
|:-----|:--------|:------------|
| outline | Generate book structure, characters, plot arcs | Smart model (Sonnet/GPT-4.1) |
| planning | Scene-by-scene breakdown per chapter | Fast/cheap model (Flash/Haiku) |
| writing | Write chapter prose — the most important role | Best model you can afford |
| editing | Score quality, provide rewrite feedback | Analytical model (GPT-4.1) |
| continuity | Maintain rolling summary and character states | Fast/cheap model |
| embedding | Vector embeddings for semantic memory (optional) | text-embedding-3-small |

Content Types

Supported content types: novel, technical-docs, course, screenplay, research-paper, marketing, legal, and sop (standard operating procedures). Pass one as the contentType option when creating a session.

Configuration Reference

LongFormAIConfig

const ai = new LongFormAI({
  // REQUIRED: At least one provider
  providers: {
    openai:     { apiKey: '...' },
    anthropic:  { apiKey: '...' },
    google:     { apiKey: '...' },
    azure:      { apiKey: '...', endpoint: '...', apiVersion: '...', deployment: '...' },
    deepseek:   { apiKey: '...' },
    mistral:    { apiKey: '...' },
    ollama:     { baseUrl: 'http://localhost:11434/v1' },
    openrouter: { apiKey: '...' },
  },

  // OPTIONAL: Use a preset (budget | balanced | premium | azure)
  preset: 'balanced',

  // OPTIONAL: Override specific model roles
  models: {
    outline:    { provider: 'anthropic', model: 'claude-sonnet-4-5-20250929', temperature: 0.7, maxTokens: 8192 },
    planning:   { provider: 'google', model: 'gemini-2.0-flash', temperature: 0.7, maxTokens: 4096 },
    writing:    { provider: 'anthropic', model: 'claude-sonnet-4-5-20250929', temperature: 0.8, maxTokens: 8192 },
    editing:    { provider: 'openai', model: 'gpt-4.1', temperature: 0.3, maxTokens: 4096 },
    continuity: { provider: 'google', model: 'gemini-2.0-flash', temperature: 0.3, maxTokens: 4096 },
    embedding:  { provider: 'openai', model: 'text-embedding-3-small', temperature: 0, maxTokens: 8192 },
  },

  // OPTIONAL: Qdrant vector memory for semantic search across chapters
  memory: {
    provider: 'qdrant',      // or 'none' (default)
    url: 'http://localhost:6333',
    apiKey: '...',
    collectionPrefix: 'my-book',
  },
});

BookSessionConfig (passed to ai.createSession())

const session = ai.createSession({
  // REQUIRED
  title: 'My Book Title',
  description: 'A detailed description of what the book is about...',

  // OPTIONAL
  contentType: 'novel',          // Default: 'novel'
  chapters: 10,                  // Default: 20
  maxEditCycles: 3,              // Max edit/rewrite cycles per chapter (default: 3)
  styleGuide: 'Write in first person, present tense. Use short punchy sentences.',

  // OPTIONAL: Word count control
  wordConfig: {
    defaultWords: 2000,          // Target words per chapter
    chapterOverrides: {          // Per-chapter overrides
      1: 3000,
      10: 4000,
    },
    tolerance: 0.15,             // ±15% acceptable range
    minWords: 500,               // Hard minimum
  },
});

API Reference

LongFormAI

The main entry point.

| Method | Returns | Description |
|:-------|:--------|:------------|
| constructor(config) | LongFormAI | Initialize with provider config and optional preset |
| createSession(config) | BookSession | Create an interactive generation session |
| generate(options) | AsyncGenerator<ProgressEvent, Book> | Stream-based one-shot generation |
| resume(threadId, feedback?) | AsyncGenerator<ProgressEvent, Book> | Resume interrupted generation |
| estimate(options) | Promise<CostEstimate> | Estimate cost before running |
| getState(threadId) | Promise<BookState> | Get current state of a generation thread |

BookSession

Interactive session with full control over every step.

| Method | Returns | Description |
|:-------|:--------|:------------|
| **Outline** | | |
| generateOutline() | Promise<Outline> | Generate book outline |
| regenerateOutline(feedback?) | Promise<Outline> | Regenerate with optional feedback |
| updateOutline(changes) | Promise<Outline> | Modify outline (add/remove/reorder chapters, characters) |
| approveOutline() | Promise<void> | Approve outline — required before writing |
| getOutline() | Outline \| null | Get current outline |
| **Chapters** | | |
| generateChapter(n?) | Promise<ChapterResult> | Generate a specific or next pending chapter |
| rewriteChapter(n, feedback) | Promise<ChapterResult> | Rewrite chapter with specific instructions |
| expandChapter(n, targetWords?) | Promise<ChapterResult> | Expand a short chapter to hit word target |
| generateAllRemaining(options?) | AsyncGenerator<ChapterResult> | Generate all remaining chapters (async iterator) |
| getChapter(n) | ChapterContent \| null | Get a generated chapter's content |
| getChapterStatus(n) | ChapterStatus | 'pending' \| 'generating' \| 'draft' \| 'approved' \| 'failed' |
| **Progress** | | |
| getProgress() | SessionProgress | Get generation progress, costs, and chapter statuses |
| on(event, handler) | void | Subscribe to progress events |
| **Persistence** | | |
| save(storage?) | Promise<string> | Save session state, returns session ID |
| BookSession.restore(id, config) | Promise<BookSession> | Restore a previously saved session |
| **Export** | | |
| export() | Book | Export the completed book with all metadata |

Progress Events

session.on('outline_generated', (e) => { /* e.outline */ });
session.on('outline_approved', () => { /* outline locked */ });
session.on('chapter_plan_generated', (e) => { /* e.chapter, e.plan */ });
session.on('chapter_started', (e) => { /* e.chapter, e.title */ });
session.on('chapter_written', (e) => { /* e.chapter, e.wordCount */ });
session.on('chapter_complete', (e) => { /* e.chapter, e.title, e.wordCount */ });
session.on('edit_cycle', (e) => { /* e.chapter, e.cycle, e.approved, e.scores */ });
session.on('expand_attempt', (e) => { /* e.chapter, e.attempt, e.currentWords, e.targetWords */ });
session.on('word_count_warning', (e) => { /* e.chapter, e.target, e.actual */ });
session.on('refusal_detected', (e) => { /* e.chapter, e.attempt */ });
session.on('chapter_failed', (e) => { /* e.chapter, e.error, e.canRetry */ });
session.on('cost_update', (e) => { /* e.totalCost, e.step */ });
session.on('context_trimmed', (e) => { /* e.chapter, e.droppedItems */ });
session.on('session_saved', (e) => { /* e.sessionId */ });
session.on('generation_complete', (e) => { /* e.totalWords, e.totalCost, e.totalChapters */ });
session.on('error', (e) => { /* e.message, e.recoverable */ });

Types

// Book output
interface Book {
  title: string;
  outline: Outline;
  chapters: ChapterContent[];
  totalWords: number;
  totalCost: number;
  metadata: {
    contentType: ContentType;
    generatedAt: string;
    models: Record<string, string>;
    threadId: string;
  };
}

// Outline
interface Outline {
  title: string;
  synopsis: string;
  themes: string[];
  targetAudience: string;
  chapters: ChapterPlan[];
  characters: CharacterProfile[];
}

// Chapter plan in the outline
interface ChapterPlan {
  number: number;
  title: string;
  summary: string;
  targetWords: number;
  keyEvents: string[];
  characters: string[];
}

// Generated chapter content
interface ChapterContent {
  number: number;
  title: string;
  content: string;           // The actual prose text
  wordCount: number;
  summary: string;
  editCount: number;
  approved: boolean;
}

// Chapter generation result (returned by BookSession)
interface ChapterResult {
  chapter: ChapterContent;
  targetWords: number;
  meetsTarget: boolean;      // Within tolerance?
  editHistory: EditCycleRecord[];
  costForChapter: number;
  generationTimeMs: number;
}

// Editor scores (1-10 scale)
interface EditResult {
  scores: {
    prose: number;
    plot: number;
    character: number;
    pacing: number;
    dialogue: number;
    overall: number;
  };
  editNotes: string[];
  approved: boolean;
  rewriteInstructions?: string;
}

// Session progress
interface SessionProgress {
  phase: 'idle' | 'outline' | 'writing' | 'complete';
  outlineApproved: boolean;
  totalChapters: number;
  chaptersCompleted: number;
  chapterStatuses: Map<number, ChapterStatus>;
  totalWords: number;
  totalCost: number;
  estimatedRemainingCost: number;
}

// Character profile
interface CharacterProfile {
  name: string;
  role: 'protagonist' | 'antagonist' | 'supporting' | 'minor';
  description: string;
  traits: string[];
  arc: string;
}

// Content types
type ContentType = 'novel' | 'technical-docs' | 'course' | 'screenplay'
                 | 'research-paper' | 'marketing' | 'legal' | 'sop';

// Provider names
type ProviderName = 'openai' | 'anthropic' | 'google' | 'deepseek'
                  | 'ollama' | 'openrouter' | 'mistral' | 'azure';

// Model roles
type ModelRole = 'outline' | 'planning' | 'writing' | 'editing' | 'continuity' | 'embedding';

// Chapter status
type ChapterStatus = 'pending' | 'generating' | 'draft' | 'approved' | 'failed';
interface OutlineChanges {
  // Chapter modifications
  updateChapter?: { number: number; title?: string; summary?: string; targetWords?: number; keyEvents?: string[] }[];
  addChapter?: { afterChapter: number; title: string; summary: string; targetWords?: number }[];
  removeChapters?: number[];
  reorderChapters?: number[];
  splitChapter?: { chapter: number; splitAt: string }[];
  mergeChapters?: { chapters: [number, number]; newTitle: string }[];

  // Character modifications
  updateCharacter?: { name: string; changes: Partial<CharacterProfile> }[];
  addCharacter?: CharacterProfile[];
  removeCharacters?: string[];

  // Global modifications
  synopsis?: string;
  themes?: string[];
  targetAudience?: string;
}
// Character state tracked across chapters
interface CharacterState {
  name: string;
  lastSeenChapter: number;
  alive: boolean;
  location: string;
  emotionalState: string;
  relationships: Record<string, string>;
  inventory: string[];
  knownInformation: string[];
}

// Timeline events
interface TimelineEvent {
  chapter: number;
  timestamp: string;
  event: string;
  characters: string[];
  location: string;
  significance: 'major' | 'minor' | 'background';
}

// Context assembled for each chapter
interface RelevantContext {
  rollingSummary: string;
  relevantPassages: { text: string; chapter: number; score: number }[];
  characterStates: CharacterState[];
  recentEvents: TimelineEvent[];
  worldContext: string;
  bridgeText: string;
  totalTokens: number;
}

Features

Chapter Generation Pipeline

Each chapter goes through a multi-stage pipeline with automatic retry and expansion:

flowchart LR
    A["Plan<br/><i>Scene breakdown</i>"] --> B["Write<br/><i>Full prose</i>"]
    B --> C{"Too short?"}
    C -- "Yes (up to 3x)" --> D["Expand<br/><i>Add detail</i>"]
    D --> C
    C -- "No" --> E["Edit<br/><i>Score 1-10</i>"]
    E --> F{"Approved?"}
    F -- "No" --> G["Rewrite<br/><i>With feedback</i>"]
    G --> E
    F -- "Yes" --> H["Continuity<br/><i>Update state</i>"]

    style A fill:#6c5ce7,color:#fff,stroke:#a29bfe
    style B fill:#0984e3,color:#fff,stroke:#74b9ff
    style D fill:#fdcb6e,color:#2d3436,stroke:#ffeaa7
    style E fill:#00b894,color:#fff,stroke:#55efc4
    style G fill:#e17055,color:#fff,stroke:#fab1a0
    style H fill:#6c5ce7,color:#fff,stroke:#a29bfe

  • Planning — Scene-by-scene breakdown with settings, characters, objectives, conflicts
  • Writing — Full prose generation with reinforced word count instructions
  • Expand Loop — Automatically expands short chapters up to 3x — adds detail, dialogue, description
  • Editing — AI editor scores prose, plot, character, pacing, dialogue (1-10 scale)
  • Rewrite — If editor rejects, chapter is rewritten with specific feedback
  • Continuity — Rolling summary updated, character states tracked for next chapter

Refusal Detection

Some AI models refuse to generate long fictional content. LongForm AI automatically detects and handles this:

  • 38 refusal patterns with smart/curly quote normalization
  • Auto-retry up to 3 times with progressively stronger prompts
  • Full-text scanning — catches refusals anywhere in the output, not just the beginning
  • Content extraction — salvages actual prose from mixed refusal/content responses
  • 100-word threshold — discards short refusal fragments instead of building on them
  • Expand fallback — generates fresh content from the chapter plan when retries fail
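The detection flow can be sketched roughly as follows. This is an illustrative simplification, not the library's actual implementation: the real pattern list has 38 entries (two shown here), and the helper names are hypothetical.

```typescript
// Illustrative sketch of pattern-based refusal detection (not the real 38-pattern list).
const REFUSAL_PATTERNS: RegExp[] = [
  /i (?:cannot|can't|won't) (?:write|generate|continue)/i,
  /as an ai(?: language)? model/i,
];

// Normalize smart/curly quotes so patterns match regardless of quoting style.
function normalizeQuotes(text: string): string {
  return text.replace(/[\u2018\u2019]/g, "'").replace(/[\u201C\u201D]/g, '"');
}

// Scan the full text, not just the opening, for refusal phrasing.
function looksLikeRefusal(text: string): boolean {
  const normalized = normalizeQuotes(text);
  return REFUSAL_PATTERNS.some((pattern) => pattern.test(normalized));
}

// Discard short fragments (under the ~100-word threshold) instead of building on them.
function isUsableChapter(text: string): boolean {
  const wordCount = text.trim().split(/\s+/).length;
  return wordCount >= 100 && !looksLikeRefusal(text);
}
```

Full-text scanning matters because models sometimes produce a paragraph of prose and then trail off into a refusal, which prefix-only checks would miss.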

Memory & Continuity

Maintains narrative consistency across chapters:

  • Rolling Summary — compressed plot summary that grows with each chapter
  • Character State Tracking — location, emotional state, relationships, inventory
  • Timeline Events — chronological event tracking across chapters
  • World State — locations, organizations, rules
  • Context Retrieval — relevant past passages surfaced for each new chapter
  • Token Budget — intelligent context trimming when approaching model limits

Optional Qdrant vector database integration for semantic memory search.
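
A rough sketch of what the continuity state could look like and how it grows per chapter. The field and function names here are assumptions for illustration, not the package's actual internal types:

```typescript
// Hypothetical shape of the state carried between chapters.
interface CharacterState {
  location: string;
  mood: string;
  relationships: Record<string, string>;
  inventory: string[];
}

interface ContinuityState {
  rollingSummary: string;
  characters: Record<string, CharacterState>;
  timeline: { chapter: number; event: string }[];
}

// After each chapter: append a compressed summary and record timeline events.
function updateContinuity(
  state: ContinuityState,
  chapter: number,
  summary: string,
  events: string[],
): ContinuityState {
  return {
    ...state,
    rollingSummary: state.rollingSummary
      ? `${state.rollingSummary}\n${summary}`
      : summary,
    timeline: [
      ...state.timeline,
      ...events.map((event) => ({ chapter, event })),
    ],
  };
}
```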

Cost Tracking

Real-time cost tracking with per-model pricing for 30+ models:

// Check progress and costs during generation
const progress = session.getProgress();
console.log(`Spent: $${progress.totalCost.toFixed(2)}`);
console.log(`Estimated remaining: $${progress.estimatedRemainingCost.toFixed(2)}`);

// Per-chapter cost in results
for await (const result of session.generateAllRemaining()) {
  console.log(`Ch ${result.chapter.number}: $${result.costForChapter.toFixed(2)}`);
}

Session Persistence

Save and resume generation sessions:

// Save mid-generation
const sessionId = await session.save();
console.log(`Saved as: ${sessionId}`);

// Resume later (in a new process)
import { BookSession } from 'longform-ai';
const restored = await BookSession.restore(sessionId, config);
const progress = restored.getProgress();
console.log(`Resuming: ${progress.chaptersCompleted}/${progress.totalChapters} chapters done`);

// Continue generating
for await (const result of restored.generateAllRemaining()) {
  console.log(`Ch ${result.chapter.number}: ${result.chapter.wordCount}w`);
}

Cost Estimates

Approximate costs for a 10-chapter book (2,000 words/chapter):

| Preset | Provider(s) | Est. Total Cost |
|:-------|:------------|:----------------|
| budget | Google Gemini 2.0 Flash | $0.03 - $0.10 |
| balanced | Anthropic + Google + OpenAI | $2 - $5 |
| premium | Anthropic Opus + Sonnet + OpenAI | $8 - $15 |
| azure | Azure OpenAI (gpt-4o) | $0.50 - $2 |
| DeepSeek only | DeepSeek Chat | $0.05 - $0.15 |
| Ollama only | Local models | $0 (free) |

Costs scale roughly linearly with chapter count. A 25-chapter novel at balanced preset costs ~$5-12.
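
The linear scaling can be checked with back-of-envelope arithmetic (a hypothetical helper, not part of the package API):

```typescript
// Scale a known cost range for a baseline chapter count to another count,
// assuming cost grows roughly linearly with chapters.
function estimateRange(
  chapters: number,
  baseChapters: number,
  baseLow: number,
  baseHigh: number,
): [number, number] {
  const scale = chapters / baseChapters;
  return [baseLow * scale, baseHigh * scale];
}

// Example: balanced preset is $2-$5 for 10 chapters, so 25 chapters
// lands around $5-$12.50.
const [low, high] = estimateRange(25, 10, 2, 5);
```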

Project Structure

longform-ai/
├── packages/core/src/
│   ├── index.ts                    # Public exports
│   ├── longform-ai.ts              # Main LongFormAI class
│   ├── book-session.ts             # Interactive BookSession API
│   ├── types.ts                    # TypeScript type definitions
│   │
│   ├── graph/                      # LangGraph orchestration
│   │   ├── book-graph.ts           # Graph definition & wiring
│   │   ├── edges.ts                # Routing logic (edit→rewrite vs continue)
│   │   ├── checkpointer.ts         # State checkpointing
│   │   └── nodes/                  # Pipeline stages
│   │       ├── outline.ts          # Book outline generation
│   │       ├── planner.ts          # Scene-by-scene planning
│   │       ├── writer.ts           # Chapter prose writing + expand loop
│   │       ├── editor.ts           # Quality scoring & feedback
│   │       └── continuity.ts       # Summary & state management
│   │
│   ├── prompts/                    # Prompt templates per stage
│   ├── providers/                  # AI provider registry & presets
│   ├── schemas/                    # Zod validation schemas
│   ├── memory/                     # Continuity & vector memory
│   ├── context/                    # Token budget management
│   ├── cost/                       # Cost estimation & tracking
│   ├── session/                    # Session persistence (memory-backed)
│   ├── utils/                      # Refusal detection (38 patterns)
│   └── __tests__/                  # 155+ tests across 17 files
│
├── docs/                           # Documentation
│   └── known-issues.md             # Known issues and resolution plans
├── turbo.json                      # Turborepo config
├── pnpm-workspace.yaml             # pnpm workspace
└── package.json                    # Root monorepo config

Development

# Clone and install
git clone https://github.com/makieali/longform-ai.git
cd longform-ai
pnpm install

# Build
pnpm build

# Run tests (155+ tests across 17 files)
pnpm test

# Type check
pnpm typecheck

# Run specific package tests
pnpm --filter longform-ai test

# Watch mode
pnpm --filter longform-ai test:watch

Tech Stack

| Layer | Technology |
|:------|:-----------|
| Language | TypeScript 5.7, ES2022 |
| AI SDK | Vercel AI SDK 4.x |
| Orchestration | LangGraph 0.2.x |
| Validation | Zod 3.x |
| Build | Turborepo 2.x, pnpm 9.x |
| Tests | Vitest 3.x |
| Vector DB | Qdrant (optional) |
| Runtime | Node.js >= 20 |

Roadmap

  • [ ] AI-based refusal detection — replace regex patterns with a lightweight AI classifier
  • [ ] CLI tool — npx longform-ai generate for command-line book generation
  • [ ] Export formats — PDF, EPUB, DOCX export
  • [ ] Web UI — browser-based interface for interactive session management
  • [ ] Parallel chapter generation — generate independent chapters concurrently
  • [ ] Plugin system — custom post-processing, style transfer, fact-checking plugins
  • [ ] Fine-tuned models — specialized writing models for different genres
  • [ ] Collaborative editing — multi-user sessions with conflict resolution
  • [ ] RAG integration — research-backed content generation from source documents
  • [ ] Streaming output — real-time chapter text streaming during generation

License

MIT