
@chinchillaenterprises/mcp-recall v1.1.0

MCP Recall - Event-Driven Meeting Transcription Server


Event-driven MCP server for Recall.ai meeting transcription with enhanced speaker identification and local storage

🚀 Features

  • Async Event-Driven Architecture: Start transcription jobs and check progress without blocking
  • Enhanced Speaker Identification: Combines Recall.ai's speaker timeline with Whisper transcription
  • Local Temporary Storage: Stores transcripts locally with automatic cleanup
  • Multi-Region Support: Works with all Recall.ai regions (US, EU, Asia)
  • Intelligent Summarization: Extracts key decisions, action items, and topics
  • Chunked Access: Get specific time ranges or search within transcripts
  • Background Processing: Uses OpenAI Whisper for high-quality transcription
  • Job Management: Track multiple transcription jobs with detailed progress

📋 Prerequisites

  • Node.js 18+
  • Python 3.8+ with the following packages:
    pip install openai-whisper requests
  • Recall.ai API Key - Get yours at recall.ai
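The Node.js 18+ requirement can be verified at startup before anything else runs. A minimal sketch (this helper is illustrative, not part of the package):

```javascript
// Parse a Node.js version string like "v18.19.0" and check the major version.
function meetsNodeRequirement(versionString, minMajor = 18) {
  const major = Number(versionString.replace(/^v/, "").split(".")[0]);
  return Number.isInteger(major) && major >= minMajor;
}

// Check the running process; process.version looks like "v20.11.1".
if (!meetsNodeRequirement(process.version)) {
  console.error(`Node.js ${process.version} is too old; 18+ is required.`);
  process.exit(1);
}
```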

🛠️ Installation

Quick Install for Claude Code

# Install globally for all projects
claude mcp add recall -s user -e RECALL_API_KEY=your_api_key -- npx @chinchillaenterprises/mcp-recall

# Or install for specific project
claude mcp add recall -s project -e RECALL_API_KEY=your_api_key -- npx @chinchillaenterprises/mcp-recall

Manual Installation

  1. Install the package:

    npm install -g @chinchillaenterprises/mcp-recall
  2. Set up environment variables:

    export RECALL_API_KEY="your_recall_api_key"
    export RECALL_REGION="us-west-2"  # Optional: us-east-1, us-west-2, eu-central-1, ap-northeast-1
    export WHISPER_MODEL="base"       # Optional: tiny, base, small, medium, large
  3. Configure Claude Code (~/.claude.json):

    {
      "mcpServers": {
        "recall": {
          "type": "stdio",
          "command": "npx",
          "args": ["-y", "@chinchillaenterprises/mcp-recall"],
          "env": {
            "RECALL_API_KEY": "your_api_key_here",
            "RECALL_REGION": "us-west-2",
            "WHISPER_MODEL": "base"
          }
        }
      }
    }

🎯 Quick Start

  1. List available recordings:

    Use recall_list_bots to see all Recall.ai bots and their recording status
  2. Start transcription:

    Use recall_start_transcription with bot_id "c6ce75ce-ffa6-489a-8ebd-41fcfd4e17d8" to begin async transcription
  3. Check progress:

    Use recall_get_job_status with the job_id to see transcription progress
  4. Get results:

    Use recall_get_transcript_summary to get key decisions, action items, and participants

🔧 Available Tools

Core Operations

| Tool | Description | Parameters |
|------|-------------|------------|
| `recall_list_bots` | List all Recall.ai bots and recording status | None |
| `recall_start_transcription` | Start async transcription job | `bot_id: string` |
| `recall_get_job_status` | Get job status and progress | `job_id: string` |
| `recall_list_jobs` | List all transcription jobs | `limit?: number` |

Transcript Access

| Tool | Description | Parameters |
|------|-------------|------------|
| `recall_get_transcript_summary` | Get condensed summary with key points | `job_id: string` |
| `recall_get_transcript_chunk` | Get specific time range from transcript | `job_id: string`, `start_time?: number`, `end_time?: number` |
| `recall_search_transcript` | Search within transcript for keywords | `job_id: string`, `query: string` |

Maintenance

| Tool | Description | Parameters |
|------|-------------|------------|
| `recall_cleanup_old_jobs` | Manually trigger cleanup of old jobs | `days_old?: number` |
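The age-based cleanup boils down to filtering the job list against a cutoff timestamp. An illustrative sketch (the `createdAt` field name is an assumption, not the server's actual schema):

```javascript
// Return jobs older than `daysOld` days -- the candidates for cleanup.
// `now` is injectable for testing; `createdAt` is an assumed ISO-8601 field.
function findExpiredJobs(jobs, daysOld = 3, now = Date.now()) {
  const cutoff = now - daysOld * 24 * 60 * 60 * 1000;
  return jobs.filter((job) => new Date(job.createdAt).getTime() < cutoff);
}
```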

📊 Workflow Example

// 1. Check available recordings
await recall_list_bots()
// Returns: List of bots with recording status

// 2. Start transcription (async)
const jobId = await recall_start_transcription({ bot_id: "abc123" })
// Returns: job_id for tracking

// 3. Monitor progress
await recall_get_job_status({ job_id: jobId })
// Returns: status, progress %, metadata

// 4. Get condensed summary (when complete)
await recall_get_transcript_summary({ job_id: jobId })
// Returns: executive summary, decisions, action items, participants

// 5. Search specific content
await recall_search_transcript({ job_id: jobId, query: "action item" })
// Returns: matching lines with context

// 6. Get time-specific chunk
await recall_get_transcript_chunk({ 
  job_id: jobId, 
  start_time: 300,  // 5 minutes
  end_time: 600     // 10 minutes
})
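Steps 3 and 4 above can be automated with a small polling loop. A sketch, assuming the MCP tool calls are wrapped as async functions (the `getStatus` parameter is a stand-in for a `recall_get_job_status` call bound to a job ID):

```javascript
// Poll a job until it reaches a terminal state, waiting between checks.
// `getStatus` is any async function returning an object like { status, progress }.
async function waitForJob(getStatus, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await getStatus();
    if (job.status === "completed" || job.status === "failed") return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for transcription job");
}
```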

🗂️ Data Storage

Local Storage Structure

~/.mcp-recall/
├── jobs.json                    # Job tracking database
├── recordings/
│   └── {job_id}.mp4            # Downloaded recordings
├── transcripts/
│   ├── enhanced_{job_id}.txt   # Speaker-aligned transcripts
│   └── enhanced_{job_id}.json  # Raw transcription data
└── summaries/
    └── {job_id}.json          # Processed summaries

Automatic Cleanup

  • Daily cleanup at 2 AM removes jobs older than 3 days
  • Manual cleanup available via recall_cleanup_old_jobs
  • Configurable retention periods for all locally stored data

⚙️ Configuration

Environment Variables

| Variable | Description | Default | Options |
|----------|-------------|---------|---------|
| `RECALL_API_KEY` | Recall.ai API key | Required | Your API key |
| `RECALL_REGION` | API region | `us-west-2` | `us-east-1`, `us-west-2`, `eu-central-1`, `ap-northeast-1` |
| `WHISPER_MODEL` | Whisper model size | `base` | `tiny`, `base`, `small`, `medium`, `large` |

Model Comparison

| Model | Size | Speed | Accuracy | RAM Usage |
|-------|------|-------|----------|-----------|
| `tiny` | ~39 MB | Fastest | Good | ~390 MB |
| `base` | ~74 MB | Fast | Better | ~500 MB |
| `small` | ~244 MB | Medium | Very Good | ~1 GB |
| `medium` | ~769 MB | Slow | Excellent | ~2 GB |
| `large` | ~1550 MB | Slowest | Best | ~4 GB |
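One way to use this table is to pick the largest model that fits an available-RAM budget. An illustrative helper (the thresholds are the approximate RAM figures from the table above, not an official API):

```javascript
// Approximate RAM needed per Whisper model, in MB (from the table above).
const WHISPER_RAM_MB = { tiny: 390, base: 500, small: 1024, medium: 2048, large: 4096 };

// Pick the largest (most accurate) model that fits within the RAM budget,
// or null if even `tiny` does not fit.
function pickWhisperModel(availableRamMb) {
  const ordered = ["large", "medium", "small", "base", "tiny"];
  return ordered.find((model) => WHISPER_RAM_MB[model] <= availableRamMb) ?? null;
}
```

The chosen name can then be exported as `WHISPER_MODEL` before starting the server.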

🔍 Example Output

Job Status Response

{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "completed",
  "progress": 100,
  "bot_id": "c6ce75ce-ffa6-489a-8ebd-41fcfd4e17d8",
  "metadata": {
    "duration": 2234,
    "speaker_count": 5,
    "file_size": 45189510
  }
}

Transcript Summary

{
  "executive_summary": "37-minute team meeting with 5 participants discussing MCP server development, AI Academy launch, and Discord setup.",
  "participants": ["Abel", "Ricardo", "Soroush", "Kael C.", "Hailee"],
  "key_decisions": [
    "Use event-driven architecture for MCP recall server",
    "Implement dual storage pattern for credential persistence"
  ],
  "action_items": [
    "Ricardo: Finish Discord setup by Friday",
    "Hailee: Post AI Academy job on LinkedIn"
  ],
  "duration": "37 minutes",
  "topics": ["MCP servers", "AI Academy", "Discord setup", "LinkedIn posting"],
  "sentiment": "positive"
}

Enhanced Transcript Sample

=== ENHANCED TRANSCRIPT WITH SPEAKERS ===

**Abel** [0.0s]: That is recall. Oh, it's so cool. He just joined automatically or wonder. So we've pushed him in here.

**Soroush** [36.0s]: Hey, how's it going? Good. Good.

**Abel** [39.0s]: Recall join. Did you push him or did you jump by himself?

**Soroush** [42.0s]: I just added it to the channel and it joins automatically.

🚨 Error Handling

The server provides comprehensive error handling for:

  • API Connection Issues: Automatic retry with exponential backoff
  • Missing Dependencies: Clear error messages for Python/Whisper setup
  • Storage Failures: Graceful degradation with temporary storage
  • Transcription Errors: Detailed error reporting with context
  • Job State Management: Robust state tracking across restarts
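The retry behavior for API connection issues can be pictured as a generic exponential-backoff wrapper. A sketch of the pattern, not the server's actual implementation:

```javascript
// Retry an async operation with exponential backoff: wait baseMs after the
// first failure, then 2x, 4x, ... doubling each attempt up to maxRetries.
async function withRetry(operation, { maxRetries = 3, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retries exhausted, surface the error
      const delayMs = baseMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```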

🧪 Development

Local Development Setup

# Clone the repository
git clone https://github.com/ChinchillaEnterprises/ChillMCP.git
cd ChillMCP/mcp-recall

# Install dependencies
npm install

# Install Python dependencies
pip install openai-whisper requests

# Build the project
npm run build

# Test locally
claude mcp add recall-local -s user -- node $(pwd)/dist/index.js

Testing

# Run unit tests
npm test

# Run with coverage
npm run test:coverage

# Run integration tests
npm run test:integration

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes and add tests
  4. Build and test: npm run build && npm test
  5. Commit your changes: git commit -m 'Add amazing feature'
  6. Push to the branch: git push origin feature/amazing-feature
  7. Open a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Built with ❤️ by Chinchilla Enterprises for the MCP ecosystem.