
@alexanderfortin/pi-tavily-tools

v0.3.1

Pi coding agent extension for web search and content extraction using Tavily

Pi Tavily Web Search & Extract Extension

Add web search and content extraction capabilities to Pi coding agent using the Tavily API.

This extension provides two tools:

  • web_search: Find current information, recent news, documentation, and time-sensitive data
  • web_extract: Extract raw content from one or more web pages for detailed analysis

Requires a valid TAVILY_API_KEY exported in the environment, e.g.

TAVILY_API_KEY=tvly-xxxx-xxxxxxx-xxxxxxxxxx pi

You can get a free one at https://tavily.com
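As a quick sanity check that the key is exported, here is a minimal TypeScript sketch (not part of this extension; `buildSearchRequest` is a hypothetical helper) that constructs the same request the curl test in Troubleshooting sends to the Tavily search endpoint:

```typescript
// Hypothetical standalone check: build the request the Tavily search API
// expects and confirm the key is present in the environment.
function buildSearchRequest(apiKey: string, query: string) {
  return {
    url: "https://api.tavily.com/search",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ api_key: apiKey, query, max_results: 1 }),
    },
  };
}

const key = process.env.TAVILY_API_KEY;
if (!key) {
  console.error("TAVILY_API_KEY environment variable is not set.");
} else {
  const { url, init } = buildSearchRequest(key, "test");
  console.log(`Would POST to ${url}`);
  // To actually send it: const res = await fetch(url, init);
}
```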

Features

Web Search

  • Real-time Search: Query the web for current information
  • AI-Generated Answers: Get direct answers to questions powered by Tavily
  • Configurable Depth: Choose between "basic" and "advanced" search modes
  • Time Filtering: Limit results to recent timeframes (e.g., last 7 days)
  • Image Support: Include relevant images in search results
  • Raw Content: Optional raw content for deeper analysis

Web Extract

  • Content Extraction: Extract full content from one or more URLs
  • Batch Processing: Extract from up to 20 URLs in a single request
  • Configurable Depth: Choose between "basic" and "advanced" extraction
  • Multiple Formats: Output in markdown or plain text
  • Image Extraction: Optionally include images from pages
  • Query Filtering: Focus extraction on specific content within pages

Shared Features

  • Usage in Footer: Show current percentage of Tavily quota used in the footer
  • Proper Truncation: Output truncated to 50KB / 2000 lines to avoid context overflow
  • Custom TUI Rendering: Beautiful display with expandable results
  • Error Handling: Graceful failures with helpful error messages

Installation

Option 1: Install from npm (Recommended)

pi install npm:@alexanderfortin/pi-tavily-tools

Option 2: Install from Git

pi install git:github.com/shaftoe/pi-tavily-tools

Option 3: Quick Test with -e flag

git clone https://github.com/shaftoe/pi-tavily-tools
cd pi-tavily-tools
pi -e ./src/index.ts

Setup

1. Get a Tavily API Key

Visit https://tavily.com and sign up for a free account. The free tier includes:

  • 1,000 requests per month
  • Basic search depth
  • Standard rate limits

2. Configure the API Key

Set the TAVILY_API_KEY environment variable:

export TAVILY_API_KEY="your-api-key-here"

Add to your shell profile (~/.zshrc, ~/.bashrc, etc.) for persistent access:

echo 'export TAVILY_API_KEY="your-api-key-here"' >> ~/.zshrc
source ~/.zshrc

Usage

Basic Web Search

Simply ask Pi to search the web:

Search for the latest version of React
What are the new features in TypeScript 6.0?
Find recent news about artificial intelligence

Time-Limited Search

Limit results to a specific timeframe:

Search for AI news from the last 7 days
Show me the latest JavaScript updates from the past 30 days

Advanced Search

Use advanced search depth for more detailed results:

Search for quantum computing trends using advanced search

Image Search

Include relevant images in results:

Find cute cats with images

Raw Content

Get detailed content from search results:

Search for Bun test runner documentation and include raw content

Available Parameters

The web_search tool accepts the following parameters:

| Parameter | Type | Required | Default | Description |
| ------------------- | ------- | -------- | ------- | ---------------------------------------------- |
| query | string | Yes | - | The search query string |
| max_results | number | No | 5 | Number of results to return (1-20) |
| search_depth | string | No | "basic" | Search depth: "basic" or "advanced" |
| include_answer | boolean | No | true | Include AI-generated answer |
| include_raw_content | boolean | No | false | Include raw page content (markdown or text) |
| include_images | boolean | No | false | Include relevant images |
| days | number | No | - | Limit results to last N days (e.g., 7, 30, 90) |

Parameter Examples

// Basic search
{ query: "TypeScript 6" }

// Time-limited search
{ query: "AI news", days: 7 }

// Advanced search with more results
{ query: "quantum computing", search_depth: "advanced", max_results: 10 }

// Search with images
{ query: "cute cats", include_images: true }

// Search with raw content
{ query: "Bun documentation", include_raw_content: true }

Web Extract Usage

Basic Content Extraction

Extract the full content of a specific page:

Extract the content from https://example.com/article
Read the full article at https://docs.example.com/guide

Batch Extraction

Extract content from multiple URLs at once:

Extract content from these pages:
- https://wikipedia.org/wiki/Artificial_intelligence
- https://wikipedia.org/wiki/Machine_learning
- https://wikipedia.org/wiki/Data_science

Extract with Images

Include images when extracting content:

Extract the article and images from https://example.com/visual-guide

Plain Text Format

Get content in plain text instead of markdown:

Extract https://example.com/article as plain text

Content Filtering

Focus extraction on specific content:

Extract content about "security" from https://example.com/docs

Advanced Extraction

Use advanced extraction for more comprehensive content:

Extract detailed content from https://example.com/long-article using advanced mode

Web Extract Parameters

The web_extract tool accepts the following parameters:

| Parameter | Type | Required | Default | Description |
| -------------- | ------- | -------- | ---------- | ---------------------------------------------- |
| urls | array | Yes | - | Array of URLs to extract content from (max 20) |
| extract_depth | string | No | "basic" | Extraction depth: "basic" or "advanced" |
| include_images | boolean | No | false | Include images from pages |
| format | string | No | "markdown" | Output format: "markdown" or "text" |
| query | string | No | - | Optional query to focus extraction on content |

Parameter Examples

// Single URL extraction
{ urls: ["https://example.com/article"] }

// Multiple URLs
{ urls: ["https://site1.com", "https://site2.com", "https://site3.com"] }

// Advanced extraction with images
{ urls: ["https://example.com"], extract_depth: "advanced", include_images: true }

// Plain text format
{ urls: ["https://example.com"], format: "text" }

// Content filtering
{ urls: ["https://docs.example.com"], query: "API reference" }

Output Truncation

To prevent overwhelming the LLM context, tool output is truncated to:

  • 50KB of data
  • 2,000 lines of text

Whichever limit is hit first triggers truncation.

When output is truncated:

  • A warning is displayed in the results
  • Full output is saved to a temp file in your project directory:
    • .pi-tavily-temp/search-{timestamp}.txt for web_search
    • .pi-tavily-temp/extract-{timestamp}.txt for web_extract
  • The LLM is informed where to find the complete output
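The limit-checking rule above can be sketched roughly as follows (an assumed illustration, not the extension's actual `truncation.ts`):

```typescript
// Cap tool output at 50KB or 2,000 lines, whichever limit is hit first.
const MAX_BYTES = 50 * 1024;
const MAX_LINES = 2000;

function truncateOutput(text: string): { text: string; truncated: boolean } {
  let out = text;
  let truncated = false;

  // Line limit first: keep only the first 2,000 lines.
  const lines = out.split("\n");
  if (lines.length > MAX_LINES) {
    out = lines.slice(0, MAX_LINES).join("\n");
    truncated = true;
  }

  // Byte limit: measure as UTF-8 and cut at 50KB.
  // (A cut may land mid-codepoint; the decoder substitutes a replacement char.)
  const bytes = new TextEncoder().encode(out);
  if (bytes.length > MAX_BYTES) {
    out = new TextDecoder().decode(bytes.slice(0, MAX_BYTES));
    truncated = true;
  }

  return { text: out, truncated };
}
```

When `truncated` is true, a caller would add the warning and write the full output to the `.pi-tavily-temp/` file described above.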

Troubleshooting

"TAVILY_API_KEY is not set"

Error Message:

Error: TAVILY_API_KEY environment variable is not set.

Solution:

  1. Get an API key from https://tavily.com
  2. Set the environment variable:
    export TAVILY_API_KEY="your-api-key-here"
  3. Add to your shell profile for persistence:
    echo 'export TAVILY_API_KEY="your-api-key-here"' >> ~/.zshrc
    source ~/.zshrc

"Failed to initialize Tavily client"

Error Message:

Error: Failed to initialize Tavily client: ...

Possible Causes:

  1. Invalid API key format
  2. Network connectivity issues
  3. Tavily API is down

Solutions:

  1. Verify your API key is correct (should start with "tvly-")
  2. Check your internet connection
  3. Try a simple curl test:
    curl -X POST https://api.tavily.com/search \
      -H "Content-Type: application/json" \
      -d '{"api_key":"YOUR_KEY","query":"test","max_results":1}'

Rate Limit Errors

Error Message:

Error: You have exceeded your monthly request limit

Solution:

  • The free tier includes 1,000 requests per month
  • Upgrade your Tavily plan if you need more requests
  • Visit https://tavily.com/pricing for details

No Results Found

Symptoms:

  • Search returns "No results found."
  • Empty results list

Solutions:

  1. Try a broader or different search query
  2. Check spelling of your query
  3. Remove any special characters or complex filters
  4. Try basic search depth instead of advanced

No Content Extracted

Symptoms:

  • Extract returns "No content was extracted successfully."
  • All URLs failed

Solutions:

  1. Check that URLs are accessible (try opening in a browser)
  2. Verify URLs start with http:// or https://
  3. Some websites may block automated extraction
  4. Try with different URLs
  5. Check failed results for specific error messages

URL Validation Errors

Error Messages:

Error: URLs array cannot be empty
Error: Invalid URL format
Error: Maximum 20 URLs allowed

Solutions:

  1. Provide at least one URL in the urls array
  2. Ensure all URLs are valid and start with http:// or https://
  3. Limit to 20 URLs maximum per request
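A validator mirroring these error messages could look like this (a hypothetical sketch; the extension's actual checks may differ):

```typescript
// Return the matching error message, or null when the urls array is valid.
function validateUrls(urls: string[]): string | null {
  if (urls.length === 0) return "URLs array cannot be empty";
  if (urls.length > 20) return "Maximum 20 URLs allowed";
  for (const url of urls) {
    let parsed: URL;
    try {
      parsed = new URL(url); // throws on malformed input
    } catch {
      return "Invalid URL format";
    }
    // Only http:// and https:// URLs are accepted.
    if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
      return "Invalid URL format";
    }
  }
  return null;
}
```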

Development

Project Structure

pi-tavily-tools/
├── .github/
│   └── dependabot.yml    # Dependency update configuration
├── .envrc                # Direnv configuration for API keys
├── .gitignore
├── .prettierignore
├── .prettierrc           # Prettier code formatter config
├── AGENTS.md             # Project guidelines for Pi agents
├── LICENSE
├── README.md
├── bun.lock
├── eslint.config.js      # ESLint linting config
├── lefthook.yml          # Git hooks configuration
├── package.json          # Package manifest
├── tsconfig.json         # TypeScript compiler config
├── src/
│   ├── index.ts          # Extension entry point
│   └── tools/
│       ├── index.ts      # Tool exports
│       ├── web-search.ts # Web search tool implementation
│       ├── web-extract.ts # Web extract tool implementation
│       ├── tavily/       # Tavily API integration
│       │   ├── client.ts     # Tavily client & initialization
│       │   ├── details.ts    # Result details builders
│       │   ├── formatters.ts # Response formatting
│       │   ├── renderers.ts  # TUI rendering utilities
│       │   ├── schemas.ts    # TypeBox parameter schemas
│       │   └── types.ts      # Type definitions
│       └── shared/       # Shared utilities
│           └── truncation.ts # Output truncation utilities
└── tests/
    ├── integration/      # Integration tests
    │   ├── web-search.test.ts
    │   └── web-extract.test.ts
    ├── client.test.ts
    ├── create-error-output.test.ts
    ├── details.test.ts
    ├── formatters.test.ts
    ├── renderers.test.ts
    ├── schemas.test.ts
    ├── truncation.test.ts
    ├── web-search.test.ts
    └── web-extract.test.ts

Running Type Checks

bun run check

Watch mode for instant feedback during development:

bun run check:watch

Running Tests

bun run test

Watch mode for continuous testing:

bun run test:watch

Run only integration tests (requires valid API key):

bun test tests/integration/

Running Linting

bun run lint

Formatting Code

bun run format:fix

All Checks

Run all checks before committing:

bun run check && bun run lint && bun run test

License

MIT License - see LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Support

Releasing

This project uses automated publishing to NPM via GitHub Actions. The workflow will:

  • Run all CI checks
  • Build the package
  • Publish to NPM with provenance (signed) via trusted publishing

Acknowledgments

Built with: