
effective-indexer

v0.2.10

Published

Lightweight EVM smart contract event indexer built with Effect

Downloads

1,071

Readme

Effective Indexer

EVM event indexing without hosted lock-in.

One worker process, one config file, your own SQLite database. No subgraph deployment pipeline, no token staking, no PhD required.

Works with any EVM chain: Ethereum, Rootstock, Polygon, Arbitrum, Base, you name it.

Why

  • Own your data — events land in your DB, not in someone else's cloud.
  • Fast backfill — parallel eth_getLogs with deterministic chunk ordering.
  • Production-safe — checkpoint resume, reorg detection, retry with backoff, live polling.
  • Self-healing — automatic crash recovery with configurable alerting.
  • Typed DX — TypeScript-first config and query API, Hardhat-style config files.

Install

npm install effective-indexer effect

effect is a peer dependency — the only runtime dependency besides viem.

Quick start

1. Config file

Create indexer.config.ts:

import { defineIndexerConfig } from "effective-indexer"
import type { Abi } from "viem"

const abi: Abi = [
  {
    type: "event",
    name: "Transfer",
    inputs: [
      { indexed: true, name: "from", type: "address" },
      { indexed: true, name: "to", type: "address" },
      { indexed: false, name: "value", type: "uint256" },
    ],
  },
]

export default defineIndexerConfig({
  rpcUrl: "https://rpc.mainnet.rootstock.io/{{EVM_RPC_API_KEY}}",
  dbPath: "./data/events.db",
  contracts: [
    {
      name: "Token",
      address: "0xYourContractAddress",
      abi,
      events: ["Transfer"],
      startBlock: 0n,
    },
  ],
})

{{EVM_RPC_API_KEY}} is resolved from env at runtime. Secrets stay in .env, config stays typed.
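Under the hood this is plain string templating. A standalone sketch of the mechanism (hypothetical helper name, not the library's source — the real resolver additionally reads sensitive placeholders via Effect's Config.redacted):

```typescript
// Illustrative re-implementation of {{ENV_VAR}} templating — not the
// library's actual code. Throws when a referenced variable is unset.
function resolvePlaceholders(
  url: string,
  env: Record<string, string | undefined> = process.env,
): string {
  return url.replace(/\{\{([A-Z0-9_]+)\}\}/g, (_match: string, name: string) => {
    const value = env[name]
    if (value === undefined) {
      throw new Error(`Missing environment variable: ${name}`)
    }
    return value
  })
}
```

For example, `resolvePlaceholders("https://rpc.mainnet.rootstock.io/{{EVM_RPC_API_KEY}}", { EVM_RPC_API_KEY: "abc" })` yields the URL with `abc` substituted in.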

2. Worker script

Create scripts/indexer.ts:

import config from "../indexer.config"
import { resolveIndexerConfigFromEnv, runIndexerWorker } from "effective-indexer"

const resolved = resolveIndexerConfigFromEnv(config)

runIndexerWorker(resolved).catch(error => {
  console.error("Indexer worker failed:", error)
  process.exit(1)
})

3. Environment and run

.env:

EVM_RPC_API_KEY=your-api-key
# Full URL override (takes priority over template):
# EVM_RPC_URL=https://eth.llamarpc.com

Install a TypeScript runner and start the worker:

npm install -D tsx
node --import tsx ./scripts/indexer.ts

That's it. The worker creates the DB directory, connects, backfills, and switches to live polling.

Query data

import config from "../indexer.config"
import { Indexer, resolveIndexerConfigFromEnv } from "effective-indexer"

const indexer = Indexer.create(resolveIndexerConfigFromEnv(config))

const events = await indexer.query({
  contractName: "Token",
  eventName: "Transfer",
  order: "desc",
  limit: 50,
})

console.log(events)
await indexer.stop()

API

Indexer.create(config): IndexerHandle

Creates an indexer instance. Returns:

| Method | Description |
|--------|-------------|
| start() | Start indexing (non-blocking, runs in background) |
| stop() | Gracefully stop and dispose runtime (idempotent) |
| waitForExit() | Await the indexing loop (rejects on crash) |
| query(q?) | Query stored events → Promise<ParsedEvent[]> |
| count(q?) | Count stored events → Promise<number> |
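The lifecycle semantics above (non-blocking start(), idempotent stop(), waitForExit() settling when the loop ends) can be sketched with a hypothetical in-memory mock — purely illustrative, not the library's implementation:

```typescript
// Hypothetical mock illustrating the IndexerHandle lifecycle contract.
// NOT effective-indexer's code — just the shape of the contract.
type MockHandle = {
  start: () => void
  stop: () => Promise<void>
  waitForExit: () => Promise<void>
}

function createMockHandle(): MockHandle {
  let stopped = false
  let resolveExit: () => void = () => {}
  const exit = new Promise<void>(resolve => {
    resolveExit = resolve
  })
  return {
    // start returns immediately; real indexing runs in the background
    start: () => {},
    // stop is safe to call more than once (idempotent)
    stop: async () => {
      if (stopped) return
      stopped = true
      resolveExit()
    },
    // settles once the loop exits (the real handle rejects on crash)
    waitForExit: () => exit,
  }
}
```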

defineIndexerConfig(config)

Identity function for typed config files. Zero runtime cost, pure DX.

resolveIndexerConfigFromEnv(config, options?)

Resolves {{ENV_VAR}} placeholders in rpcUrl from process.env. Supports:

  • Sensitive placeholders — read via Config.redacted (default: EVM_RPC_API_KEY)
  • Full URL override — EVM_RPC_URL env var takes priority over the template
  • Custom env source — pass { env: myEnvMap } for testing

runIndexerWorker(config, options?)

Long-lived worker with batteries included:

  • Creates DB directory if missing
  • Registers SIGINT/SIGTERM handlers for graceful shutdown
  • Keeps the process alive during live polling
  • Auto-restarts on crash with exponential backoff
  • Calls onRecoveryFailure webhook when recovery window is exhausted
  • Always re-throws the original error (notification failures are logged, never mask the cause)

createWebhookNotifier(url, init?)

Helper that returns an onRecoveryFailure callback — POSTs a JSON payload to the given URL.

import { createWebhookNotifier } from "effective-indexer"

const notify = createWebhookNotifier("https://hooks.slack.com/...")

Or configure it in the config file directly (see worker.alert.webhookUrl below).

EventQuery

| Field | Type | Description |
|-------|------|-------------|
| contractName | string? | Filter by contract name |
| eventName | string? | Filter by event name |
| fromBlock | bigint? | Min block number |
| toBlock | bigint? | Max block number |
| txHash | string? | Filter by transaction hash |
| limit | number? | Max results |
| offset | number? | Skip first N results |
| order | "asc" \| "desc"? | Sort by block number |
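All provided fields combine as AND filters, with order, offset, and limit applied afterwards. A standalone in-memory sketch of these semantics (illustrative only — the library actually queries SQLite):

```typescript
// Illustrative model of EventQuery semantics over an in-memory list.
// Hypothetical helper, not part of the package's API.
type Ev = { contractName: string; eventName: string; blockNumber: bigint }
type Query = {
  contractName?: string
  eventName?: string
  fromBlock?: bigint
  toBlock?: bigint
  limit?: number
  offset?: number
  order?: "asc" | "desc"
}

function applyQuery(events: Ev[], q: Query): Ev[] {
  const dir = q.order === "desc" ? -1n : 1n
  return events
    // every provided field must match (AND semantics)
    .filter(e =>
      (q.contractName === undefined || e.contractName === q.contractName) &&
      (q.eventName === undefined || e.eventName === q.eventName) &&
      (q.fromBlock === undefined || e.blockNumber >= q.fromBlock) &&
      (q.toBlock === undefined || e.blockNumber <= q.toBlock))
    // sort by block number in the requested direction
    .sort((a, b) => Number((a.blockNumber - b.blockNumber) * dir))
    // then apply offset and limit
    .slice(q.offset ?? 0, (q.offset ?? 0) + (q.limit ?? Infinity))
}
```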

ParsedEvent

| Field | Type |
|-------|------|
| id | number |
| contractName | string |
| eventName | string |
| blockNumber | bigint |
| txHash | string |
| logIndex | number |
| timestamp | number \| null |
| args | Record<string, unknown> |

Full config reference

All fields except rpcUrl and contracts are optional — sensible defaults are applied.

Top-level

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| rpcUrl | string | — | RPC endpoint (supports {{ENV}} placeholders) |
| dbPath | string | "./indexer.db" | SQLite database path |
| contracts | ContractConfig[] | — | At least one contract required |
| network | NetworkConfig | see below | RPC and chain tuning |
| telemetry | TelemetryConfig | see below | Progress rendering |
| worker | WorkerConfig | see below | Recovery and alerting |
| logLevel | string | "info" | trace \| debug \| info \| warning \| error \| none |
| logFormat | string | "pretty" | pretty \| json \| structured |
| enableTelemetry | boolean | true | false = errors only, no progress bar |

contracts[]

| Field | Type | Description |
|-------|------|-------------|
| name | string | Unique contract name (used in queries) |
| address | string | Contract address (hex) |
| abi | Abi | Viem-compatible ABI (only event entries needed) |
| events | [string, ...string[]] | Event names to index (non-empty) |
| startBlock | bigint? | Block to start indexing from (default: 0n) |

network

network: {
  polling: {
    intervalMs: 12000,        // block poll interval (ms)
    confirmations: 1,         // blocks behind tip = "confirmed"
  },
  logs: {
    chunkSize: 5000,          // blocks per eth_getLogs request
    maxRetries: 5,            // retries per failed RPC call
    parallelRequests: 1,      // concurrent eth_getLogs during backfill
    retry: {
      baseDelayMs: 1000,      // initial retry delay
      maxDelayMs: 30000,      // backoff cap
    },
  },
  reorg: {
    depth: 20,                // block hash buffer for reorg detection
  },
}
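The reorg buffer keeps the last reorg.depth block hashes; when a newly polled block's parentHash does not match the stored hash one height below, a reorg is detected and the indexer can rewind. A sketch of that detector (hypothetical names; the library's internals may differ):

```typescript
// Illustrative hash-buffer reorg detector — not the library's code.
// Keeps the last `depth` block hashes keyed by height; a parent-hash
// mismatch against the stored hash means the chain reorganized.
type Block = { number: bigint; hash: string; parentHash: string }

function createReorgDetector(depth: number) {
  const hashes = new Map<bigint, string>() // height -> hash, bounded
  return {
    check(block: Block): "ok" | "reorg" {
      const parent = hashes.get(block.number - 1n)
      if (parent !== undefined && parent !== block.parentHash) return "reorg"
      hashes.set(block.number, block.hash)
      hashes.delete(block.number - BigInt(depth)) // trim old entries
      return "ok"
    },
  }
}
```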

telemetry

telemetry: {
  progress: {
    enabled: true,            // show live progress bar during backfill
    intervalMs: 3000,         // render frequency (min 500ms)
  },
}

When enabled, the terminal shows a live progress line:

[Backfill] Token 42.8% | 1,234,000/2,880,000 blocks | 3,450 blk/s | 12.4 ev/s | ETA 00:07:43 | p=3 | chunk=5000

On non-TTY (CI, logs), periodic info messages are emitted instead.

worker

worker: {
  recovery: {
    enabled: true,               // auto-restart on crash
    maxRecoveryDurationMs: 900000, // give up after 15 min of failures
    initialRetryDelayMs: 1000,   // first retry delay
    maxRetryDelayMs: 30000,      // backoff cap
    backoffFactor: 2,            // exponential multiplier
  },
  alert: {
    webhookUrl: "",              // POST failure notification here
  },
}

When the worker crashes, it automatically restarts with exponential backoff. If it keeps failing beyond maxRecoveryDurationMs, it sends a JSON notification to alert.webhookUrl (if set) and exits with the original error.
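With these defaults, the wait before restart n is initialRetryDelayMs * backoffFactor^n, capped at maxRetryDelayMs. A quick sketch of the schedule (illustrative helper, not part of the API):

```typescript
// Computes the restart delay sequence under the recovery defaults:
// each attempt waits initial * factor^n milliseconds, capped at capMs.
function backoffSchedule(
  attempts: number,
  initialMs = 1000,
  factor = 2,
  capMs = 30000,
): number[] {
  return Array.from({ length: attempts }, (_, n) =>
    Math.min(initialMs * factor ** n, capMs),
  )
}
```

With the defaults, `backoffSchedule(7)` gives `[1000, 2000, 4000, 8000, 16000, 30000, 30000]` — doubling until the 30 s cap.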

The notification payload (WorkerFailureNotification):

{
  attempts: number          // total restart attempts
  recoveryDurationMs: number // time since first failure
  error: unknown            // the error that killed it
  timestamp: string         // ISO timestamp
}
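On the receiving side, a webhook server might validate that payload with a simple type guard. A sketch based on the shape above (isWorkerFailureNotification is a hypothetical helper, not exported by the package):

```typescript
// Validates an incoming webhook body against the documented
// WorkerFailureNotification shape. Illustrative consumer-side code.
interface WorkerFailureNotification {
  attempts: number
  recoveryDurationMs: number
  error: unknown
  timestamp: string
}

function isWorkerFailureNotification(x: unknown): x is WorkerFailureNotification {
  if (typeof x !== "object" || x === null) return false
  const p = x as Record<string, unknown>
  return (
    typeof p.attempts === "number" &&
    typeof p.recoveryDurationMs === "number" &&
    typeof p.timestamp === "string" &&
    "error" in p
  )
}
```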

Chain tuning profiles

| Chain | polling.intervalMs | polling.confirmations | logs.chunkSize | reorg.depth |
|-------|--------------------|-----------------------|----------------|-------------|
| Ethereum | 12000 | 2 | 2000 | 64 |
| Rootstock | 30000 | 1 | 5000 | 20 |
| Polygon | 2000 | 32 | 2000 | 128 |
| Arbitrum | 1000 | 0 | 5000 | 1 |

Parallel backfill

Set network.logs.parallelRequests to speed up historical indexing. Chunk ordering is preserved regardless of concurrency.

network: {
  logs: {
    chunkSize: 2000,
    parallelRequests: 4,
  },
}

Start low (3–4) and increase if the RPC allows. Public endpoints may rate-limit above 5–10.
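Deterministic ordering works because the block range is split into fixed-size windows up front, so results can be applied in order even when fetched concurrently. A sketch of that split (hypothetical helper, not the library's code):

```typescript
// Splits [fromBlock, toBlock] into chunkSize-block windows, in order.
// Generating the ranges deterministically means concurrent fetches can
// still be committed in range order. Illustrative only.
function chunkRanges(
  fromBlock: bigint,
  toBlock: bigint,
  chunkSize: bigint,
): Array<{ from: bigint; to: bigint }> {
  const ranges: Array<{ from: bigint; to: bigint }> = []
  for (let from = fromBlock; from <= toBlock; from += chunkSize) {
    const end = from + chunkSize - 1n
    ranges.push({ from, to: end < toBlock ? end : toBlock })
  }
  return ranges
}
```

For example, `chunkRanges(0n, 4999n, 2000n)` yields three ranges, the last one truncated to end at block 4999.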

Logging

Uses Effect's native logging — no console.log in source code.

| Level | What you see |
|-------|--------------|
| error | RPC failures, storage errors |
| warning | Reorg detection, parent hash mismatches |
| info | Start/stop, backfill progress, reorg handled |
| debug | Per-chunk detail, queries, storage init |
| trace | Every RPC call and poll tick |

Production: logLevel: "info". Debugging: "debug". Silent: enableTelemetry: false.

Operational notes

  • One writer process per SQLite file. This is not a suggestion.
  • Keep the DB on persistent storage.
  • On restart, the indexer resumes from the last checkpoint.
  • RPC must support eth_getLogs — if it doesn't, nothing will work.
  • Graceful shutdown: Ctrl+C or kill <pid> — the worker finishes the current operation and writes the checkpoint.

Development

npm run build
npm run typecheck
npm run test
npm run check

License

MIT — do whatever you want.

Repository: github.com/cybervoid0/effective-indexer