
@aeranko/ship

v0.2.0

Drop-in AI crawler tracking and AEO telemetry for Node.js apps. Ships data to Aeranko for AI-visibility analytics.

AI-search observability and AEO uplift for JavaScript apps, in a single SDK. Drop it in, and your site:

  1. Exposes JSON-LD (Organization, WebSite, BreadcrumbList, Article, FAQPage, SpeakableSpecification) so LLM crawlers can parse what you do.
  2. Serves /llms.txt, /llms-full.txt, robots.txt and sitemap.xml with AI-bots explicitly allow-listed.
  3. Emits canonical + OpenGraph + Twitter metadata with zero ceremony.
  4. Reports every AI-crawler hit (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, …) to your Aeranko dashboard.
Install

npm install @aeranko/ship

Quick start (Next.js 15 / 16 — App Router)

1. Add <Ship /> to your root layout

// app/layout.tsx
import { Ship } from "@aeranko/ship/react";
import { aerankoMetadata } from "@aeranko/ship/next";

export const metadata = aerankoMetadata({
  siteName: "Acme Inc.",
  title: "Acme — widgets for serious people",
  description: "We make widgets that just work.",
  canonical: "https://acme.example",
});

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <head>
        <Ship
          config={{
            organization: {
              name: "Acme Inc.",
              url: "https://acme.example",
              logo: "https://acme.example/logo.png",
              sameAs: ["https://twitter.com/acme"],
            },
            website: { name: "Acme Inc.", url: "https://acme.example" },
          }}
        />
      </head>
      <body>{children}</body>
    </html>
  );
}

<Ship /> is a Server Component — no "use client", no hydration cost, and LLM crawlers (which don't execute JS) see the JSON-LD in plain HTML.

2. Add app/robots.ts

import type { MetadataRoute } from "next";
import { aerankoRobots } from "@aeranko/ship/next";

export default function robots(): MetadataRoute.Robots {
  return aerankoRobots({ siteUrl: "https://acme.example" });
}

Every major AI crawler (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, Bytespider, CCBot, …) is explicitly allowed.
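For reference, the rendered robots.txt will look roughly like the following. This is an illustration, not exact output: the full allow-list comes from DEFAULT_AI_BOTS, so the set and order of blocks may differ.

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# …one block per bot in DEFAULT_AI_BOTS…

User-agent: *
Allow: /

Sitemap: https://acme.example/sitemap.xml
```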

3. Add app/sitemap.ts

import type { MetadataRoute } from "next";
import { aerankoSitemap } from "@aeranko/ship/next";

export default function sitemap(): MetadataRoute.Sitemap {
  return aerankoSitemap([
    { url: "https://acme.example", lastModified: new Date(), priority: 1 },
    { url: "https://acme.example/pricing", changeFrequency: "weekly" },
  ]);
}

lastModified feeds the Aeranko freshness score — keep it current.
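Rather than hardcoding new Date(), derive lastModified from your content source so the freshness signal reflects real updates. A sketch of the same app/sitemap.ts driven by data, where the posts array is a hypothetical stand-in for your CMS or database:

```typescript
// app/sitemap.ts — lastModified derived from data instead of new Date().
// `posts` is a hypothetical content source standing in for your CMS or DB.
import type { MetadataRoute } from "next";
import { aerankoSitemap } from "@aeranko/ship/next";

const posts = [
  { slug: "launch", updatedAt: "2026-01-10" },
  { slug: "pricing-update", updatedAt: "2026-02-02" },
];

export default function sitemap(): MetadataRoute.Sitemap {
  return aerankoSitemap(
    posts.map((post) => ({
      url: `https://acme.example/blog/${post.slug}`,
      lastModified: new Date(post.updatedAt),
    })),
  );
}
```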

4. Add app/llms.txt/route.ts

import { createLlmsTxtHandler } from "@aeranko/ship/next";

export const GET = createLlmsTxtHandler({
  siteName: "Acme Inc.",
  siteUrl: "https://acme.example",
  description: "Widgets for serious people.",
  sections: [
    { title: "Pricing", url: "https://acme.example/pricing" },
    { title: "Docs", url: "https://acme.example/docs" },
  ],
});

Pair it with app/llms-full.txt/route.ts using createLlmsFullTxtHandler when you have long-form content that LLMs should index.
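A matching route might look like the following sketch. The siteName, siteUrl, description, and sections options are assumed to mirror createLlmsTxtHandler; the content field on each section is an assumption, so check the typings.

```typescript
// app/llms-full.txt/route.ts — a sketch; option names beyond those shown
// for createLlmsTxtHandler are assumptions, verify against the .d.ts types.
import { createLlmsFullTxtHandler } from "@aeranko/ship/next";

export const GET = createLlmsFullTxtHandler({
  siteName: "Acme Inc.",
  siteUrl: "https://acme.example",
  description: "Widgets for serious people.",
  sections: [
    {
      title: "Docs",
      url: "https://acme.example/docs",
      // Long-form body text for LLMs to index (assumed field name).
      content: "Full install and configuration guide for Acme widgets.",
    },
  ],
});
```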

5. Track AI-crawler hits via proxy.ts

// proxy.ts (Next.js 16) or middleware.ts (Next.js 15)
import { createAerankoProxy } from "@aeranko/ship/next";

export default createAerankoProxy({
  apiKey: process.env.AERANKO_API_KEY!,
});

export const config = {
  matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};

Subpath entry points

| Import | What you get | React dep |
| ----------------------- | ------------------------------------------------------------ | --------- |
| @aeranko/ship | createClient, detectCrawler, isAICrawler | none |
| @aeranko/ship/next | createAerankoProxy, aerankoMetadata, aerankoRobots, aerankoSitemap, createLlmsTxtHandler, createLlmsFullTxtHandler | none (next is peer) |
| @aeranko/ship/express | aerankoMiddleware | none |
| @aeranko/ship/react | <Ship />, <ShipFAQ />, <ShipSpeakable /> | peer |
| @aeranko/ship/schema | buildOrganization, buildWebSite, buildBreadcrumbList, buildArticle, buildFAQPage, buildSpeakable, bundleJsonLd | none |
| @aeranko/ship/seo | buildLlmsTxt, buildLlmsFullTxt, buildRobotsRules, renderRobotsTxt, DEFAULT_AI_BOTS | none |

Every subpath ships both import and require variants with full .d.ts types, and the package sets sideEffects: false so tree-shaking works.
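The Express entry point is not demonstrated elsewhere in this README; based on the table above, wiring in aerankoMiddleware might look like this. The options object is an assumption modeled on createAerankoProxy, so verify it against the typings.

```typescript
// server.ts — sketch of the Express integration; the apiKey option is
// assumed to match the Next.js proxy's, check the .d.ts types.
import express from "express";
import { aerankoMiddleware } from "@aeranko/ship/express";

const app = express();

// Report AI-crawler hits on every request before normal routing.
app.use(aerankoMiddleware({ apiKey: process.env.AERANKO_API_KEY! }));

app.get("/", (_req, res) => {
  res.send("hello");
});

app.listen(3000);
```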


API reference

@aeranko/ship/schema

Pure JSON-LD builders. Every function returns a plain object — you can stringify it yourself, or pass it to bundleJsonLd which produces a string safe to drop into <script type="application/ld+json"> (</script> is XSS-escaped automatically).

import { buildOrganization, bundleJsonLd } from "@aeranko/ship/schema";

const html = `<script type="application/ld+json">${bundleJsonLd(
  buildOrganization({ name: "Acme", url: "https://acme.example" }),
)}</script>`;

@aeranko/ship/seo

import {
  buildLlmsTxt,
  buildRobotsRules,
  renderRobotsTxt,
  DEFAULT_AI_BOTS,
} from "@aeranko/ship/seo";

const txt = buildLlmsTxt({
  siteName: "Acme",
  siteUrl: "https://acme.example",
  description: "Widgets.",
  sections: [{ title: "Docs", url: "https://acme.example/docs" }],
});

const robots = buildRobotsRules({ siteUrl: "https://acme.example" });
const robotsTxt = renderRobotsTxt(robots);

DEFAULT_AI_BOTS is exported as a readonly string[] — extend or filter it as needed rather than hardcoding your own list.
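Because DEFAULT_AI_BOTS is a plain readonly string[], narrowing it is ordinary array code. A sketch that drops one crawler while keeping the maintained defaults; whether buildRobotsRules accepts a custom bot list, and under what option name, is an assumption to verify in the typings:

```typescript
import {
  buildRobotsRules,
  renderRobotsTxt,
  DEFAULT_AI_BOTS,
} from "@aeranko/ship/seo";

// Start from the maintained default list and exclude a single crawler.
const bots = DEFAULT_AI_BOTS.filter((bot) => bot !== "Bytespider");

// The `bots` option here is hypothetical — check buildRobotsRules' signature
// before relying on it.
const robots = buildRobotsRules({ siteUrl: "https://acme.example", bots });
console.log(renderRobotsTxt(robots));
```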

@aeranko/ship/react

All components are Server Components. Do not add "use client" — the API key must not leak into the client bundle and the JSON-LD must render in the server HTML.

import { Ship, ShipFAQ, ShipSpeakable } from "@aeranko/ship/react";

// Root layout
<Ship config={{ organization: { name, url } }} />

// FAQ page
<ShipFAQ items={[{ question: "Why?", answer: "Because." }]}>
  {/* your UI */}
</ShipFAQ>

// A "read this out loud" region
<ShipSpeakable className="summary">
  <p>TL;DR paragraph.</p>
</ShipSpeakable>
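@aeranko/ship (root)

The root entry point's crawler helpers are not demonstrated above. A plausible usage sketch, assuming detectCrawler and isAICrawler take a user-agent string; verify the actual signatures in the typings:

```typescript
import { detectCrawler, isAICrawler } from "@aeranko/ship";

// A sketch: inspect an incoming request's user-agent. The string-argument
// signatures here are assumptions — check the .d.ts types.
const ua = "Mozilla/5.0 AppleWebKit/537.36; compatible; GPTBot/1.1";

if (isAICrawler(ua)) {
  const crawler = detectCrawler(ua); // identifies which bot this is
  console.log("AI crawler hit:", crawler);
}
```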

Why Ship?

AI search (ChatGPT, Claude, Perplexity, Google AI Overviews) parses your site differently from the Google of 2015. It wants:

  • Structured facts in JSON-LD so it knows what you are.
  • /llms.txt so it knows where to get the summary without crawling a JS bundle.
  • An explicit bot-allow list — AI crawlers treat a missing rule as a conservative "probably don't".
  • Canonical URLs and lastmod so the answer engine doesn't cite stale versions.

Ship gives you every one of those in ≤ 50 lines of code, with a single dependency, zero client-bundle cost, and full TypeScript types.


Compatibility

| Runtime | Supported |
| ------------------ | --------- |
| Next.js 15 | yes |
| Next.js 16 | yes (App Router, proxy.ts) |
| React 18 / 19 | yes |
| Node.js | ≥ 18.17 |
| Edge runtime | yes (/schema, /seo, /react have no Node-only APIs) |
| Cloudflare Workers | yes (/schema, /seo) |


Privacy

Ship does not capture query strings, headers, cookies, or request bodies. Only method, path, status, and the crawler's user-agent header are shipped.

License

MIT © Aeranko