notion-md-crawler

A library to recursively retrieve and serialize Notion pages and databases with customization for machine learning applications.

🌟 Features

  • 🕷️ Crawling Pages and Databases: Dig deep into Notion's hierarchical structure with ease.
  • 📝 Serialize to Markdown: Seamlessly convert Notion pages to Markdown for easy use in machine learning and other applications.
  • 🛠️ Custom Serialization: Adapt the serialization process to fit your specific machine learning needs.
  • ⏳ Async Generator: Yields results page by page, so even huge documents can be processed with minimal memory usage.

🛠️ Installation

@notionhq/client must also be installed.

Using npm 📦:

npm install notion-md-crawler @notionhq/client

Using yarn 🧶:

yarn add notion-md-crawler @notionhq/client

Using pnpm 🚀:

pnpm add notion-md-crawler @notionhq/client

🚀 Quick Start

⚠️ Note: Before getting started, create a Notion integration and get its token. Details on the available methods can be found in the API section.

Leveraging the power of JavaScript generators, this library is engineered to handle even the most extensive Notion documents with ease. It's designed to yield results page-by-page, allowing for efficient memory usage and real-time processing.

import { Client } from "@notionhq/client";
import { crawler, pageToString } from "notion-md-crawler";

// Initialize the Notion client with your integration token.
const client = new Client({ auth: process.env.NOTION_API_KEY });

const crawl = crawler({ client });

const main = async () => {
  const rootPageId = "****";
  for await (const result of crawl(rootPageId)) {
    if (result.success) {
      const pageText = pageToString(result.page);
      console.log(pageText);
    }
  }
};

main();

🌐 API

crawler

Recursively crawls a Notion page. Use dbCrawler instead if the root is a Notion database.

Note: It continues crawling as far as possible, even if it fails to retrieve a particular Notion page.

Parameters:

  • options (CrawlerOptions): Crawler options.
  • rootPageId (string): Id of the root page to be crawled.

Returns:

  • AsyncGenerator<CrawlingResult>: Crawling results, including information about any failures.

dbCrawler

Recursively crawls a Notion database. Use crawler instead if the root is a Notion page.

Parameters:

  • options (CrawlerOptions): Crawler options.
  • rootDatabaseId (string): Id of the root database to be crawled.

Returns:

  • AsyncGenerator<CrawlingResult>: Crawling results, including information about any failures.
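
For illustration, here is a minimal sketch of crawling a database, mirroring the Quick Start above and assuming dbCrawler accepts the same options and yields the same result shape as crawler:

import { Client } from "@notionhq/client";
import { dbCrawler, pageToString } from "notion-md-crawler";

const client = new Client({ auth: process.env.NOTION_API_KEY });

// dbCrawler takes the same CrawlerOptions as crawler.
const crawlDb = dbCrawler({ client });

const main = async () => {
  const rootDatabaseId = "****";
  for await (const result of crawlDb(rootDatabaseId)) {
    if (result.success) {
      console.log(pageToString(result.page));
    }
  }
};

main();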

CrawlerOptions

| Option | Description | Type | Default |
| ------------------------ | --------------------------------------------------------------------------------------------------- | ------------------- | --------- |
| client | Instance of Notion Client. Set up an instance of the Client class imported from @notionhq/client. | Notion Client | - |
| serializers? | Used for custom serialization of Block and Property objects. | Object | undefined |
| serializers?.block? | Map of Notion block type and BlockSerializer. | BlockSerializers | undefined |
| serializers?.property? | Map of Notion Property Type and PropertySerializer. | PropertySerializers | undefined |
| metadataBuilder? | The metadata generation process can be customized. | MetadataBuilder | undefined |
| urlMask? | If specified, each url is masked with the string. | string \| false | false |
| skipPageIds? | List of page Ids to skip crawling (also skips descendant pages). | string[] | undefined |
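
As a hedged sketch of how the optional settings above might be passed (the mask string and page ID are placeholders, not real values):

const crawl = crawler({
  client,
  // Replace every URL in the output with this string.
  urlMask: "[masked]",
  // Skip these pages and all of their descendants.
  skipPageIds: ["<page-id-to-skip>"],
});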

BlockSerializers

Map with Notion block type (like "heading_1", "to_do", "code") as key and BlockSerializer as value.

BlockSerializer

A BlockSerializer takes a Notion block object as an argument. Returning false skips serialization of that block.

[Type]

type BlockSerializer = (
  block: NotionBlock,
) => string | false | Promise<string | false>;

PropertySerializers

Map with Notion property type (like "checkbox", "select", "date") as key and PropertySerializer as value.

PropertySerializer

A PropertySerializer takes a Notion property object as an argument. Returning false skips serialization of that property.

[Type]

type PropertySerializer = (
  name: string,
  block: NotionBlock,
) => string | false | Promise<string | false>;
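
For illustration only, a sketch of registering a property serializer under serializers.property; the "select" key, the returned string format, and the assumption that PropertySerializer is exported like BlockSerializer are not part of the documented API:

import { PropertySerializer, crawler } from "notion-md-crawler";

// Hypothetical serializer: skip properties whose name starts with an underscore,
// otherwise emit a simple "name: (select)" placeholder line.
const selectSerializer: PropertySerializer = (name, block) => {
  if (name.startsWith("_")) return false;
  return `${name}: (select)`;
};

const crawl = crawler({
  client,
  serializers: { property: { select: selectSerializer } },
});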

MetadataBuilder

Retrieving metadata is sometimes very important, but the information you want to retrieve will vary depending on the context. MetadataBuilder allows you to customize it according to your use case.

[Example]

import { Client } from "@notionhq/client";
import { crawler, MetadataBuilderParams } from "notion-md-crawler";

const getUrl = (id: string) => `https://www.notion.so/${id.replace(/-/g, "")}`;

// Add a url field to each page's metadata.
const metadataBuilder = ({ page }: MetadataBuilderParams) => ({
  url: getUrl(page.metadata.id),
});

const client = new Client({ auth: process.env.NOTION_API_KEY });
const crawl = crawler({ client, metadataBuilder });

for await (const result of crawl("notion-page-id")) {
  if (result.success) {
    console.log(result.page.metadata.url); // "https://www.notion.so/********"
  }
}

📊 Use Metadata

Since crawler returns Page objects, and each Page object contains its metadata, you can use that metadata for machine learning.
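
For example, a minimal sketch (inside the main function from the Quick Start) that collects each page's text together with its metadata, e.g. to feed a machine learning pipeline; the documents array is just an illustration:

const documents: { text: string; metadata: unknown }[] = [];

for await (const result of crawl(rootPageId)) {
  if (result.success) {
    documents.push({
      text: pageToString(result.page),
      // Includes any fields added via a custom metadataBuilder, e.g. url.
      metadata: result.page.metadata,
    });
  }
}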

🛠️ Custom Serialization

notion-md-crawler gives you the flexibility to customize the serialization logic for various Notion objects to cater to the unique requirements of your machine learning model or any other use case.

Define your custom serializer

You can define your own custom serializer, and use the provided serializer utilities for convenience.

import { BlockSerializer, crawler, serializer } from "notion-md-crawler";

const customEmbedSerializer: BlockSerializer<"embed"> = (block) => {
  // Skip embeds that have no URL.
  if (!block.embed.url) return "";

  // You can use serializer utility.
  const caption = serializer.utils.fromRichText(block.embed.caption);

  return `<figure>
  <iframe src="${block.embed.url}"></iframe>
  <figcaption>${caption}</figcaption>
</figure>`;
};

const serializers = {
  block: {
    embed: customEmbedSerializer,
  },
};

const crawl = crawler({ client, serializers });

Skip serialize

Returning false from the serializer allows you to skip serialization of that block. This is useful when you want to omit unnecessary information.

const image: BlockSerializer<"image"> = () => false;
const crawl = crawler({ client, serializers: { block: { image } } });

Advanced: Use default serializer in custom serializer

If you want to customize serialization only in specific cases, you can use the default serializer in a custom serializer.

import { BlockSerializer, crawler, serializer } from "notion-md-crawler";

const defaultImageSerializer = serializer.block.defaults.image;

const customImageSerializer: BlockSerializer<"image"> = (block) => {
  // Utility function to retrieve the link
  const { title, href } = serializer.utils.fromLink(block.image);

  // If the image is from a specific domain, wrap it in a special div
  if (href.includes("special-domain.com")) {
    return `<div class="special-image">
      ${defaultImageSerializer(block)}
    </div>`;
  }

  // Use the default serializer for all other images
  return defaultImageSerializer(block);
};

const serializers = {
  block: {
    image: customImageSerializer,
  },
};

const crawl = crawler({ client, serializers });

🔍 Supported Blocks and Database properties

Blocks

| Block Type | Supported |
| ------------------ | --------- |
| Text | ✅ Yes |
| Bookmark | ✅ Yes |
| Bulleted List | ✅ Yes |
| Numbered List | ✅ Yes |
| Heading 1 | ✅ Yes |
| Heading 2 | ✅ Yes |
| Heading 3 | ✅ Yes |
| Quote | ✅ Yes |
| Callout | ✅ Yes |
| Equation (block) | ✅ Yes |
| Equation (inline) | ✅ Yes |
| Todos (checkboxes) | ✅ Yes |
| Table Of Contents | ✅ Yes |
| Divider | ✅ Yes |
| Column | ✅ Yes |
| Column List | ✅ Yes |
| Toggle | ✅ Yes |
| Image | ✅ Yes |
| Embed | ✅ Yes |
| Video | ✅ Yes |
| Figma | ✅ Yes |
| PDF | ✅ Yes |
| Audio | ✅ Yes |
| File | ✅ Yes |
| Link | ✅ Yes |
| Page Link | ✅ Yes |
| External Page Link | ✅ Yes |
| Code (block) | ✅ Yes |
| Code (inline) | ✅ Yes |

Database Properties

| Property Type | Supported |
| ---------------- | --------- |
| Checkbox | ✅ Yes |
| Created By | ✅ Yes |
| Created Time | ✅ Yes |
| Date | ✅ Yes |
| Email | ✅ Yes |
| Files | ✅ Yes |
| Formula | ✅ Yes |
| Last Edited By | ✅ Yes |
| Last Edited Time | ✅ Yes |
| Multi Select | ✅ Yes |
| Number | ✅ Yes |
| People | ✅ Yes |
| Phone Number | ✅ Yes |
| Relation | ✅ Yes |
| Rich Text | ✅ Yes |
| Rollup | ✅ Yes |
| Select | ✅ Yes |
| Status | ✅ Yes |
| Title | ✅ Yes |
| Unique Id | ✅ Yes |
| Url | ✅ Yes |
| Verification | ❌ No |

💬 Issues and Feedback

For any issues, feedback, or feature requests, please file an issue on GitHub.

📜 License

MIT


Made with ❤️ by TomPenguin.