# Recipe Scrapers

A TypeScript library for scraping recipe data from various cooking websites, inspired by the Python recipe-scrapers library.
## Features
- Extract structured recipe data from cooking websites
- Support for multiple popular recipe sites
- Built with TypeScript for better developer experience
- Fast and lightweight using the Bun runtime for development and testing
- Comprehensive test coverage
## Installation

Install the `recipe-scrapers` package along with its peer dependencies, `cheerio` and `zod`:

```shell
npm install recipe-scrapers cheerio zod
# or
yarn add recipe-scrapers cheerio zod
# or
pnpm add recipe-scrapers cheerio zod
# or
bun add recipe-scrapers cheerio zod
```

## Usage
### Basic Usage
```typescript
import { getScraper, scrapeRecipe } from 'recipe-scrapers'

const html = `<html>The html to scrape...</html>`
const url = 'https://allrecipes.com/recipe/example'

// Get a scraper for a specific URL.
// This function throws by default if a scraper does not exist.
const MyScraper = getScraper(url)
const scraper = new MyScraper(html, url /* , { ...options } */)

// Get the raw recipe data
const rawRecipe = await scraper.toRecipeObject()

// Get the schema-validated recipe data
const validatedRecipe = await scraper.parse()

// Enable fallback mode for unsupported hosts
const FallbackScraper = getScraper(url, { wildMode: true })

// One-shot helper (wild mode is enabled by default)
const parsed = await scrapeRecipe(html, url)

// One-shot helper with a safe parse result
const safeResult = await scrapeRecipe(html, url, { safeParse: true })
```

### Safe Parse Error Shape

When `safeParse: true` is used, failures return a structured error object:
```typescript
type SafeParseError = {
  type: 'validation' | 'extraction'
  code:
    | 'validation_failed'
    | 'extractor_not_found'
    | 'extraction_runtime_error'
    | 'extraction_failed'
  issues: Array<{
    message: string
    path?: PropertyKey[]
    dotPath?: string | null
  }>
  cause?: unknown
  context?: {
    field?: string
    source?: string
  }
}
```

This makes it easy to branch in UI code:
```typescript
const result = await scrapeRecipe(html, url, { safeParse: true })

if (!result.success) {
  if (result.error.code === 'extractor_not_found') {
    // missing required field (result.error.context?.field)
  } else if (result.error.code === 'extraction_runtime_error') {
    // plugin/site extractor crashed (result.error.context?.source)
  } else if (result.error.code === 'validation_failed') {
    // schema validation failed after extraction
  }
}
```

### Validation Schema
By default, recipe data is validated with the built-in Zod schema.
You can also validate with any Standard Schema-compatible schema (for example, one built with Valibot).
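A schema only needs to expose the `~standard` interface from the Standard Schema spec to be usable here. As a minimal hand-rolled sketch (the field names `version`, `vendor`, and `validate` come from that spec, not from this library; the single `title` field is illustrative, not the full `RecipeObject`):

```typescript
// A hand-rolled Standard Schema-compatible validator (sketch only).
type MinimalRecipe = { title: string }

const MinimalRecipeSchema = {
  '~standard': {
    version: 1,
    vendor: 'readme-example',
    validate(value: unknown) {
      const candidate = value as Partial<MinimalRecipe> | null
      if (candidate && typeof candidate.title === 'string') {
        // Success results carry the validated value.
        return { value: { title: candidate.title } }
      }
      // Failure results carry a list of issues.
      return { issues: [{ message: 'title must be a string' }] }
    },
  },
} as const
```

An object shaped like this could be supplied via the `schema` option, as the example below shows with a Valibot schema.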
```typescript
import { scrapeRecipe } from 'recipe-scrapers'

// Example: a Standard Schema-compatible schema from another library
import { RecipeSchema as ValibotRecipeSchema } from './valibot-recipe-schema'

const result = await scrapeRecipe(html, url, {
  safeParse: true,
  schema: ValibotRecipeSchema,
})
```

### Options
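The `extraExtractors` and `extraPostProcessors` options described below take plugin arrays that run in priority order, higher priority first. A self-contained sketch of that ordering rule (the plugin shape here is invented for illustration; the library's real `ExtractorPlugin` interface may differ):

```typescript
// Hypothetical plugin shape, for illustration only.
type ExtractorPluginSketch = { name: string; priority: number }

const defaults: ExtractorPluginSketch[] = [{ name: 'schema-org', priority: 10 }]
const extra: ExtractorPluginSketch[] = [{ name: 'site-specific', priority: 100 }]

// Extra plugins are merged with the defaults, then applied in
// descending priority order: higher-priority extractors run first.
const ordered = [...defaults, ...extra].sort((a, b) => b.priority - a.priority)

console.log(ordered.map((p) => p.name)) // [ 'site-specific', 'schema-org' ]
```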
```typescript
interface ScraperOptions {
  /**
   * Additional extractors to be used by the scraper.
   * These extractors will be added to the default set of extractors.
   * Extractors are applied according to their priority.
   * Higher priority extractors will run first.
   * @default []
   */
  extraExtractors?: ExtractorPlugin[]

  /**
   * Additional post-processors to be used by the scraper.
   * These post-processors will be added to the default set of post-processors.
   * Post-processors are applied after all extractors have run,
   * also according to their priority.
   * Higher priority post-processors will run first.
   * @default []
   */
  extraPostProcessors?: PostProcessorPlugin[]

  /**
   * Whether link scraping is enabled.
   * @default false
   */
  linksEnabled?: boolean

  /**
   * Logging level for the scraper.
   * This controls the verbosity of logs produced by the scraper.
   * @default LogLevel.WARN
   */
  logLevel?: LogLevel

  /**
   * Enable ingredient parsing using the parse-ingredient library.
   * When enabled, each ingredient item will include a `parsed` field
   * containing structured data (quantity, unit, description, etc.).
   * Can be `true` for defaults or an options object.
   * @see https://github.com/jakeboone02/parse-ingredient
   * @default false
   */
  parseIngredients?: boolean | ParseIngredientOptions

  /**
   * Standard Schema-compatible schema used for validation.
   * Useful when validating with libraries such as Valibot.
   */
  schema?: StandardSchemaV1<unknown, RecipeObject>
}
```

## Supported Sites
This library supports recipe extraction from various popular cooking websites. The scraper automatically detects the appropriate parser based on the URL.
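Conceptually, detection maps the URL's hostname to a registered scraper, roughly like the sketch below (illustrative only, not the library's internals; the real lookup lives behind `getScraper` and also handles aliases and wild mode):

```typescript
// Toy registry mapping hostnames to scraper names (invented for illustration).
const registry = new Map<string, string>([
  ['allrecipes.com', 'AllRecipesScraper'],
])

// Resolve a URL to a scraper name, normalizing away a leading "www.".
function detectScraper(url: string): string | undefined {
  const host = new URL(url).hostname.replace(/^www\./, '')
  return registry.get(host)
}

console.log(detectScraper('https://www.allrecipes.com/recipe/123')) // AllRecipesScraper
```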
## Copyright and Usage
This library is for educational and personal use. Please respect the robots.txt files and terms of service of the websites you scrape.
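As a starting point for respecting robots.txt, a very small check against a `Disallow` rule might look like this (a sketch only; real robots.txt handling also covers `Allow` rules, wildcards, and per-agent groups):

```typescript
// Check whether a path is disallowed for all agents ("User-agent: *").
// Only a tiny subset of the Robots Exclusion Protocol is handled here.
function isPathDisallowed(robotsTxt: string, path: string): boolean {
  let appliesToAll = false
  for (const raw of robotsTxt.split('\n')) {
    const line = raw.trim()
    if (/^user-agent:\s*\*/i.test(line)) {
      appliesToAll = true
    } else if (/^user-agent:/i.test(line)) {
      appliesToAll = false
    } else if (appliesToAll) {
      const m = line.match(/^disallow:\s*(\S*)/i)
      if (m && m[1] && path.startsWith(m[1])) return true
    }
  }
  return false
}

console.log(isPathDisallowed('User-agent: *\nDisallow: /private', '/private/recipe')) // true
```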
## Development

Project policy documents: see CONTRIBUTING.md and GOVERNANCE.md.
### Prerequisites

- Bun (latest version)
### Setup

```shell
# Clone the repository
git clone https://github.com/recipe-scrapers/recipe-scrapers.git
cd recipe-scrapers

# Install dependencies
bun install

# Run tests
bun test

# Build the project
bun run build
```

### Scripts

- `bun run build` - Build the library for distribution
- `bun test` - Run the test suite
- `bun test:coverage` - Run tests with a coverage report
- `bun fetch-test-data` - Fetch test data from the original Python repository
- `bun lint` - Run linting and type checking
- `bun lint:fix` - Fix linting issues automatically
### Adding New Scrapers
1. Fetch test data from the original Python repository:
   ```shell
   bun fetch-test-data
   ```
2. Convert the data into the expected JSON format (i.e. the `RecipeObject` interface):
   ```shell
   bun process-test-data <host>
   ```
3. Choose the scraper type:
   - Schema.org-only host (no site-specific extraction needed): add the hostname to `SCHEMA_ORG_ONLY_HOSTS` in `src/scrapers/_index.ts`
   - Custom scraper (site-specific extraction needed): create a new scraper class extending `AbstractScraper`
4. If using a custom scraper, add it to `customScraperClasses` in `src/scrapers/_index.ts`
5. Add optional host aliases to `scraperAliases` in `src/scrapers/_index.ts` when needed
6. Run tests to ensure the extraction works as expected
7. Update documentation as needed
A minimal custom scraper might look like this:

```typescript
import { AbstractScraper } from './abstract-scraper'
import type { RecipeFields } from '@/types/recipe.interface'

export class NewSiteScraper extends AbstractScraper {
  static host() {
    return 'www.newsite.com'
  }

  extractors = {
    ingredients: this.extractIngredients.bind(this),
  }

  protected extractIngredients(): RecipeFields['ingredients'] {
    const items = this.$('.ingredient')
      .map((_, el) => this.$(el).text().trim())
      .get()
    return [
      {
        name: null,
        items: items.map((value) => ({ value })),
      },
    ]
  }

  // ... implement other extraction methods
}
```

### Testing
The project uses test data from the original Python recipe-scrapers repository to ensure compatibility and accuracy. Tests are written using Bun's built-in test runner.
```shell
# Run all tests
bun test

# Run tests with coverage
bun test:coverage
```

## Acknowledgments
- Original recipe-scrapers Python library by hhursev
- Schema.org Recipe specification
- Cheerio for HTML parsing
- Zod for schema validation
- Standard Schema for schema interoperability
- parse-ingredient for ingredient parsing
## Contributing
Please read CONTRIBUTING.md before opening a pull request.
Project direction and maintainer decision rules are documented in GOVERNANCE.md.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
