# Glypto
A TypeScript CLI tool for scraping metadata from a website using a provider-based architecture.
## Table of Contents

- Overview
- Features
- Installation
- Contributing
- CLI Usage
- Programmatic Usage
- Architecture
- Creating Custom Providers
- Development
- Testing
- CI/CD
- API Reference
- Requirements
- License
## Overview
Glypto scrapes metadata from websites including titles, descriptions, images, Open Graph data, Twitter Cards, and RSS/Atom feeds. It features a modular provider system that makes it easy to add support for new metadata formats.
## Features
- 🔍 Comprehensive Metadata Scraping: Open Graph, Twitter Cards, standard meta tags, and more
- 🧩 Extensible Provider System: Plug-and-play architecture for adding new metadata sources
- 🚀 Auto-Discovery: Automatically loads providers from the providers directory
- ⚡ Modern TypeScript: Full type safety with ES modules
- 🎯 Priority-Based Resolution: Intelligent fallback system for metadata values
- 📦 Multiple Usage Patterns: CLI tool, programmatic API, or factory functions
## Installation

```bash
# Global installation
npm i -g glypto

# Local project installation
npm i glypto
```

## Contributing
```bash
# Clone the repository
git clone <repository-url>
cd glypto

# Install dependencies
npm install

# Build the project
npm run build
```

## CLI Usage
```bash
# Run the scraping command
./bin/glypto scrape

# Or use npm start for development
npm start
```

The CLI will prompt you for a URL and scrape all available metadata from the webpage.
## Programmatic Usage

### Simple Usage with Factory

```typescript
import { scrapeMetadata } from 'glypto';
import { JSDOM } from 'jsdom';

// Direct scraping
const dom = new JSDOM(htmlContent);
const metadata = await scrapeMetadata(dom.window.document);

console.log(metadata.title);       // Page title
console.log(metadata.description); // Page description
console.log(metadata.image);       // Featured image
console.log(metadata.url);         // Canonical URL
console.log(metadata.siteName);    // Site name
console.log(metadata.favicon);     // Favicon URL
console.log(metadata.feeds);       // RSS/Atom feeds
```

### Advanced Usage with Custom Providers
```typescript
import {
  createScraperWithProviders,
  OpenGraphProvider,
  TwitterProvider,
} from 'glypto';

// Create a scraper with only specific providers
const scraper = createScraperWithProviders([
  new OpenGraphProvider(),
  new TwitterProvider(),
]);

const metadata = await scraper.scrape(document);
```

### Manual Registry Setup
```typescript
import { ProviderRegistry, ProviderLoader, Scraper } from 'glypto';

const loader = new ProviderLoader();
const providers = await loader.loadFromDirectory('./custom-providers');
const registry = new ProviderRegistry(providers);
const scraper = new Scraper(registry);
```

## Architecture
Glypto uses a modular provider architecture with clear separation of concerns:
### Core Components

- `Scraper`: Main scraping engine with fluent method chaining
- `ProviderRegistry`: Manages and prioritizes metadata providers
- `ProviderLoader`: Dynamically loads providers from directories
- `Metadata`: Result object with intelligent value resolution
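The fluent chaining mentioned above follows the familiar builder pattern, where each configuration call returns the instance itself. A minimal, self-contained sketch of the pattern (the class and method names here are purely illustrative — glypto's actual chain may differ):

```typescript
// Hypothetical sketch of a fluent, chainable scraper configuration.
// Names are illustrative only, not glypto's actual API surface.
class FluentScraper {
  private providers: string[] = [];
  private strict = false;

  // Each configuration method returns `this`, so calls can be chained.
  use(provider: string): this {
    this.providers.push(provider);
    return this;
  }

  strictMode(on: boolean): this {
    this.strict = on;
    return this;
  }

  describe(): string {
    return `${this.providers.join('+')}${this.strict ? ' (strict)' : ''}`;
  }
}

const desc = new FluentScraper()
  .use('openGraph')
  .use('twitter')
  .strictMode(true)
  .describe();

console.log(desc); // "openGraph+twitter (strict)"
```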
### Provider System

Built-in providers include:

- `OpenGraphProvider`: Scrapes `og:*` properties (priority 1)
- `TwitterProvider`: Scrapes `twitter:*` properties (priority 2)
- `StandardMetaProvider`: Scrapes standard meta tags (priority 3)
- `OtherElementsProvider`: Scrapes `<title>`, `<h1>`, and `<link>` tags (priority 4)
- `JsonLdProvider`: Example JSON-LD structured data provider
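The priority numbers drive the fallback behaviour: when several providers supply a value for the same key, the one with the lowest priority number wins. A self-contained sketch of that idea (illustrative only, not glypto's actual implementation):

```typescript
// Minimal sketch of priority-based fallback. Lower priority numbers win.
type Contribution = { provider: string; priority: number; value: string };

function resolve(contributions: Contribution[]): string | undefined {
  // Sort ascending: priority 1 (Open Graph) beats 2 (Twitter), and so on.
  const sorted = [...contributions].sort((a, b) => a.priority - b.priority);
  return sorted[0]?.value;
}

// Open Graph is absent here, so the Twitter value wins over the meta tag.
const title = resolve([
  { provider: 'standardMeta', priority: 3, value: 'Meta tag title' },
  { provider: 'twitter', priority: 2, value: 'Twitter card title' },
]);

console.log(title); // "Twitter card title"
```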
### Project Structure

```
src/
├── scraper.ts             # Main scraping engine
├── metadata.ts            # Result data structure
├── provider-registry.ts   # Provider management
├── provider-loader.ts     # Dynamic provider loading
├── factory.ts             # Convenience factory functions
├── exports.ts             # Public API exports
├── types.ts               # TypeScript interfaces and types
├── providers/             # Built-in providers
│   ├── open-graph-provider.ts
│   ├── twitter-provider.ts
│   ├── standard-meta-provider.ts
│   ├── other-elements-provider.ts
│   └── json-ld-provider.ts
└── index.ts               # CLI entry point
```

## Creating Custom Providers
Create a new provider by implementing the `MetadataProvider` interface:
```typescript
// src/providers/my-custom-provider.ts
import { MetadataProvider } from '../types.js';

export class MyCustomProvider implements MetadataProvider {
  readonly name = 'myCustom';
  readonly priority = 1.5; // Between OpenGraph (1) and Twitter (2)

  canHandle(element: Element): boolean {
    // Return true if this provider can scrape from this element
    return element.getAttribute('data-my-meta') !== null;
  }

  scrape(element: Element): { key: string; value: string } | null {
    // Scrape data from the element
    const value = element.getAttribute('data-my-meta');
    return value ? { key: 'customField', value } : null;
  }

  getValue(key: string, data: Map<string, string[]>): string | undefined {
    // Resolve the value for a given key
    const values = data.get(key);
    return values && values.length > 0 ? values[0] : undefined;
  }
}
```

The provider will be automatically discovered and loaded when placed in the `providers/` directory.
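To see the `canHandle`/`scrape` pattern in action without a browser, it can be replayed against a minimal element stub. The trimmed, self-contained copy below is illustrative only — in real usage the scraper passes actual DOM elements:

```typescript
// Self-contained replay of the provider pattern above, using a minimal
// element stub instead of a real DOM Element (illustrative only).
type Elem = { getAttribute(name: string): string | null };

class StubMetaProvider {
  canHandle(element: Elem): boolean {
    return element.getAttribute('data-my-meta') !== null;
  }

  scrape(element: Elem): { key: string; value: string } | null {
    const value = element.getAttribute('data-my-meta');
    return value ? { key: 'customField', value } : null;
  }
}

// Stub that only answers for the attribute our provider cares about.
const stub: Elem = {
  getAttribute: (name) => (name === 'data-my-meta' ? 'hello' : null),
};

const provider = new StubMetaProvider();
const result = provider.canHandle(stub) ? provider.scrape(stub) : null;
console.log(result); // { key: 'customField', value: 'hello' }
```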
## Development
```bash
# Development mode with auto-reload
npm run dev

# Build for production
npm run build

# Watch mode for development
npm run watch

# Lint and format
npm run lint
npm run format
```

## Testing
### Testing Package Installation

Please read Local Install Testing.

### Testing the Source
The project uses Vitest for testing with TypeScript and ESM support.
```bash
# Run tests once
npm test

# Run tests in watch mode (development)
npm run test:watch

# Run tests with coverage report
npm run test:coverage

# Run tests with interactive UI
npm run test:ui
```

### Test Structure
Tests are organized in the `test/` directory with the following structure:
```
test/
├── setup.ts                         # Test setup and global mocks
├── metadata.test.ts                 # Tests for Metadata class
├── scraper.test.ts                  # Tests for Scraper class
├── factory.test.ts                  # Tests for factory functions
├── provider-registry.test.ts        # Tests for ProviderRegistry
├── provider-loader.test.ts          # Tests for ProviderLoader
├── open-graph-provider.test.ts      # Tests for OpenGraph provider
├── twitter-provider.test.ts         # Tests for Twitter provider
├── standard-meta-provider.test.ts   # Tests for standard meta provider
├── other-elements-provider.test.ts  # Tests for other elements provider
└── json-ld-provider.test.ts         # Tests for JSON-LD provider
```

### Writing Tests
Tests use Vitest with jsdom for DOM testing:

```typescript
import { describe, it, expect } from 'vitest';
import { JSDOM } from 'jsdom';

describe('MyFeature', () => {
  it('should work correctly', () => {
    const dom = new JSDOM('<html><head><title>Test</title></head></html>');
    const document = dom.window.document;

    // Your test code here
    expect(document.title).toBe('Test');
  });
});
```

### Coverage Reports
Coverage reports are generated in multiple formats:

- Terminal: shows a coverage summary in the console
- HTML: interactive report in `coverage/index.html`
- JSON: machine-readable report in `coverage/coverage-final.json`
The project maintains 97.73% statement coverage with comprehensive tests across all components. Interface-only files (types.ts) and CLI entry points are excluded from coverage for cleaner metrics.
The coverage directory is automatically excluded from git commits.
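The exclusions described above would typically be declared in the Vitest config. A hypothetical sketch follows — the file paths are assumed from the project structure shown earlier, and the project's actual configuration may differ:

```typescript
// vitest.config.ts — illustrative sketch, not the project's actual file.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom', // DOM tests run against jsdom
    coverage: {
      reporter: ['text', 'html', 'json'], // terminal, HTML, and JSON reports
      // Interface-only files and CLI entry points excluded from coverage.
      exclude: ['src/types.ts', 'src/index.ts'],
    },
  },
});
```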
## CI/CD
The project uses GitHub Actions for continuous integration and deployment:
### Workflows

- `ci.yml`: Main CI pipeline that runs on every push and pull request
  - Tests on Node.js 24.x
  - Runs linting, type checking, building, and testing
  - Uploads coverage reports as artifacts
  - Includes security auditing and dependency checking
- `release.yml`: Automated releases
  - Triggers on version tags (`v*`)
  - Creates GitHub releases with assets
  - Ready for npm publishing (commented out)
- `dependabot-auto-merge.yml`: Automated dependency updates
  - Auto-merges minor and patch updates from Dependabot
  - Requires tests to pass before merging
- `stale.yml`: Issue and PR management
  - Marks inactive issues/PRs as stale
  - Closes them automatically after extended inactivity
- `labeler.yml`: Automatic labeling
  - Labels PRs based on changed files
  - Helps with project organization
### Branch Protection

Configure branch protection rules for `main`:
- Require status checks to pass
- Require branches to be up to date
- Require review from code owners
- Dismiss stale reviews when new commits are pushed
### Secrets Required

For full functionality, configure these secrets in your repository:

- `NPM_TOKEN`: For npm publishing (if enabled)
## API Reference

### Factory Functions

- `createScraper()`: Creates a scraper with auto-loaded providers
- `createScraperWithProviders(providers)`: Creates a scraper with specific providers
- `scrapeMetadata(document)`: One-shot scraping function

### Classes

- `Scraper`: Main scraping engine
- `ProviderRegistry`: Provider management and resolution
- `ProviderLoader`: Dynamic provider loading
- `Metadata`: Result object with getter methods

### Interfaces

All TypeScript interfaces are located in `src/types.ts`:

- `MetadataProvider`: Interface for implementing custom providers
- `ProviderData`: Interface for provider data aggregation
- `Feed`: Interface for RSS/Atom feed data
## Requirements

- Node.js 24.4.0+ (specified in `.node-version`)
- TypeScript 5.0+

## License

MIT License - see the LICENSE file for details.
