
@odda-ai/matching-core

v1.0.4

Core AI provider library with support for OpenAI, Ollama, and Anthropic

🧠 @odda-ai/matching-core

Modular TypeScript library for integrating multiple AI providers, parsing CVs from PDF, and intelligently analyzing skills and professional profiles.



🌟 Overview

@odda-ai/matching-core is a core library that provides:

  • 🤖 Unified abstraction over multiple AI providers (OpenAI, Anthropic, Ollama)
  • 📄 PDF parsing optimized for CVs and structured documents
  • 🎯 Intelligent CV analysis with automatic extraction of skills, certifications, and experience
  • 📊 Skill extraction from job descriptions with semantic matching
  • 🔧 Type-safe, with full TypeScript support
  • 🧩 Modular, with a plugin architecture for AI providers

When to use it

  • ✅ Building recruitment and talent-matching systems
  • ✅ Automating the analysis of CVs and professional profiles
  • ✅ Extracting skills from job descriptions
  • ✅ Integrating AI into existing applications with pluggable providers
  • ✅ Processing structured PDF documents

✨ Features

🤖 AI Provider Abstraction

  • Multi-provider support: OpenAI, Anthropic (Claude), Ollama
  • Factory pattern for provider creation
  • Centralized adapter registry for provider management
  • Per-provider configuration with provider-specific models
  • Consistent error handling across providers

📄 CV Parsing

  • PDF extraction with pdf-parse
  • Text cleaning and normalization
  • Automatic metadata extraction
  • Buffer-based API for flexibility
  • Robust error handling

🎯 CV Analysis

  • Structured extraction of:
    • Personal info (name, email, phone, location, LinkedIn)
    • Technical skills with proficiency level
    • Soft skills with importance
    • Work experience with roles and duration
    • Education with degrees and institutions
    • Certifications with provider and validity
    • Languages with proficiency level
  • Automatic seniority assessment (Junior, Mid, Senior, Lead, Principal)
  • Calculated years of experience
  • JSON-structured output for easy integration

📊 Job Matching

  • Skill extraction from job descriptions
  • Semantic matching against existing skills
  • Automatic importance weighting
  • Categorization (Technical/Soft, Required/Nice-to-have)

📦 Installation

npm install @odda-ai/matching-core

Peer Dependencies

# If you use OpenAI
npm install openai

# If you use Anthropic Claude
npm install @anthropic-ai/sdk

# If you use Ollama (no extra dependency)

🚀 Quick Start

1. Basic Setup with OpenAI

import { createAIProvider, AiTalentService } from '@odda-ai/matching-core';

// Create the AI provider
const aiProvider = createAIProvider({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4-turbo-preview',
});

// Create the main service
const aiTalent = new AiTalentService(aiProvider);

2. Analyze a CV

import fs from 'fs';

// Read the CV PDF
const cvBuffer = fs.readFileSync('./cv-mario-rossi.pdf');

// Analyze the CV
const analysis = await aiTalent.cv.analyzeCvResume(cvBuffer);

console.log('Name:', analysis.personalInfo.name);
console.log('Seniority:', analysis.overallSeniority);
console.log('Skills:', analysis.technicalSkills.map(s => s.name));
console.log('Certifications:', analysis.certifications.length);

3. Extract Skills from a Job Description

const jobDescription = `
  We are looking for a Senior Full-Stack Developer with experience in:
  - React, TypeScript, Node.js
  - AWS, Docker, Kubernetes
  - PostgreSQL, MongoDB
`;

const existingSkills = [
  'React', 'Vue', 'Angular', 'TypeScript', 'JavaScript',
  'Node.js', 'Python', 'AWS', 'Azure', 'Docker'
];

const jobSkills = await aiTalent.jobs.getSkillsFromJobDescription(
  jobDescription,
  existingSkills
);

console.log('Required skills:', jobSkills);

🏗️ Architecture

@odda-ai/matching-core/
├── ai/                          # AI Provider Layer
│   ├── AIProvider.ts            # Unified interface
│   ├── factory.ts               # Factory for provider creation
│   ├── registry.ts              # Adapter registry
│   ├── types.ts                 # Shared types
│   └── adapters/
│       ├── OpenAIAdapter.ts     # OpenAI adapter
│       ├── AnthropicAdapter.ts  # Anthropic adapter
│       └── OllamaAdapter.ts     # Ollama adapter
│
├── cv-parser/                   # PDF Parsing Layer
│   ├── PDFParserService.ts      # PDF parsing service
│   └── types.ts                 # Parsing types
│
└── features/                    # Business Logic Layer
    ├── ai-talent.service.ts     # Main facade
    ├── ai-cv-resume.service.ts  # CV analysis
    ├── job-matcher.service.ts   # Job skill extraction
    ├── cv-chunking.service.ts   # CV chunking for RAG
    ├── prompts.ts               # AI prompts
    ├── system-messages.ts       # System messages
    └── types.ts                 # Response types

Data Flow

┌─────────────┐       ┌──────────────┐       ┌─────────────┐
│   PDF CV    │──────▶│ PDFParser    │──────▶│  Raw Text   │
└─────────────┘       └──────────────┘       └─────────────┘
                                                     │
                                                     ▼
┌─────────────┐       ┌──────────────┐       ┌─────────────┐
│ Structured  │◀──────│ AI Provider  │◀──────│  Prompts    │
│   Output    │       │ (GPT/Claude) │       │  + Context  │
└─────────────┘       └──────────────┘       └─────────────┘

🤖 AI Providers

Supported Providers

| Provider  | Model Support                     | Streaming | Vision | Function Calling |
|-----------|-----------------------------------|-----------|--------|------------------|
| OpenAI    | GPT-3.5, GPT-4, GPT-4o            | ✅        | ✅     | ✅               |
| Anthropic | Claude 3 (Opus, Sonnet, Haiku)    | ✅        | ✅     | ✅               |
| Ollama    | Llama 2, Mistral, CodeLlama, etc. | ✅        | ❌     | ⚠️ Limited       |

Factory Pattern

import { createAIProvider } from '@odda-ai/matching-core';

// OpenAI
const openai = createAIProvider({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4-turbo-preview',
});

// Anthropic Claude
const claude = createAIProvider({
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'claude-3-opus-20240229',
});

// Ollama (local)
const ollama = createAIProvider({
  provider: 'ollama',
  baseURL: 'http://localhost:11434',
  model: 'llama2',
});

Custom Provider Configuration

const provider = createAIProvider({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4',
  temperature: 0.3,          // More deterministic
  maxTokens: 4000,           // Max tokens in the response
  responseFormat: 'json',    // JSON mode
});

Direct Provider Usage

import { AIProvider } from '@odda-ai/matching-core';

// Use the provider directly
const response = await aiProvider.chat([
  {
    role: 'system',
    content: 'You are a helpful assistant.',
  },
  {
    role: 'user',
    content: 'Explain TypeScript generics.',
  },
]);

console.log(response.content);

📄 CV Parser

Basic Usage

import { PDFParserService } from '@odda-ai/matching-core';

const parser = new PDFParserService();

// Parse PDF
const result = await parser.parsePDF(pdfBuffer);

console.log('Text:', result.text);
console.log('Pages:', result.numPages);
console.log('Info:', result.info);

Advanced Options

const result = await parser.parsePDF(pdfBuffer, {
  max: 50,              // Max pages to parse
  version: '1.10.100',  // PDF.js version
  verbosity: 0,         // Logging level (0-5)
});

Error Handling

try {
  const result = await parser.parsePDF(buffer);
} catch (error) {
  // In strict TypeScript the caught value is `unknown`, so narrow it first
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes('Invalid PDF')) {
    console.error('File is not a valid PDF');
  } else {
    console.error('Parsing error:', message);
  }
}

🎯 Features & Services

AiTalentService

Main facade that orchestrates all the services.

import { AiTalentService } from '@odda-ai/matching-core';

const service = new AiTalentService(aiProvider);

// CV Analysis
const cvAnalysis = await service.cv.analyzeCvResume(buffer);

// CV Chunking (for RAG / vector DBs)
const chunks = await service.cv.chunkCvAnalysis(cvAnalysis, {
  chunkSize: 500,
  overlap: 50,
});

// Job Skills Extraction
const skills = await service.jobs.getSkillsFromJobDescription(
  jobDescription,
  existingSkills
);

CV Analysis Response Structure

interface CvAnalysisResponse {
  // Personal Information
  personalInfo: {
    name: string;
    email?: string;
    phone?: string;
    location?: string;
    linkedIn?: string;
    portfolio?: string;
    github?: string;
  };

  // Technical Skills
  technicalSkills: Array<{
    name: string;
    proficiency: number;        // 0-100
    yearsOfExperience?: number;
    lastUsed?: string;
  }>;

  // Soft Skills
  softSkills: Array<{
    name: string;
    importance: 'HIGH' | 'MEDIUM' | 'LOW';
  }>;

  // Work Experience
  workExperience: Array<{
    company: string;
    role: string;
    startDate: string;
    endDate?: string;
    description?: string;
    achievements?: string[];
  }>;

  // Education
  education: Array<{
    institution: string;
    degree: string;
    field?: string;
    startDate?: string;
    endDate?: string;
    grade?: string;
  }>;

  // Certifications
  certifications: Array<{
    name: string;
    provider: string;
    obtainedDate?: string;
    expiryDate?: string;
    credentialId?: string;
  }>;

  // Languages
  languages: Array<{
    name: string;
    proficiency: 'NATIVE' | 'FLUENT' | 'ADVANCED' | 'INTERMEDIATE' | 'BASIC';
  }>;

  // Summary
  professionalSummary?: string;
  overallSeniority: 'JUNIOR' | 'MID' | 'SENIOR' | 'LEAD' | 'PRINCIPAL';
  yearsOfExperience: number;
  strengths: string[];
  areasForGrowth: string[];
  keyAchievements: string[];
}
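Consumers often only need a slice of this structure. As an illustration, here is a small helper (hypothetical, not part of the library) that pulls out the strongest technical skills, e.g. for a candidate summary card:

```typescript
// Minimal local mirror of the technicalSkills entry shape shown above.
type TechnicalSkillLite = { name: string; proficiency: number };

// Return skill names at or above a proficiency threshold, strongest first.
function topSkills(skills: TechnicalSkillLite[], minProficiency = 70): string[] {
  return skills
    .filter(s => s.proficiency >= minProficiency)
    .sort((a, b) => b.proficiency - a.proficiency)
    .map(s => s.name);
}
```

With a real `CvAnalysisResponse`, you would call `topSkills(analysis.technicalSkills)`.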

CV Chunking

Splits the CV analysis into chunks for vector databases or RAG systems.

const chunks = await service.cv.chunkCvAnalysis(cvAnalysis, {
  chunkSize: 500,      // Characters per chunk
  overlap: 50,         // Overlap between chunks
  strategy: 'semantic' // 'semantic' | 'fixed'
});

// Each chunk contains:
chunks.forEach(chunk => {
  console.log('Text:', chunk.text);
  console.log('Type:', chunk.type);          // 'personal' | 'skills' | 'experience' | 'education'
  console.log('Metadata:', chunk.metadata);
  console.log('Embedding:', chunk.embedding); // Optional, if generated
});
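For intuition about the `chunkSize`/`overlap` options, a 'fixed' strategy can be sketched as follows (an illustrative implementation; the library's actual chunking logic may differ):

```typescript
// Illustrative fixed-size chunking with overlap: each chunk starts
// (chunkSize - overlap) characters after the previous one.
function chunkTextFixed(text: string, chunkSize: number, overlap: number): string[] {
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```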

Job Skills Extraction

const skills = await service.jobs.getSkillsFromJobDescription(
  jobDescription,
  existingSkillsList
);

// Response structure
interface JobSkill {
  name: string;
  importance: number;           // 0-100
  skillType: 'TECHNICAL' | 'SOFT';
  category: 'REQUIRED' | 'NICE_TO_HAVE';
  context?: string;             // Context from the job description
  matchedFrom?: string;         // The original skill it was matched from
}
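Downstream code might split and rank these results; a hypothetical helper (not part of the library) could look like this:

```typescript
// Minimal local mirror of the JobSkill fields used here.
type JobSkillLite = {
  name: string;
  importance: number;
  category: 'REQUIRED' | 'NICE_TO_HAVE';
};

// Split extracted skills by category, most important first.
function rankJobSkills(skills: JobSkillLite[]) {
  const byImportance = (a: JobSkillLite, b: JobSkillLite) => b.importance - a.importance;
  return {
    required: skills.filter(s => s.category === 'REQUIRED').sort(byImportance),
    niceToHave: skills.filter(s => s.category === 'NICE_TO_HAVE').sort(byImportance),
  };
}
```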

📚 API Reference

createAIProvider()

function createAIProvider(config: AIProviderConfig): AIProvider

Parameters:

  • provider: 'openai' | 'anthropic' | 'ollama'
  • apiKey: string (required for openai/anthropic)
  • model: string (e.g. 'gpt-4', 'claude-3-opus-20240229')
  • baseURL?: string (for Ollama or a proxy)
  • temperature?: number (0-2)
  • maxTokens?: number
  • responseFormat?: 'text' | 'json'

AiTalentService

Constructor

new AiTalentService(aiProvider: AIProvider)

Methods

cv.analyzeCvResume()

async analyzeCvResume(
  pdfBuffer: Buffer,
  options?: AnalyzeResumeOptions
): Promise<CvAnalysisResponse>

cv.chunkCvAnalysis()

async chunkCvAnalysis(
  cvAnalysis: CvAnalysisResponse,
  options?: ChunkingOptions
): Promise<CvChunk[]>

jobs.getSkillsFromJobDescription()

async getSkillsFromJobDescription(
  jobDescription: string,
  existingSkills: string[]
): Promise<JobSkill[]>

PDFParserService

class PDFParserService {
  async parsePDF(
    buffer: Buffer,
    options?: ParseOptions
  ): Promise<ParsedPDF>
}

💡 Advanced Examples

Batch CV Analysis

async function batchAnalyze(cvPaths: string[]) {
  const results = [];
  
  for (const path of cvPaths) {
    try {
      const buffer = fs.readFileSync(path);
      const analysis = await aiTalent.cv.analyzeCvResume(buffer);
      
      results.push({
        path,
        status: 'success',
        data: analysis,
      });
    } catch (error) {
      results.push({
        path,
        status: 'error',
        error: error.message,
      });
    }
  }
  
  return results;
}

Fuzzy Skill Matching

function fuzzyMatchSkills(
  cvSkills: string[],
  jobSkills: string[],
  threshold: number = 0.8
) {
  const matches = [];
  
  for (const cvSkill of cvSkills) {
    for (const jobSkill of jobSkills) {
      const similarity = calculateSimilarity(cvSkill, jobSkill);
      
      if (similarity >= threshold) {
        matches.push({
          cvSkill,
          jobSkill,
          similarity,
        });
      }
    }
  }
  
  return matches;
}
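The sketch above assumes a `calculateSimilarity` helper returning a 0-1 score. A minimal bigram Dice coefficient would fit that slot (an illustrative implementation, not part of the library):

```typescript
// Bigram Dice coefficient: 1 for identical strings, 0 for no shared bigrams.
function calculateSimilarity(a: string, b: string): number {
  const bigrams = (s: string) => {
    const out = new Map<string, number>();
    const t = s.toLowerCase();
    for (let i = 0; i < t.length - 1; i++) {
      const bg = t.slice(i, i + 2);
      out.set(bg, (out.get(bg) ?? 0) + 1);
    }
    return out;
  };
  const aBigrams = bigrams(a);
  const bBigrams = bigrams(b);
  let shared = 0;
  for (const [bg, count] of aBigrams) shared += Math.min(count, bBigrams.get(bg) ?? 0);
  const total =
    [...aBigrams.values()].reduce((s, n) => s + n, 0) +
    [...bBigrams.values()].reduce((s, n) => s + n, 0);
  // Strings shorter than 2 chars have no bigrams; fall back to equality.
  return total === 0 ? (a === b ? 1 : 0) : (2 * shared) / total;
}
```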

Vector Store Integration

import { Pinecone } from '@pinecone-database/pinecone';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone.index('cv-embeddings');

// Chunk the CV analysis and generate embeddings
const chunks = await service.cv.chunkCvAnalysis(cvAnalysis);

// Upsert into Pinecone
const vectors = chunks.map((chunk, i) => ({
  id: `cv-${cvAnalysis.personalInfo.name}-${i}`,
  values: chunk.embedding || [], // Generate with OpenAI embeddings API
  metadata: {
    text: chunk.text,
    type: chunk.type,
    ...chunk.metadata,
  },
}));

await index.upsert(vectors);
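The chunks above still need embeddings before they are upserted. One way to keep the vector-store code provider-agnostic is a small adapter over any embeddings API (the `embed` signature below is an assumption; with OpenAI it could wrap `client.embeddings.create`):

```typescript
type EmbeddableChunk = { text: string; embedding?: number[] };

// Attach embeddings to chunks via a pluggable embed function, so the
// vector-store code stays independent of the embeddings provider.
async function embedChunks(
  chunks: EmbeddableChunk[],
  embed: (texts: string[]) => Promise<number[][]>
): Promise<EmbeddableChunk[]> {
  const vectors = await embed(chunks.map(c => c.text));
  return chunks.map((chunk, i) => ({ ...chunk, embedding: vectors[i] }));
}
```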

Dynamic Provider Switching

class AIService {
  private providers: Map<string, AIProvider>;
  
  constructor() {
    this.providers = new Map([
      ['fast', createAIProvider({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY, model: 'gpt-3.5-turbo' })],
      ['accurate', createAIProvider({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4' })],
      ['local', createAIProvider({ provider: 'ollama', model: 'llama2' })],
    ]);
  }
  
  async analyze(cv: Buffer, mode: 'fast' | 'accurate' | 'local') {
    const provider = this.providers.get(mode)!;
    const service = new AiTalentService(provider);
    return service.cv.analyzeCvResume(cv);
  }
}

⚙️ Configuration

Environment Variables

# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4-turbo-preview

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-opus-20240229

# Ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama2

Configuration File

// config/ai.config.ts
import { AIProviderConfig } from '@odda-ai/matching-core';

export const aiConfig: AIProviderConfig = {
  provider: (process.env.AI_PROVIDER as AIProviderConfig['provider']) || 'openai',
  apiKey: process.env.AI_API_KEY || '',
  model: process.env.AI_MODEL || 'gpt-4',
  temperature: 0.3,
  maxTokens: 4000,
  responseFormat: 'json',
};

📘 TypeScript Support

A fully-typed library with complete TypeScript support.

Type Imports

import type {
  // Provider types
  AIProvider,
  AIProviderConfig,
  ChatMessage,
  ChatResponse,
  
  // CV Analysis types
  CvAnalysisResponse,
  PersonalInfo,
  TechnicalSkill,
  SoftSkill,
  WorkExperience,
  Education,
  Certification,
  Language,
  
  // Chunking types
  CvChunk,
  ChunkingOptions,
  ChunkingResult,
  
  // Job Matching types
  JobSkill,
  
  // Parser types
  ParsedPDF,
  ParseOptions,
} from '@odda-ai/matching-core';

Generic Support

interface CustomAnalysis extends CvAnalysisResponse {
  customField: string;
}

const analysis: CustomAnalysis = {
  ...await service.cv.analyzeCvResume(buffer),
  customField: 'custom value',
};

🧪 Testing

Unit Tests

import { describe, it, expect } from 'vitest';
import { createAIProvider } from '@odda-ai/matching-core';

describe('AI Provider', () => {
  it('should create OpenAI provider', () => {
    const provider = createAIProvider({
      provider: 'openai',
      apiKey: 'test-key',
    });
    
    expect(provider).toBeDefined();
  });
});

Integration Tests

describe('CV Analysis', () => {
  it('should analyze CV correctly', async () => {
    const buffer = fs.readFileSync('./test/fixtures/sample-cv.pdf');
    const analysis = await service.cv.analyzeCvResume(buffer);
    
    expect(analysis.personalInfo.name).toBeDefined();
    expect(analysis.technicalSkills.length).toBeGreaterThan(0);
    expect(analysis.overallSeniority).toMatch(/JUNIOR|MID|SENIOR|LEAD|PRINCIPAL/);
  });
});

📄 License

ISC


🤝 Contributing

Contributions are welcome! Please open an issue or submit a pull request.


Made with ❤️ by Odda Studio