# Narative Semantic Engine (v0.1.1)

General-purpose semantic analysis engine with modular analyzers and data adapters.
A modular semantic analysis engine for dashboards, consoles, and analytics apps. Pick only the analyzers and data adapters you need, plug in a Groq API key if you want AI-assisted insights, and run on general-purpose records from Postgres, Firebase, APIs, JSON, or CSV. The engine is domain-agnostic (agriculture, operations, support, sales, banking, government, etc.).
## Install

```bash
npm install narative-semantic-engine
```

Node.js 18+ is required.
## Quick start

```js
import { createEngine } from 'narative-semantic-engine';
import { semanticLexicalAnalyzer } from 'narative-semantic-engine/analyzers/semantic-lexical';
import { jsonSource } from 'narative-semantic-engine/adapters/json';

const engine = createEngine({
  analyzers: [semanticLexicalAnalyzer()],
});

const results = await engine.run(
  jsonSource({ path: './records.json' })
);

console.log(results.metrics.semanticLexical);
```

## Record format
Records require two fields:

- `note` (required): the content to analyze
- `createdAt` (required): when the record was created

Optional fields:

- `actor`: who or what created the record
- Any other domain-specific fields (preserved in `extra`)

```js
const records = [
  { note: 'Irrigation pump failed again', createdAt: '2026-01-18T08:10:00Z', actor: 'Amina', farm: 'Green Valley' },
  { note: 'Soil moisture looks low', createdAt: '2026-01-18T09:40:00Z', actor: 'Luis', field: 'South-3' },
];
```

## Domain examples
The engine works with any domain. Your records just need `note` and `createdAt`.
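If your existing data uses different field names, a small mapping step is enough to produce engine records. A sketch (the `toRecord` helper and the raw field names here are hypothetical, not part of the package):

```js
// Hypothetical raw rows from an existing system.
const rawRows = [
  { message: 'Ticket escalated to tier 2', loggedAt: '2026-01-15T11:00:00Z', agent: 'Dana', ticketId: 'T-901' },
];

// Map each raw row into the record shape the engine expects:
// `note` and `createdAt` are required, `actor` is optional, and any
// remaining fields are carried along for custom analyzers.
const toRecord = ({ message, loggedAt, agent, ...rest }) => ({
  note: message,
  createdAt: loggedAt,
  actor: agent,
  ...rest,
});

const records = rawRows.map(toRecord);
```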
### Agriculture

```js
const records = [
  { note: 'Irrigation pump failed again', createdAt: '2026-01-18T08:10:00Z', actor: 'Amina', farm: 'Green Valley', field: 'North-12' },
  { note: 'Soil moisture looks low', createdAt: '2026-01-18T09:40:00Z', actor: 'Luis', farm: 'Green Valley', field: 'South-3' },
];

await engine.run(jsonSource({ data: records }));
```

### Corporate (Banking)
```js
const records = [
  { note: 'KYC docs missing for renewal', createdAt: '2026-01-12T14:22:00Z', actor: 'Priya', accountId: 'AC-442', team: 'Risk' },
  { note: 'Wire transfer flagged for review', createdAt: '2026-01-12T15:05:00Z', actor: 'Chen', accountId: 'AC-771', team: 'Ops' },
];

await engine.run(jsonSource({ data: records }));
```

### Sales (Stores)
```js
const records = [
  { note: 'Stockout on top sellers', createdAt: '2026-01-09T18:10:00Z', actor: 'Jess', storeId: 'S-104', region: 'West' },
  { note: 'Promo lift stronger than forecast', createdAt: '2026-01-09T19:45:00Z', actor: 'Omar', storeId: 'S-212', region: 'East' },
];

await engine.run(jsonSource({ data: records }));
```

### Government (GDP)
```js
const records = [
  { note: 'GDP revised up by 0.4%', createdAt: '2026-01-05T10:00:00Z', actor: 'StatsOffice', country: 'Exampleland', period: '2025-Q4' },
  { note: 'Services sector drives growth', createdAt: '2026-01-20T10:00:00Z', actor: 'StatsOffice', country: 'Exampleland', period: '2026-Q1' },
];

await engine.run(jsonSource({ data: records }));
```

## Pick only what you need
- Analyzers: `lexical`, `semantic-lexical`, `sentiment`, `time-series`, `trend`
- Adapters: `postgres`, `firebase`, `api`, `json`, `csv`
- Providers: `groq`
Example imports:

```js
import { lexicalAnalyzer } from 'narative-semantic-engine/analyzers/lexical';
import { sentimentAnalyzer } from 'narative-semantic-engine/analyzers/sentiment';
import { timeSeriesAnalyzer } from 'narative-semantic-engine/analyzers/time-series';
import { trendAnalyzer } from 'narative-semantic-engine/analyzers/trend';
import { postgresSource } from 'narative-semantic-engine/adapters/postgres';
```

## Record-first model
- The engine only understands records. Adapters return `{ records }`, and `engine.run()` expects records.
- Records must have `note` and `createdAt`; the `actor` field is optional.
- Domain-specific fields (`farm`, `accountId`, `storeId`, etc.) are preserved and accessible to custom analyzers.
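For illustration, assuming the documented behavior that unrecognized fields are preserved under `extra`, a normalized record as a custom analyzer sees it might look like this (the exact shape is a sketch based on the description above):

```js
// Hypothetical normalized record: required fields are top-level,
// domain-specific fields (`farm`, `field`) sit under `extra`.
const record = {
  note: 'Irrigation pump failed again',
  createdAt: '2026-01-18T08:10:00Z',
  actor: 'Amina',
  extra: { farm: 'Green Valley', field: 'North-12' },
};

// A custom analyzer can read domain fields without any special setup.
const farm = record.extra.farm;
```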
## Time-series rollups and trends

Create metric series from records, then analyze the trend. The engine returns `metricSeries` alongside analyzer metrics.
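Conceptually, a `day` bucket groups records by calendar day and counts them. A simplified sketch of that rollup (not the engine's actual implementation):

```js
const records = [
  { note: 'Pump failed', createdAt: '2026-01-18T08:10:00Z' },
  { note: 'Moisture low', createdAt: '2026-01-18T09:40:00Z' },
  { note: 'Pump repaired', createdAt: '2026-01-19T07:00:00Z' },
];

// Roll records up into per-day counts keyed by ISO date.
const byDay = {};
for (const { createdAt } of records) {
  const day = new Date(createdAt).toISOString().slice(0, 10); // 'YYYY-MM-DD'
  byDay[day] = (byDay[day] ?? 0) + 1;
}
// byDay → { '2026-01-18': 2, '2026-01-19': 1 }
```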
```js
import { createEngine } from 'narative-semantic-engine';
import { timeSeriesAnalyzer } from 'narative-semantic-engine/analyzers/time-series';
import { trendAnalyzer } from 'narative-semantic-engine/analyzers/trend';

const engine = createEngine({
  analyzers: [
    timeSeriesAnalyzer({ metricId: 'records.count', bucket: 'day' }),
    trendAnalyzer({ metricId: 'records.count', sourceAnalyzerId: 'timeSeries' }),
  ],
});

const results = await engine.run(records);

console.log(results.metricSeries['records.count']);
console.log(results.metrics.trend);
```

## Firebase (use your existing config)
If you already initialize Firebase in your app, just pass the Firestore instance.
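If you don't have a Firestore instance handy yet, the standard modular Firebase SDK setup looks roughly like this (your own project config goes in the placeholder; this setup is part of Firebase, not this package):

```js
import { initializeApp } from 'firebase/app';
import { getFirestore } from 'firebase/firestore';

// Your project's config object from the Firebase console.
const app = initializeApp({ /* apiKey, projectId, ... */ });
const firestore = getFirestore(app);
```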
```js
import { firebaseSource } from 'narative-semantic-engine/adapters/firebase';

const results = await engine.run(
  firebaseSource({
    firestore,
    collectionPath: 'records',
    whereClauses: [['chatId', '==', chatId]],
    orderByClauses: [['timestamp', 'asc']],
    map: (doc) => ({
      note: doc.body,
      createdAt: doc.timestamp?.toDate?.() ?? doc.timestamp,
      actor: doc.actor,
    }),
  })
);
```

## Postgres
```js
import { postgresSource } from 'narative-semantic-engine/adapters/postgres';

const results = await engine.run(
  postgresSource({
    client: pgClient,
    query: 'SELECT actor, body, created_at FROM records WHERE chat_id = $1',
    params: [chatId],
    map: (row) => ({
      note: row.body,
      createdAt: row.created_at,
      actor: row.actor,
    }),
  })
);
```

## REST API
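The `responsePath` option used in the example below (`'data.items'`) selects a nested array out of the JSON response. A simplified sketch of that dotted-path lookup (the `getPath` helper is illustrative, not the adapter's actual code):

```js
// Walk a dotted path like 'data.items' into a parsed JSON response.
const getPath = (obj, path) =>
  path.split('.').reduce((value, key) => value?.[key], obj);

const response = { data: { items: [{ body: 'Hello', created_at: '2026-01-01T00:00:00Z' }] } };
const items = getPath(response, 'data.items');
```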
```js
import { apiSource } from 'narative-semantic-engine/adapters/api';

const results = await engine.run(
  apiSource({
    url: 'https://example.com/records',
    responsePath: 'data.items',
    map: (record) => ({
      note: record.body,
      createdAt: new Date(record.created_at),
      actor: record.user,
    }),
  })
);
```

## CSV
```js
import { csvSource } from 'narative-semantic-engine/adapters/csv';

const results = await engine.run(
  csvSource({
    path: './records.csv',
    map: (row) => ({
      note: row.body,
      createdAt: new Date(row.timestamp),
      actor: row.actor,
    }),
  })
);
```

## Groq provider (optional)
For AI-assisted insights, provide a Groq key and pass the provider to the engine.
```js
import { createEngine } from 'narative-semantic-engine';
import { semanticLexicalAnalyzer } from 'narative-semantic-engine/analyzers/semantic-lexical';
import { groqProvider } from 'narative-semantic-engine/providers/groq';

const engine = createEngine({
  analyzers: [semanticLexicalAnalyzer()],
  providers: {
    groq: groqProvider({ apiKey: process.env.GROQ_API_KEY }),
  },
});
```

## Custom analyzer (business-specific)
Create analyzers that read any fields on your records. This example flags overdue operational tasks.
```js
const overdueTasksAnalyzer = (options = {}) => ({
  id: 'overdue-tasks',
  run({ records }) {
    const now = options.now ?? new Date();
    const overdue = records.filter((record) => {
      const extra = record.extra || {};
      if (!extra.dueDate || extra.status === 'done') return false;
      return new Date(extra.dueDate) < now;
    });
    return {
      metrics: {
        totalRecords: records.length,
        overdueCount: overdue.length,
        overdueRate: records.length ? Number((overdue.length / records.length).toFixed(4)) : 0,
      },
      details: { overdue },
    };
  },
});

const engine = createEngine({
  analyzers: [overdueTasksAnalyzer()],
});
```

## Engine output
```
{
  metrics: { [analyzerId]: { ... } },
  insights: [{ analyzer, label, severity, description }],
  details?: { [analyzerId]: { ... } },
  meta: { recordCount, source }
}
```

## Notes
- The engine works on general-purpose records. For text analyzers, ensure records have a `note` field with the content to analyze.
- Domain-specific fields are preserved in `extra` and accessible to custom analyzers.
- AI features are optional; if keys are missing, the engine falls back to deterministic analysis.
- This package is designed to be embedded in other apps; no CLI or build scripts are included.
