# wikai SDK Reference
## Installation

```bash
npm install wikai
```

The SDK has zero runtime dependencies. It uses the native `fetch` API (Node.js 18+).
## Quick start
```typescript
import { WikiaiClient } from 'wikai'

const client = new WikiaiClient({
  baseUrl: 'https://api.wikai.dev',
  apiKey: 'wk_live_...',
})

// Seed domain knowledge
await client.graph.seed({
  entities: [
    {
      entityType: 'topic',
      entityKey: 'rate_limiting',
      displayName: 'Rate Limiting',
      properties: { description: 'Controlling request throughput to a service' },
      aliases: [{ aliasValue: 'throttling', priority: 10 }],
    },
    {
      entityType: 'pattern',
      entityKey: 'token_bucket',
      displayName: 'Token Bucket',
      properties: { category: 'algorithm' },
    },
  ],
  edges: [
    {
      fromEntityType: 'topic', fromEntityKey: 'rate_limiting',
      toEntityType: 'pattern', toEntityKey: 'token_bucket',
      edgeType: 'implemented_by',
      properties: { level: 'primary' },
    },
  ],
})

// Search the graph
const result = await client.graph.search({ query: 'throttling' })
for (const hit of result.hits) {
  console.log(`${hit.displayName} (${hit.entityType}) — score: ${hit.score}`)
}
```

## Client
### Constructor
```typescript
import { WikiaiClient } from 'wikai'

const client = new WikiaiClient({
  baseUrl: string,      // API endpoint (e.g., 'https://api.wikai.dev')
  apiKey: string,       // Bearer token (format: wk_live_... or wk_test_...)
  maxRetries?: number,  // Retry count for 5xx and 429 responses (default: 2)
})
```

All requests use `Authorization: Bearer <apiKey>`. The client automatically retries server errors and rate-limit responses with exponential backoff (1s, 2s, 4s, ... capped at 10s).
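The retry schedule described above can be sketched as a small helper. This is illustrative only, not part of the SDK's public API:

```typescript
// Delay before retry attempt n (0-based): doubles from 1s, capped at 10s.
function backoffDelayMs(attempt: number): number {
  return Math.min(1000 * 2 ** attempt, 10_000)
}

// backoffDelayMs(0) -> 1000, backoffDelayMs(1) -> 2000, backoffDelayMs(4) -> 10000
```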
### Namespaces

The client organizes operations into namespaces:
| Namespace | Purpose |
|-----------|---------|
| client.graph | Bulk seed, search, inspect |
| client.entities | Individual entity CRUD |
| client.aliases | Entity alias management |
| client.edges | Relationship management |
| client.documents | Document lifecycle |
| client.sources | Evidence source management |
| client.claims | Claim extraction and conflict detection |
| client.chunks | Document chunking and embedding |
| client.contextPacks | Pre-computed context bundles |
| client.documentLinks | Inter-document relationships |
| client.evidence | Full-text and public search |
| client.guidance | Recommendations, antipatterns, terminology |
| client.admin | API key management |
## Domain Graph
The domain graph is wikai's structured knowledge layer. It stores typed entities with properties and named relationships between them. The shape of the graph is defined by a domain schema that the server validates against on every write.
### Seeding
`graph.seed()` is the primary way to load data. It upserts entities, replaces their aliases, and creates edges, all in one call.
```typescript
const result = await client.graph.seed({
  entities: [
    {
      entityType: 'topic',
      entityKey: 'rate_limiting',
      displayName: 'Rate Limiting',
      properties: {
        description: 'Controlling request throughput to a service',
        area: 'infrastructure',
      },
      aliases: [
        { aliasValue: 'throttling', priority: 10 },
        { aliasValue: 'request limiting', priority: 5 },
        { aliasValue: 'traffic shaping', scope: { area: 'networking' }, priority: 8 },
      ],
    },
    {
      entityType: 'pattern',
      entityKey: 'token_bucket',
      displayName: 'Token Bucket',
      properties: { category: 'algorithm' },
    },
  ],
  edges: [
    {
      fromEntityType: 'topic',
      fromEntityKey: 'rate_limiting',
      toEntityType: 'pattern',
      toEntityKey: 'token_bucket',
      edgeType: 'implemented_by',
      properties: { level: 'primary' },
    },
  ],
})

// result.entities — created/updated entities
// result.edges — created edges
// result.errors — validation errors (non-fatal: valid items still persisted)
```

Idempotency: Entities are upserted by `(entityType, entityKey)`. Re-seeding with the same keys updates the entity; aliases are fully replaced (not merged). Edges are additive.

Validation: The server validates every entity's properties and every edge's endpoints against the domain schema. Invalid items are reported in `result.errors` but don't prevent valid items from being persisted.
#### Entity seed fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| entityType | string | yes | Must match a type in the domain schema |
| entityKey | string | yes | Unique identifier within (tenant, scope, type) |
| displayName | string | yes | Human-readable name, used in search |
| status | string | no | Default: 'active' |
| properties | object | no | JSONB, validated against schema's property types |
| aliases | AliasInput[] | no | Alternative names for discovery |
#### Alias fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| aliasValue | string | yes | The alternative name |
| scope | object | no | Contextual qualifiers (e.g., { area: 'networking' }) |
| priority | number | no | Higher = ranked higher in search expansion |
#### Edge seed fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| fromEntityType | string | yes | Source entity type |
| fromEntityKey | string | yes | Source entity key |
| toEntityType | string | yes | Target entity type |
| toEntityKey | string | yes | Target entity key |
| edgeType | string | yes | Must match a type in the domain schema |
| properties | object | no | Edge attributes (e.g., { level: 'primary' }) |
| orderIndex | number | no | Positional ordering |
### Searching
`graph.search()` queries the domain graph using keyword matching, alias expansion, and edge-based boosting.
```typescript
const result = await client.graph.search({
  query: 'throttling',
  filters: { area: 'infrastructure' },
  limit: 10,
})

for (const hit of result.hits) {
  console.log(hit.displayName, hit.score, hit.matchedOn)
  // "Rate Limiting" 100 ["alias"]
  // "Token Bucket" 80 ["edge"]
  for (const linked of hit.linkedEntities) {
    console.log(`  -> ${linked.displayName} (${linked.edgeType})`)
  }
  for (const rec of hit.linkedDoctrine.recommendations) {
    console.log(`  [rec] ${rec.displayName}`)
  }
}
```

#### How search works

1. Direct matching. The query is normalized and matched against entity `displayName`, `entityKey`, and searchable properties. Exact match = 100 points, substring = 60, partial term overlap = proportional.
2. Alias matching. Aliases are searched separately. If an alias has a `scope` that matches the query's `filters`, it gets a +20 scope boost.
3. Query expansion. Entity aliases are used to expand the query. If "throttling" is an alias for "Rate Limiting", searching for "throttling" also finds entities connected to Rate Limiting. Expansion hits score lower (35 base) to prefer direct matches.
4. Edge boosting. For each hit, edges with configured boost levels add to the score. If `implemented_by` edges have `boosts: { primary: 50, supporting: 30 }`, an entity connected via a `primary` edge gets +50.
5. Ranking. Results are sorted by score (descending), then by entity type priority order from the domain schema's `search.rankOrder`.
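As a rough mental model of the steps above (not the server's actual implementation), the score components compose additively:

```typescript
// Hypothetical score composition, for illustration only.
type MatchKind = 'exact' | 'substring' | 'expansion'

function scoreHit(
  kind: MatchKind,
  opts: { scopeBoost?: boolean; edgeBoost?: number } = {},
): number {
  const base = kind === 'exact' ? 100 : kind === 'substring' ? 60 : 35
  return base + (opts.scopeBoost ? 20 : 0) + (opts.edgeBoost ?? 0)
}
```

Under this model, an exact name match scores 100, a scoped-alias substring match scores 80, and an expansion hit reached via a +50 edge boost scores 85.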
#### Search result shape
```typescript
interface SearchResult {
  hits: SearchHit[]
}

interface SearchHit {
  entityType: string              // e.g., 'topic'
  entityKey: string               // e.g., 'rate_limiting'
  displayName: string             // e.g., 'Rate Limiting'
  score: number                   // composite relevance score
  matchedOn: MatchSource[]        // how the hit was found
  properties: object              // entity properties
  linkedEntities: LinkedEntity[]  // connected entities via edges
  linkedDoctrine: LinkedDoctrine  // connected recommendations/antipatterns
}

type MatchSource = 'name' | 'alias' | 'edge' | 'full_text' | 'doctrine' | 'example'
```

#### Filters
Filters are key-value pairs that match against entity properties. An entity is excluded if it has a property with the filter key but a different value. Entities without the property are not excluded (open-world assumption).
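The open-world rule can be written as a predicate. A sketch, assuming properties are a flat key-value object (this helper is not part of the SDK):

```typescript
// Exclude an entity only when it HAS the filter key with a different value;
// entities missing the key pass (open-world assumption).
function passesFilters(
  properties: Record<string, unknown>,
  filters: Record<string, unknown>,
): boolean {
  return Object.entries(filters).every(
    ([key, value]) => !(key in properties) || properties[key] === value,
  )
}
```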
```typescript
// Only entities where area = 'infrastructure' (or area is not set)
await client.graph.search({ query: 'limiting', filters: { area: 'infrastructure' } })
```

### Inspecting
`graph.inspect()` returns the full detail view for a single entity: properties, aliases, all connected edges, and linked doctrine.
```typescript
const detail = await client.graph.inspect({
  entityType: 'topic',
  entityKey: 'rate_limiting',
})

if (detail) {
  console.log(detail.displayName, detail.properties)
  for (const alias of detail.aliases) {
    console.log(`  alias: ${alias.aliasValue} (priority: ${alias.priority})`)
  }
  for (const edge of detail.edges) {
    console.log(`  ${edge.direction} ${edge.edgeType}: ${edge.displayName}`)
  }
  for (const rec of detail.doctrine.recommendations) {
    console.log(`  [rec] ${rec.displayName}`)
  }
}
```

### Individual entity operations
For fine-grained control beyond bulk seeding:
```typescript
// Upsert a single entity
await client.entities.upsert({
  entityType: 'topic',
  entityKey: 'rate_limiting',
  displayName: 'Rate Limiting',
  properties: { description: 'Controlling request throughput' },
})

// Get by type and key
const entity = await client.entities.get('topic', 'rate_limiting')

// List all entities of a type
const topics = await client.entities.list('topic')
```

### Alias management
```typescript
// Replace all aliases for an entity
await client.aliases.set(entityId, {
  aliases: [
    { aliasValue: 'throttling', priority: 10 },
    { aliasValue: 'traffic shaping', scope: { area: 'networking' }, priority: 8 },
  ],
})
```

Setting aliases is a full replacement: all existing aliases for the entity are removed, then the new ones are inserted.
### Edge management
```typescript
// Create an edge (requires entity IDs, not keys)
await client.edges.create({
  fromEntityId: '...',
  toEntityId: '...',
  edgeType: 'implemented_by',
  properties: { level: 'primary' },
})

// Get all edges for an entity
const edges = await client.edges.between(entityId)
```

## Evidence Layer
The evidence layer stores documents, tracks their versions, and maintains a provenance chain from sources through claims to recommendations.
### Document lifecycle
Documents follow a status progression: `draft` -> `approved` -> `published`. When a new version is published, the old version is superseded and its chunks and context pack associations are cleaned up.
### Importing documents
`documents.import()` is the primary entry point. It handles deduplication, versioning, and content storage in one call.
```typescript
const result = await client.documents.import({
  slug: 'rate-limiting-guide',
  title: 'Rate Limiting Best Practices',
  layer: 'guide',
  originType: 'external',
  bodyMarkdown: markdownContent,
  bodyHtml: htmlContent,
  plainText: textContent,
  sourceChecksum: 'sha256-of-content',
  // Optional fields
  artifactType: 'knowledge_article',
  visibility: 'public',
  summary: 'Patterns and strategies for API rate limiting',
  primaryCategory: 'infrastructure',
})

// result.documentId — stable across versions
// result.versionId — unique to this version
// result.versionNumber — incremented on each new version
// result.isNewDocument — true if slug was new
```

Deduplication: If a document with the same slug exists and the latest version has the same `sourceChecksum`, the call is a no-op and returns the existing IDs. Changed content creates a new version automatically.
### Approving and publishing
```typescript
// Approve a draft version
await client.documents.approveVersion(documentId, versionId)

// Publish an approved version (cascades cleanup)
const published = await client.documents.publishVersion(documentId, versionId)

// published.supersededVersionId — the old version (now 'superseded')
// published.chunksDeleted — number of old chunks removed
// published.packsInvalidated — context packs that lost their source version
// published.termsReassigned — terms moved from old to new version
```

Publishing cascades:

- The old version's status becomes `superseded`
- The old version's chunks are deleted
- Context packs linked to the old version are disassociated
- Terms assigned to the old version are reassigned to the new version
- The document's `currentVersionId` is updated
### Creating documents (low-level)
For cases where you need more control over the lifecycle:
```typescript
// Create a document shell
const doc = await client.documents.create({
  slug: 'my-doc',
  title: 'My Document',
  layer: 'research',
})

// Add a version separately
const version = await client.documents.addVersion(doc.id, {
  versionNumber: 1,
  sourceChecksum: 'abc123',
  originType: 'internal',
  bodyMarkdown: '...',
  bodyHtml: '...',
  plainText: '...',
})
```

### Querying documents
```typescript
// Get by slug
const doc = await client.documents.getBySlug('rate-limiting-guide', {
  visibility: 'agent', // viewer tier: 'public' | 'agent' | 'internal'
  requirePublished: true,
})

// Get by ID
const docById = await client.documents.getById(documentId)

// List with filters
const docs = await client.documents.list({
  artifactType: 'policy',
  visibility: 'internal', // sees public + agent + internal
  requirePublished: true,
  limit: 50,
})
```

### Document fields reference
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| slug | string | yes | URL-friendly identifier, unique per tenant |
| title | string | yes | Human-readable title |
| layer | Layer | yes | Content authority level |
| originType | OriginType | yes (import) | Where the content came from |
| bodyHtml | string | yes (import) | HTML content |
| bodyMarkdown | string | yes (import) | Markdown content |
| plainText | string | yes (import) | Plain text for chunking |
| sourceChecksum | string | yes (import) | Content hash for deduplication |
| artifactType | ArtifactType | no | Default: 'knowledge_article' |
| visibility | Visibility | no | Default: 'public' |
| summary | string | no | Short description |
| primaryCategory | string | no | Top-level categorization |
| publicUrl | string | no | External URL |
| externalId | string | no | ID in the source system |
### Layers
Layers represent editorial authority and constrain how documents can link to each other:
| Layer | Purpose |
|-------|---------|
| research | Raw research, data, analysis |
| opinion | Interpretations, perspectives |
| guide | Actionable guidance, best practices |
| overarching | High-level doctrine, principles |
| support | Supporting material, glossaries |
Constructive document links (`builds_on`, `supports`, `read_next`, `application_of`, `synthesis_of`) must point same-level or downward in the hierarchy. A `research` document cannot `builds_on` an `overarching` document. The relationships `supersedes` and `contradicts` are exempt from this rule.
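The rule can be sketched as a check. The numeric ranks below are an assumption for illustration (the server defines the actual hierarchy); only the shape of the rule comes from the text above:

```typescript
// Assumed ranks: higher = more authoritative. Constructive links may only
// point same-level or downward (from rank >= to rank).
const LAYER_RANK: Record<string, number> = {
  overarching: 4, guide: 3, opinion: 2, research: 1, support: 0,
}
const EXEMPT = new Set(['supersedes', 'contradicts'])

function linkAllowed(relationshipType: string, fromLayer: string, toLayer: string): boolean {
  if (EXEMPT.has(relationshipType)) return true
  return LAYER_RANK[fromLayer] >= LAYER_RANK[toLayer]
}
```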
### Artifact types
| Type | Description |
|------|-------------|
| knowledge_article | General knowledge content |
| policy | Organizational policy |
| reference | Reference material |
| glossary | Term definitions |
| note | Internal notes |
### Visibility
Visibility is a per-record editorial setting that controls who can see a piece of content. It follows a hierarchical access model:
| Level | Who sees it |
|-------|-------------|
| public | Everyone — end users, agents, internal |
| agent | Agents and internal users only |
| internal | Internal users only |
When querying, pass a visibility level to indicate the viewer's access tier. The query returns all records visible at that tier and below:
- `visibility: 'public'` — returns only `public` records
- `visibility: 'agent'` — returns `public` + `agent` records
- `visibility: 'internal'` — returns `public` + `agent` + `internal` records

If omitted, queries default to `'public'` visibility.
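The tier logic reduces to a simple ordering check. A sketch (not SDK code):

```typescript
// Hierarchical visibility: a record is visible when its level is at or
// below the viewer's tier.
const TIER = { public: 0, agent: 1, internal: 2 } as const
type Visibility = keyof typeof TIER

function visibleTo(record: Visibility, viewer: Visibility): boolean {
  return TIER[record] <= TIER[viewer]
}
```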
## Sources
Sources are authoritative references that back claims. They represent the provenance of knowledge.
```typescript
// Create a source
const source = await client.sources.create({
  sourceType: 'industry_code',
  sourceQuality: 'official',
  publisher: 'IETF',
  title: 'RFC 6585 — Additional HTTP Status Codes',
  url: 'https://www.rfc-editor.org/rfc/rfc6585',
  citationText: 'RFC 6585 (2012)',
  publishedAt: '2012-04-01',
})

// Upsert by URL (idempotent)
const result = await client.sources.upsert({
  sourceType: 'peer_reviewed',
  sourceQuality: 'primary',
  publisher: 'ACM Computing Surveys',
  title: 'A Survey of Rate Limiting Algorithms',
  url: 'https://doi.org/10.1234/example',
  citationText: 'Johnson et al. (2024)',
})
```

### Source types
`peer_reviewed`, `working_paper`, `statute`, `case_law`, `regulatory_guidance`, `government_reporting`, `industry_code`, `vendor_survey`, `internal_doctrine`, `other`
### Source quality
| Quality | Meaning |
|---------|---------|
| primary | Original research, primary evidence |
| official | Government, regulatory, or standards body |
| secondary | Analysis of primary sources |
| vendor | Vendor-produced research or white papers |
| internal | Organization's own internal knowledge |
## Claims
Claims are structured assertions extracted from documents. They're the atomic unit of knowledge in the evidence layer.
```typescript
const claim = await client.claims.create({
  documentVersionId: versionId,
  claimType: 'fact',
  statement: 'Token bucket algorithms provide smoother throughput than fixed window counters',
  normalizedKey: 'token_bucket_vs_fixed_window',
  confidence: 'high',
  applicability: {
    area: ['infrastructure'],
    primitives: ['rate_limiting'],
  },
  sourceIds: [sourceId],
  isHousePosition: true,
})
```

### Claim types
| Type | Meaning |
|------|---------|
| fact | Verifiable statement |
| interpretation | Analysis or perspective |
| recommendation | Suggested action |
| anti_pattern | What not to do |
| definition | Term definition |
### Confidence levels

`high`, `medium`, `low`
### Conflict detection
Claims with the same `normalizedKey` but different statements are potential conflicts. The system can detect these:
```typescript
const conflicts = await client.claims.detectConflicts({
  normalizedKey: 'token_bucket_vs_fixed_window',
  statement: 'Fixed window counters outperform token buckets in all scenarios',
  excludeClaimId: existingClaimId, // optional
})

// Returns claims that contradict the given statement
```

## Chunks
Chunks are text segments from documents, stored with vector embeddings for semantic search and full-text search vectors for keyword matching.
```typescript
// Ingest a document version's text into searchable chunks
const { count } = await client.chunks.ingest(documentId, versionId, {
  plainText: documentText,
  maxTokens: 512, // optional, default 512
})

// count — number of chunks created
```

The server splits the text into chunks (preserving paragraph boundaries), generates embeddings via the configured embedding model, and stores each chunk with both a vector embedding and a PostgreSQL `tsvector` for full-text search.
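A greedy paragraph-preserving split might look like the sketch below. This is an assumption about the approach, not the server's implementation; it approximates tokens as whitespace-separated words and lets a single oversized paragraph become its own chunk:

```typescript
// Greedy chunking: pack whole paragraphs until the token budget is hit.
function chunkText(plainText: string, maxTokens = 512): string[] {
  const paragraphs = plainText.split(/\n{2,}/).map(p => p.trim()).filter(Boolean)
  const chunks: string[] = []
  let current = ''
  let tokens = 0
  for (const p of paragraphs) {
    const pTokens = p.split(/\s+/).length
    if (tokens + pTokens > maxTokens && current) {
      chunks.push(current) // close the current chunk at a paragraph boundary
      current = ''
      tokens = 0
    }
    current = current ? current + '\n\n' + p : p
    tokens += pTokens
  }
  if (current) chunks.push(current)
  return chunks
}
```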
## Document links
Explicit relationships between documents:
```typescript
await client.documentLinks.create({
  fromDocumentId: guideId,
  toDocumentId: researchId,
  relationshipType: 'builds_on',
  orderIndex: 0,
})
```

### Relationship types
| Type | Direction rule | Meaning |
|------|---------------|---------|
| builds_on | Downward only | Extends the target document |
| supports | Downward only | Provides evidence for |
| read_next | Downward only | Suggested follow-up |
| application_of | Downward only | Applies principles from |
| synthesis_of | Downward only | Synthesizes multiple sources |
| related | Downward only | General relationship |
| supersedes | Any direction | Replaces the target |
| contradicts | Any direction | Conflicts with the target |
## Evidence Search
Two search modes over the evidence layer:
### Tenant-scoped search
Full-text search over document chunks, scoped to the current tenant.
```typescript
const results = await client.evidence.search({
  query: 'sliding window rate limiting strategies',
  visibility: 'agent', // viewer tier: 'public' | 'agent' | 'internal' (default: 'public')
  limit: 10,
})

for (const result of results) {
  console.log(result.documentTitle, result.slug)
  console.log(result.content) // chunk text
  console.log(result.rank)    // relevance score
}
```

### Public search
Document-level search over published, public documents. Designed for public-facing knowledge base queries.
```typescript
const results = await client.evidence.searchPublic({
  query: 'API design patterns',
  artifactType: 'knowledge_article',
  limit: 5,
})

for (const result of results) {
  console.log(result.title, result.summary)
  console.log(result.snippet) // highlighted excerpt
}
```

## Guidance
The guidance system assembles recommendations, antipatterns, and terminology from the evidence layer. It's the primary interface for AI agents that need actionable, sourced guidance.
### Recommendations
Returns recommendations filtered by applicability, with full rationale chains tracing back to sources.
```typescript
const guidance = await client.guidance.recommendations({
  vertical: 'infrastructure',
  primitives: ['rate_limiting', 'circuit_breaker'],
  role: ['backend_engineer', 'architect'],
  minConfidence: 'medium',
})

for (const rec of guidance.recommendations) {
  console.log(rec.title)
  console.log(rec.recommendation)
  console.log(`confidence: ${rec.confidence}, priority: ${rec.priority}`)

  // Full provenance chain
  for (const rationale of rec.rationale) {
    console.log(`  because: ${rationale.statement}`)
    for (const source of rationale.sources) {
      console.log(`    source: ${source.title} (${source.publisher})`)
    }
  }
}

for (const ap of guidance.antipatterns) {
  console.log(`[avoid] ${ap.name}: ${ap.description}`)
  console.log(`  why it fails: ${ap.whyItFails}`)
  console.log(`  instead: ${ap.preferredAlternatives.join(', ')}`)
}

for (const term of guidance.terms) {
  console.log(`${term.displayTerm}: ${term.notes}`)
}
```

All filter fields are optional. Omitting all filters returns all approved recommendations visible to the tenant.
### Primitive-specific guidance
Focused view for a single domain primitive: returns claims, recommendations, and antipatterns.
```typescript
const result = await client.guidance.forPrimitive({
  primitive: 'rate_limiting',
  vertical: 'infrastructure',
  role: 'backend_engineer',
})

for (const claim of result.claims) {
  console.log(`[${claim.claimType}] ${claim.statement}`)
  console.log(`  confidence: ${claim.confidence}`)
  if (claim.isHousePosition) console.log('  ** house position **')
}

// result.recommendations and result.antipatterns follow the same shape
```

### Source provenance
Bulk lookup of sources backing specific claims:
```typescript
const provenance = await client.guidance.sourcesForClaims({
  claimIds: ['claim-1', 'claim-2'],
})

for (const claim of provenance.claims) {
  console.log(claim.statement)
  for (const source of claim.sources) {
    console.log(`  ${source.citationText} (${source.supportLevel})`)
    // supportLevel: 'direct' | 'partial' | 'context'
  }
}
```

### Terminology
Query term definitions scoped by category and role:
```typescript
const terms = await client.guidance.terminology({
  category: 'infrastructure',
  role: 'backend_engineer',
  entityKey: 'rate_limiting', // optional: scope to a specific entity
})

for (const term of terms.terms) {
  console.log(`${term.displayTerm}: ${term.notes}`)
}
```

## Context Packs
Context packs are pre-computed bundles of structured data for specific use cases. They're designed for scenarios where an agent needs a consistent, curated payload rather than assembling one from search results at query time.
```typescript
// Create or update a context pack
await client.contextPacks.upsert({
  packKey: 'rate-limiting-bootstrap',
  useCase: 'agent_bootstrap',
  category: 'infrastructure',
  topic: 'rate_limiting',
  role: 'backend_engineer',
  payload: {
    summary: 'Rate limiting patterns and implementation guidance',
    keyPoints: ['Token bucket is preferred for...', 'Always set a global fallback...'],
    terminology: { /* ... */ },
  },
  sourceVersionIds: [versionId], // links to document versions
})

// Retrieve by use case and filters
const pack = await client.contextPacks.get({
  useCase: 'agent_bootstrap',
  category: 'infrastructure',
  topic: 'rate_limiting',
  role: 'backend_engineer',
})

// Returns the payload object, or null if no match
```

Context packs are linked to document versions via `sourceVersionIds`. When a document version is superseded (a new version is published), the pack's association with the old version is removed. This is how packs stay fresh: stale associations signal that a pack needs to be regenerated.
## API Key Management
Tenants can manage their own API keys through the admin namespace.
```typescript
// Create a new key
const key = await client.admin.createKey({
  name: 'production',
  environment: 'live', // 'live' or 'test'
  scopes: [], // reserved for future use
})
console.log(key.plainKey) // wk_live_... — shown only once

// List keys
const keys = await client.admin.listKeys()
for (const k of keys) {
  console.log(`${k.name} (${k.keyPrefix}...) — last used: ${k.lastUsedAt}`)
}

// Revoke a key
await client.admin.revokeKey(keyId)
```

API keys use the format `wk_{environment}_{random}`. The plain key is shown only at creation time; the server stores a SHA-256 hash.
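A server-side scheme along these lines would match the format and hashing described above. This is an illustrative sketch, not wikai's actual implementation; the random-part length is an assumption:

```typescript
import { createHash, randomBytes } from 'node:crypto'

// Generate a wk_{environment}_{random} key and the SHA-256 hash that would
// be stored. Only keyHash is persisted; plainKey is shown once to the caller.
function generateKey(environment: 'live' | 'test') {
  const plainKey = `wk_${environment}_${randomBytes(24).toString('base64url')}`
  const keyHash = createHash('sha256').update(plainKey).digest('hex')
  return { plainKey, keyHash }
}
```

Verifying a presented key then reduces to hashing it and comparing against the stored digest, so a database leak never exposes usable keys.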
## Error Handling
All errors extend `WikiaiError`:
```typescript
import { WikiaiError, NotFoundError, ValidationError,
  ConflictError, AuthenticationError, RateLimitError } from 'wikai'

try {
  await client.entities.get('concept', 'nonexistent')
} catch (err) {
  if (err instanceof NotFoundError) {
    console.log('Not found:', err.message)
  }
  if (err instanceof ValidationError) {
    console.log('Invalid input:', err.fields)
  }
  if (err instanceof RateLimitError) {
    console.log('Retry after:', err.retryAfter, 'seconds')
  }
}
```

| Error | HTTP Status | When |
|-------|-------------|------|
| ValidationError | 400 | Malformed input, schema violation |
| AuthenticationError | 401 | Missing or invalid API key |
| NotFoundError | 404 | Entity, document, or resource not found |
| ConflictError | 409 | Duplicate key, concurrent modification |
| RateLimitError | 429 | Too many requests (retried automatically) |
| WikiaiError | 5xx | Server error (retried automatically) |
The `fields` property on `ValidationError`, `NotFoundError`, and `ConflictError` provides per-field error details when available.
## Workflows
### Seed -> Search -> Guidance
The typical integration flow:
1. Define your domain schema on the server: entity types, edge types, search config.
2. Seed the graph with your domain knowledge via `graph.seed()`.
3. Import documents with evidence, sources, and claims.
4. Search the graph or evidence when your agent needs context.
5. Query guidance for recommendations with full provenance.
### Document ingestion pipeline
A complete document ingestion:
```typescript
// 1. Import the document
const { documentId, versionId } = await client.documents.import({
  slug: 'rate-limiting-best-practices',
  title: 'Rate Limiting Best Practices',
  layer: 'guide',
  originType: 'external',
  bodyMarkdown: markdown,
  bodyHtml: html,
  plainText: text,
  sourceChecksum: computeHash(html), // your own content-hash helper
  visibility: 'public',
})

// 2. Register sources
const source = await client.sources.upsert({
  sourceType: 'industry_code',
  sourceQuality: 'official',
  publisher: 'IETF',
  title: 'RFC 6585',
  url: 'https://www.rfc-editor.org/rfc/rfc6585',
  citationText: 'RFC 6585 (2012)',
})

// 3. Extract claims
const claim = await client.claims.create({
  documentVersionId: versionId,
  claimType: 'recommendation',
  statement: 'Use 429 status codes with Retry-After headers for rate-limited responses',
  normalizedKey: 'rate_limit_429_retry_after',
  confidence: 'high',
  applicability: { area: ['infrastructure'] },
  sourceIds: [source.id],
})

// 4. Ingest chunks for evidence search
await client.chunks.ingest(documentId, versionId, {
  plainText: text,
})

// 5. Approve and publish
await client.documents.approveVersion(documentId, versionId)
await client.documents.publishVersion(documentId, versionId)

// 6. Build a context pack
await client.contextPacks.upsert({
  packKey: 'rate-limiting-bootstrap',
  useCase: 'agent_bootstrap',
  category: 'rate_limiting',
  role: 'backend_engineer',
  payload: { summary: '...', keyFacts: ['...'] },
  sourceVersionIds: [versionId],
})
```

### Agent query pattern
How an AI agent typically queries wikai at runtime:
```typescript
async function getAgentContext(userQuery: string, vertical: string) {
  // 1. Search the domain graph for relevant concepts
  const graphResults = await client.graph.search({
    query: userQuery,
    filters: { vertical },
    limit: 5,
  })

  // 2. Get guidance for the top hits' primitives
  const primitives = graphResults.hits
    .flatMap(h => h.linkedEntities)
    .filter(e => e.entityType === 'primitive')
    .map(e => e.entityKey)

  const guidance = await client.guidance.recommendations({
    vertical,
    primitives,
    minConfidence: 'medium',
  })

  // 3. Search evidence for supporting detail
  const evidence = await client.evidence.search({
    query: userQuery,
    visibility: 'agent', // agent sees public + agent records
    limit: 3,
  })

  // 4. Check for a pre-built context pack
  const pack = await client.contextPacks.get({
    useCase: 'agent_bootstrap',
    category: primitives[0],
  })

  return { graphResults, guidance, evidence, contextPack: pack }
}
```

## Types
All input and output types are available via a dedicated export:
```typescript
import type {
  SeedInput, SeedResult,
  SearchInput, SearchResult, SearchHit,
  InspectInput,
  UpsertEntityInput, SetAliasesInput, CreateEdgeInput,
  CreateDocumentInput, ImportDocumentInput, AddVersionInput,
  DocumentQueryOptions, DocumentListOptions,
  CreateSourceInput, CreateClaimInput, DetectConflictsInput,
  IngestChunksInput,
  UpsertContextPackInput, ContextPackQuery,
  CreateDocumentLinkInput,
  EvidenceSearchInput, PublicSearchInput,
  RecommendationInput, PrimitiveGuidanceInput,
  ClaimSourcesInput, TerminologyInput,
  CreateApiKeyInput, ApiKeyResponse, ApiKeyListItem,
} from 'wikai/types'
```