@systima/aiact-docs (v0.1.0)
Annex IV technical documentation generator for AI systems. Scans your codebase, asks what it cannot infer, and produces the structured documentation required by EU AI Act Article 11.
- Codebase scanning: auto-detects AI frameworks, model identifiers, architecture patterns, and infrastructure
- Interactive questionnaire: fills in what code analysis cannot infer (intended purpose, risk management, human oversight)
- CI/CD-friendly: non-interactive mode with YAML/JSON config for pipeline integration
- Multiple output formats: Markdown, JSON, PDF
- Gap analysis: reports what is documented, what is missing, and what to prioritise
- Schema mapped to Annex IV: every field is annotated with the Annex IV section it relates to
Installation

npm install @systima/aiact-docs

Important: this package provides the technical documentation generation capability required by Article 11. It is necessary infrastructure for compliance, not sufficient compliance in itself. See From Documentation to Compliance and COMPLIANCE.md.
Quick Start
Scan a codebase
npx @systima/aiact-docs scan --dir ./my-ai-project

Detects AI frameworks (Vercel AI SDK, Mastra, LangChain, OpenAI, Anthropic, and more), model identifiers in source code, architecture patterns (RAG, agents, multi-agent, streaming, function-calling), and Systima compliance packages.
Generate full documentation
npx @systima/aiact-docs generate --system-id loan-scorer-v2 --dir ./my-ai-project

Scans the codebase, runs the interactive questionnaire for information that cannot be auto-detected, and writes Annex IV documentation to ./annex-iv-docs/.
Non-interactive mode (CI/CD)
npx @systima/aiact-docs generate \
--system-id loan-scorer-v2 \
--dir ./my-ai-project \
--config ./annex-iv-config.yaml \
--non-interactive

Reads questionnaire answers from a YAML or JSON config file instead of prompting interactively. Suitable for automated pipelines.
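As an illustrative sketch, a questionnaire config might look like the following. The key names below are hypothetical, not the package's documented schema — consult the generated documentation schema or COMPLIANCE.md for the real field names:

```yaml
# annex-iv-config.yaml — illustrative only; all keys are hypothetical
intended_purpose: "Credit scoring for consumer loan applications"
target_users: "Internal credit officers"
human_oversight: "Scores below threshold are routed to manual review"
risk_methodology: "ISO 31000-based assessment, reviewed quarterly"
```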
Gap analysis only
npx @systima/aiact-docs gap-analysis --dir ./my-ai-project

Reports documentation completeness without generating the full document. Shows which Annex IV sections have gaps, their severity, and what actions are needed.
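A sketch of pipeline integration: the workflow below runs the gap analysis on every pull request using only documented flags. The workflow file and job names are illustrative, and exit-code behaviour on detected gaps is not documented here, so this surfaces the report in CI logs rather than gating the build:

```yaml
# .github/workflows/annex-iv-gaps.yml — illustrative GitHub Actions job
name: annex-iv-gap-check
on: [pull_request]
jobs:
  gap-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npx @systima/aiact-docs gap-analysis --dir . --format json
```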
Programmatic API
import { scanCodebase, runQuestionnaire, generateDocumentation, analyseGaps } from '@systima/aiact-docs'
// Step 1: scan
const scan = await scanCodebase({ directory: './my-ai-project' })
// Step 2: interactive questionnaire (or load config for CI/CD)
const answers = await runQuestionnaire({ scanResult: scan })
// Step 3: generate
await generateDocumentation({
systemId: 'loan-scorer-v2',
scanResult: scan,
questionnaireResult: answers,
outputDirectory: './annex-iv-docs',
formats: ['markdown', 'json'],
})
// Or just run gap analysis
const gaps = analyseGaps(document) // document: an AnnexIVDocument produced by the generator

CLI Reference
All commands accept `--dir` to specify the target codebase (defaults to `.`).
scan
Scan a codebase for AI framework usage, models, and architecture patterns.
npx @systima/aiact-docs scan [options]
Options:
--dir Directory to scan (default: ".")
--format Output format: json, table (default: "table")
--include-dev Include devDependencies in detection (default: false)

generate
Scan, run questionnaire, and generate Annex IV documentation.
npx @systima/aiact-docs generate [options]
Options:
--dir Directory to scan (default: ".")
--system-id System identifier (required)
--output Output directory (default: "./annex-iv-docs")
--format Output format: markdown, json, pdf, all (default: "all")
--config Path to questionnaire config (YAML/JSON)
--non-interactive Skip interactive questionnaire; requires --config

gap-analysis
Scan and report documentation gaps without generating full docs.
npx @systima/aiact-docs gap-analysis [options]
Options:
--dir Directory to scan (default: ".")
--format Output format: json, table (default: "table")

What Gets Auto-Detected
| Category | Examples |
|---|---|
| AI SDKs | Vercel AI SDK, Mastra, LangChain |
| Model providers | OpenAI, Anthropic, Google Generative AI, Hugging Face |
| ML frameworks | TensorFlow.js |
| Vector stores | Pinecone, ChromaDB, Qdrant, Weaviate |
| Architecture patterns | RAG, agents, multi-agent, middleware, streaming, function-calling, fine-tuning, embeddings |
| Model identifiers | gpt-4o, claude-sonnet-4-5-20250929, gemini-2.5-pro, etc. in source files |
| Compliance infrastructure | @systima/aiact-audit-log, @systima/llm-bias-test |
| Quality tooling | CI/CD configs, test frameworks, linting, code review tools |
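Detection draws on dependency manifests as well as source files (the `--include-dev` flag controls whether devDependencies count). For example, a project whose package.json includes dependencies like the following (version ranges illustrative) would be flagged for the Vercel AI SDK, OpenAI, LangChain, and Pinecone:

```json
{
  "dependencies": {
    "ai": "^4.0.0",
    "@ai-sdk/openai": "^1.0.0",
    "langchain": "^0.3.0",
    "@pinecone-database/pinecone": "^4.0.0"
  }
}
```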
What Requires Human Input
The questionnaire covers information that cannot be inferred from code:
- Section 1: Intended purpose, use cases, target users, deployment geography
- Section 2: Training data description, model selection rationale, design choices
- Section 3: Monitoring KPIs, alert thresholds, human oversight procedures
- Section 4: Risk identification methodology, risk assessment, mitigation measures
- Section 5: Change management procedures, update triggers
- Section 6: Applicable harmonised standards, conformity assessment approach
- Section 7: Incident reporting procedures, corrective action procedures
- Section 8: Bias assessment methodology, fairness metrics
- Section 9: QMS scope, organisational responsibilities, audit schedule
Annex IV Schema
The output follows the 9-section structure defined in EU AI Act Annex IV (Regulation (EU) 2024/1689). Every field tracks its source (auto-detected, questionnaire, or missing) and confidence level.
| Section | Title |
|---|---|
| 1 | General description of the AI system |
| 2 | Detailed description of elements and development process |
| 3 | Monitoring, functioning, and control |
| 4 | Risk management system |
| 5 | Changes throughout the lifecycle |
| 6 | Harmonised standards, common specifications, or other means |
| 7 | Post-market monitoring system |
| 8 | Assessment of possible discriminatory impacts |
| 9 | Quality management system description |
Output Formats
- Markdown (`annex-iv-documentation.md`): human-readable document with all 9 sections, suitable for review and version control
- JSON (`annex-iv-documentation.json`): machine-readable `AnnexIVDocument` object with schema version, source indicators, and confidence scores
- PDF (`annex-iv-documentation.pdf`): formatted document suitable for submission to notified bodies or regulatory authorities
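The JSON output's exact shape is versioned by the package and not reproduced here. As a sketch of how the per-field source indicators could be consumed downstream — with entirely hypothetical type and property names, not the package's real schema:

```typescript
// Hypothetical shape: each Annex IV field records its source and confidence.
type FieldSource = 'auto-detected' | 'questionnaire' | 'missing';

interface AnnexField {
  section: number;    // Annex IV section (1-9)
  name: string;
  source: FieldSource;
  confidence: number; // 0 to 1
}

// Toy document excerpt (illustrative values only).
const fields: AnnexField[] = [
  { section: 1, name: 'intendedPurpose', source: 'questionnaire', confidence: 1 },
  { section: 2, name: 'modelIdentifiers', source: 'auto-detected', confidence: 0.9 },
  { section: 4, name: 'riskAssessment', source: 'missing', confidence: 0 },
];

// Surface gaps: any field whose source is 'missing' still needs human input.
const gaps = fields
  .filter((f) => f.source === 'missing')
  .map((f) => `Section ${f.section}: ${f.name}`);
console.log(gaps); // → [ 'Section 4: riskAssessment' ]
```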
From Documentation to Compliance
This package generates the technical documentation required by Article 11 and structured per Annex IV. Full EU AI Act compliance for a high-risk system also requires:
- Risk management system (Article 9): defining risk criteria, conducting assessments, and implementing mitigations. The documentation captures what you report; it does not perform the risk assessment itself.
- Data governance (Article 10): data quality, representativeness, and bias considerations for training and validation datasets.
- Automatic logging (Article 12): structured, tamper-evident audit logging. See `@systima/aiact-audit-log` for a ready-made solution.
- Human oversight (Article 14): designing mechanisms for human review, override, and intervention. The documentation records your oversight design; it does not implement the mechanisms.
- Post-market monitoring (Article 72): defining monitoring procedures, KPIs, and escalation paths.
- Conformity assessment (Articles 40-49): the complete assessment process, which may involve a notified body depending on the system's risk classification.
The generated documentation is a starting point that requires expert review; it is not a finished compliance deliverable.
For a compliance assessment of your specific system, visit systima.ai.
Requirements
- Node.js >= 18
