@iflow-mcp/samrajtheailyceum-ai-governance-mcp
v2.0.0
MCP server for AI governance laws, regulations, and frameworks
AI Governance MCP
A Model Context Protocol (MCP) server that gives any AI assistant real-time access to AI governance laws, regulations, and policy frameworks from around the world.
Compatible with Claude, ChatGPT, Gemini, Copilot, Cursor, Windsurf, and any MCP-compatible client. Runs locally (stdio) or as a hosted server (HTTP/SSE).
GitHub: https://github.com/Samrajtheailyceum/ai-governance-mcp
Quick Navigation
- 🚀 Getting Started
- ⚙️ Configuration
- 🛠️ Available Tools
- 📚 Reference
- 🏗️ Architecture
- 🧪 Testing
- 🔧 Troubleshooting
- 🤝 Contributing
- 🔒 Repository Quality & Governance
- ❓ FAQ
- 📬 Contact & Consulting
- 📄 License
🚀 Getting Started
⚡ One-Command Install (macOS / Linux)
bash <(curl -fsSL https://raw.githubusercontent.com/Samrajtheailyceum/ai-governance-mcp/main/scripts/install.sh)
This script:
- Checks Node.js 18+, npm, and git are installed
- Clones the repo to ~/ai-governance-mcp (or uses an existing clone)
- Runs npm install
- Runs the smoke test to confirm the server is healthy
- Prints ready-to-paste config snippets for Claude Desktop, Claude Code, Cursor, Windsurf, and HTTP/SSE mode
Options:
| Flag | Description |
|------|-------------|
| --mode stdio | Install then launch server in stdio mode |
| --mode sse | Install then launch server in HTTP/SSE mode on port 3100 |
| --mode skip | Install only, don't start the server (default) |
| --dir <path> | Install to a custom directory (default: ~/ai-governance-mcp) |
| --no-test | Skip the post-install smoke test |
Example — install and immediately start in SSE mode:
bash <(curl -fsSL https://raw.githubusercontent.com/Samrajtheailyceum/ai-governance-mcp/main/scripts/install.sh) --mode sse
Option 1: Use the Hosted Server (no setup needed)
A public demo server may be available — check the latest endpoint in Releases or the Discussions tab, as hosted URLs can change. The most recently published endpoint:
https://billing-connecting-aquatic-performs.trycloudflare.com/sse
Note: This is an ephemeral Cloudflare tunnel URL and may be offline. For a stable endpoint, deploy your own instance.
Health check: https://billing-connecting-aquatic-performs.trycloudflare.com/health
Option 2: Deploy Your Own (one click)
After deploying, your server URL will be something like:
https://your-app-name.onrender.com/sse
Use that URL as your MCP server endpoint on any platform.
Option 3: Manual Setup (local)
Prerequisites
- Node.js 18+ (check with node --version)
- npm (comes with Node)
- git (to clone the repo)
Step 1: Clone and Install
git clone https://github.com/Samrajtheailyceum/ai-governance-mcp.git
cd ai-governance-mcp
npm install
Step 2: Verify It Works
# Run the test suite (passes with or without internet — offline fallback built-in)
npm test
# Quick health check in HTTP mode
PORT=3100 node src/index.js &
curl http://localhost:3100/health
# Should return: {"status":"ok","server":"ai-governance-mcp","version":"2.0.0"}
kill %1
# One-command terminal smoke test (starts server, checks /health, validates version)
npm run test:terminal
OpenAI / ChatGPT Terminal Test Flow
If you are testing from an OpenAI-compatible terminal workflow, use this minimal sequence:
# 1) Install + baseline tests
npm install
npm test
# 2) Start server in SSE mode for MCP clients
npm run start:sse
# endpoint: http://localhost:3100/sse
# health: http://localhost:3100/health
Then in your MCP client, run prompts like:
- search_ai_governance with query="foundation model transparency requirements"
- get_latest_ai_governance_updates with region="all"
- get_applied_ai_governance_frameworks with use_case="AI hiring assistant for EU market"
If live sources are blocked/rate-limited, the server now returns a limits-aware response with trusted generic regulatory URLs so users still get actionable resources.
Step 3: Choose Your Mode
Option A: Local (stdio) — for Claude Desktop, Claude Code, Cursor, Windsurf
npm start
# Server runs on stdin/stdout — connect via your platform's MCP config
Then add to your platform's config (see Platform Config Reference below).
Option B: Remote (HTTP/SSE) — for OpenAI, ChatGPT, platform connectors, team use
npm run start:sse
# Server runs on http://localhost:3100
Server URL: http://localhost:3100/sse
Health check: http://localhost:3100/health
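The /health response documented above can also be checked from a script. A minimal Node sketch follows; the parseHealth helper is hypothetical, written here only to illustrate the documented payload shape:

```javascript
// Validate the JSON shape returned by GET /health.
// parseHealth is a hypothetical helper for illustration, not part of the server.
function parseHealth(body) {
  const data = JSON.parse(body);
  if (data.status !== "ok") throw new Error(`unhealthy status: ${data.status}`);
  return { server: data.server, version: data.version };
}

// Sample payload as documented by the README's health-check step.
const sample = '{"status":"ok","server":"ai-governance-mcp","version":"2.0.0"}';
const health = parseHealth(sample);
console.log(`${health.server} v${health.version}`); // prints: ai-governance-mcp v2.0.0
```

In practice you would feed parseHealth the body of a fetch("http://localhost:3100/health") call against a running server.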
Option C: Docker
docker build -t ai-governance-mcp .
docker run -p 3100:3100 ai-governance-mcp
Server URL: http://localhost:3100/sse
Deploy to any hosting provider (Railway, Render, Fly.io, etc.) and use that URL instead.
One-Click AI-Assisted Install
Don't want to configure anything manually? Just copy the prompt for your platform below and paste it into your AI assistant. It will handle the installation for you.
For Claude Code (CLI)
Paste this into Claude Code:
Install the AI Governance MCP server from https://github.com/Samrajtheailyceum/ai-governance-mcp for me.
Steps:
1. Clone the repo: git clone https://github.com/Samrajtheailyceum/ai-governance-mcp.git ~/ai-governance-mcp
2. Run: cd ~/ai-governance-mcp && npm install
3. Add the MCP server: claude mcp add ai-governance node ~/ai-governance-mcp/src/index.js
4. Confirm it's added by running: claude mcp list
For Cursor (with AI chat)
Paste this into Cursor's AI chat:
Help me install the AI Governance MCP server. Here's what to do:
1. Open a terminal and run:
git clone https://github.com/Samrajtheailyceum/ai-governance-mcp.git ~/ai-governance-mcp
cd ~/ai-governance-mcp && npm install
2. Then add this to my MCP config file (.cursor/mcp.json):
{
"mcpServers": {
"ai-governance": {
"command": "node",
"args": ["~/ai-governance-mcp/src/index.js"]
}
}
}
3. Tell me to restart Cursor to activate it.
For Windsurf (with AI chat)
Paste this into Windsurf's AI chat:
Help me install the AI Governance MCP server. Here's what to do:
1. Open a terminal and run:
git clone https://github.com/Samrajtheailyceum/ai-governance-mcp.git ~/ai-governance-mcp
cd ~/ai-governance-mcp && npm install
2. Then add this to my Windsurf MCP config (~/.codeium/windsurf/mcp_config.json):
{
"mcpServers": {
"ai-governance": {
"command": "node",
"args": ["~/ai-governance-mcp/src/index.js"]
}
}
}
3. Tell me to restart Windsurf to activate it.
For ChatGPT / OpenAI (needs hosted server)
Paste this into ChatGPT or any OpenAI-powered tool:
I want to connect to the AI Governance MCP server.
The server repo is at: https://github.com/Samrajtheailyceum/ai-governance-mcp
To use it with OpenAI, I need to:
1. Clone and install: git clone https://github.com/Samrajtheailyceum/ai-governance-mcp.git && cd ai-governance-mcp && npm install
2. Start in HTTP/SSE mode: npm run start:sse
3. The server will be available at: http://localhost:3100/sse
4. For production, deploy to Railway/Render and use the public URL as the MCP endpoint.
Help me set this up step by step.
For Claude Desktop (manual config)
Paste this into Claude Desktop or Claude Code to get help setting it up:
Help me add the AI Governance MCP server to my Claude Desktop config.
1. First clone and install:
git clone https://github.com/Samrajtheailyceum/ai-governance-mcp.git ~/ai-governance-mcp
cd ~/ai-governance-mcp && npm install
2. Then edit my claude_desktop_config.json and add this to the mcpServers section:
"ai-governance": {
"command": "node",
"args": ["/Users/YOUR_USERNAME/ai-governance-mcp/src/index.js"]
}
Config location:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
3. Remind me to restart Claude Desktop after.
For Any Other MCP-Compatible Platform
Generic prompt you can paste into any AI assistant:
I want to install the AI Governance MCP server from https://github.com/Samrajtheailyceum/ai-governance-mcp
It's a standard MCP server that runs over stdio (default) or HTTP/SSE (with PORT env var).
Please help me:
1. Clone the repo and run npm install
2. Configure it for whatever MCP client/platform I'm using
3. The entry point is src/index.js
4. For HTTP/SSE mode, run with PORT=3100 and connect to http://localhost:3100/sse
⚙️ Configuration
AI Platform Support
| Platform | Typical MCP Mode | Notes |
|----------|------------------|-------|
| ChatGPT / OpenAI | HTTP/SSE | Use hosted endpoint or npm run start:sse. |
| Claude (Desktop / Code) | stdio or HTTP/SSE | Great for local stdio integration. |
| Gemini | HTTP/SSE | Use the public /sse URL for remote connectors. |
| GitHub Copilot | HTTP/SSE | Connect as remote MCP endpoint. |
| Cursor | stdio | Configure .cursor/mcp.json. |
| Windsurf | stdio | Configure mcp_config.json. |
Platform Config Reference
Claude Desktop
Add to your claude_desktop_config.json:
{
"mcpServers": {
"ai-governance": {
"command": "node",
"args": ["/absolute/path/to/ai-governance-mcp/src/index.js"]
}
}
}
Config file locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Important: Restart Claude Desktop after editing the config.
Claude Code (CLI)
claude mcp add ai-governance node /absolute/path/to/ai-governance-mcp/src/index.js
Cursor
Add to .cursor/mcp.json (project) or ~/.cursor/mcp.json (global):
{
"mcpServers": {
"ai-governance": {
"command": "node",
"args": ["/absolute/path/to/ai-governance-mcp/src/index.js"]
}
}
}
Windsurf
Add to ~/.codeium/windsurf/mcp_config.json:
{
"mcpServers": {
"ai-governance": {
"command": "node",
"args": ["/absolute/path/to/ai-governance-mcp/src/index.js"]
}
}
}
OpenAI / ChatGPT / Assistants API
Start the server in HTTP/SSE mode:
npm run start:sse
# or: PORT=3100 node src/index.js
MCP server endpoint: http://localhost:3100/sse
For production, deploy the server and use the deployed URL.
Any MCP-Compatible Client
stdio mode (default):
node /absolute/path/to/ai-governance-mcp/src/index.js
HTTP/SSE mode:
PORT=3100 node /absolute/path/to/ai-governance-mcp/src/index.js
Connect to http://localhost:3100/sse using the MCP SSE transport.
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| PORT | (none) | Set to enable HTTP/SSE mode (e.g. 3100). When unset, server runs in stdio mode. |
| NODE_ENV | development | Set to production for deployed instances. |
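The transport switch driven by PORT can be sketched as follows (an illustrative simplification, not the actual dispatch code in src/index.js):

```javascript
// Illustrative sketch: how a PORT env var could select the transport mode.
// The real logic lives in src/index.js; this mirrors the documented behavior.
function selectMode(env) {
  return env.PORT
    ? { mode: "http-sse", port: Number(env.PORT) } // PORT set -> HTTP/SSE
    : { mode: "stdio" };                           // PORT unset -> stdio
}

console.log(selectMode({}).mode);               // prints: stdio
console.log(selectMode({ PORT: "3100" }).mode); // prints: http-sse
```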
Examples:
# stdio mode (default — no PORT set)
node src/index.js
# HTTP/SSE mode on port 3100
PORT=3100 node src/index.js
# Custom port
PORT=8080 node src/index.js
npm Scripts
| Script | Command | What It Does |
|--------|---------|-------------|
| npm run install:local | bash scripts/install.sh | Full local installer with prereq checks, smoke test, and config snippets |
| npm start | node src/index.js | Start in stdio mode (for MCP clients) |
| npm run start:sse | PORT=3100 node src/index.js | Start in HTTP/SSE mode on port 3100 |
| npm test | node test/client.js | Run the full test suite against live APIs |
| npm run test:terminal | bash scripts/terminal-smoke.sh | Start server and validate /health + version in one command |
🛠️ Available Tools
| Tool | Description |
|------|-------------|
| search_ai_governance | Full-text search across all databases, with focus support for sustainability |
| get_latest_ai_governance_updates | Latest updates from RSS feeds, with automatic non-RSS fallback |
| get_sustainability_ai_regulatory_briefing | Sustainability-focused AI and disclosure regulation briefing |
| get_applied_ai_governance_frameworks | Applies real frameworks to a use case with context, implementation checklist, and links |
| get_key_ai_governance_documents | Curated list of landmark documents |
| get_eu_ai_act_info | EU AI Act deep dive with topic search |
| get_us_ai_policy | US policy landscape with Federal Register search |
| get_global_ai_frameworks | OECD, G7, UN, UNESCO, Bletchley and more |
| fetch_governance_document | Fetch and extract text from any document URL |
| compare_ai_governance_frameworks | Side-by-side comparison on a specific topic |
| submit_mcp_feedback | Capture structured user feedback (rating + message) for maintainers; logs to logs/mcp-feedback.jsonl |
Most user-facing tools now include a Response Protocol (Professional) preface so downstream LLMs use the retrieved context correctly and state assumptions/uncertainty explicitly.
If live sources are unavailable or a question is out-of-scope for current retrievable data, the MCP now returns a clear limitations notice plus trusted generic resource URLs (OECD, EUR-Lex, Federal Register, NIST, UNESCO, IFRS) so users still get actionable next steps.
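The limits-aware fallback described above can be sketched like this. Field names and the URL list here are assumptions for illustration, not the server's actual response schema:

```javascript
// Illustrative sketch of a limits-aware fallback response.
// Field names and URLs are assumptions, not the server's real schema.
const FALLBACK_RESOURCES = [
  "https://oecd.ai/en/ai-principles",
  "https://eur-lex.europa.eu",
  "https://www.federalregister.gov",
];

function buildResponse(liveResults) {
  if (liveResults && liveResults.length > 0) {
    return { results: liveResults }; // normal path: live data available
  }
  return {
    notice: "Live sources unavailable or rate-limited.",
    resources: FALLBACK_RESOURCES, // trusted generic starting points
  };
}

console.log(buildResponse([]).notice); // prints: Live sources unavailable or rate-limited.
```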
📚 Reference
Data Sources
| Region | Source | What's Covered |
|--------|--------|----------------|
| EU | EUR-Lex API + RSS | EU AI Act, GDPR, CSRD, CSDDD, AI regulations |
| US | Federal Register API, GovInfo | Executive orders, federal agency rules, AI bills, climate disclosure activity |
| Global | OECD, G7, UNESCO, UN, ISSB | International frameworks and principles |
| News | Stanford HAI, AI Now, FLI, ESG Today | Research, policy, and sustainability news |
Core Regulatory Reference Matrix
| Reference | Region | Why it matters | Link |
|-----------|--------|----------------|------|
| EU AI Act (2024/1689) | EU | Binding AI risk obligations, GPAI duties, penalties | https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 |
| GDPR | EU | Data protection/legal basis controls for AI systems | https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679 |
| CSRD | EU | Sustainability reporting obligations and governance evidence expectations | https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32022L2464 |
| CSDDD | EU | Supply-chain due diligence duties relevant to AI-enabled operations | https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024L1760 |
| EO 14179 (2025) | US | Current federal executive direction for AI policy | https://www.federalregister.gov/documents/2025/01/23/2025-01953/removing-barriers-to-american-leadership-in-artificial-intelligence |
| NIST AI RMF 1.0 | US | Practical governance lifecycle (Govern, Map, Measure, Manage) | https://airc.nist.gov/RMF |
| SEC Climate Rule | US | Climate-related disclosure governance and controls context | https://www.sec.gov/rules-regulations/2024/03/enhancement-standardization-climate-related-disclosures-investors |
| OECD AI Principles | Global | International baseline principles adopted across countries | https://oecd.ai/en/ai-principles |
| UNESCO AI Ethics Recommendation | Global | Human-rights and ethics guardrails for AI policy and deployment | https://unesdoc.unesco.org/ark:/48223/pf0000381137 |
| ISSB IFRS S1/S2 | Global | Sustainability disclosure standards used in cross-border governance | https://www.ifrs.org/issued-standards/ifrs-sustainability-standards-navigator/ |
Example Prompts
Once connected to any AI assistant, you can ask:
- "What are the latest AI governance updates from the EU?"
- "Search for AI liability regulations"
- "Compare how the EU and US handle foundation model requirements"
- "Give me a summary of the EU AI Act's prohibited practices"
- "Fetch the NIST AI Risk Management Framework"
- "What US executive orders on AI are currently active?"
Feedback Loop
The MCP can collect user feedback through submit_mcp_feedback (rating + context + message).
This helps maintainers improve future versions, but does not auto-train the model in real-time.
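Because the feedback log is JSONL (one JSON object per line), a submission might serialize like this. The field names here are an assumption for illustration, not the server's actual schema:

```javascript
// Sketch: serialize one feedback record as a JSON Lines entry.
// Field names (ts, rating, message) are assumptions, not the real schema.
function toJsonlLine(feedback) {
  return JSON.stringify({
    ts: feedback.ts,          // ISO timestamp
    rating: feedback.rating,  // e.g. 1-5
    message: feedback.message,
  }) + "\n";                  // JSONL: exactly one object per line
}

const line = toJsonlLine({
  ts: "2025-01-01T00:00:00Z",
  rating: 5,
  message: "Useful EU AI Act summaries",
});
process.stdout.write(line);
```

Appending such lines with fs.appendFileSync keeps the log readable one record at a time without parsing the whole file.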
🏗️ Architecture
ai-governance-mcp/
├── src/
│ ├── index.js — MCP server (stdio + HTTP/SSE), all tool definitions
│ ├── fetcher.js — Data fetching (EUR-Lex, Fed Register, RSS, web scraping)
│ ├── sources.js — Source config (API URLs, key documents, RSS feeds)
│ ├── cache.js — LRU in-memory cache (30-min TTL)
│ ├── logger.js — File-based logger utility
│ ├── middleware/
│ │ └── rateLimiter.js — Express rate-limiting middleware (optional, not wired by default)
│ └── tools/
│ └── updates.js — Tool handler stubs (work in progress)
├── test/
│ └── client.js — End-to-end test suite (27 checks, works offline)
├── scripts/
│ ├── install.sh — One-command installer (prereq checks, clone, npm install, smoke test, config snippets)
│ └── terminal-smoke.sh — HTTP health-check smoke test
├── Dockerfile — Docker deployment config
├── render.yaml — Render.com one-click deploy config
├── CONTRIBUTING.md — Contributor and maintainer workflow
├── SECURITY.md — Responsible disclosure policy
└── package.json — Dependencies and npm scripts
System Architecture Diagram (L1 — System Context)
flowchart TB
subgraph L1["LAYER 1 — DATA SOURCES"]
direction LR
DS1[("1A\nEUR-Lex\nEU Regulations")]
DS2[("1B\nFederal Register\n& GovInfo\nUS Regulations")]
DS3[("1C\nOECD · UNESCO · G7\nGlobal Frameworks")]
DS4[("1D\nRSS News Feeds\nAI & Sustainability News")]
end
subgraph L2["LAYER 2 — DATA ACQUISITION"]
direction LR
AQ["2\nData Fetcher\nAPI · Scraper · RSS Parser"]
end
subgraph L3["LAYER 3 — STORAGE"]
direction LR
ST1[("3A\nIn-Memory Cache\n30-min TTL")]
ST2[("3B\nEmbedded Key Docs\nOffline Fallback")]
end
subgraph L4["LAYER 4 — MCP SERVER"]
direction LR
MCP["4\nAI Governance MCP Server\n11 Governance Tools"]
end
subgraph L5["LAYER 5 — TRANSPORT"]
direction LR
TR1["5A\nStdio Transport\nLocal Mode"]
TR2["5B\nHTTP / SSE Transport\nRemote Mode"]
end
subgraph L6["LAYER 6 — AI CLIENTS"]
direction LR
CL1["6A\nClaude\nDesktop / Code"]
CL2["6B\nCursor /\nWindsurf"]
CL3["6C\nChatGPT / OpenAI /\nGemini / Copilot"]
CL4["6D\nAny MCP-Compatible\nClient"]
end
DS1 & DS2 & DS3 & DS4 -->|fetch| AQ
AQ <-->|cache read/write| ST1
ST2 -.->|offline fallback| AQ
AQ --> MCP
MCP --> TR1 & TR2
TR1 -->|stdio| CL1 & CL2
TR2 -->|HTTP/SSE| CL3 & CL4
classDef datasource fill:#aed6f1,stroke:#2980b9,color:#000
classDef acquisition fill:#a9dfbf,stroke:#27ae60,color:#000
classDef storage fill:#a9dfbf,stroke:#27ae60,color:#000
classDef server fill:#a9dfbf,stroke:#27ae60,color:#000
classDef transport fill:#f9c8a3,stroke:#e67e22,color:#000
classDef client fill:#f9c8a3,stroke:#e67e22,color:#000
class DS1,DS2,DS3,DS4 datasource
class AQ acquisition
class ST1,ST2 storage
class MCP server
class TR1,TR2 transport
class CL1,CL2,CL3,CL4 client
style L1 fill:#d6eaf8,stroke:#2980b9,color:#000
style L2 fill:#d5f5e3,stroke:#27ae60,color:#000
style L3 fill:#d5f5e3,stroke:#27ae60,color:#000
style L4 fill:#d5f5e3,stroke:#27ae60,color:#000
style L5 fill:#fde8d8,stroke:#e67e22,color:#000
style L6 fill:#fde8d8,stroke:#e67e22,color:#000
How It Works
- Client connects via stdio (local) or SSE (remote)
- Client calls a tool (e.g. search_ai_governance with query "AI liability")
- Server fetches data from EUR-Lex, Federal Register, GovInfo, or RSS feeds
- Results are cached in-memory for 30 minutes (avoids rate limits, speeds up repeat queries)
- Server returns formatted markdown results to the client
Caching
All API responses are cached in-memory for 30 minutes. The cache is per-process — restarting the server clears the cache. No external cache (Redis, etc.) is needed.
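The caching behavior described above can be sketched as follows. This is a simplified illustration, not the actual src/cache.js implementation:

```javascript
// Minimal TTL cache sketch — illustrative, not the real src/cache.js.
const TTL_MS = 30 * 60 * 1000; // 30 minutes, matching the documented TTL

class TtlCache {
  constructor() {
    this.store = new Map();
  }
  set(key, value) {
    this.store.set(key, { value, expires: Date.now() + TTL_MS });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // expired — evict and report a miss
      return undefined;
    }
    return entry.value;
  }
}

const cache = new TtlCache();
cache.set("eurlex:ai act", [{ title: "EU AI Act" }]);
console.log(cache.get("eurlex:ai act")[0].title); // prints: EU AI Act
```

Because the Map lives in process memory, restarting the server empties it, which matches the per-process behavior noted above.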
Adding New Sources
- Add the source config in src/sources.js:
// In SOURCES object
myNewSource: {
name: "My Source Name",
region: "EU", // or "US", "Global"
baseUrl: "https://api.example.com",
rssFeeds: [
{ label: "My Feed", url: "https://example.com/feed.xml" }
],
keyDocs: [
{
id: "doc-1",
title: "Important Document",
url: "https://example.com/doc",
date: "2024-01-01",
status: "Active",
type: "Regulation"
}
]
}
- Add fetch logic in src/fetcher.js:
export async function searchMySource(query, maxResults = 10) {
  const cacheKey = `mysource:${query}:${maxResults}`;
  const cached = getCached(cacheKey);
  if (cached) return cached;
  // Fetch from the API and parse into an array of
  // { title, url, date, summary, source, region } objects
  const results = []; // populate from the parsed API response
  setCached(cacheKey, results);
  return results;
}
- Wire it into globalSearch in src/fetcher.js to include it in combined search results.
- Optionally add a dedicated tool in src/index.js using server.tool(...).
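Combined search typically tolerates individual source failures so one flaky API does not break the whole result set. A hypothetical sketch of that merge (globalSearchSketch is illustrative, not the real globalSearch):

```javascript
// Hypothetical sketch of merging per-source fetchers into one result list.
// A rejected fetcher is dropped rather than failing the whole search.
async function globalSearchSketch(query, fetchers) {
  const settled = await Promise.allSettled(fetchers.map(fn => fn(query)));
  return settled
    .filter(r => r.status === "fulfilled")
    .flatMap(r => r.value);
}

// Usage with stubbed fetchers — one healthy, one failing:
const demoFetchers = [
  async q => [{ title: `EUR-Lex result for "${q}"`, region: "EU" }],
  async q => { throw new Error("source down"); }, // tolerated, not fatal
];

globalSearchSketch("AI liability", demoFetchers)
  .then(results => console.log(results.length)); // prints: 1
```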
🧪 Testing
# Full test suite — tests v2.0 data fetchers against live APIs (with offline fallback)
npm test
The test suite checks:
- Source configuration (key docs, RSS feeds, regions)
- Key document retrieval (EU, US, Global)
- Federal Register API search
- EUR-Lex search
- RSS feed aggregation
- Global combined search (all sources)
- Document content fetching (scrapes a live URL)
- Global framework ranking and retrieval quality
- A 20-prompt reliability sweep across sustainability + AI governance topics
- Applied framework guidance includes context, implementation steps, and resource links
Internet access improves results, but all 27 tests pass even offline — the server's built-in offline cache and fallback sources ensure the suite always completes successfully. Network errors will appear in the output when live APIs are unreachable; these are expected and handled.
🔧 Troubleshooting
"Cannot find module" or npm install fails
# Make sure you're in the project directory
cd ai-governance-mcp
# Clear and reinstall
rm -rf node_modules package-lock.json
npm install
Server starts but tools return empty results
The server fetches live data from external APIs (EUR-Lex, Federal Register, etc.). Check:
- You have internet access
- The APIs aren't temporarily down (the server has fallback caches for key documents)
- Run npm test to see which sources are responding
Claude Desktop doesn't show the MCP tools
- Make sure the absolute path in claude_desktop_config.json is correct (no ~ — use full path)
- Restart Claude Desktop after editing the config
- Check the path works: node /your/absolute/path/to/ai-governance-mcp/src/index.js — should print "AI Governance MCP Server running on stdio" to stderr
Port already in use (HTTP/SSE mode)
# Find what's using the port
lsof -i :3100
# Kill it
kill -9 <PID>
# Or use a different port
PORT=3200 node src/index.js
CORS errors when connecting from a browser-based client
The server includes CORS headers that allow all origins (*). If you're behind a reverse proxy, make sure the proxy forwards the CORS headers.
Docker build fails
# Make sure Docker is running, then:
docker build --no-cache -t ai-governance-mcp .
🤝 Contributing
Contributions welcome! Here's how:
- Fork the repo
- Create a branch (git checkout -b feature/my-new-source)
- Make your changes — add sources in sources.js, fetch logic in fetcher.js, tools in index.js
- Test (npm test)
- Open a PR with a clear description of what you added
Ideas for contributions:
- New data sources (UK, China, Canada, Brazil, Singapore AI regulations)
- Additional comparison topics in compare_ai_governance_frameworks
- Structured data extraction (JSON output for specific regulations)
- Webhook/notification support for new regulation alerts
- Authentication support for premium data APIs
🔒 Repository Quality & Governance
This repository includes:
- Contributor workflow: see CONTRIBUTING.md
- Version history: see CHANGELOG.md
- Security policy: see SECURITY.md
- Operational smoke check: npm run test:terminal
Design goals for this MCP:
- High-signal governance answers with source links and jurisdiction context
- Graceful fallback behavior when live endpoints are blocked/rate-limited
- Practical implementation guidance (not just policy summaries)
❓ Frequently Asked Questions
Q: Does this cost anything?
A: No. The server is free and open source. All data sources (EUR-Lex, Federal Register, OECD, RSS feeds) are free public APIs.
Q: How current is the data?
A: Live. Every query hits the actual APIs in real time (with 30-minute caching). RSS feeds pull the latest published items. Key documents are updated in source code as new landmark regulations are published.
Q: Can I use this commercially?
A: Yes. MIT license. The data comes from public government sources.
Q: Does it work offline?
A: Partially. Key documents (EU AI Act, NIST RMF, etc.) are cached in source code and always available. Live search and RSS feeds require internet.
Q: How do I add my own country's regulations?
A: See Adding New Sources above. Add the API/RSS config to sources.js and the fetch logic to fetcher.js.
📬 Questions / AI Governance Consulting
For any questions or tailored AI governance support, email [email protected] or visit https://theailyceum.com.
📄 License
MIT
