# sec-gatekeeper

v1.2.2
Pre-commit security scanner for Node.js backends, powered by AI and a centralized rule server. Catches secrets, injection vulnerabilities, auth issues, business logic flaws, and more — before code reaches your repo.
Built for AWS Lambda + AppSync + DynamoDB + OpenSearch + JWT stacks, but works with any Node.js/TypeScript backend.
## Quick Start

```bash
npm install --save-dev sec-gatekeeper
npx sec-gatekeeper setup
```

That's it. Every git commit now runs a security scan on staged files.
## What It Catches

### Static Analysis (regex + entropy)
- Secrets — API keys, AWS credentials, JWT secrets, passwords, private keys, connection strings, entropy-based detection
- JWT/Auth — `jwt.decode()` without verify, hardcoded secrets, missing expiry, algorithm `none`, missing authorization
- DynamoDB — Expression injection, unfiltered scans, missing `ExpressionAttributeValues`
- OpenSearch — Query injection from user input, `match_all` data leaks
- Lambda/AppSync — Missing auth, raw DB items in responses, logging PII, missing input validation
- Third-Party — SSRF, HTTP instead of HTTPS, hardcoded API credentials
- Injection — `eval()`, `new Function()`, `child_process` with user input
- .env Files — Scans committed `.env` files for secrets
### AI Deep Analysis (optional)
- Business logic flaws that regex can't catch (authorization bypass, privilege escalation, IDOR)
- Cross-file data flow tracking (user input → database query across modules)
- Race conditions in async operations (TOCTOU)
- Validates static analysis findings — confirms true positives, flags false positives
- Finds issues the pattern engine missed
## How It Works
On every commit:
- Extracts staged JS/TS files via `git diff`
- Classifies high-risk files (resolvers, handlers, auth, services)
- Fetches rules from MCP server (if enabled)
- Runs secrets detection (regex + Shannon entropy)
- Runs the rule engine (28+ built-in rules + custom + MCP rules)
- Runs AI deep analysis on high-risk files (if enabled)
- Deduplicates issues (same file + line + rule = single issue)
- Blocks the commit if any HIGH severity issue is found
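The Shannon-entropy part of the secrets step can be sketched as follows. This is an illustrative implementation, not sec-gatekeeper's actual detector; the 20-character minimum and the 4.2 bits/char threshold are assumptions for the sake of the example:

```js
// Illustrative sketch of entropy-based secret detection (not the
// package's real implementation). Shannon entropy in bits per character:
// random API keys score high, ordinary identifiers score low.
function shannonEntropy(str) {
  const counts = {};
  for (const ch of str) counts[ch] = (counts[ch] || 0) + 1;
  let entropy = 0;
  for (const n of Object.values(counts)) {
    const p = n / str.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

// Hypothetical threshold: long, high-entropy tokens look like secrets.
const looksLikeSecret = (token) =>
  token.length >= 20 && shannonEntropy(token) > 4.2;

console.log(looksLikeSecret("getUserByIdFromTable"));     // ordinary identifier
console.log(looksLikeSecret("AKXf9qT2mZ7wLpB3vRd8KsN1")); // random-looking token
```

The `entropyAllowlist` config option (described below) exists precisely because this kind of scoring also fires on benign base64 strings and hashes.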
## Commands

```bash
npx sec-gatekeeper setup      # Install pre-commit hook + generate config
npx sec-gatekeeper uninstall  # Remove the pre-commit hook
npx sec-gatekeeper scan       # Run security scan on staged files (default)
npx sec-gatekeeper serve      # Start the MCP rule server
npx sec-gatekeeper --help     # Show help
npx sec-gatekeeper --version  # Show version
```

### CLI Flags
```bash
npx sec-gatekeeper scan --diff-only     # Only scan changed lines (not entire files)
npx sec-gatekeeper scan --verbose       # Show OWASP/CWE refs + scan timing
npx sec-gatekeeper scan --quiet         # JSON output only
npx sec-gatekeeper scan --format=sarif  # SARIF output for CI integration
npx sec-gatekeeper scan --format=json   # Structured JSON output
npx sec-gatekeeper serve --port=3100    # MCP server on custom port
npx sec-gatekeeper serve --api-key=KEY  # MCP server with auth
```

## Configuration
`setup` creates a `.sec-gatekeeper.json` in your repo root:
```json
{
  "blockOnSeverity": ["HIGH"],
  "warnOnSeverity": ["MEDIUM"],
  "highRiskPaths": ["resolvers/", "handlers/", "lambdas/", "services/", "auth/"],
  "ignorePaths": ["**/*.test.*", "**/__tests__/**", "**/node_modules/**"],
  "scanExtensions": [".js", ".ts", ".jsx", ".tsx", ".mjs", ".mts", ".env"],
  "diffOnly": false,
  "verbose": false,
  "entropyAllowlist": [],
  "customRulesPath": null,
  "ai": {
    "enabled": false,
    "provider": "openai",
    "model": "gpt-4o-mini",
    "maxRetries": 2,
    "timeoutMs": 30000,
    "analyzeAllFiles": false
  },
  "mcp": {
    "enabled": false,
    "serverUrl": "http://localhost:3100",
    "cacheRules": true
  }
}
```

Arrays in your config are merged with defaults (not replaced), so adding a custom `ignorePaths` entry won't lose the built-in ignores.
## AI Analysis
sec-gatekeeper integrates with AI providers for deep security analysis that goes beyond pattern matching. The AI analyzer:
- Receives the file content plus cross-file context (imported modules that are also staged)
- Gets the static analysis findings and validates them (confirms true positives, flags false positives)
- Detects business logic flaws, data flow issues, and race conditions
- Returns structured issues with OWASP/CWE references and confidence ratings
- Auto-filters low-confidence findings to reduce noise
### Supported Providers
| Provider | Config value | Default model | Notes |
|----------|--------------|---------------|-------|
| OpenAI | `openai` | `gpt-4o-mini` | Also works with any OpenAI-compatible API |
| Anthropic | `anthropic` | `claude-sonnet-4-20250514` | Uses Messages API |
| Google Gemini | `gemini` | `gemini-2.0-flash` | Uses Generative Language API |
| Ollama | `ollama` | `llama3` | Local/self-hosted, no API key needed |
| Custom | `custom` | — | Any OpenAI-compatible endpoint |
### Configuration via `.sec-gatekeeper.json`
```json
{
  "ai": {
    "enabled": true,
    "provider": "anthropic",
    "model": "claude-sonnet-4-20250514",
    "apiKey": "sk-ant-...",
    "maxRetries": 2,
    "timeoutMs": 30000,
    "maxFileSize": 15000,
    "analyzeAllFiles": false
  }
}
```

Set `analyzeAllFiles` to `true` to run AI on every file, not just high-risk ones.
### Configuration via environment variables
```bash
SEC_AI_ENABLED=true
SEC_AI_PROVIDER=openai  # openai | anthropic | gemini | ollama | custom
SEC_AI_ENDPOINT=https://api.openai.com/v1/chat/completions
SEC_AI_API_KEY=sk-your-key
SEC_AI_MODEL=gpt-4o-mini
```

### Using Ollama (local, free)
```json
{
  "ai": {
    "enabled": true,
    "provider": "ollama",
    "model": "llama3",
    "endpoint": "http://localhost:11434/api/chat"
  }
}
```

No API key needed — runs entirely on your machine.
### Legacy config
The old `aiEnabled`, `aiEndpoint`, `aiApiKey`, and `aiModel` fields still work for backward compatibility. They are auto-synced into the `ai` config block.
## MCP Rule Server
sec-gatekeeper includes a built-in MCP (Model Context Protocol) rule server for centralized rule management across multiple repos.
### Starting the server
```bash
npx sec-gatekeeper serve                   # Default port 3100
npx sec-gatekeeper serve --port=8080       # Custom port
npx sec-gatekeeper serve --api-key=secret  # With authentication
```

Or via environment variable:

```bash
SEC_MCP_API_KEY=secret npx sec-gatekeeper serve
```

### API Endpoints
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/rules` | All rules with regex patterns, OWASP/CWE metadata |
| GET | `/rules/:category` | Rules filtered by category (auth, db, api, injection, secrets, input) |
| GET | `/rules/id/:id` | Single rule by ID |
| POST | `/rules/validate` | Send `{ "code": "...", "file": "handler.ts" }` to scan code remotely |
| GET | `/health` | Health check |
| GET | `/meta` | Server metadata (version, rule count, categories) |
### Connecting clients to the MCP server

In each repo's `.sec-gatekeeper.json`:
```json
{
  "mcp": {
    "enabled": true,
    "serverUrl": "http://your-server:3100",
    "apiKey": "secret",
    "cacheRules": true
  }
}
```

Or via environment variables:
```bash
SEC_MCP_ENABLED=true
SEC_MCP_URL=http://your-server:3100
SEC_MCP_API_KEY=secret
```

Rules fetched from the MCP server are cached locally for 1 hour (`.sec-gatekeeper-mcp-cache.json`) and merged with built-in + custom rules. If the server is unreachable, the scanner falls back to cached rules.
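The documented cache behavior boils down to a freshness check with a fallback. This is an illustrative sketch only; the real cache file layout is internal, and the `fetchedAt`/`rules` field names are assumptions:

```js
// Sketch of the documented MCP rule-cache behavior (field names assumed).
const CACHE_TTL_MS = 60 * 60 * 1000; // rules are reused for up to 1 hour

function usableCachedRules(cache, serverReachable, now = Date.now()) {
  if (!cache) return null;
  const fresh = now - cache.fetchedAt <= CACHE_TTL_MS;
  // A fresh cache is used as-is; a stale cache still serves as a
  // fallback when the MCP server cannot be reached.
  return fresh || !serverReachable ? cache.rules : null;
}
```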
### Programmatic usage
```js
import { startMCPServer } from "sec-gatekeeper";

startMCPServer({
  port: 3100,
  apiKey: "secret",
  customRules: myAdditionalRules,
});
```

### Remote code validation
```bash
curl -X POST http://localhost:3100/rules/validate \
  -H "Content-Type: application/json" \
  -d '{"code": "const x = jwt.decode(token);", "file": "auth.ts"}'
```

## Diff-Only Mode
For legacy repos with existing issues, use diff-only mode to only flag problems in newly changed lines:
```json
{ "diffOnly": true }
```

Or via CLI: `npx sec-gatekeeper scan --diff-only`
## Inline Suppression
Suppress specific lines (works for both secrets and rule engine):
```js
// sec-gatekeeper-disable-next-line
const decoded = jwt.decode(token); // intentional — verified elsewhere

const x = eval(code); // sec-gatekeeper-disable-line
```

## Entropy Allowlist
Reduce false positives from base64 strings, hashed IDs, etc.:
```json
{
  "entropyAllowlist": ["base64EncodedPrefix", "someKnownHash"]
}
```

## Custom Rules
Create a JSON file with additional rules:
```json
[
  {
    "id": "CUSTOM-001",
    "name": "Forbidden import",
    "description": "Do not import from deprecated module",
    "severity": "HIGH",
    "category": "api",
    "pattern": "from ['\"]deprecated-module['\"]",
    "flags": "g",
    "owasp": "A06",
    "cwe": "1104",
    "enabled": true
  }
]
```

Then reference it in `.sec-gatekeeper.json`:

```json
{ "customRulesPath": ".sec-gatekeeper-rules.json" }
```

## CI/CD Integration
### GitHub Actions (SARIF)
```yaml
- name: Security scan
  run: npx sec-gatekeeper scan --format=sarif > results.sarif
- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: results.sarif
```

### JSON output for custom processing
```yaml
- name: Security scan
  run: npx sec-gatekeeper scan --format=json > results.json
```

## Programmatic Usage
```js
import { scanFileContent, loadConfig, evaluate, analyzeWithAI } from "sec-gatekeeper";

const config = loadConfig();
const issues = scanFileContent(code, "handler.ts");

// Optional: add AI analysis
const aiIssues = await analyzeWithAI(code, "handler.ts", config);
issues.push(...aiIssues);

const result = evaluate(issues, 1, config);
console.log(result.blocked);    // true if HIGH issues found
console.log(result.durationMs); // scan time in ms
```

## Environment Variables Reference
| Variable | Description |
|----------|-------------|
| `SEC_AI_ENABLED` | Set to `"true"` to enable AI analysis |
| `SEC_AI_PROVIDER` | AI provider: `openai`, `anthropic`, `gemini`, `ollama`, `custom` |
| `SEC_AI_ENDPOINT` | AI API endpoint URL |
| `SEC_AI_API_KEY` | AI API key |
| `SEC_AI_MODEL` | AI model name |
| `SEC_MCP_ENABLED` | Set to `"true"` to fetch rules from MCP server |
| `SEC_MCP_URL` | MCP rule server URL |
| `SEC_MCP_API_KEY` | MCP server API key |
| `SEC_DIFF_ONLY` | Set to `"true"` for diff-only mode |
| `SEC_OUTPUT_FORMAT` | Output format: `text`, `json`, `sarif` |
| `NODE_ENV` | Environment mode: `dev`, `test`, `prod` |
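The variables in the table above map onto the same shape as the file config. The sketch below illustrates that mapping using only the documented names; the merge logic itself is internal to sec-gatekeeper, and the `outputFormat` key and `"text"` default are assumptions:

```js
// Illustrative mapping of documented env vars onto a config object
// (not sec-gatekeeper's actual loader).
function configFromEnv(env = process.env) {
  return {
    diffOnly: env.SEC_DIFF_ONLY === "true",
    outputFormat: env.SEC_OUTPUT_FORMAT || "text", // assumed default
    ai: {
      enabled: env.SEC_AI_ENABLED === "true",
      provider: env.SEC_AI_PROVIDER,
      endpoint: env.SEC_AI_ENDPOINT,
      apiKey: env.SEC_AI_API_KEY,
      model: env.SEC_AI_MODEL,
    },
    mcp: {
      enabled: env.SEC_MCP_ENABLED === "true",
      serverUrl: env.SEC_MCP_URL,
      apiKey: env.SEC_MCP_API_KEY,
    },
  };
}
```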
## Hook System Support

`setup` auto-detects your hook system:

- Husky — appends to `.husky/pre-commit`
- lint-staged — adds to the `lint-staged` config in `package.json` with `--diff-only`
- Raw git hooks — creates `.git/hooks/pre-commit`
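In the raw-git-hooks case, the generated hook amounts to running the scanner before each commit. The following is a hedged sketch of what such a hook can look like, not the exact file `setup` writes:

```bash
#!/bin/sh
# Sketch of a pre-commit hook (the file generated by `setup` may differ).
# The scanner's non-zero exit status is what blocks the commit.
npx sec-gatekeeper scan
```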
## License
MIT
