# 🧠 @ttgca/custom-ai-code-analyzer
A portable, GPT-powered static code analyzer and improver that enforces or enhances code quality using natural language rules. Works in local development (husky), CI/CD pipelines (GitHub Actions, Azure DevOps), or as a Node.js module.
## ✨ Features
- ✅ Rule-driven static analysis using OpenAI or Azure OpenAI
- ✅ Auto-improve code based on the same rules
- ✅ Portable as an NPM library or CLI tool
- ✅ Supports custom rule definitions via JSON
- ✅ Integration-ready for husky pre-commit hooks and CI pipelines
- ✅ Human-readable results with emoji indicators
- ✅ Exit codes for easy automation
- ✅ VSCode-ready safe wrapper functions
## 📦 Installation

Install locally in your project:

```bash
npm install --save-dev @ttgca/custom-ai-code-analyzer
```

Enable husky (if needed):

```bash
npx husky install
```

## 🚀 CLI Usage
### Evaluate Code

```bash
npx custom-ai-code-analyzer \
  --file ./src/index.ts \
  --rules ./config/rules.json \
  --provider openai \
  --api-key <api-key> \
  --model gpt-4 \
  --mode evaluate
```

### Improve Code

```bash
npx custom-ai-code-analyzer \
  --file ./src/index.ts \
  --rules ./config/rules.json \
  --provider azure \
  --api-key <api-key> \
  --endpoint <endpoint> \
  --deployment <deployment> \
  --api-version <api-version> \
  --mode improve
```

`--mode` defaults to `evaluate` if not specified.
## 🔧 CLI Flags

| Flag               | Description                                   | Required         |
| ------------------ | --------------------------------------------- | ---------------- |
| `--file`, `-f`     | File to analyze/improve                       | ✅               |
| `--rules`, `-r`    | Path to rules JSON                            | ✅               |
| `--provider`, `-p` | `openai` or `azure`                           | ✅               |
| `--mode`, `-m`     | `evaluate` (default) or `improve`             | ❌               |
| `--api-key`        | API key for the provider                      | ✅               |
| `--model`          | OpenAI model (e.g. `gpt-4`)                   | ✔️ (openai only) |
| `--endpoint`       | Azure endpoint URL                            | ✔️ (azure only)  |
| `--deployment`     | Azure deployment ID                           | ✔️ (azure only)  |
| `--api-version`    | Azure API version (e.g. `2025-01-01-preview`) | ✔️ (azure only)  |
## 📜 Rule Definition Example

Define rules in a simple JSON format:

`config/rules.json`:

```json
[
  {
    "name": "JSDoc Presence",
    "prompt": "Ensure every function is documented with JSDoc."
  },
  {
    "name": "No Console Logs",
    "prompt": "Detect usage of console.log or console.error and flag it."
  }
]
```

## 📁 Project Structure Example
```text
your-project/
├── src/
├── config/
│   └── rules.json
├── scripts/
│   └── analyze-staged.sh
├── .husky/
│   └── pre-commit
└── .env (optional)
```

## 🔐 .env Sample (Optional)
Provide your API keys and configuration in a .env file for convenience. This is optional but recommended for local development.
```bash
# For OpenAI
CACA_OPENAI_API_KEY=your_openai_key
CACA_OPENAI_MODEL=your_model_name   # Example: gpt-3.5-turbo, gpt-4, etc.

# For Azure
CACA_AZURE_ENDPOINT=https://your-resource.openai.azure.com
CACA_AZURE_KEY=your_azure_key
CACA_AZURE_DEPLOYMENT=your-deployment-id
CACA_AZURE_API_VERSION=your_api_version   # Example: 2025-01-01-preview
```

CLI flags take priority over values in `.env`.
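The precedence rule above can be sketched with a shell parameter-expansion fallback. This is purely illustrative of the behavior described, not the tool's internals; the variable names are placeholders:

```shell
# .env provides a default; the CLI flag, when present, overrides it.
ENV_MODEL="gpt-3.5-turbo"      # value loaded from .env (illustrative)
FLAG_MODEL="gpt-4"             # value passed via --model (illustrative)

# Use the flag if set and non-empty, otherwise fall back to the .env value:
MODEL="${FLAG_MODEL:-$ENV_MODEL}"
echo "Using model: $MODEL"   # Using model: gpt-4
```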
## 🧩 Programmatic Usage

### Evaluate a file

```ts
import { runAnalyzer, OpenAIProvider } from "@ttgca/custom-ai-code-analyzer";

const provider = new OpenAIProvider("sk-...", "gpt-4");
const rules = [{ name: "Rule Name", prompt: "Check this..." }];

await runAnalyzer("src/example.ts", rules, provider);
```

### Improve a file

```ts
import { runImprover, OpenAIProvider } from "@ttgca/custom-ai-code-analyzer";

const provider = new OpenAIProvider("sk-...", "gpt-4");
const rules = [{ name: "Rule Name", prompt: "Check this..." }];

await runImprover("src/example.ts", rules, provider);
```

## 🔄 Pre-commit Hook with husky
### Step 1: Enable husky

```bash
npx husky install
```

Add to `package.json`:

```json
"scripts": {
  "prepare": "husky install"
}
```

### Step 2: Create Hook

Create `.husky/pre-commit`:

```sh
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"

bash scripts/analyze-staged.sh
```

### Step 3: Create Analysis Script
`scripts/analyze-staged.sh`:

```bash
#!/bin/bash
FILES=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(ts|js)$')

if [ -z "$FILES" ]; then
  echo "✅ No staged JS/TS files to analyze."
  exit 0
fi

FAIL=0
for FILE in $FILES; do
  echo "Analyzing $FILE..."
  npx custom-ai-code-analyzer \
    --file "$FILE" \
    --rules ./config/rules.json \
    --provider openai \
    --api-key "$OPENAI_API_KEY" \
    --model gpt-4
  if [ $? -ne 0 ]; then
    FAIL=1
  fi
done

if [ $FAIL -ne 0 ]; then
  echo "❌ Commit blocked due to analysis issues."
  exit 1
else
  echo "✅ All staged files passed analysis."
fi
```

Make it executable:

```bash
chmod +x scripts/analyze-staged.sh
```

Remember to set `OPENAI_API_KEY` in your shell or CI config.
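For local development, one quick way to do this (the key value is a placeholder, not a real credential):

```shell
# Export the key so child processes (the husky hook, npx) can see it.
export OPENAI_API_KEY="sk-your-key-here"

# Sanity-check that it is set before committing:
[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set"
```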
## ⚙️ CI/CD Integration

GitHub Actions example:

```yaml
- name: Run Static Code Analysis
  run: |
    npx custom-ai-code-analyzer \
      --file src/index.ts \
      --rules config/rules.json \
      --provider openai \
      --api-key ${{ secrets.OPENAI_API_KEY }} \
      --model gpt-4
```

## 📤 Exit Codes

| Code | Meaning                            |
| ---- | ---------------------------------- |
| 0    | All checks passed or warnings only |
| 1    | One or more critical issues found  |
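These exit codes make the analyzer easy to gate on in automation. A minimal sketch, where `run_analyzer` is a stand-in for the real `npx custom-ai-code-analyzer …` invocation (the `ANALYZER_EXIT` variable only exists to simulate its exit code here):

```shell
# Stand-in for the analyzer; ANALYZER_EXIT simulates its exit code.
run_analyzer() {
  return "${ANALYZER_EXIT:-0}"
}

if run_analyzer; then
  echo "✅ exit 0 — checks passed, continue the pipeline"
else
  echo "❌ exit 1 — critical issues, block the merge" >&2
  exit 1
fi
```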
## 💡 Future Ideas

- File Glob Support: Allow patterns like `--file "src/**/*.ts"`
- Git Diff Filtering: Analyze only changed lines
- ESLint-Compatible Output: For CI/editor integration
- VSCode Extension: In-editor GenAI guidance for rule adherence ✅
## 📝 License

© The Tong Group — For internal or personal evaluation use only. Commercial usage prohibited without prior written permission.

## 👨‍💻 Author

Made with ❤️ by [Noah Yejia Tong / THE TONG GROUP]
