secure-ai-audit-remedy-middleware
v1.0.0
Local AI security scanner for AI-generated code using Ollama.
secure-ai-middleware
A production-ready local AI security scanner for AI-generated code. This package uses an Ollama-hosted model (specifically llama3:8b) to analyze code for security vulnerabilities.
Features
- API Wrapper: Wrap any AI chat completion function to automatically scan outputs.
- Local Analysis: No data leaves your machine. Uses Ollama for local LLM processing.
- CLI Tool: Scan any file directly from the terminal.
- Detailed Reports: Identifies secret exposure, unsafe patterns, and dangerous constructs.
Prerequisites
- Install Ollama: Download and install from ollama.com.
- Pull Model: Run `ollama pull llama3:8b` in your terminal.
- Run Ollama: Ensure Ollama is running (`ollama serve`).
Installation
```bash
npm install secure-ai-middleware
```

API Usage

```js
import { secureAI } from 'secure-ai-middleware';

// Example wrapping a mock AI call
const result = await secureAI.generate(async () => {
  // Simulate an AI call that returns insecure code
  return `
const apiKey = "SG.xxyyzz";
const data = eval(userInput);
`;
});

console.log(result.originalOutput);
console.log(result.report.riskScore); // e.g., 9
console.log(result.report.issues);
```

CLI Usage
```bash
npx secure-ai scan ./path/to/generated-file.js
```

How it Works
The middleware intercepts the string returned by your AI generation function. It then sends this code to a local Ollama endpoint (default: http://localhost:11434) with a specialized security analysis prompt. The model analyzes the code and returns a structured JSON report which is then merged with the original output.
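The round trip described above can be sketched as follows. This is a sketch, not the package's actual source: the prompt text and function names (`buildScanRequest`, `scanWithOllama`) are illustrative assumptions, while the endpoint (`/api/generate`) and request fields (`model`, `prompt`, `stream`, `format`) are standard Ollama API parameters.

```javascript
// Sketch of the Ollama round trip (illustrative; not the package's source).
// Builds a request body for Ollama's /api/generate endpoint.
function buildScanRequest(code, model = 'llama3:8b') {
  return {
    model,
    // The actual security-analysis prompt used by the package is assumed here.
    prompt:
      'You are a security auditor. Analyze the following code and respond ' +
      'with JSON: { "riskScore": 0-10, "issues": [...] }.\n\n' + code,
    stream: false,    // return one complete response instead of a token stream
    format: 'json',   // ask Ollama to constrain the model output to valid JSON
  };
}

// Sends the request to a local Ollama server and parses the report.
async function scanWithOllama(code, endpoint = 'http://localhost:11434') {
  const res = await fetch(`${endpoint}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildScanRequest(code)),
  });
  const data = await res.json();
  // Ollama wraps the model's text in a `response` field; parse it as JSON.
  return JSON.parse(data.response);
}
```

Calling `scanWithOllama('const x = eval(input);')` requires a running Ollama server with `llama3:8b` pulled, as described in Prerequisites.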
Security Considerations
- Ollama Access: Ensure your Ollama server is secured if reachable over a network.
- Model Quality: Security detection depends on the capabilities of the `llama3:8b` model.
- CORS: For browser-based usage (like the demo), you must set `OLLAMA_ORIGINS="*"` before starting Ollama.
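A related consideration is how you act on the report: the `riskScore` shown in API Usage can gate whether generated code is ever used. A minimal sketch, assuming the report shape shown above (`riskScore`, `issues`); the helper name and threshold are illustrative, not part of the package API:

```javascript
// Hypothetical gate: decide whether AI-generated code is safe to use.
// The threshold of 5 is illustrative; tune it to your risk tolerance.
function isAcceptable(report, maxRisk = 5) {
  return report.riskScore <= maxRisk && report.issues.length === 0;
}

// Example report resembling what the scanner produces for the snippet above.
const report = {
  riskScore: 9,
  issues: [
    { type: 'secret-exposure', detail: 'hard-coded API key' },
    { type: 'dangerous-construct', detail: 'eval() on user input' },
  ],
};

console.log(isAcceptable(report)); // false: riskScore 9 exceeds the threshold
```

Rejected output can then be regenerated or routed to human review instead of being executed.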
License
MIT
