seraph-agent
v1.0.22
Seraph Guardian Agent
Seraph is an extremely lightweight, autonomous SRE AI agent designed for seamless integration with common observability tasks.
It is highly scalable, capable of independent asynchronous analysis, and can integrate with other AI agents for automated mitigation and code modifications.
Key Features
- Log Ingestion: Integrates with log forwarders like Fluentd, Logstash, and Vector via HTTP.
- Autonomous Log Analysis: Uses a configurable LLM provider (Gemini, Anthropic, OpenAI) to analyze logs in real-time, detect anomalies, and trigger alerts.
- Context-Aware Chat: Chat with the agent about recent logs to gain insights and summaries.
- Scalable & Autonomous: Manages multiple asynchronous agent workers for parallel log analysis.
- Automated Mitigation: Can be configured to call out to other AI agents for automated mitigation and code modification proposals.
- CLI Control: A simple and powerful Command Line Interface for managing the agent's lifecycle.
- Easy to Deploy: Can be deployed locally, on-premise, or in any cloud environment.
- Extremely Lightweight: Built with performance in mind to minimize resource consumption.
- Integrations: Supports integrations with log forwarders, LLM providers, and monitoring tools.
- Smart Caching: Optional Redis-based semantic caching reduces LLM API costs by 40-70%.
Built-in SRE Tooling
Seraph now comes with a built-in Model Context Protocol (MCP) server that provides essential SRE tools out-of-the-box. When you start the Seraph agent, it automatically starts a second server on the next available port (e.g., 8081) that provides these tools to the agent for its investigations.
Included Tools
Git: The agent can analyze the Git repository where your application's source code is located. It can read commit logs to correlate a production error with a recent code change.
Prometheus: The agent can query your Prometheus instance to investigate metrics, alerts, targets, and rules. This enables correlation of log anomalies with system metrics and infrastructure health.
Configuration
To use the built-in tools, configure them in your seraph.config.json:
{
"builtInMcpServer": {
"gitRepoPath": "/path/to/your/local/git/repo",
"prometheusUrl": "http://localhost:9090"
}
}

With this configuration, the agent will automatically have access to:
- git_log and git_clone tools for code analysis
- prometheus_query for custom PromQL queries
- prometheus_metrics to explore available metrics
- prometheus_alerts to check current alert status
- prometheus_targets to verify scrape target health
- prometheus_rules to inspect alerting and recording rules
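Once the built-in MCP server is configured, you can exercise these tools directly from the chat command; the questions below are illustrative examples, and the agent decides which tool to call:

# Correlate a recent error with code changes via the Git tools
seraph chat "Did any recent commits touch the payment service?"

# Check current alert status via the Prometheus tools
seraph chat "Are any Prometheus alerts firing right now?"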
Dynamic Tool Integration with MCP
Seraph now supports the Model Context Protocol (MCP), allowing it to dynamically connect to and use external tools from any MCP-compliant server. This "plug and play" functionality makes the agent highly extensible and adaptable to new tasks without requiring any code changes.
How It Works
- Dynamic Discovery: When you start the agent with a specified MCP server, it connects to the server and automatically discovers the list of available tools.
- Intelligent Tool Selection: The agent's underlying LLM is informed of the available tools and their descriptions. When you chat with the agent, the LLM intelligently decides which tool (if any) is best suited to fulfill your request.
- Seamless Execution: The agent then executes the chosen tool and uses its output to formulate a response.
This architecture allows you to easily expand the agent's capabilities by simply pointing it to a new MCP server.
Using MCP Tools
There are two ways to connect Seraph to MCP servers:
- Custom Server: You can connect to any MCP-compliant server using the --mcp-server-url flag. This is useful for development or for connecting to private, custom tool servers.
  seraph chat "What's the weather in London?" --mcp-server-url https://some-weather-mcp-server.com
- Built-in Toolsets: Seraph comes with a curated list of high-quality, pre-configured MCP servers that you can easily use with the --tools flag.
  seraph chat "What is the current time in Tokyo?" --tools time

To see the list of all available built-in toolsets, run:
seraph tools list
Security Warning: Only connect to MCP servers that you trust. A malicious MCP server could provide tools that could harm your system or exfiltrate data.
Autonomous Log Analysis and Investigation
Seraph's core feature is its ability to autonomously analyze logs and perform root cause analysis. The process has three stages:
- Triage: When a log is ingested, it is passed to a triage worker. This worker makes a quick decision on whether the log requires further attention. The model responds with a decision ("alert" or "ok") and a brief reason.
- Investigation: If the decision is "alert", the log is passed to an investigation worker. This worker uses a ReAct-style loop to conduct a detailed root cause analysis. It can use a variety of tools (like the built-in Git tool) to gather more context.
- Reporting: The findings of the investigation, including the root cause analysis, impact assessment, and suggested remediation steps, are saved to a local SQLite database.
This multi-stage process allows Seraph to quickly filter through a high volume of logs and perform deep analysis only when necessary, making it both efficient and powerful.
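Once an investigation completes, its stored report can be reviewed with the reports commands described later in this document, for example:

# List open investigations stored in the local SQLite database
seraph reports list --filter open

# Inspect a specific report (the incident ID is a placeholder)
seraph reports view <incidentId>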
Setup and Installation
Seraph is distributed as an npm package. You can install it globally to use the CLI anywhere on your system.
npm install -g seraph-agent

Note on Native Addons: The agent uses the sqlite3 package to store investigation reports, which is a native Node.js addon. If you encounter installation issues, you may need to install the necessary build tools for your operating system. Please see the "Troubleshooting" section for more details.
Alternatively, you can add it as a dependency to your project:
npm install seraph-agent

Configuration
Seraph is configured via a seraph.config.json file in your project root. Environment variables can also be used and will override settings in the file.
For a detailed explanation of all available options, please see the well-commented example configuration file:
config.example.json
Redis Caching (Optional)
Seraph supports optional Redis-based semantic caching to reduce LLM API costs:
{
"llmCache": {
"redis": {
"host": "localhost",
"port": 6379,
"password": "secret",
"keyPrefix": "seraph:"
},
"similarityThreshold": 0.85,
"ttlSeconds": 3600
}
}

Benefits:
- 40-70% cache hit rate for similar infrastructure logs
- 60-80% token reduction for repeated error patterns
- Semantic similarity matching using cosine similarity on text embeddings
- Graceful degradation when Redis is unavailable
See CACHE.md for detailed caching documentation.
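If you do not already have a Redis instance available, one quick way to start one locally for testing (assuming Docker is installed) is:

# Run a local Redis on the default port used in the config above
docker run -d --name seraph-redis -p 6379:6379 redis:7-alpine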
Environment Variables
The primary LLM API key is configured via environment variables.
- GEMINI_API_KEY: Your Gemini API key.
- ANTHROPIC_API_KEY: Your Anthropic API key.
- OPENAI_API_KEY: Your OpenAI API key.
Troubleshooting
sqlite3 Native Addon Installation Issues
The agent uses the sqlite3 package to store investigation reports, which is a native Node.js addon. If you encounter errors during npm install related to node-gyp or sqlite3, it likely means you are missing the necessary build tools for your operating system.
- Windows:
  npm install --global windows-build-tools
- macOS: Install the Xcode Command Line Tools:
  xcode-select --install
- Debian/Ubuntu:
  sudo apt-get install -y build-essential
For more detailed instructions, please refer to the node-gyp installation guide.
Integrations
Seraph is designed to be a component in a larger observability and automation ecosystem. It supports integrations with log forwarders, LLM providers, monitoring tools, and alert managers.
For a detailed guide on integrating with tools like Fluentd, Vector, and Alertmanager, or for information on inter-agent communication, please see the Integration Guide.
LLM Providers
You can choose from the following LLM providers:
- gemini (default)
- anthropic
- openai
You can also specify a model for the selected provider. If no model is specified, a default will be used.
Quick Start
Configure your API Key: Set the environment variable for your chosen provider:
# For Gemini
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
# For Anthropic
export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY"
# For OpenAI
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"

Alternatively, you can create a seraph.config.json file as described above.

Start the agent: If you installed it globally, you can run:

seraph start

This will start the log ingestion server on port 8080 and spin up 4 analysis workers.
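With the agent running, you can verify ingestion by posting a test log to the HTTP endpoint. The payload below is only an illustrative example; in practice your log forwarder defines the actual format it sends:

curl -X POST http://localhost:8080/logs \
  -H "Content-Type: application/json" \
  -d '{"level":"error","message":"database connection refused"}'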
CLI Usage
The Seraph agent features a superior CLI experience designed to address common pain points found in other AI SRE tools:
✨ CLI Excellence
- Beautiful, responsive UI that adapts to any terminal size
- Zero-config philosophy with intelligent auto-detection
- Multiple output formats (table, JSON, markdown) for every command
- Comprehensive diagnostics with actionable suggestions
- Interactive setup wizard eliminates configuration complexity
- Real-time health monitoring with visual indicators
- Graceful error handling with helpful recovery suggestions
🚀 Quick Commands
- seraph setup - Interactive setup wizard with auto-detection
- seraph start - Start the AI SRE agent
- seraph status --verbose - Detailed agent status with health checks
- seraph doctor - Comprehensive diagnostics and troubleshooting
Core Commands
seraph start
Starts the agent and the log ingestion server.
Options:
- --mcp-server-url <url>: Connect to an MCP server to enable dynamic tool usage
- --tools <names>: Comma-separated list of built-in toolsets to use
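For example (the MCP server URL below is a placeholder):

# Start with the built-in fetch and git toolsets enabled
seraph start --tools fetch,git

# Start with a custom MCP server
seraph start --mcp-server-url https://my-mcp-server.example.com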
seraph status [--verbose]
Check agent status with beautiful, responsive output.
Features:
- Real-time system metrics and health checks
- Memory usage and performance indicators
- Feature status (MCP, Redis, integrations)
- Activity summary (logs processed, investigations)
- Quick action suggestions
seraph stop
Gracefully stops the agent and all workers.
seraph doctor
Comprehensive diagnostics covering:
- System requirements and dependencies
- Configuration validation
- Runtime health checks
- Network connectivity tests
- Health scoring and actionable suggestions
Setup and Configuration
seraph setup [--guided]
Interactive setup wizard featuring:
- Auto-detection of existing configurations
- Intelligent defaults for common scenarios
- Step-by-step LLM provider configuration
- Integration setup (Git, Prometheus, Redis)
- Configuration validation and preview
Reports Management
seraph reports list [options]
List investigation reports with flexible formatting.
Options:
- --format table|json|markdown - Output format (default: table)
- --limit <number> - Maximum number of results (default: 50)
- --filter <status> - Filter by status: all, resolved, open, acknowledged
seraph reports view <incidentId> [options]
View detailed investigation reports.
Options:
- --format json|markdown|raw - Output format (default: markdown)
Features:
- Beautiful markdown formatting with syntax highlighting
- Structured analysis and investigation traces
- Tool usage summaries and execution details
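For example (the incident ID below is a placeholder):

# Show the ten most recent open investigations as JSON
seraph reports list --format json --limit 10 --filter open

# Render a single report as markdown and save it to a file
seraph reports view <incidentId> --format markdown > incident-report.md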
Tools Management
seraph tools list [--format table|json|markdown]
List available built-in toolsets with descriptions and URLs.
seraph chat <message>
Chat with the Seraph agent. Requires a configured LLM provider and API key.
Options:
- -c, --context: Include the last 100 logs as context for the chat. This allows you to ask questions like "summarize the recent errors".
- --mcp-server-url <url>: Connect to a custom MCP server to use its tools.
- --tools <names>: A comma-separated list of built-in toolsets to use (e.g., "fetch,git").
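For example:

# Summarize recent errors using the last 100 ingested logs as context
seraph chat -c "summarize the recent errors"

# Combine log context with the built-in git toolset
seraph chat -c "correlate the recent errors with the latest commits" --tools git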
Running with Docker
You can also run the Seraph agent in a Docker container for easy deployment.
Build the Docker image:
docker build -t seraph-agent .

Run the Docker container:
You can configure the agent inside the container using environment variables.
Example for Gemini:
docker run -d -p 8080:8080 \
  -e GEMINI_API_KEY="YOUR_GEMINI_API_KEY" \
  --name seraph-agent seraph-agent

Example for Anthropic:

docker run -d -p 8080:8080 \
  -e ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY" \
  --name seraph-agent seraph-agent

Alternatively, you can mount a seraph.config.json file to configure the container, which is useful if you want to specify a provider and model.

docker run -d -p 8080:8080 \
  -v $(pwd)/seraph.config.json:/usr/src/app/seraph.config.json \
  --name seraph-agent seraph-agent

Interact with the agent:

You can then interact with the agent using the docker exec command:

docker exec -it seraph-agent node dist/index.js status
docker exec -it seraph-agent node dist/index.js chat "hello"
docker exec -it seraph-agent node dist/index.js chat --context "any recent errors?"

Check the logs or stop the agent:

docker logs -f seraph-agent
docker stop seraph-agent
Monitoring with Prometheus
The Seraph agent exposes a /metrics endpoint for Prometheus scraping.
Example prometheus.yml scrape configuration:
scrape_configs:
  - job_name: 'seraph-agent'
    static_configs:
      - targets: ['localhost:8080']

For more detailed documentation on deployment and integrations, please see the docs directory.
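With the agent running, you can confirm the endpoint is reachable before wiring up Prometheus:

# The agent should return metrics in the Prometheus exposition format
curl http://localhost:8080/metrics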
Deploying with Helm
A Helm chart is provided for easy deployment to Kubernetes with optional Redis caching.
Prerequisites:
- A running Kubernetes cluster (e.g., Minikube, Docker Desktop).
- The helm command-line tool installed.
Configure API Keys: The Helm chart uses environment variables for API keys. You can set these in the helm/values.yaml file or by using the --set flag during installation.

Example helm/values.yaml modification:

env:
  GEMINI_API_KEY: "YOUR_GEMINI_API_KEY"

Install the Chart: From the root of the project, run the following command:

# Basic installation
helm install my-seraph-release ./helm \
  --set env.GEMINI_API_KEY="YOUR_GEMINI_API_KEY"

# With Redis caching enabled
helm install my-seraph-release ./helm \
  --set env.GEMINI_API_KEY="YOUR_GEMINI_API_KEY" \
  --set redis.enabled=true

This will deploy the Seraph agent to your Kubernetes cluster with optional Redis for LLM response caching.
Accessing the Agent: By default, the service is of type ClusterIP. To access it from your local machine, you can use kubectl port-forward:

kubectl port-forward svc/my-seraph-release-seraph 8080:8080

You can then send logs to http://localhost:8080/logs.

Uninstalling the Chart: To remove the deployment, run:
helm uninstall my-seraph-release
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
