ArgParser - Type-Safe Command Line Argument Parser
A modern, type-safe command line argument parser with built-in MCP (Model Context Protocol) integration, real-time MCP Resources, and automatic Claude Desktop Extension (DXT) generation.
Table of Contents
MCP & Claude Desktop Integration
- Output Schema Support
- Writing Effective MCP Tool Descriptions
- Automatic MCP Server Mode (--s-mcp-serve)
- MCP Transports
- MCP Logging Configuration
- MCP Lifecycle Events
- MCP Resources - Real-Time Data Feeds
- Automatic Console Safety
- Generating DXT Packages (--s-build-dxt)
- Logo Configuration
- Including Additional Files in DXT Packages
- How DXT Generation Works
- DXT Bundling Strategies
- Typical Errors
Features Overview
- Unified Tool Architecture: Define tools once with addTool() and they automatically function as both CLI subcommands and MCP tools.
- Type-safe flag definitions with full TypeScript support and autocompletion.
- Automatic MCP Integration: Transform any CLI into a compliant MCP server with a single command (--s-mcp-serve).
- MCP Resources with Real-Time Feeds ⭐: Create subscription-based data feeds with URI templates for live notifications to AI assistants.
- Console Safe: console.log and other methods are automatically handled in MCP mode to prevent protocol contamination, requiring no changes to your code.
- DXT Package Generation: Generate complete, ready-to-install Claude Desktop Extension (.dxt) packages with the --s-build-dxt command and --s-with-node-modules for platform-dependent builds.
- Hierarchical Sub-commands: Create complex, nested sub-command structures (e.g., git commit, docker container ls) with flag inheritance.
- Configuration Management: Easily load (--s-with-env) and save (--s-save-to-env) configurations from/to .env, .json, .yaml, and .toml files.
- Automatic Help & Error Handling: Context-aware help text and user-friendly error messages are generated automatically.
- Debugging Tools: Built-in system flags like --s-debug and --s-debug-print for easy troubleshooting.
Installation
# Using PNPM (recommended)
pnpm add @alcyone-labs/arg-parser

Quick Start: The Unified addTool API
The modern way to build with ArgParser is using the .addTool() method. It creates a single, self-contained unit that works as both a CLI subcommand and an MCP tool.
import { z } from "zod";
import { ArgParser } from "@alcyone-labs/arg-parser";
// Use ArgParser.withMcp to enable MCP and DXT features
const cli = ArgParser.withMcp({
appName: "My Awesome CLI",
appCommandName: "mycli",
description: "A tool that works in both CLI and MCP mode",
mcp: {
serverInfo: { name: "my-awesome-mcp-server", version: "1.0.0" },
},
})
// Define a tool that works everywhere
.addTool({
name: "greet",
description: "A tool to greet someone",
flags: [
{
name: "name",
type: "string",
mandatory: true,
options: ["--name"],
description: "Name to greet",
},
{
name: "style",
type: "string",
enum: ["formal", "casual"],
defaultValue: "casual",
description: "Greeting style",
},
],
// Optional: Define output schema for MCP clients (Claude Desktop, etc.)
// This only affects MCP mode - CLI mode works the same regardless
outputSchema: {
success: z.boolean().describe("Whether the greeting was successful"),
greeting: z.string().describe("The formatted greeting message"),
name: z.string().describe("The name that was greeted"),
},
handler: async (ctx) => {
// Use console.log freely - it's automatically safe in MCP mode!
console.log(`Greeting ${ctx.args.name} in a ${ctx.args.style} style...`);
const greeting =
ctx.args.style === "formal"
? `Good day, ${ctx.args.name}.`
: `Hey ${ctx.args.name}!`;
console.log(greeting);
return { success: true, greeting, name: ctx.args.name };
},
});
// parse() is async and works with both sync and async handlers
async function main() {
try {
await cli.parse(process.argv.slice(2));
} catch (error) {
console.error("Error:", error.message);
process.exit(1);
}
}
main();
// Export if you want to test, use the CLI programmatically
// or use the --s-enable-fuzzing system flag to run fuzzy tests on your CLI
export default cli;

MCP Tool Name Constraints
When using .addTool() or .addMcpTool(), tool names are automatically sanitized for MCP compatibility. MCP tool names must follow the pattern ^[a-zA-Z0-9_-]{1,64}$ (only alphanumeric characters, underscores, and hyphens, with a maximum length of 64 characters).
// These names will be automatically sanitized:
cli.addTool({
name: "test.tool", // → "test_tool"
// ... rest of config
});
cli.addTool({
name: "my@tool", // → "my_tool"
// ... rest of config
});
cli.addTool({
name: "tool with spaces", // → "tool_with_spaces"
// ... rest of config
});
cli.addTool({
name: "very-long-tool-name-that-exceeds-the-64-character-limit-for-mcp", // → truncated to 64 chars
// ... rest of config
});

The library will warn you when tool names are sanitized, but your tools will continue to work normally. For CLI usage, the original name is preserved as the subcommand name.
How to Run It
# This assumes `mycli` is your CLI's entry point
# 1. As a standard CLI subcommand
mycli greet --name Jane --style formal
# 2. As an MCP server, exposing the 'greet' tool
mycli --s-mcp-serve
# 3. Generate a DXT package for Claude Desktop (2-steps)
mycli --s-build-dxt ./my-dxt-package
# If you use ML models or packages that include native binaries (e.g., sqlite3, sharp),
# you need to bundle the node_modules folder into your DXT package.
# To do this, use the --s-with-node-modules flag:
# First, hard-install the production dependencies
rm -rf node_modules
pnpm install --prod --node-linker=hoisted
# Then bundle with node_modules
mycli --s-build-dxt ./my-dxt-package --s-with-node-modules
# Then pack the DXT bundle
npx @anthropic-ai/dxt pack ./my-dxt-package
# Then upload the .dxt bundle to Claude Desktop from the Settings > Extensions > Advanced screen

Read more on generating DXT packages here: Generating DXT Packages
Setting Up System-Wide CLI Access
To make your CLI available system-wide as a binary command, you need to configure the bin field in your package.json and use package linking:
1. Configure your package.json:
{
"name": "my-cli-app",
"version": "1.0.0",
"type": "module",
"bin": {
"mycli": "./cli.js"
}
}

2. Make your CLI file executable:
chmod +x cli.js

3. Add a shebang to your CLI file:
#!/usr/bin/env node
// or use #!/usr/bin/env bun for a native TypeScript runtime
import { ArgParser } from '@alcyone-labs/arg-parser';
const cli = ArgParser.withMcp({
appName: "My CLI",
appCommandName: "mycli",
// ... your configuration
});
// Parse command line arguments
await cli.parse(process.argv.slice(2));

4. Link the package globally:
# Using npm
npm link
# Using pnpm
pnpm link --global
# Using bun
bun link
# Using yarn
yarn link

5. Use your CLI from anywhere:
# Now you can run your CLI from any directory
mycli --help
mycli greet --name "World"
# Or use with npx/pnpx if you prefer
npx mycli --help
pnpx mycli greet --name "World"

To unlink later:
# Using npm
npm unlink --global my-cli-app
# Using pnpm
pnpm unlink --global
# Using bun
bun unlink
# Using yarn
yarn unlink

Parsing Command-Line Arguments
ArgParser's parse() method is async and automatically handles both synchronous and asynchronous handlers:
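For instance, here is a minimal sketch (the tool names and the fetchRecord helper are illustrative, not part of the library) showing that synchronous and asynchronous handlers are treated the same way by parse():

```typescript
import { ArgParser } from "@alcyone-labs/arg-parser";

// Hypothetical async helper, purely for illustration
async function fetchRecord(id: string) {
  return { id, status: "ok" };
}

const cli = ArgParser.withMcp({
  appName: "Handler Demo",
  appCommandName: "handler-demo",
})
  .addTool({
    name: "echo",
    description: "Synchronous handler",
    flags: [{ name: "text", options: ["--text"], type: "string", mandatory: true }],
    handler: (ctx) => ({ echoed: ctx.args.text }), // plain return value
  })
  .addTool({
    name: "fetch",
    description: "Asynchronous handler",
    flags: [{ name: "id", options: ["--id"], type: "string", mandatory: true }],
    handler: async (ctx) => fetchRecord(ctx.args.id), // returned promise is awaited for you
  });

// parse() is awaited either way; the handler's (possibly async) result is merged into it
const result = await cli.parse(process.argv.slice(2));
console.log(result);
```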
Auto-Execution versus Import: No More Boilerplate
ArgParser now provides an auto-execution capability that eliminates the boilerplate needed to check whether your script is being run directly or imported. This enables use cases such as programmatically loading the CLI to scan for tools, or testing it from another script (e.g., via the --s-enable-fuzzy system flag or your own test harness).
const cli = ArgParser.withMcp({
appName: "My CLI",
appCommandName: "my-cli",
handler: async (ctx) => ({ success: true, data: ctx.args }),
});
// Now, this will NOT automatically execute the parser if the script is imported, but will execute if called directly:
await cli
.parse(undefined, {
importMetaUrl: import.meta.url,
})
.catch(handleError);
// Or, using the manual APIs:
await cli.parseIfExecutedDirectly(import.meta.url).catch((error) => {
console.error(
"Fatal error:",
error instanceof Error ? error.message : String(error),
);
process.exit(1);
});

This replaces previously confusing patterns:
// Brittle and breaks in sandboxes
if (import.meta.url === `file://${process.argv[1]}`) {
await cli.parse().catch(handleError);
}

Canonical Usage Pattern
const cli = ArgParser.withMcp({
appName: "My CLI",
handler: async (ctx) => {
// Works with both sync and async operations
const result = await someAsyncOperation(ctx.args.input);
return { success: true, result };
},
});
// parse() is async and works with both sync and async handlers
async function main() {
try {
// Option 1: Auto-detection - convenient for simple scripts
const result = await cli.parse();
// Option 2: Explicit arguments - full control
// const result = await cli.parse(process.argv.slice(2));
// Handler results are automatically awaited and merged
console.log(result.success); // true
} catch (error) {
console.error("Error:", error.message);
process.exit(1);
}
}

main();

Top-level await
Works in ES modules with top-level await (Node.js >= 18):
try {
// Auto-detection approach (recommended for simple scripts)
const result = await cli.parse();
// Or explicit approach for full control
// const result = await cli.parse(process.argv.slice(2));
console.log("Success:", result);
} catch (error) {
console.error("Error:", error.message);
process.exit(1);
}

Promise-based parsing

If you are in a synchronous context, you can rely on the promise-based API:
// Auto-detection approach
cli
.parse()
.then((result) => {
console.log("Success:", result);
})
.catch((error) => {
console.error("Error:", error.message);
process.exit(1);
});
// Or explicit approach
// cli
// .parse(process.argv.slice(2))
// .then((result) => {
// console.log("Success:", result);
// })
// .catch((error) => {
// console.error("Error:", error.message);
// process.exit(1);
// });

Migrating from v1.x to the v2.0 addTool API
Version 2.0 introduces the addTool() method to unify CLI subcommand and MCP tool creation. This simplifies development by removing boilerplate and conditional logic.
Before v2.0: Separate Definitions
Previously, you had to define CLI handlers and MCP tools separately, often with conditional logic inside the handler to manage different output formats.
const cli = ArgParser.withMcp({
appName: "My Awesome CLI",
appCommandName: "mycli",
description: "A tool that works in both CLI and MCP mode",
mcp: {
serverInfo: { name: "my-awesome-mcp-server", version: "1.0.0" },
},
});
// Old way: Separate CLI subcommands and MCP tools
cli
.addSubCommand({
name: "search",
handler: async (ctx) => {
// Manual MCP detection was required
if (ctx.isMcp) {
return { content: [{ type: "text", text: JSON.stringify(result) }] };
} else {
console.log("Search results...");
return result;
}
},
})
// And a separate command to start the server
.addMcpSubCommand("serve", {
/* MCP config */
});

After v2.0: The Unified addTool() Method
Now, a single addTool() definition creates both the CLI subcommand and the MCP tool. Console output is automatically managed, flags are converted to MCP schemas, and the server is started with a universal system flag.
const cli = ArgParser.withMcp({
appName: "My Awesome CLI",
appCommandName: "mycli",
description: "A tool that works in both CLI and MCP mode",
mcp: {
serverInfo: { name: "my-awesome-mcp-server", version: "1.0.0" },
},
});
// New way: A single tool definition for both CLI and MCP
cli.addTool({
name: "search",
description: "Search for items",
flags: [
{ name: "query", type: "string", mandatory: true },
{ name: "apiKey", type: "string", env: "API_KEY" }, // For DXT integration
],
handler: async (ctx) => {
// No more MCP detection! Use console.log freely.
console.log(`Searching for: ${ctx.args.query}`);
const results = await performSearch(ctx.args.query, ctx.args.apiKey);
console.log(`Found ${results.length} results`);
return { success: true, results };
},
});
// CLI usage: mycli search --query "test"
// MCP usage: mycli --s-mcp-serve

Benefits of Migrating:
- Less Code: A single definition replaces two or more complex ones.
- Simpler Logic: No more manual MCP mode detection or response formatting.
- Automatic Schemas: Flags are automatically converted into the input_schema for MCP tools.
- Automatic Console Safety: console.log is automatically redirected in MCP mode.
- Optional Output Schemas: Add outputSchema only if you want structured responses for MCP clients - CLI mode works perfectly without them.
Core Concepts
Defining Flags
Flags are defined using the IFlag interface within the flags array of a tool or command.
interface IFlag {
name: string; // Internal name (e.g., 'verbose')
options: string[]; // Command-line options (e.g., ['--verbose', '-v'])
type:
| "string"
| "number"
| "boolean"
| "array"
| "object"
| Function
| ZodSchema;
description?: string; // Help text
mandatory?: boolean | ((args: any) => boolean); // Whether the flag is required
defaultValue?: any; // Default value if not provided
flagOnly?: boolean; // A flag that doesn't consume a value (like --help)
enum?: any[]; // An array of allowed values
validate?: (value: any, parsedArgs?: any) => boolean | string | void; // Custom validation function
allowMultiple?: boolean; // Allow the flag to be provided multiple times
env?: string; // Links the flag to an environment variable; for DXT packages this automatically generates a user_config entry in the manifest, and the flag value is filled from process.env when the variable is set
dxtOptions?: DxtOptions; // Customizes how this flag appears in DXT package user_config
}
interface DxtOptions {
type?: "string" | "directory" | "file" | "boolean" | "number"; // UI input type in Claude Desktop
title?: string; // Display name in Claude Desktop (defaults to formatted flag name)
sensitive?: boolean; // Whether to hide the value in UI (defaults to true for security)
default?: any; // Default value for the user_config entry
min?: number; // Minimum value (for number types)
max?: number; // Maximum value (for number types)
}

DXT Package User Configuration & Path Handling
ArgParser v2.5.0 introduces comprehensive Desktop Extension (DXT) support with rich user configuration interfaces, automatic path resolution, and context-aware development tools.
Enhanced dxtOptions
When generating DXT packages with --s-build-dxt, you can create rich user configuration interfaces using dxtOptions:
import { ArgParser, DxtPathResolver } from "@alcyone-labs/arg-parser";
const parser = new ArgParser()
.withMcp({
name: "file-processor",
version: "1.0.0",
logPath: "${HOME}/logs/file-processor.log", // DXT variables supported!
})
.addFlag({
name: "input-file",
description: "File to process",
type: "string",
mandatory: true,
dxtOptions: {
type: "file",
title: "Select Input File",
},
})
.addFlag({
name: "output-dir",
description: "Output directory for processed files",
type: "string",
dxtOptions: {
type: "directory",
localDefault: "${DOCUMENTS}/processed-files", // Smart defaults with DXT variables
title: "Output Directory",
},
})
.addFlag({
name: "api-key",
description: "API authentication key",
type: "string",
env: "API_KEY",
dxtOptions: {
type: "string",
sensitive: true, // Excluded from DXT manifest for security
title: "API Key",
},
})
.addFlag({
name: "quality",
description: "Processing quality (1-100)",
type: "number",
dxtOptions: {
type: "number",
min: 1,
max: 100,
localDefault: 85,
title: "Quality (%)",
},
})
.addFlag({
name: "parallel",
description: "Enable parallel processing",
type: "boolean",
dxtOptions: {
type: "boolean",
localDefault: true,
title: "Parallel Processing",
},
});

DXT Variables & Path Resolution
ArgParser automatically resolves paths based on your runtime environment:
// DXT variables work everywhere - in flags, MCP config, and code
const logPath = "${HOME}/logs/app.log";
const configPath = "${DOCUMENTS}/myapp/config.json";
const resourcePath = "${__dirname}/templates/default.hbs";
// Helper functions for common patterns
const userDataPath = DxtPathResolver.createUserDataPath("cache.db");
const tempPath = DxtPathResolver.createTempPath("processing.tmp");
const settingsPath = DxtPathResolver.createConfigPath("settings.json");
// Context detection
const context = DxtPathResolver.detectContext();
if (context.isDxt) {
console.log("Running in DXT environment");
} else {
console.log("Running in development");
}

Supported DXT Variables:

- ${HOME} - User's home directory
- ${DOCUMENTS} - Documents folder
- ${DOWNLOADS} - Downloads folder
- ${DESKTOP} - Desktop folder
- ${__dirname} - Entry point directory (DXT package root in DXT)
- ${pathSeparator} - Platform-specific path separator
- ${DXT_DIR} - DXT package directory (DXT only)
- ${EXTENSION_DIR} - Extension root directory (DXT only)
dxtOptions Properties
| Property | Type | Description |
| -------------- | ------------------------------------------------------------ | ------------------------------------------------ |
| type | 'string' \| 'file' \| 'directory' \| 'boolean' \| 'number' | UI component type |
| sensitive | boolean | Mark as sensitive (excluded from manifest) |
| localDefault | string \| number \| boolean | Default for development (supports DXT variables) |
| multiple | boolean | Allow multiple values |
| min / max | number | Validation constraints |
| title | string | Custom display name |
Security & Best Practices
- Sensitive Data: Use sensitive: true for passwords, API keys, and tokens
- Smart Defaults: Use DXT variables in localDefault for portable paths
- Type Safety: Match dxtOptions.type with the flag type for validation
- Cross-Platform: Use ${pathSeparator} for platform-independent paths (a combined sketch follows below)
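For instance, a brief sketch that applies these practices together (the tool and flag names are illustrative):

```typescript
import { ArgParser } from "@alcyone-labs/arg-parser";

const cli = ArgParser.withMcp({
  appName: "Report Builder",
  appCommandName: "report-builder",
  mcp: { serverInfo: { name: "report-builder", version: "1.0.0" } },
}).addTool({
  name: "export",
  description: "Export a report to a local directory",
  flags: [
    {
      name: "api-token",
      description: "Token used to authenticate against the reporting API",
      options: ["--api-token"],
      type: "string",
      env: "REPORT_API_TOKEN",
      // Sensitive data: excluded from the DXT manifest
      dxtOptions: { type: "string", sensitive: true, title: "API Token" },
    },
    {
      name: "export-dir",
      description: "Directory where reports are written",
      options: ["--export-dir"],
      type: "string",
      dxtOptions: {
        type: "directory", // matches the flag's path-like string type
        title: "Export Directory",
        // Smart, cross-platform default built from DXT variables
        localDefault: "${DOCUMENTS}${pathSeparator}reports",
      },
    },
  ],
  handler: async (ctx) => ({ success: true, dir: ctx.args["export-dir"] }),
});

export default cli;
```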
Comprehensive Documentation
For detailed guides and examples:
- DXT Path Handling Guide - Complete path resolution guide
- dxtOptions API Documentation - Full API reference with examples
- DXT Migration Guide - Migrate existing applications
- DXT Practical Examples - Real-world usage patterns
Type Handling and Validation
ArgParser provides strong typing for flag definitions with comprehensive validation at both compile-time and runtime. The type property accepts multiple formats and ensures type safety throughout your application.
Supported Type Formats
You can define flag types using either constructor functions or string literals:
const parser = new ArgParser({
/* ... */
}).addFlags([
// Constructor functions (recommended for TypeScript)
{ name: "count", options: ["--count"], type: Number },
{ name: "enabled", options: ["--enabled"], type: Boolean, flagOnly: true },
{ name: "files", options: ["--files"], type: Array, allowMultiple: true },
// String literals (case-insensitive)
{ name: "name", options: ["--name"], type: "string" },
{ name: "port", options: ["--port"], type: "number" },
{ name: "verbose", options: ["-v"], type: "boolean", flagOnly: true },
{ name: "tags", options: ["--tags"], type: "array", allowMultiple: true },
{ name: "config", options: ["--config"], type: "object" },
// Custom parser functions (sync)
{
name: "date",
options: ["--date"],
type: (value: string) => new Date(value),
},
// Async custom parser functions
{
name: "config",
options: ["--config"],
type: async (filePath: string) => {
const content = await fs.readFile(filePath, "utf8");
return JSON.parse(content);
},
},
{
name: "user",
options: ["--user-id"],
type: async (userId: string) => {
const response = await fetch(`/api/users/${userId}`);
return response.json();
},
},
]);

Runtime Type Validation
The type system validates flag definitions at runtime and throws descriptive errors for invalid configurations:
// ✅ Valid - these work
{ name: "count", options: ["--count"], type: Number }
{ name: "count", options: ["--count"], type: "number" }
{ name: "count", options: ["--count"], type: "NUMBER" } // case-insensitive
// ❌ Invalid - these throw ZodError
{ name: "count", options: ["--count"], type: "invalid-type" }
{ name: "count", options: ["--count"], type: 42 } // primitive instead of constructor
{ name: "count", options: ["--count"], type: null }Automatic Type Processing
- String literals are automatically converted to constructor functions internally
- Constructor functions are preserved as-is
- Custom parser functions (sync and async) allow complex transformations
- undefined falls back to the default "string" type
Async Custom Parser Support
Custom parser functions can be asynchronous, enabling powerful use cases like file I/O, API calls, and database lookups:
const parser = new ArgParser({
/* ... */
}).addFlags([
{
name: "config",
options: ["--config"],
type: async (filePath: string) => {
const content = await fs.readFile(filePath, "utf8");
return JSON.parse(content);
},
},
{
name: "user",
options: ["--user-id"],
type: async (userId: string) => {
const response = await fetch(`/api/users/${userId}`);
if (!response.ok) throw new Error(`User not found: ${userId}`);
return response.json();
},
},
]);
// Usage: --config ./settings.json --user-id 123
const result = await parser.parse(process.argv.slice(2));
// result.config contains parsed JSON from file
// result.user contains user data from the API

Key Features:
- ✅ Backward compatible - existing sync parsers continue to work
- ✅ Automatic detection - no configuration needed, just return a Promise
- ✅ Error handling - async errors are properly propagated
- ✅ Performance - parsers run concurrently when possible
Type Conversion Examples
// String flags
--name value → "value"
--name="quoted value" → "quoted value"
// Number flags
--count 42 → 42
--port=8080 → 8080
// Boolean flags (flagOnly: true)
--verbose → true
(no flag) → false
// Array flags (allowMultiple: true)
--tags tag1,tag2,tag3 → ["tag1", "tag2", "tag3"]
--file file1.txt --file file2.txt → ["file1.txt", "file2.txt"]
// Custom parser functions (sync)
--date "2023-01-01" → Date object
--json '{"key":"val"}' → parsed JSON object
// Async custom parser functions
--config "./settings.json" → parsed JSON from file (async)
--user-id "123" → user data from API (async)
// Zod schema validation (structured JSON)
--config '{"host":"localhost","port":5432}' → validated object
--deployment '{"env":"prod","region":"us-east-1"}' → validated objectZod Schema Flags (Structured JSON Validation)
Since v2.5.0 - You can now use Zod schemas as flag types for structured JSON input with automatic validation and proper MCP JSON Schema generation:
import { z } from "zod";
const DatabaseConfigSchema = z.object({
host: z.string().describe("Database host address"),
port: z.number().min(1).max(65535).describe("Database port number"),
credentials: z.object({
username: z.string().describe("Database username"),
password: z.string().describe("Database password"),
}),
ssl: z.boolean().optional().describe("Enable SSL connection"),
});
const cli = ArgParser.withMcp({
appName: "Database CLI",
appCommandName: "db-cli",
}).addTool({
name: "connect",
description: "Connect to database with structured configuration",
flags: [
{
name: "config",
options: ["--config", "-c"],
type: DatabaseConfigSchema, // 🎉 Zod schema as type!
description: "Database configuration as JSON object",
mandatory: true,
},
],
handler: async (ctx) => {
// ctx.args.config is fully typed and validated!
const { host, port, credentials, ssl } = ctx.args.config;
console.log(`Connecting to ${host}:${port} as ${credentials.username}`);
return { success: true };
},
});
// CLI usage with JSON validation:
// db-cli connect --config '{"host":"localhost","port":5432,"credentials":{"username":"admin","password":"secret"},"ssl":true}'
// MCP usage: Generates proper JSON Schema for MCP clients
// db-cli --s-mcp-serve

Example with Complex Nested Schema:
const DeploymentSchema = z.object({
environment: z.enum(["dev", "staging", "prod"]),
region: z.string(),
scaling: z.object({
minInstances: z.number().min(1),
maxInstances: z.number().min(1),
targetCpu: z.number().min(10).max(100),
}),
monitoring: z.object({
enabled: z.boolean(),
alertEmail: z.string().email().optional(),
metrics: z.array(z.string()),
}),
});
// This generates comprehensive JSON Schema for MCP clients
// while providing full validation and type safety for CLI usage

Hierarchical CLIs (Sub-Commands)
While addTool() is the recommended way to create subcommands that are also MCP-compatible, you can use .addSubCommand() for traditional CLI hierarchies.
Note: By default, subcommands created with .addSubCommand() are exposed to MCP as tools. If you want CLI-only subcommands, set includeSubCommands: false when generating MCP tools (see below).
// Create a parser for a nested command
const logsParser = new ArgParser().addFlags([
{ name: "follow", options: ["-f"], type: "boolean", flagOnly: true },
]);
// This creates a command group: `my-cli monitor`
const monitorParser = new ArgParser().addSubCommand({
name: "logs",
description: "Show application logs",
parser: logsParser,
handler: ({ args }) => console.log(`Following logs: ${args.follow}`),
});
// Attach the command group to the main CLI
const cli = new ArgParser().addSubCommand({
name: "monitor",
description: "Monitoring commands",
parser: monitorParser,
});
// Usage: my-cli monitor logs -f

MCP Exposure Control
// By default, subcommands are exposed to MCP
const mcpTools = parser.toMcpTools(); // Includes all subcommands
// To exclude subcommands from MCP (CLI-only)
const mcpToolsOnly = parser.toMcpTools({ includeSubCommands: false });
// Name conflicts: You cannot have both addSubCommand("name") and addTool({ name: "name" })
// This will throw an error:
parser.addSubCommand({ name: "process", parser: subParser });
parser.addTool({ name: "process", handler: async () => {} }); // ❌ Error: Sub-command 'process' already existsFlag Inheritance (inheritParentFlags)
To share common flags (like --verbose or --config) across sub-commands, set inheritParentFlags: true in the sub-command's parser.
const parentParser = new ArgParser().addFlags([
{ name: "verbose", options: ["-v"], type: "boolean" },
]);
// This child parser will automatically have the --verbose flag
const childParser = new ArgParser({ inheritParentFlags: true }).addFlags([
{ name: "target", options: ["-t"], type: "string" },
]);
parentParser.addSubCommand({ name: "deploy", parser: childParser });

Dynamic Flags (dynamicRegister)
Register flags at runtime from another flag's value (e.g., load a manifest and add flags programmatically). This works in normal runs and when showing --help.
- Two-phase parsing: loader flags run first, can register more flags, then parsing continues with the full set
- Help preload: when --help is present, dynamic loaders run to show complete help (no command handlers execute)
- Cleanup: dynamic flags are removed between parses (no accumulation)
- Async-friendly: loaders can be async (e.g., fs.readFile)
import { readFile } from "node:fs/promises";
import { ArgParser } from "@alcyone-labs/arg-parser";
const cli = new ArgParser().addFlags([
{
name: "manifest",
options: ["-w", "--manifest"],
type: "string",
description: "Path to manifest.json that defines extra flags",
dynamicRegister: async ({ value, registerFlags }) => {
const json = JSON.parse(await readFile(value, "utf8"));
if (Array.isArray(json.flags)) {
// Each entry should be a valid IFlag
registerFlags(json.flags);
}
},
},
]);
// Examples:
// my-cli -w manifest.json --help → help includes dynamic flags
// my-cli -w manifest.json --foo bar   → dynamic flag "--foo" parsed/validated normally

Notes:

- Inherited behavior works normally: if the loader lives on a parent parser and children use inheritParentFlags, dynamic flags will be visible to children
- For heavy loaders, implement app-level caching inside your dynamicRegister (e.g., memoize by absolute path + mtime); library-level caching may be added later (see the sketch below)
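A minimal sketch of such app-level caching, memoizing the parsed manifest by absolute path and mtime (the cache itself is your own code, not a library feature):

```typescript
import { readFile, stat } from "node:fs/promises";
import { resolve } from "node:path";
import { ArgParser } from "@alcyone-labs/arg-parser";

// App-level memoization keyed by absolute path, invalidated when mtime changes
const manifestCache = new Map<string, { mtimeMs: number; flags: any[] }>();

const cli = new ArgParser().addFlags([
  {
    name: "manifest",
    options: ["-w", "--manifest"],
    type: "string",
    description: "Path to manifest.json that defines extra flags",
    dynamicRegister: async ({ value, registerFlags }) => {
      const absPath = resolve(value);
      const { mtimeMs } = await stat(absPath);
      const cached = manifestCache.get(absPath);
      if (cached && cached.mtimeMs === mtimeMs) {
        registerFlags(cached.flags); // unchanged file: reuse previously parsed flags
        return;
      }
      const json = JSON.parse(await readFile(absPath, "utf8"));
      // Each entry should be a valid IFlag
      const flags = Array.isArray(json.flags) ? json.flags : [];
      manifestCache.set(absPath, { mtimeMs, flags });
      registerFlags(flags);
    },
  },
]);
```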
---
## MCP & Claude Desktop Integration
### Output Schema Support
Output schemas are **completely optional** and **only affect MCP mode** (Claude Desktop, MCP clients). They have **zero impact** on CLI usage - your CLI will work exactly the same with or without them.
**When do I need output schemas?**
- ❌ **CLI-only usage**: Never needed - skip this section entirely
- ✅ **MCP integration**: Optional but recommended for better structured responses
- ✅ **Claude Desktop**: Helpful for Claude to understand your tool's output format
**Key Points:**
- ✅ **CLI works perfectly without them**: Your command-line interface is unaffected
- ✅ **MCP-only feature**: Only used when running with `--s-mcp-serve`
- ✅ **Version-aware**: Automatically included only for compatible MCP clients (v2025-06-18+)
- ✅ **Flexible**: Use predefined patterns or custom Zod schemas
#### Basic Usage
```typescript
import { z } from "zod";
.addTool({
name: "process-file",
description: "Process a file",
flags: [
{ name: "path", options: ["--path"], type: "string", mandatory: true }
],
// Optional: Only needed if you want structured MCP responses
// CLI mode works exactly the same whether this is present or not
outputSchema: {
success: z.boolean().describe("Whether processing succeeded"),
filePath: z.string().describe("Path to the processed file"),
size: z.number().describe("File size in bytes"),
lastModified: z.string().describe("Last modification timestamp")
},
handler: async (ctx) => {
// Your logic here - same code for both CLI and MCP
// The outputSchema doesn't change how this function works
return {
success: true,
filePath: ctx.args.path,
size: 1024,
lastModified: new Date().toISOString()
};
}
})
// CLI usage (outputSchema ignored): mycli process-file --path /my/file.txt
// MCP usage (outputSchema provides structure): mycli --s-mcp-serve
```

Predefined Schema Patterns
For common use cases, use predefined patterns:
// For simple success/error responses
outputSchema: "successError";
// For operations that return data
outputSchema: "successWithData";
// For file operations
outputSchema: "fileOperation";
// For process execution
outputSchema: "processExecution";
// For list operations
outputSchema: "list";Custom Zod Schemas
For complex data structures:
outputSchema: z.object({
analysis: z.object({
summary: z.string(),
wordCount: z.number(),
sentiment: z.enum(["positive", "negative", "neutral"]),
}),
metadata: z.object({
timestamp: z.string(),
processingTime: z.number(),
}),
});

MCP Version Compatibility
Output schemas are automatically handled based on MCP client version:
- MCP v2025-06-18+: Full output schema support with structuredContent
- Earlier versions: Schemas ignored, standard JSON text response only
To explicitly set the MCP version for testing:
const cli = ArgParser.withMcp({
// ... your config
}).setMcpProtocolVersion("2025-06-18"); // Enable output schema support

Important:
- CLI users: You can ignore MCP versions entirely - they don't affect command-line usage
- MCP users: ArgParser handles version detection automatically based on client capabilities
Automatic Error Handling
ArgParser automatically handles errors differently based on execution context, so your handlers can simply throw errors without worrying about CLI vs MCP mode:
const cli = ArgParser.withMcp({
// ... config
}).addTool({
name: "process-data",
handler: async (ctx) => {
// Simply throw errors - ArgParser handles the rest automatically
if (!ctx.args.apiKey) {
throw new Error("API key is required");
}
// Do your work and return results
return { success: true, data: processedData };
},
});

How it works:
- CLI mode: Thrown errors cause the process to exit with error code 1
- MCP mode: Thrown errors are automatically converted to structured MCP error responses
- No manual checks needed: Handlers don't need to check ctx.isMcp or handle different response formats (a tiny sketch follows)
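A tiny sketch (the tool, flag, and error message are illustrative) of the same thrown error behaving correctly in both modes:

```typescript
import { ArgParser } from "@alcyone-labs/arg-parser";

const cli = ArgParser.withMcp({
  appName: "Error Demo",
  appCommandName: "error-demo",
  mcp: { serverInfo: { name: "error-demo", version: "1.0.0" } },
}).addTool({
  name: "require-key",
  description: "Fails unless an API key is provided",
  flags: [{ name: "apiKey", options: ["--api-key"], type: "string", env: "API_KEY" }],
  handler: async (ctx) => {
    if (!ctx.args.apiKey) {
      // CLI mode: reported as an error and the process exits with code 1
      // MCP mode: converted into a structured MCP error response
      throw new Error("API key is required");
    }
    return { success: true };
  },
});

await cli.parse(process.argv.slice(2));
```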
Writing Effective MCP Tool Descriptions
Why descriptions matter: When your tools are exposed to Claude Desktop or other MCP clients, the description field is the primary way LLMs understand what your tool does and when to use it. A well-written description significantly improves tool selection accuracy and user experience.
Best Practices for Tool Descriptions
1. Start with the action - Begin with a clear verb describing what the tool does:
// ✅ Good: Action-first, specific
description: "Analyzes text files and returns detailed statistics including word count, character count, and sentiment analysis";
// ❌ Avoid: Vague or noun-heavy
description: "File analysis tool";2. Include context and use cases - Explain when and why to use the tool:
// ✅ Good: Provides context
description: "Converts image files between formats (PNG, JPEG, WebP). Use this when you need to change image format, resize images, or optimize file sizes. Supports batch processing of multiple files.";
// ❌ Avoid: No context
description: "Converts images";3. Mention key parameters and constraints - Reference important inputs and limitations:
// ✅ Good: Mentions key parameters and constraints
description: "Searches through project files using regex patterns. Specify the search pattern and optionally filter by file type. Supports JavaScript, TypeScript, Python, and text files up to 10MB.";
// ❌ Avoid: No parameter guidance
description: "Searches files";4. Be specific about outputs - Describe what the tool returns:
// ✅ Good: Clear output description
description: "Analyzes code complexity and returns metrics including cyclomatic complexity, lines of code, and maintainability index. Results include detailed breakdown by function and overall file scores.";
// ❌ Avoid: Unclear output
description: "Analyzes code";Complete Example: Well-Documented Tool
.addTool({
name: "analyze-repository",
description: "Analyzes a Git repository and generates comprehensive statistics including commit history, contributor activity, code quality metrics, and dependency analysis. Use this to understand project health, identify bottlenecks, or prepare reports. Supports Git repositories up to 1GB with history up to 5 years.",
flags: [
{
name: "path",
description: "Path to the Git repository root directory",
options: ["--path", "-p"],
type: "string",
mandatory: true,
},
{
name: "include-dependencies",
description: "Include analysis of package.json dependencies and security vulnerabilities",
options: ["--include-dependencies", "-d"],
type: "boolean",
flagOnly: true,
},
{
name: "output-format",
description: "Output format for the analysis report",
options: ["--output-format", "-f"],
type: "string",
choices: ["json", "markdown", "html"],
defaultValue: "json",
}
],
handler: async (ctx) => {
// Implementation here
}
})

Parameter Description Guidelines
Each flag should have a clear, concise description:
// ✅ Good parameter descriptions
{
name: "timeout",
description: "Maximum execution time in seconds (default: 30, max: 300)",
options: ["--timeout", "-t"],
type: "number",
}
{
name: "verbose",
description: "Enable detailed logging output including debug information",
options: ["--verbose", "-v"],
type: "boolean",
flagOnly: true,
}
{
name: "format",
description: "Output format for results (json: structured data, csv: spreadsheet-friendly, pretty: human-readable)",
options: ["--format"],
type: "string",
choices: ["json", "csv", "pretty"],
}

Common Pitfalls to Avoid
- Don't be overly technical: Avoid jargon that doesn't help with tool selection
- Don't repeat the tool name: The name is already visible, focus on functionality
- Don't use generic terms: "Process data" or "handle files" are too vague
- Don't forget constraints: Mention important limitations or requirements
- Don't ignore parameter descriptions: Each flag should have a helpful description
Remember: A good description helps the LLM choose the right tool for the task and use it correctly. Invest time in writing clear, comprehensive descriptions - it directly impacts the user experience in Claude Desktop and other MCP clients.
Automatic MCP Server Mode (--s-mcp-serve)
You don't need to write any server logic. Run your application with the --s-mcp-serve flag, and ArgParser will automatically start a compliant MCP server, exposing all tools defined with .addTool() and subcommands created with .addSubCommand() (unless includeSubCommands: false is set).
# This single command starts a fully compliant MCP server
my-cli-app --s-mcp-serve
# You can also override transports and ports using system flags
my-cli-app --s-mcp-serve --s-mcp-transport sse --s-mcp-port 3001
# Configure custom log file path for MCP server logs
my-cli-app --s-mcp-serve --s-mcp-log-path ./custom-logs/mcp-server.log
# Or configure logging programmatically in withMcp()
const cli = ArgParser.withMcp({
appName: 'My CLI App',
appCommandName: 'my-cli-app',
mcp: {
serverInfo: { name: 'my-server', version: '1.0.0' },
// NEW: Improved logging with level control
log: {
level: 'info', // Captures info, warn, error
logToFile: './my-logs/mcp-server.log',
prefix: 'MyApp'
}
// LEGACY: logPath: './my-logs/mcp-server.log' // Still works
}
});

MCP Transports
You can define the transports directly in the .withMcp() settings, or override them via the --s-mcp-transport(s) flags.
# Single transport
my-tool --s-mcp-serve --s-mcp-transport stdio
# Multiple transports via JSON
my-tool --s-mcp-serve --s-mcp-transports '[{"type":"stdio"},{"type":"sse","port":3001}]'
# Single transport with custom options
my-tool --s-mcp-serve --s-mcp-transport sse --s-mcp-port 3000 --s-mcp-host 0.0.0.0
# Streamable HTTP CORS/auth via CLI flags (JSON strings)
my-tool --s-mcp-serve \
--s-mcp-transport streamable-http \
--s-mcp-port 3002 --s-mcp-path /api/mcp \
--s-mcp-cors '{"origins":["http://localhost:5173"],"credentials":true,"methods":["GET","POST","OPTIONS"],"maxAge":600}' \
--s-mcp-auth '{"required":true,"scheme":"jwt","jwt":{"algorithms":["HS256"],"secret":"$MY_JWT_SECRET"},"publicPaths":["/health"]}'
# Custom log path via CLI flag (logs to specified file instead of ./logs/mcp.log)
my-tool --s-mcp-serve --s-mcp-log-path /var/log/my-mcp-server.log
# Improved logging via programmatic configuration
const parser = ArgParser.withMcp({
  mcp: {
    serverInfo: { name: 'my-tool', version: '1.0.0' },
    log: {
      level: 'debug', // Capture all log levels
      logToFile: '/var/log/my-mcp-server.log',
      prefix: 'MyTool'
    }
    // LEGACY: logPath: '/var/log/my-mcp-server.log' // Still works
  }
});

CORS and Authentication for streamable-http
CORS is often required when connecting a Web UI to an MCP server over HTTP.
- Programmatic transport config:
import type { McpTransportConfig } from "@alcyone-labs/arg-parser";
const defaultTransports: McpTransportConfig[] = [
{
type: "streamable-http",
port: 3002,
path: "/api/mcp",
cors: {
origins: ["http://localhost:5173", /^https?:\/\/example\.com$/],
methods: ["GET", "POST", "OPTIONS"],
headers: ["Content-Type", "Authorization", "MCP-Session-Id"],
exposedHeaders: ["MCP-Session-Id"],
credentials: true,
maxAge: 600,
},
auth: {
required: true,
scheme: "jwt", // or "bearer"
// Bearer allowlist:
// allowedTokens: ["token1","token2"],
// JWT verification (HS256):
// jwt: { algorithms: ["HS256"], secret: process.env.JWT_SECRET },
// JWT verification (RS256 with static public key):
// jwt: { algorithms: ["RS256"], publicKey: process.env.RS256_PUBLIC_KEY },
// JWT verification (RS256 with dynamic JWKS):
// jwt: { algorithms: ["RS256"], getPublicKey: async (header)=>{ /* fetch JWKS and return PEM */ } },
publicPaths: ["/health"],
protectedPaths: undefined, // if set, only listed paths require auth
// Optional custom validator to add extra checks
validator: async (req, token) => true,
},
},
];

- CLI flags (JSON strings):
my-tool --s-mcp-serve \
--s-mcp-transport streamable-http \
--s-mcp-port 3002 --s-mcp-path /api/mcp \
--s-mcp-cors '{"origins":["http://localhost:5173"],"credentials":true,"methods":["GET","POST","OPTIONS"],"maxAge":600}' \
--s-mcp-auth '{"required":true,"scheme":"jwt","jwt":{"algorithms":["HS256"],"secret":"'$JWT_SECRET'"},"publicPaths":["/health"]}'

- Express hook for custom routes:
httpServer: {
configureExpress: (app) => {
app.get("/health", (_req, res) => res.json({ ok: true }));
},
}

See examples:
- examples/streamable-http/secure-mcp.ts (HS256)
- examples/streamable-http/rs256-mcp.ts (RS256)
- examples/streamable-http/jwks-mcp.ts (JWKS)
- examples/streamable-http/bearer-mcp.ts (Bearer)
- examples/streamable-http/productized-mcp.ts (token + session usage)
TypeScript types
- CorsOptions
export type CorsOptions = {
origins?: "*" | string | RegExp | Array<string | RegExp>;
methods?: string[];
headers?: string[];
exposedHeaders?: string[];
credentials?: boolean;
maxAge?: number;
};

- AuthOptions and JwtVerifyOptions
export type JwtVerifyOptions = {
algorithms?: ("HS256" | "RS256")[];
secret?: string; // HS256
publicKey?: string; // RS256 static
getPublicKey?: (
header: Record<string, unknown>,
payload: Record<string, unknown>,
) => Promise<string> | string; // RS256 dynamic
audience?: string | string[];
issuer?: string | string[];
clockToleranceSec?: number;
};
export type AuthOptions = {
required?: boolean; // default true for MCP endpoint
scheme?: "bearer" | "jwt";
allowedTokens?: string[]; // simple bearer allowlist
validator?: (
req: any,
token: string | undefined,
) => boolean | Promise<boolean>;
jwt?: JwtVerifyOptions;
publicPaths?: string[]; // paths that skip auth
protectedPaths?: string[]; // if provided, only these paths require auth
customMiddleware?: (req: any, res: any, next: any) => any; // full control hook
};

- HttpServerOptions
export type HttpServerOptions = {
configureExpress?: (app: any) => void;
};

Notes:
When credentials are true, Access-Control-Allow-Origin echoes the request Origin rather than using "*".
You can manage CORS for non-MCP routes in configureExpress (see the sketch below).
Use publicPaths to allow some routes without auth; use protectedPaths to only require auth for certain routes.
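For example, a rough sketch of applying CORS headers to your own non-MCP route from inside the hook (the route and header values are illustrative; this is plain Express middleware, not a library API):

```typescript
httpServer: {
  configureExpress: (app) => {
    // Custom CORS handling for a non-MCP route
    app.use("/status", (_req, res, next) => {
      res.setHeader("Access-Control-Allow-Origin", "http://localhost:5173");
      res.setHeader("Access-Control-Allow-Methods", "GET, OPTIONS");
      next();
    });
    app.get("/status", (_req, res) => res.json({ uptime: process.uptime() }));
  },
},
```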
Adding custom HTTP routes (e.g., /health)
Use the httpServer.configureExpress(app) hook to register routes before MCP endpoints are bound. Example:
const cli = ArgParser.withMcp({
mcp: {
serverInfo: { name: "my-mcp", version: "1.0.0" },
defaultTransports: [
{
type: "streamable-http",
port: 3002,
path: "/api/mcp",
auth: { required: true, publicPaths: ["/health"] },
},
],
httpServer: {
configureExpress: (app) =>
app.get("/health", (_req, res) => res.json({ ok: true })),
},
},
});

- To make a route public (no auth), add it to auth.publicPaths.
- CORS headers for non-MCP paths can be applied by your own middleware inside the hook if desired.
Multiple transports and improved logging
const cli = ArgParser.withMcp({
  appName: 'multi-tool',
  appCommandName: 'multi-tool',
  mcp: {
    // NEW: improved logging configuration
    log: {
      level: 'info',
      logToFile: './logs/multi-tool-mcp.log',
      prefix: 'MultiTool'
    },
    serverInfo: { name: 'multi-tool-mcp', version: '1.0.0' },
    transports: [
      // Can be a single string...
      "stdio",
      // or one of the other transport types supported by @modelcontextprotocol/sdk
      { type: "sse", port: 3000, host: "0.0.0.0" },
      { type: "websocket", port: 3001, path: "/ws" }
    ]
  }
});
### MCP Logging Configuration
MCP server logging is handled by `@alcyone-labs/simple-mcp-logger` and configured via `McpLoggerOptions`. You can control log levels, output destinations, and more.
#### Enhanced Logging (Recommended)
Use the new `log` property for comprehensive logging control:
```typescript
const parser = ArgParser.withMcp({
appName: "My MCP Server",
appCommandName: "my-mcp-server",
mcp: {
serverInfo: { name: "my-server", version: "1.0.0" },
log: {
level: "debug", // Captures debug, info, warn, error
logToFile: "./logs/comprehensive.log",
prefix: "MyServer",
mcpMode: true, // MCP compliant (default)
},
},
});
```

Available log levels: "debug" | "info" | "warn" | "error" | "silent"
Type Safety: The McpLoggerOptions type is provided for full TypeScript support and matches the interface from @alcyone-labs/simple-mcp-logger.
Simple Logging Configuration
For basic use cases, you can use a simple string path:
const parser = ArgParser.withMcp({
mcp: {
serverInfo: { name: "my-server", version: "1.0.0" },
log: "./logs/simple.log", // Simple string path
},
});

Configuration Priority
Logging configuration follows this priority order:
- CLI Flag (Highest Priority): --s-mcp-log-path <path>
- Merging: When both mcp.log and mcp.logPath are present:
  - mcp.log provides logger options (level, prefix, mcpMode)
  - mcp.logPath provides flexible path resolution (relativeTo, basePath)
  - Path resolution: mcp.logPath > mcp.log.logToFile
- Log Config Only: mcp.log object or string in withMcp()
- Legacy Log Path Only: mcp.logPath in withMcp()
- Default Path (Fallback): ./logs/mcp.log
Configuration Merging
When both log and logPath are specified:
const parser = ArgParser.withMcp({
mcp: {
serverInfo: { name: "my-server", version: "1.0.0" },
// log provides logger options (level, prefix, mcpMode)
log: {
level: "debug",
prefix: "MyServer",
mcpMode: true,
// logToFile can be omitted when using logPath
},
// logPath provides flexible path resolution
logPath: {
path: "./logs/app.log",
relativeTo: "entry", // "entry" | "cwd" | "absolute"
basePath: "/custom/base", // Optional custom base path
},
},
});

Merging behavior:

- log provides logger configuration (level, prefix, mcpMode)
- logPath provides flexible path resolution with relativeTo options
- If both specify a file path, logPath takes precedence for path resolution
- This preserves the powerful LogPath features while using McpLoggerOptions for logger settings
Path Resolution Options
Log paths are resolved with smart defaults for better DXT package compatibility:
// Simple string paths (recommended)
const parser = ArgParser.withMcp({
appName: "My CLI",
appCommandName: "my-cli",
mcp: {
serverInfo: { name: "my-server", version: "1.0.0" },
logPath: "./logs/app.log", // Relative to entry point (default)
// logPath: "/tmp/app.log", // Absolute paths work too
// logPath: "cwd:./logs/app.log", // Explicit process.cwd() relative
},
});
// Object configuration for more granular use cases
const parser = ArgParser.withMcp({
// ... other config
mcp: {
// ... server info
logPath: {
path: "./logs/app.log",
relativeTo: "entry", // "entry" | "cwd" | "absolute"
basePath: "/custom/base", // Optional custom base path
},
},
});
// CLI flag overrides programmatic setting
// my-cli --s-mcp-serve --s-mcp-log-path ./override.log

The CLI flag always takes precedence, allowing users to override the developer's programmatic configuration when needed. By default, relative paths resolve relative to the application's entry point, making logs predictably located near DXT packages.
MCP Lifecycle Events
ArgParser MCP servers support lifecycle events that allow you to perform initialization, cleanup, and other operations at specific points in the MCP protocol flow. These events are particularly useful for database connections, resource setup, and graceful shutdown procedures.
const cli = ArgParser.withMcp({
appName: "Database CLI",
appCommandName: "db-cli",
mcp: {
serverInfo: { name: "database-server", version: "1.0.0" },
lifecycle: {
onInitialize: async (ctx) => {
// Called when client sends "initialize" request
// Perfect for database connections, resource setup
ctx.logger.mcpError("Initializing server...");
const dbUrl = ctx.getFlag("database_url");
if (dbUrl) {
await connectToDatabase(dbUrl);
ctx.logger.mcpError("Database connected successfully");
}
},
onInitialized: async (ctx) => {
// Called when client sends "initialized" notification
// Server is ready for normal operations
ctx.logger.mcpError("Server ready for requests");
await startBackgroundTasks();
},
onShutdown: async (ctx) => {
// Called during server shutdown
// Perfect for cleanup, closing connections
ctx.logger.mcpError(`Shutting down: ${ctx.reason}`);
await cleanupResources();
await closeDatabase();
},
},
},
});

Lifecycle Events:

- onInitialize: Called when a client sends an "initialize" request. Ideal for database connections, resource initialization, configuration validation, and authentication setup.
- onInitialized: Called when a client sends an "initialized" notification, indicating the client is ready for normal operations. Perfect for final setup steps and background task initialization.
- onShutdown: Called when the MCP server is shutting down. Essential for cleanup, resource disposal, and graceful shutdown procedures.
Context Properties:
Each lifecycle event receives a context object with:
- getFlag(name): Access parsed CLI flags and environment variables
- logger: MCP-compliant logger instance for the current context
- serverInfo: Server information (name, version, description)
- clientInfo: Client information (available in onInitialize and onInitialized)
- protocolVersion: MCP protocol version being used
- reason: Shutdown reason (only in onShutdown: "client_disconnect", "server_shutdown", "error", "signal")

A short usage sketch follows.
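A sketch reading several of these context properties during initialization (the flag name and the assumption that clientInfo exposes a name field are illustrative):

```typescript
import { ArgParser } from "@alcyone-labs/arg-parser";

const cli = ArgParser.withMcp({
  appName: "Context Demo",
  appCommandName: "ctx-demo",
  mcp: {
    serverInfo: { name: "ctx-demo-server", version: "1.0.0" },
    lifecycle: {
      onInitialize: async (ctx) => {
        // Server, client, and protocol metadata are available on the context
        ctx.logger.mcpError(
          `Init: server=${ctx.serverInfo.name} client=${ctx.clientInfo?.name} protocol=${ctx.protocolVersion}`,
        );
        // Parsed flags / environment variables via getFlag()
        if (!ctx.getFlag("api_key")) {
          throw new Error("api_key must be configured before the server can start");
        }
      },
      onShutdown: async (ctx) => {
        ctx.logger.mcpError(`Stopping: ${ctx.reason}`);
      },
    },
  },
});
```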
MCP Resources - Real-Time Data Feeds
MCP Resources enable your CLI tools to provide real-time, subscription-based data feeds to AI assistants. Unlike tools (which are called once), resources can be subscribed to and provide live updates when data changes.
Key Benefits:
- Real-time notifications: AI assistants get notified when your data changes
- Flexible URI templates: Support dynamic parameters like data://alerts/aged/gte:{threshold}
- Standard MCP pattern: Full subscription lifecycle support
- Zero CLI impact: Resources only work in MCP mode, CLI usage unchanged
Basic Resource Setup
const parser = ArgParser.withMcp({
appName: "Data Monitor",
appCommandName: "data-monitor",
mcp: {
serverInfo: { name: "data-monitor", version: "1.0.0" },
},
}).addMcpResource({
name: "recent-data",
uriTemplate: "data://recent",
title: "Recent Data",
description: "Get the most recent data entries",
mimeType: "application/json",
handler: async (uri) => {
const recentData = await getRecentData();
return {
contents: [
{
uri: uri.href,
text: JSON.stringify(recentData, null, 2),
mimeType: "application/json",
},
],
};
},
});

URI Templates with Dynamic Parameters
Create flexible resources that accept parameters:
.addMcpResource({
name: "aged-data-alert",
uriTemplate: "data://alerts/aged/gte:{threshold}",
title: "Aged Data Alert",
description: "Monitor data that has aged past a threshold (in milliseconds)",
handler: async (uri, { threshold }) => {
const thresholdMs = parseInt(threshold);
const agedData = await getDataOlderThan(new Date(Date.now() - thresholdMs));
return {
contents: [{
uri: uri.href,
text: JSON.stringify({
threshold_ms: thresholdMs,
query_time: new Date().toISOString(),
aged_data: agedData,
count: agedData.length
}, null, 2),
mimeType: "application/json"
}]
};
}
});

MCP Subscription Lifecycle
Resources support the full MCP subscription pattern:
- Client subscribes: resources/subscribe → "data://alerts/aged/gte:10000"
- Server monitors: Your application detects data changes
- Server notifies: notifications/resources/updated sent to subscribed clients
- Client reads fresh data: resources/read → "data://alerts/aged/gte:10000"
- Client unsubscribes: resources/unsubscribe when done
Usage Examples
AI Assistant Integration:
// AI assistant can subscribe to real-time data
await client.request("resources/subscribe", {
uri: "data://alerts/aged/gte:60000", // 1 minute threshold
});
// Handle notifications
client.on("notifications/resources/updated", async (notification) => {
const response = await client.request("resources/read", {
uri: notification.uri,
});
console.log("Fresh data:", JSON.parse(response.contents[0].text));
});

Command Line Testing:
# Start MCP server
data-monitor --s-mcp-serve
# Test resource (in another terminal)
echo '{"jsonrpc":"2.0","id":1,"method":"resources/read","params":{"uri":"data://alerts/aged/gte:10000"}}' | data-monitor --s-mcp-serveDesign Patterns
Static Resources: Use simple URIs for data that changes content but not structure
uriTemplate: "logs://recent"; // Always available, content updates
uriTemplate: "status://system"; // System status, updates in real-timeParameterized Resources: Use URI templates for flexible filtering
uriTemplate: "data://type/{type}"; // Filter by type
uriTemplate: "alerts/{severity}/gte:{age}"; // Multiple parameters
uriTemplate: "search/{query}/limit:{count}"; // Search with limitsTime-Based Resources: Perfect for monitoring and alerting
uriTemplate: "events/since:{timestamp}"; // Events since timestamp
uriTemplate: "metrics/aged/gte:{threshold}"; // Metrics past threshold
uriTemplate: "logs/errors/last:{duration}"; // Recent errors💡 Pro Tip: Resources are perfect for monitoring, alerting, and real-time data feeds. They complement tools (one-time actions) by providing continuous data streams that AI assistants can subscribe to.
Automatic Console Safety
A major challenge in MCP is preventing console.log from corrupting the JSON-RPC communication over STDOUT. ArgParser solves this automatically.
- How it works: When --s-mcp-serve is active, ArgParser hijacks the global console object.
- What it does: It redirects console.log, .info, .warn, and .debug to STDERR with a prefix, making them visible for debugging without interfering with the protocol. console.error is preserved on STDERR as expected.
- Your benefit: You can write console.log statements freely in your handlers. They will work as expected in CLI mode and be safely handled in MCP mode with zero code changes. A small sketch follows.
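A small sketch of what this looks like in practice (the tool is illustrative; the exact STDERR prefix is an implementation detail):

```typescript
import { ArgParser } from "@alcyone-labs/arg-parser";

const cli = ArgParser.withMcp({
  appName: "Safe Logger",
  appCommandName: "safe-logger",
  mcp: { serverInfo: { name: "safe-logger", version: "1.0.0" } },
}).addTool({
  name: "ping",
  description: "Log a message and return a heartbeat",
  flags: [{ name: "verbose", options: ["--verbose"], type: "boolean", flagOnly: true }],
  handler: async (ctx) => {
    // CLI mode: printed to STDOUT as usual
    // MCP mode (--s-mcp-serve): redirected to STDERR so STDOUT stays pure JSON-RPC
    console.log("checking heartbeat...", ctx.args.verbose ? "(verbose)" : "");
    return { alive: true };
  },
});

await cli.parse(process.argv.slice(2));
```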
Generating DXT Packages (--s-build-dxt)
A Desktop Extension (.dxt) is a standardized package for installing your tools into Claude Desktop. ArgParser automates this process.
# 1. Generate the DXT package contents into a directory
my-cli-app --s-build-dxt ./my-dxt-package
# The output folder contains everything needed: manifest.json, entry point, etc.
# A default logo will be applied if you don't provide one.
# 2. (Optional) Pack the folder into a .dxt file for distribution
# (you can also install the unpacked folder directly in Claude Desktop > Settings > Extensions > Advanced)
npx @anthropic-ai/dxt pack ./my-dxt-package
# 3. (Optional) Sign the DXT package - this has not been well tested yet
npx @anthropic-ai/dxt sign ./my-dxt-package.dxt
# Then drag & drop the .dxt file into Claude Desktop to install it, in the Settings > Extensions screen.
# **IMPORTANT**:
# If you use ML models or packages that include native binaries (e.g., sqlite3, sharp),
# you need to bundle the node_modules folder into your DXT package.
# To do this, use the --s-with-node-modules flag:
# First, hard-install the production dependencies
rm -rf node_modules
pnpm install --prod --node-linker=hoisted
# Then bundle with node_modules
mycli --s-build-dxt ./my-dxt-package --s-with-node-modules
# Then pack the DXT bundle
npx @anthropic-ai/dxt pack ./my-dxt-package
# Then upload the .dxt bundle to Claude Desktop from the Settings > Extensions > Advanced screen

Logo Configuration
The logo will appear in Claude Desktop's Extensions settings and when users interact with your MCP tools. Note that neither ArgParser nor the Anthropic packer will modify the logo, so use a reasonable size, such as 256x256 pixels (512x512 maximum). Any image type that a browser can display is supported.
You can customize the logo/icon that appears in Claude Desktop for your DXT package by configuring the logo property in your serverInfo:
const cli = ArgParser.withMcp({
appName: "My CLI",
appCommandName: "mycli",
mcp: {
// This will appear in Claude Desktop's Extensions settings
serverInfo: {
name: "my-mcp-server",
version: "1.0.0",
description: "My CLI as