
@qckfx/agent

v1.0.0-beta.3

Published

qckfx AI Agent SDK

Downloads

178

Readme

qckfx Agent SDK

A modular, OpenAI-compatible framework for building LLM-powered coding agents with tool execution capabilities.


Features

🔧 Built-in Tool System - 10 powerful tools including Claude CLI integration for cost-effective coding
🔌 OpenAI-Compatible - Works with OpenAI, Anthropic, Google, and local models
🎯 Modular Architecture - Compose custom toolsets and swap providers seamlessly
📡 Event System - Monitor and debug agent behavior with comprehensive events
🔄 Session Management - Rollback capabilities and context window management
CLI Integration - Command-line tool for quick agent interactions
🛡️ Permission Control - Fine-grained control over tool execution permissions

Installation

npm install @qckfx/agent

Local Development

To run and test the CLI locally during development:

# Build the project
npm run build

# Link the package globally for local testing
npm link

# Now you can run qckfx commands using your local code
qckfx "List all TypeScript files in the src directory"
qckfx "Create a simple README for this project"

After linking, changes you make to the code are reflected in the qckfx command after you run npm run build again.

Quick Start

import { Agent } from '@qckfx/agent';

// Create an agent with default configuration
const agent = await Agent.create({
  config: {
    defaultModel: 'google/gemini-2.5-pro-preview',
    environment: 'local',
    logLevel: 'info',
    systemPrompt: 'You are a helpful AI assistant.',
    tools: [
      'bash',
      'claude',
      'glob',
      'grep',
      'ls',
      'file_read',
      'file_edit',
      'file_write',
      'think',
      'batch',
    ],
  },
});

// Process a natural language query
const result = await agent.processQuery('What files are in this directory?');
console.log(result.response);

CLI Usage

The SDK includes a command-line tool for quick interactions:

# Install globally for CLI access
npm install -g @qckfx/agent

# Use the CLI
qckfx "List all TypeScript files in the src directory"
qckfx "Create a simple README for this project"

Model Provider Support

The SDK uses the OpenAI SDK internally and works with any OpenAI-compatible API endpoint:

Direct Provider APIs

# OpenAI
export LLM_API_KEY=your_openai_key
export LLM_DEFAULT_MODEL=gpt-4

# Or use environment-specific configuration

Using with LiteLLM

Set up LiteLLM to proxy requests to any provider:

# Set your LLM endpoint
export LLM_BASE_URL=http://localhost:8001  # LiteLLM server
export LLM_DEFAULT_MODEL=claude-3-5-sonnet-20241022

# Run LiteLLM proxy with Docker
cd litellm
docker build -t qckfx-litellm .
docker run -p 8001:8001 \
  -e ANTHROPIC_API_KEY=your_key \
  -e OPENAI_API_KEY=your_key \
  -e GEMINI_API_KEY=your_key \
  qckfx-litellm

Using with OpenRouter

export LLM_BASE_URL=https://openrouter.ai/api/v1
export LLM_API_KEY=your_openrouter_key
export LLM_DEFAULT_MODEL=anthropic/claude-3.5-sonnet

Built-in Tools

The SDK includes these powerful built-in tools:

| Tool | Description |
| ------------ | ------------------------------------------------------------------------------------------------- |
| bash | Execute shell commands with full environment access |
| claude | ⭐ Claude CLI Integration - Use the familiar Claude coding assistant for cost-effective development |
| glob | Find files using powerful pattern matching |
| grep | Search file contents with regex support |
| ls | List directory contents with detailed information |
| file_read | Read file contents with encoding support |
| file_edit | Edit files with targeted replacements |
| file_write | Write new files or overwrite existing ones |
| think | Internal reasoning and planning capabilities |
| batch | Execute multiple tools in parallel for efficiency |
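The batch tool's fan-out behavior can be modeled in a few lines. This is an illustrative sketch only: the mock tools and the `runBatch` helper are hypothetical stand-ins, not the SDK's actual batch API.

```typescript
// Sketch: how a batch-style tool can fan out sub-tool calls concurrently.
// `mockTools` and `runBatch` are hypothetical; the real `batch` tool's
// interface may differ.
type ToolFn = (args: Record<string, unknown>) => Promise<unknown>;

const mockTools: Record<string, ToolFn> = {
  ls: async () => ['src', 'package.json'],
  grep: async () => ['src/index.ts:3: TODO'],
};

async function runBatch(calls: { tool: string; args: Record<string, unknown> }[]) {
  // Execute every call concurrently; results come back in call order.
  return Promise.all(calls.map(({ tool, args }) => mockTools[tool](args)));
}
```

Running the sub-tools through Promise.all is what makes a batch cheaper than issuing each tool call as a separate agent turn.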

Configuration

Agent Configuration Schema

interface AgentConfig {
  defaultModel?: string; // Default: 'google/gemini-2.5-pro-preview'
  environment?: 'local'; // Only 'local' is currently supported
  logLevel?: 'debug' | 'info' | 'warn' | 'error'; // Default: 'info'
  systemPrompt?: string; // Custom system prompt
  tools?: (string | ToolConfig)[]; // Built-in tools or custom tool configs
  experimentalFeatures?: {
    subAgents?: boolean; // Default: false
  };
}
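A configuration literal checked against the schema above might look like the following. The interface is reproduced from the schema; the `ToolConfig` shape shown here is an assumption for illustration, not the SDK's actual type.

```typescript
// AgentConfig reproduced from the schema above. ToolConfig's shape is an
// assumption (a named tool with arbitrary options) for illustration only.
interface ToolConfig {
  name: string;
  [option: string]: unknown;
}

interface AgentConfig {
  defaultModel?: string;
  environment?: 'local';
  logLevel?: 'debug' | 'info' | 'warn' | 'error';
  systemPrompt?: string;
  tools?: (string | ToolConfig)[];
  experimentalFeatures?: { subAgents?: boolean };
}

const config: AgentConfig = {
  defaultModel: 'google/gemini-2.5-pro-preview',
  environment: 'local',
  logLevel: 'warn',
  systemPrompt: 'You are a careful code-review assistant.',
  tools: ['grep', 'file_read', { name: 'my_tool' }], // mixed string/ToolConfig entries
  experimentalFeatures: { subAgents: false },
};
```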

Environment Variables

Configure the SDK behavior with these environment variables:

# LLM Configuration
LLM_BASE_URL=http://localhost:8001        # OpenAI-compatible API endpoint
LLM_API_KEY=your_api_key                  # API key for the endpoint
LLM_DEFAULT_MODEL=your_preferred_model    # Fallback model if discovery fails

# Model Discovery
LIST_MODELS_URL=http://localhost:8001/models  # Endpoint to list available models

# Remote Execution (planned feature)
REMOTE_ID=your_remote_session_id          # Required for remote execution
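One way to picture how these variables interact is a small resolver with fallbacks. This is a sketch under stated assumptions (the default base URL and model mirror the examples above), not the SDK's actual resolution logic.

```typescript
// Illustrative only: how the environment variables above might be resolved.
// Defaults mirror the examples in this README; the SDK's real logic may differ.
function resolveLlmConfig(env: Record<string, string | undefined> = process.env) {
  return {
    baseUrl: env.LLM_BASE_URL ?? 'http://localhost:8001',
    apiKey: env.LLM_API_KEY, // no default: the endpoint decides if a key is required
    model: env.LLM_DEFAULT_MODEL ?? 'google/gemini-2.5-pro-preview',
  };
}
```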

Advanced Usage

Event System

Monitor agent behavior with comprehensive event callbacks:

const agent = await Agent.create({
  config: {
    /* ... */
  },
  callbacks: {
    onProcessingStarted: data => console.log('Processing started:', data),
    onProcessingCompleted: data => console.log('Completed:', data.response),
    onProcessingError: error => console.error('Error:', error),
    onToolExecutionStarted: data => console.log('Tool started:', data.toolName),
    onToolExecutionCompleted: data => console.log('Tool completed:', data.result),
    onToolExecutionError: data => console.error('Tool error:', data.error),
  },
});

// Or subscribe after creation
const unsubscribe = agent.on('tool:execution:completed', data => {
  console.log(`Tool ${data.toolName} completed in ${data.executionTime}ms`);
});

Custom Tools

Extend the agent with your own tools:

import { Tool } from '@qckfx/agent';

const customTool: Tool = {
  name: 'my_tool',
  description: 'Does something useful',
  inputSchema: {
    type: 'object',
    properties: {
      input: { type: 'string', description: 'Input parameter' },
    },
    required: ['input'],
  },
  execute: async (args, context) => {
    // Tool implementation
    return { result: `Processed: ${args.input}` };
  },
};

agent.registerTool(customTool);
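Because execute is a plain async function, a custom tool can be unit-tested without constructing an agent at all. The Tool type below is a simplified local stand-in mirroring the example above, not the SDK's exported type.

```typescript
// Simplified local Tool type for illustration; the SDK's exported Tool
// type may carry more fields.
interface Tool {
  name: string;
  description: string;
  inputSchema: object;
  execute: (args: Record<string, string>, context?: unknown) => Promise<{ result: string }>;
}

const customTool: Tool = {
  name: 'my_tool',
  description: 'Does something useful',
  inputSchema: {
    type: 'object',
    properties: { input: { type: 'string', description: 'Input parameter' } },
    required: ['input'],
  },
  // The same implementation as above: echo the input back with a prefix.
  execute: async args => ({ result: `Processed: ${args.input}` }),
};
```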

Context Window Management

Manage conversation context for multi-turn interactions:

// Create custom context window
const contextWindow = await Agent.createContextWindow([
  { role: 'user', content: 'Previous conversation...' },
]);

// Use with query processing
const result = await agent.processQuery('Continue our discussion', undefined, contextWindow);

Session Management

Control agent execution and state:

// Session control
agent.abort(); // Abort current processing
agent.isAborted(); // Check abort status
agent.clearAbort(); // Clear abort flag
agent.performRollback(messageId); // Rollback to specific message (including environment changes)

// Permission management
agent.setFastEditMode(true); // Skip edit confirmations
agent.setDangerMode(true); // Allow dangerous operations

// Tool execution
const toolResult = await agent.invokeTool('bash', { command: 'ls -la' });

Multi-Repository Support

Work with multiple repositories in a single session:

// Get repository information from session
const repoInfo = Agent.getMultiRepoInfo(sessionState);
if (repoInfo) {
  console.log(`Tracking ${repoInfo.repoCount} repositories`);
  console.log('Paths:', repoInfo.repoPaths);
}

Model Discovery

Dynamically discover available models:

// List available models from the configured endpoint
const models = await Agent.getAvailableModels(apiKey, logger);
console.log('Available models:', models);

Architecture

The SDK is built with a modular architecture that promotes flexibility and extensibility:

  • Agent Class - Main entry point and session management
  • Tool Registry - Manages built-in and custom tools with standardized interfaces
  • Provider System - OpenAI-compatible LLM communication layer
  • Execution Environment - Local tool execution (remote execution planned)
  • Event System - Observable agent and tool lifecycle events
  • Permission Manager - Fine-grained control over tool execution permissions

This modular design allows you to:

  • Swap LLM providers without changing application code
  • Compose custom toolsets for specific use cases
  • Monitor and debug agent behavior comprehensively
  • Extend functionality with custom tools and integrations
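Provider swapping works because every OpenAI-compatible endpoint accepts the same /chat/completions request shape; only the base URL, key, and model name change. The helper below is hypothetical and only builds the request object, to show what "OpenAI-compatible" means in practice.

```typescript
// Hypothetical helper, not part of the SDK's public API: builds an
// OpenAI-compatible /chat/completions request from a base URL, key, and model.
function buildChatRequest(baseUrl: string, apiKey: string, model: string, prompt: string) {
  return {
    url: `${baseUrl.replace(/\/$/, '')}/chat/completions`, // tolerate a trailing slash
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages: [{ role: 'user', content: prompt }] }),
    },
  };
}
```

Pointing the same request at OpenAI, OpenRouter, or a local LiteLLM proxy is just a matter of changing baseUrl and model, which is why the SDK can swap providers without application-code changes.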

Examples

Basic File Operations

const agent = await Agent.create({
  config: {
    tools: ['file_read', 'file_write', 'ls'],
  },
});

const result = await agent.processQuery('Read the package.json file and create a summary');

Development Workflow

const agent = await Agent.create({
  config: {
    tools: ['bash', 'file_read', 'file_edit', 'grep'],
    systemPrompt: 'You are a helpful development assistant.',
  },
});

const result = await agent.processQuery('Find all TODO comments and create a task list');

Code Analysis

const agent = await Agent.create({
  config: {
    tools: ['glob', 'grep', 'file_read', 'think'],
  },
});

const result = await agent.processQuery(
  'Analyze the codebase structure and identify potential improvements',
);

Using with Claude CLI

Use the familiar Claude coding assistant from within the agent framework for cost-effective development:

const agent = await Agent.create({
  config: {
    tools: ['claude', 'bash', 'file_read', 'file_edit'],
    systemPrompt: 'You are a helpful coding assistant.',
  },
});

// Use Claude CLI for complex coding tasks
const result = await agent.processQuery('Refactor this component to use TypeScript interfaces');

The claude tool integrates with the local Claude CLI, allowing you to:

  • Save money by using your existing Claude subscription
  • Work with familiar tools you already know and love
  • Leverage Claude's advanced coding capabilities within the agent framework
  • Seamlessly combine Claude's expertise with other tools

Documentation

For comprehensive documentation, examples, and API reference, visit: https://docs.qckfx.com

License

MIT License - see the LICENSE file for details.

Support


Built with ❤️ by the qckfx team