
@juspay/neurolink

v9.17.0 · Published

Universal AI Development Platform with working MCP integration, multi-provider support, and professional CLI. Built-in tools operational, 58+ external MCP servers discoverable. Connect to filesystem, GitHub, database operations, and more. Build, test, and

Downloads: 12,146

Readme

NeuroLink

The pipe layer for the AI nervous system.

AI intelligence flows as streams — tokens, tool calls, memory, voice, documents. NeuroLink is the vascular layer that carries these streams from where they are generated (LLM providers: the neurons) to where they are needed (connectors: the organs).

import { NeuroLink } from "@juspay/neurolink";

const pipe = new NeuroLink({ defaultProvider: "anthropic" });

// Everything is a stream
for await (const token of pipe.stream({ prompt: "Hello" })) {
  process.stdout.write(token);
}

→ Docs · → Quick Start · → npm


🧠 What is NeuroLink?

NeuroLink is the universal AI integration platform that unifies 13 major AI providers and 100+ models under one consistent API.

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 13 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

Why NeuroLink? Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK—whichever fits your workflow.

Where we're headed: We're building for the future of AI—edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →

Get Started in <5 Minutes →


What's New (Q1 2026)

| Feature | Version | Description | Guide |
| --- | --- | --- | --- |
| Memory | v9.12.0 | Per-user condensed memory that persists across conversations. LLM-powered condensation with S3, Redis, or SQLite backends. | Memory Guide |
| Context Window Management | v9.2.0 | 4-stage compaction pipeline with auto-detection, budget gate at 80% usage, per-provider token estimation | Context Compaction Guide |
| Tool Execution Control | v9.3.0 | prepareStep and toolChoice support for per-step tool enforcement in multi-step agentic loops. API-level control over tool calls. | API Reference |
| File Processor System | v9.1.0 | 17+ file type processors with ProcessorRegistry, security sanitization, SVG text injection | File Processors Guide |
| RAG with generate()/stream() | v9.2.0 | Pass rag: { files } to generate/stream for automatic document chunking, embedding, and AI-powered search. 10 chunking strategies, hybrid search, reranking. | RAG Guide |
| External TracerProvider Support | v8.43.0 | Integrate NeuroLink with existing OpenTelemetry instrumentation. Prevents duplicate registration conflicts. | Observability Guide |
| Server Adapters | v8.43.0 | Multi-framework HTTP server with Hono, Express, Fastify, Koa support. Full CLI for server management with foreground/background modes. | Server Adapters Guide |
| Title Generation Events | v8.38.0 | Emit conversation:titleGenerated event when conversation title is generated. Supports custom title prompts via NEUROLINK_TITLE_PROMPT. | Conversation Memory Guide |
| Video Generation with Veo | v8.32.0 | Video generation using Veo 3.1 (veo-3.1). Realistic video generation with many parameter options | Video Generation Guide |
| Image Generation with Gemini | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (imagen-3.0-generate-002). High-quality image synthesis directly from Google AI. | Image Generation Guide |
| HTTP/Streamable HTTP Transport | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | HTTP Transport Guide |

  • Memory – Per-user condensed memory that persists across all conversations. Automatically retrieves and stores memory on each generate()/stream() call. Supports S3, Redis, and SQLite storage with LLM-powered condensation. → Memory Guide
  • External TracerProvider Support – Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → Observability Guide
  • Server Adapters – Deploy NeuroLink as an HTTP API server with your framework of choice (Hono, Express, Fastify, Koa). Full CLI support with serve and server commands for foreground/background modes, route management, and OpenAPI generation. → Server Adapters Guide
  • Title Generation Events – Emit real-time events when conversation titles are auto-generated. Listen to conversation:titleGenerated for session tracking. → Conversation Memory Guide
  • Custom Title Prompts – Customize conversation title generation with NEUROLINK_TITLE_PROMPT environment variable. Use ${userMessage} placeholder for dynamic prompts. → Conversation Memory Guide
  • Video Generation – Transform images into 8-second videos with synchronized audio using Google Veo 3.1 via Vertex AI. Supports 720p/1080p resolutions, portrait/landscape aspect ratios. → Video Generation Guide
  • Image Generation – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → Image Generation Guide
  • RAG with generate()/stream() – Just pass rag: { files: ["./docs/guide.md"] } to generate() or stream(). NeuroLink auto-chunks, embeds, and creates a search tool the AI can invoke. 10 chunking strategies, hybrid search, 5 reranker types. → RAG Guide
  • HTTP/Streamable HTTP Transport for MCP – Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → HTTP Transport Guide
  • 🧠 Gemini 3 Preview Support – Full support for gemini-3-flash-preview and gemini-3-pro-preview with extended thinking capabilities
  • 🎯 Tool Execution Control – Use prepareStep to enforce specific tool calls, change the LLM models per step in multi-step agentic executions. Prevents LLMs from skipping required tools. Use toolChoice for static control, or prepareStep for dynamic per-step logic. → GenerateOptions Reference
  • Structured Output with Zod Schemas – Type-safe JSON generation with automatic validation using schema + output.format: "json" in generate(). → Structured Output Guide
  • CSV File Support – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → CSV Guide
  • PDF File Support – Process PDF documents with native visual analysis for Vertex AI, Anthropic, Bedrock, AI Studio. → PDF Guide
  • 50+ File Types – Process Excel, Word, RTF, JSON, YAML, XML, HTML, SVG, Markdown, and 50+ code languages with intelligent content extraction. → File Processors Guide
  • LiteLLM Integration – Access 100+ AI models from all major providers through unified interface. → Setup Guide
  • SageMaker Integration – Deploy and use custom trained models on AWS infrastructure. → Setup Guide
  • OpenRouter Integration – Access 300+ models from OpenAI, Anthropic, Google, Meta, and more through a single unified API. → Setup Guide
  • Human-in-the-loop workflows – Pause generation for user approval/input before tool execution. → HITL Guide
  • Guardrails middleware – Block PII, profanity, and unsafe content with built-in filtering. → Guardrails Guide
  • Context summarization – Automatic conversation compression for long-running sessions. → Summarization Guide
  • Redis conversation export – Export full session history as JSON for analytics and debugging. → History Guide
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Image Generation with Gemini (v8.31.0)
const image = await neurolink.generateImage({
  prompt: "A futuristic cityscape",
  provider: "google-ai",
  model: "imagen-3.0-generate-002",
});

// HTTP Transport for Remote MCP (v8.29.0)
await neurolink.addExternalMCPServer("remote-tools", {
  transport: "http",
  url: "https://mcp.example.com/v1",
  headers: { Authorization: "Bearer token" },
  retries: 3,
  timeout: 15000,
});


Enterprise Security: Human-in-the-Loop (HITL)

NeuroLink includes a production-ready HITL system for regulated industries and high-stakes AI operations:

| Capability | Description | Use Case |
| --- | --- | --- |
| Tool Approval Workflows | Require human approval before AI executes sensitive tools | Financial transactions, data modifications |
| Output Validation | Route AI outputs through human review pipelines | Medical diagnosis, legal documents |
| Confidence Thresholds | Automatically trigger human review below confidence level | Critical business decisions |
| Complete Audit Trail | Full audit logging for compliance (HIPAA, SOC2, GDPR) | Regulated industries |

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  hitl: {
    enabled: true,
    requireApproval: ["writeFile", "executeCode", "sendEmail"],
    confidenceThreshold: 0.85,
    reviewCallback: async (action, context) => {
      // Custom review logic - integrate with your approval system
      return await yourApprovalSystem.requestReview(action);
    },
  },
});

// AI pauses for human approval before executing sensitive tools
const result = await neurolink.generate({
  input: { text: "Send quarterly report to stakeholders" },
});

Enterprise HITL Guide | Quick Start

Get Started in Two Steps

# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"

Need a persistent workspace? Launch loop mode with npx @juspay/neurolink loop. Learn more →

🌟 Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

🤖 AI Provider Integration

13 providers unified under one API - Switch providers with a single parameter change.

| Provider | Models | Free Tier | Tool Support | Status | Documentation |
| --- | --- | --- | --- | --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Anthropic | Claude 4.5 Opus/Sonnet/Haiku, Claude 4 Opus/Sonnet | ❌ | ✅ Full | ✅ Production | Setup Guide, Subscription Guide |
| Google AI Studio | Gemini 3 Flash/Pro, Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| AWS Bedrock | Claude, Titan, Llama, Nova | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Google Vertex | Gemini 3/2.5 (gemini-3-*-preview) | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Azure OpenAI | GPT-4, GPT-4o, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
| LiteLLM | 100+ models unified | Varies | ✅ Full | ✅ Production | Setup Guide |
| AWS SageMaker | Custom deployed models | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Mistral AI | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| Hugging Face | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
| Ollama | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
| OpenAI Compatible | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |
| OpenRouter | 200+ Models via OpenRouter | Varies | ✅ Full | ✅ Production | Setup Guide |

📖 Provider Comparison Guide – Detailed feature matrix and selection criteria
🔬 Provider Feature Compatibility – Test-based compatibility reference for all 19 features across 13 providers
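The multi-provider failover noted above can be sketched as a simple priority loop. This is an illustrative sketch only; `generateWithFailover` and its types are hypothetical names, not the NeuroLink API:

```typescript
// Illustrative failover sketch -- not the actual NeuroLink internals.
type Generate = (prompt: string) => Promise<string>;

// Try each provider in priority order; fall through to the next on failure.
async function generateWithFailover(
  providers: Map<string, Generate>,
  order: string[],
  prompt: string,
): Promise<string> {
  for (const name of order) {
    const gen = providers.get(name);
    if (!gen) continue;
    try {
      return await gen(prompt);
    } catch {
      // Provider failed (rate limit, outage, bad key) -- try the next one.
    }
  }
  throw new Error("All providers failed");
}
```

The same shape underlies "switch providers with a single parameter change": the call site stays identical, only the priority order changes.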


🔧 Built-in Tools & MCP Integration

6 Core Tools (work across all providers, zero configuration):

| Tool | Purpose | Auto-Available | Documentation |
| --- | --- | --- | --- |
| getCurrentTime | Real-time clock access | ✅ | Tool Reference |
| readFile | File system reading | ✅ | Tool Reference |
| writeFile | File system writing | ✅ | Tool Reference |
| listDirectory | Directory listing | ✅ | Tool Reference |
| calculateMath | Mathematical operations | ✅ | Tool Reference |
| websearchGrounding | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |

58+ External MCP Servers supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

// stdio transport - local MCP servers via command execution
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// HTTP transport - remote MCP servers via URL
await neurolink.addExternalMCPServer("github-copilot", {
  transport: "http",
  url: "https://api.githubcopilot.com/mcp",
  headers: { Authorization: "Bearer YOUR_COPILOT_TOKEN" },
  timeout: 15000,
  retries: 5,
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});

MCP Transport Options:

| Transport | Use Case | Key Features |
| --- | --- | --- |
| stdio | Local servers | Command execution, environment variables |
| http | Remote servers | URL-based, auth headers, retries, rate limiting |
| sse | Event streams | Server-Sent Events, real-time updates |
| websocket | Bi-directional | Full-duplex communication |

📖 MCP Integration Guide – Set up external servers
📖 HTTP Transport Guide – Remote MCP server configuration


💻 Developer Experience Features

SDK-First Design with TypeScript, IntelliSense, and type safety:

| Feature | Description | Documentation |
| --- | --- | --- |
| Auto Provider Selection | Intelligent provider fallback | SDK Guide |
| Streaming Responses | Real-time token streaming | Streaming Guide |
| Conversation Memory | Automatic context management with embedded per-user memory | Memory Guide |
| Full Type Safety | Complete TypeScript types | Type Reference |
| Error Handling | Graceful provider fallback | Error Guide |
| Analytics & Evaluation | Usage tracking, quality scores | Analytics Guide |
| Middleware System | Request/response hooks | Middleware Guide |
| Framework Integration | Next.js, SvelteKit, Express | Framework Guides |
| Extended Thinking | Native thinking/reasoning mode for Gemini 3 and Claude models | Thinking Guide |
| RAG Document Processing | rag: { files } on generate/stream with 10 chunking strategies and hybrid search | RAG Guide |


📁 Multimodal & File Processing

17+ file categories supported (50+ total file types including code languages) with intelligent content extraction and provider-agnostic processing:

| Category | Supported Types | Processing |
| --- | --- | --- |
| Documents | Excel (.xlsx, .xls), Word (.docx), RTF, OpenDocument | Sheet extraction, text extraction |
| Data | JSON, YAML, XML | Validation, syntax highlighting |
| Markup | HTML, SVG, Markdown, Text | OWASP-compliant sanitization |
| Code | 50+ languages (TypeScript, Python, Java, Go, etc.) | Language detection, syntax metadata |
| Config | .env, .ini, .toml, .cfg | Secure parsing |
| Media | Images (PNG, JPEG, WebP, GIF), PDFs, CSV | Provider-specific formatting |

// Process any supported file type
const result = await neurolink.generate({
  input: {
    text: "Analyze this data and code",
    files: [
      "./data.xlsx", // Excel spreadsheet
      "./config.yaml", // YAML configuration
      "./diagram.svg", // SVG (injected as sanitized text)
      "./main.py", // Python source code
    ],
  },
});

// CLI: Use --file for any supported type
// neurolink generate "Analyze this" --file ./report.xlsx --file ./config.json

Key Features:

  • ProcessorRegistry - Priority-based processor selection with fallback
  • OWASP Security - HTML/SVG sanitization prevents XSS attacks
  • Auto-detection - FileDetector identifies file types by extension and content
  • Provider-agnostic - All processors work across all 13 AI providers
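The extension half of the auto-detection described above amounts to a lookup table. A minimal sketch (illustrative only; the real FileDetector also inspects file content, and `detectCategory` is a hypothetical name, not NeuroLink's API):

```typescript
// Illustrative extension-based detection sketch. Categories mirror the
// table above; the real detector also falls back to content inspection.
const CATEGORY_BY_EXT: Record<string, string> = {
  ".xlsx": "document", ".docx": "document",
  ".json": "data", ".yaml": "data", ".xml": "data",
  ".html": "markup", ".svg": "markup", ".md": "markup",
  ".ts": "code", ".py": "code",
  ".png": "media", ".pdf": "media", ".csv": "media",
};

function detectCategory(path: string): string {
  const dot = path.lastIndexOf(".");
  const ext = dot >= 0 ? path.slice(dot).toLowerCase() : "";
  return CATEGORY_BY_EXT[ext] ?? "unknown";
}
```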

📖 File Processors Guide - Complete reference for all file types


🏢 Enterprise & Production Features

Production-ready capabilities for regulated industries:

| Feature | Description | Use Case | Documentation |
| --- | --- | --- | --- |
| Enterprise Proxy | Corporate proxy support | Behind firewalls | Proxy Setup |
| Redis Memory | Distributed conversation state | Multi-instance deployment | Redis Guide |
| Memory | Per-user condensed memory (S3/Redis/SQLite) | Long-term user context | Memory Guide |
| Cost Optimization | Automatic cheapest model selection | Budget control | Cost Guide |
| Multi-Provider Failover | Automatic provider switching | High availability | Failover Guide |
| Telemetry & Monitoring | OpenTelemetry integration | Observability | Telemetry Guide |
| Security Hardening | Credential management, auditing | Compliance | Security Guide |
| Custom Model Hosting | SageMaker integration | Private models | SageMaker Guide |
| Load Balancing | LiteLLM proxy integration | Scale & routing | Load Balancing |

Security & Compliance:

  • ✅ SOC2 Type II compliant deployments
  • ✅ ISO 27001 certified infrastructure compatible
  • ✅ GDPR-compliant data handling (EU providers available)
  • ✅ HIPAA compatible (with proper configuration)
  • ✅ Hardened OS verified (SELinux, AppArmor)
  • ✅ Zero credential logging
  • ✅ Encrypted configuration storage
  • ✅ Automatic context window management with 4-stage compaction pipeline and 80% budget gate
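The 80% budget gate in the last bullet reduces to a threshold check over estimated token usage. A rough sketch (hypothetical helper names; the real pipeline uses per-provider token estimation rather than a flat characters-per-token ratio):

```typescript
// Illustrative sketch of the 80% budget gate -- not the actual
// NeuroLink compaction pipeline.

// Very rough token estimate; real estimators are provider-specific.
function estimateTokens(text: string, charsPerToken = 4): number {
  return Math.ceil(text.length / charsPerToken);
}

// Trigger compaction once usage crosses the gate (default 80% of the window).
function shouldCompact(usedTokens: number, contextWindow: number, gate = 0.8): boolean {
  return usedTokens >= contextWindow * gate;
}
```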

📖 Enterprise Deployment Guide - Complete production checklist


Enterprise Persistence: Redis Memory

Production-ready distributed conversation state for multi-instance deployments:

Capabilities

| Feature | Description | Benefit |
| --- | --- | --- |
| Distributed Memory | Share conversation context across instances | Horizontal scaling |
| Session Export | Export full history as JSON | Analytics, debugging, audit |
| Auto-Detection | Automatic Redis discovery from environment | Zero-config in containers |
| Graceful Failover | Falls back to in-memory if Redis unavailable | High availability |
| TTL Management | Configurable session expiration | Memory management |

Quick Setup

import { NeuroLink } from "@juspay/neurolink";

// Auto-detect Redis from REDIS_URL environment variable
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis", // Automatically uses REDIS_URL
    ttl: 86400, // 24-hour session expiration
  },
});

// Or explicit configuration
const neurolinkExplicit = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
    redis: {
      host: "redis.example.com",
      port: 6379,
      password: process.env.REDIS_PASSWORD,
      tls: true, // Enable for production
    },
  },
});

// Export conversation for analytics
const history = await neurolink.exportConversation({ format: "json" });
await saveToDataWarehouse(history);

Docker Quick Start

# Start Redis
docker run -d --name neurolink-redis -p 6379:6379 redis:7-alpine

# Configure NeuroLink
export REDIS_URL=redis://localhost:6379

# Start your application
node your-app.js

Redis Setup Guide | Production Configuration | Migration Patterns


🎨 Professional CLI

15+ commands for every workflow:

| Command | Purpose | Example | Documentation |
| --- | --- | --- | --- |
| setup | Interactive provider configuration | neurolink setup | Setup Guide |
| generate | Text generation | neurolink gen "Hello" | Generate |
| stream | Streaming generation | neurolink stream "Story" | Stream |
| status | Provider health check | neurolink status | Status |
| loop | Interactive session | neurolink loop | Loop |
| mcp | MCP server management | neurolink mcp discover | MCP CLI |
| models | Model listing | neurolink models | Models |
| eval | Model evaluation | neurolink eval | Eval |
| serve | Start HTTP server in foreground mode | neurolink serve | Serve |
| server start | Start HTTP server in background mode | neurolink server start | Server |
| server stop | Stop running background server | neurolink server stop | Server |
| server status | Show server status information | neurolink server status | Server |
| server routes | List all registered API routes | neurolink server routes | Server |
| server config | View or modify server configuration | neurolink server config | Server |
| server openapi | Generate OpenAPI specification | neurolink server openapi | Server |
| rag chunk | Chunk documents for RAG | neurolink rag chunk f.md | RAG CLI |

RAG flags are available on generate and stream: --rag-files, --rag-strategy, --rag-chunk-size, --rag-chunk-overlap, --rag-top-k
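What --rag-chunk-size and --rag-chunk-overlap control can be illustrated with a minimal fixed-size chunker; this is a sketch of one of the simpler strategies (`chunkText` is a hypothetical name, not NeuroLink's implementation, and other strategies split on markdown headings, sentences, and so on):

```typescript
// Illustrative fixed-size chunking with overlap -- overlapping windows
// keep context that would otherwise be cut at a chunk boundary.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}
```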

📖 Complete CLI Reference - All commands and options


🤖 GitHub Action

Run AI-powered workflows directly in GitHub Actions with support for all 13 providers and automatic PR/issue commenting.

- uses: juspay/neurolink@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: "Review this PR for security issues and code quality"
    post_comment: true

| Feature | Description |
| --- | --- |
| Multi-Provider | 13 providers with unified interface |
| PR/Issue Comments | Auto-post AI responses with intelligent updates |
| Multimodal Support | Attach images, PDFs, CSVs, Excel, Word, JSON, YAML, XML, HTML, SVG, code files to prompts |
| Cost Tracking | Built-in analytics and quality evaluation |
| Extended Thinking | Deep reasoning with thinking tokens |

📖 GitHub Action Guide - Complete setup and examples


💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail
# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
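Behind --optimize-cost, cheapest-model selection reduces to a minimum over a price table. A sketch with made-up numbers (`cheapestModel` and the prices are hypothetical illustrations, not NeuroLink internals or real provider pricing):

```typescript
// Illustrative cost-aware selection sketch. Prices are placeholders.
interface ModelInfo {
  name: string;
  costPerMTokens: number; // USD per million tokens (hypothetical figures)
}

// Pick the lowest-cost model from the candidates that can handle the task.
function cheapestModel(models: ModelInfo[]): ModelInfo {
  if (models.length === 0) throw new Error("no candidate models");
  return models.reduce((best, m) => (m.costPerMTokens < best.costPerMTokens ? m : best));
}
```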

Revolutionary Interactive CLI

NeuroLink's CLI goes beyond simple commands: it's a full AI development environment.

Why Interactive Mode Changes Everything

| Feature | Traditional CLI | NeuroLink Interactive |
| --- | --- | --- |
| Session State | None | Full persistence |
| Memory | Per-command | Conversation-aware |
| Configuration | Flags per command | /set persists across session |
| Tool Testing | Manual per tool | Live discovery & testing |
| Streaming | Optional | Real-time default |

Live Demo: Development Session

$ npx @juspay/neurolink loop --enable-conversation-memory

neurolink > /set provider vertex
✓ provider set to vertex (Gemini 3 support enabled)

neurolink > /set model gemini-3-flash-preview
✓ model set to gemini-3-flash-preview

neurolink > Analyze my project architecture and suggest improvements

✓ Analyzing your project structure...
[AI provides detailed analysis, remembering context]

neurolink > Now implement the first suggestion
[AI remembers previous context and implements suggestion]

neurolink > /mcp discover
✓ Discovered 58 MCP tools:
   GitHub: create_issue, list_repos, create_pr...
   PostgreSQL: query, insert, update...
   [full list]

neurolink > Use the GitHub tool to create an issue for this improvement
✓ Creating issue... (requires HITL approval if configured)

neurolink > /export json > session-2026-01-01.json
✓ Exported 15 messages to session-2026-01-01.json

neurolink > exit
Session saved. Resume with: neurolink loop --session session-2026-01-01.json

Session Commands Reference

| Command | Purpose |
| --- | --- |
| /set <key> <value> | Persist configuration (provider, model, temperature) |
| /mcp discover | List all available MCP tools |
| /export json | Export conversation to JSON |
| /history | View conversation history |
| /clear | Clear context while keeping settings |

Interactive CLI Guide | CLI Reference

Skip the wizard and configure manually? See docs/getting-started/provider-setup.md.

CLI & SDK Essentials

The neurolink CLI mirrors the SDK, so teams can script experiments and codify them later.

# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json

# RAG: Ask questions about your docs (auto-chunks, embeds, searches)
npx @juspay/neurolink generate "What are the key features?" \
  --rag-files ./docs/guide.md ./docs/api.md --rag-strategy markdown

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "examples/data/invoice.pdf", // Auto-detected as PDF
      "./diagrams/architecture.png", // Auto-detected as image
      "./report.xlsx", // Auto-detected as Excel
      "./config.json", // Auto-detected as JSON
      "./diagram.svg", // Auto-detected as SVG (injected as text)
      "./app.ts", // Auto-detected as TypeScript code
    ],
  },
  provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

// RAG: Ask questions about your documents
const answer = await neurolink.generate({
  prompt: "What are the main architectural decisions?",
  rag: {
    files: ["./docs/architecture.md", "./docs/decisions.md"],
    strategy: "markdown",
    topK: 5,
  },
});
console.log(answer.content); // AI searches your docs and answers

Gemini 3 with Extended Thinking

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Use Gemini 3 with extended thinking for complex reasoning
const result = await neurolink.generate({
  input: {
    text: "Solve this step by step: What is the optimal strategy for...",
  },
  provider: "vertex",
  model: "gemini-3-flash-preview",
  thinkingLevel: "medium", // Options: "minimal", "low", "medium", "high"
});

console.log(result.content);

Full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.

Platform Capabilities at a Glance

| Capability | Highlights |
| --- | --- |
| Provider unification | 13+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
| Multimodal pipeline | Stream images + CSV data + PDF documents across providers with local/remote assets. Auto-detection for mixed file types. |
| Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
| Memory & context | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
| CLI tooling | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
| Enterprise ops | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
| Tool ecosystem | MCP auto discovery, HTTP/stdio/SSE/WebSocket transports, LiteLLM hub access, SageMaker custom deployment, web search. |

Documentation Map

| Area | When to Use | Link |
| --- | --- | --- |
| Getting started | Install, configure, run first prompt | docs/getting-started/index.md |
| Feature guides | Understand new functionality front-to-back | docs/features/index.md |
| CLI reference | Command syntax, flags, loop sessions | docs/cli/index.md |
| SDK reference | Classes, methods, options | docs/sdk/index.md |
| RAG | Document chunking, hybrid search, reranking, rag:{} API | docs/features/rag.md |
| Integrations | LiteLLM, SageMaker, MCP, Mem0 | docs/litellm-integration.md |
| Advanced | Middleware, architecture, streaming patterns | docs/advanced/index.md |
| Cookbook | Practical recipes for common patterns | docs/cookbook/index.md |
| Guides | Migration, Redis, troubleshooting, provider selection | docs/guides/index.md |
| Operations | Configuration, troubleshooting, provider matrix | docs/reference/index.md |

New in 2026: Enhanced Documentation

Expanded guides now cover enterprise features, provider intelligence, the middleware system, Redis & persistence, migration guides, developer experience, and integrations.

Contributing & Support


NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.