sanskript

v1.0.1

Published

Production-ready AI scripting language with RAG, pluggable LLM providers, and streaming support 🇮🇳

Downloads

7

Readme

Sanskript 🇮🇳


What is Sanskript?

Sanskript is a minimal, local-first AI scripting language designed for building AI workflows, chatbots, and RAG (Retrieval-Augmented Generation) applications. Write readable AI workflows in a simple DSL, run them locally or in production.

Why Sanskript?

  • 🚀 Production-Ready: npm package, SDK, cross-platform CLI
  • 🏠 Local-First: Run completely offline with Ollama or mock provider
  • 📚 Real RAG: Built-in vector search and document processing
  • 🔌 Pluggable LLMs: OpenAI, Ollama, or custom providers
  • ⚡ Streaming: Real-time response streaming
  • 🎯 Simple Syntax: Readable DSL inspired by Sanskrit (India's ancient language)
  • 🐍 Multi-Language: JavaScript and Python SDKs

Quick Start

Install

# Via npm (recommended)
npm install -g sanskript

# Or use without installing
npx sanskript --version

Hello World

Create hello.sanskript:

vakya("Namaste! Welcome to Sanskript ๐Ÿ‡ฎ๐Ÿ‡ณ")
manana("What is artificial intelligence?")

Run it:

sanskript hello.sanskript

That's it! 🎉

Features

✨ Core Capabilities

  • 5 Simple Commands (combined in the sketch after this list)

    • vakya(message) - Print output
    • manana(prompt) - Call LLM
    • manana(prompt, context) - RAG-enhanced LLM call
    • rag(path) - Load documents for RAG
    • js(code) / py(code) - Execute JavaScript/Python
  • Real RAG System

    • Automatic document chunking
    • Lightweight vector embeddings
    • In-memory vector search
    • Recursive folder loading
  • Multiple LLM Providers

    • OpenAI (GPT-3.5, GPT-4)
    • Ollama (local Llama2, Mistral, etc.)
    • Mock (for testing)
    • Auto-detection with fallback
  • Production Features

    • Streaming responses
    • Configuration files
    • Environment variables
    • Cross-platform support
    • TypeScript definitions
    • Python SDK
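
Taken together, the five commands cover a complete workflow. The sketch below is a minimal example using the JavaScript SDK's run function (documented in the API section) with the mock provider, so it needs no API key; the my-docs folder, prompts, and file name are placeholders.

// all-commands.mjs: a minimal sketch using the SDK's run function
import { run } from 'sanskript';

// Exercise every command in one script; the mock provider needs no API key.
await run(`
  vakya("Demo of all five commands")
  rag("my-docs")
  manana("Summarize the documents")
  manana("What do the documents say about AI?", "AI")
  js("console.log('post-processing in JavaScript')")
  py("print('post-processing in Python')")
`, { provider: 'mock', stream: false });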

Installation

Prerequisites

  • Node.js 16 or higher
  • Python 3.7+ (optional, for Python SDK or py command)
  • Ollama (optional, for local LLMs)

Install Sanskript

# Global installation (recommended)
npm install -g sanskript

# Local installation
npm install sanskript

# Development
git clone https://github.com/codermoderSD/sanskript.git
cd sanskript
npm install
npm link

Verify Installation

sanskript --version
sanskript --help

Build a Document Chatbot in 60 Seconds

Let's build a document Q&A system in just 3 steps:

Step 1: Create Your Documents

mkdir my-docs
echo "Artificial Intelligence (AI) is intelligence demonstrated by machines." > my-docs/ai.txt
echo "Machine Learning is a subset of AI that learns from data." > my-docs/ml.txt

Step 2: Create chatbot.sanskript

# Load documents into RAG system
vakya("๐Ÿ“š Loading knowledge base...")
rag("my-docs")

# Ask questions with context
vakya("\n๐Ÿค” Answering questions...")
manana("What is AI?", "artificial intelligence")
manana("What is machine learning?", "machine learning")
manana("How are AI and ML related?", "AI ML relationship")

Step 3: Run It

# With mock provider (no API needed)
sanskript chatbot.sanskript

# With OpenAI
export OPENAI_API_KEY=your-key
sanskript --provider openai chatbot.sanskript

# With local Ollama
ollama pull llama2
sanskript --provider ollama chatbot.sanskript

That's it! You have a working document chatbot! 🎉

Documentation

Command Reference

vakya(message)

Print messages to console.

vakya("Hello, World!")
vakya("Multiple", "arguments", "supported")
vakya()  # Empty line

manana(prompt [, context])

Call LLM with optional RAG context.

# Basic LLM call
manana("Explain quantum computing")

# With RAG context
rag("docs/")
manana("What is quantum entanglement?", "quantum")

rag(path)

Load documents for RAG.

# Single file
rag("document.txt")

# Entire folder (recursive)
rag("docs/")

js(code) / py(code)

Execute JavaScript or Python.

js("console.log('From JavaScript')")
py("print('From Python')")

CLI Usage

# Basic usage
sanskript <file.sanskript>

# With options
sanskript --stream --provider openai demo.sanskript
sanskript --config custom.json workflow.sanskript

# Get help
sanskript --help
sanskript --version

CLI Flags

  • --stream - Enable streaming mode
  • --provider <name> - Set LLM provider (auto/mock/openai/ollama)
  • --config <path> - Use custom config file
  • --help - Show help
  • --version - Show version

Configuration

Create .sanskript.config.json in your project:

{
  "provider": "auto",
  "stream": false,
  "providers": {
    "openai": {
      "model": "gpt-3.5-turbo",
      "temperature": 0.7,
      "maxTokens": 1000
    },
    "ollama": {
      "baseURL": "http://localhost:11434",
      "model": "llama2"
    }
  },
  "rag": {
    "chunkSize": 500,
    "overlap": 50,
    "extensions": [".txt", ".md", ".json"]
  }
}

Environment Variables

# Provider settings
SANSKRIPT_PROVIDER=openai        # Set default provider
SANSKRIPT_STREAM=true            # Enable streaming

# OpenAI settings
OPENAI_API_KEY=sk-...            # Your API key
OPENAI_MODEL=gpt-4               # Model name

# Ollama settings
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama2

# Debugging
DEBUG=1                          # Show debug info

Examples

Example 1: Simple Chatbot

# hello.sanskript
vakya("๐Ÿค– Sanskript Chatbot")
manana("What is machine learning?")
manana("Give me a simple example")

Example 2: Document Q&A

# qa.sanskript
vakya("๐Ÿ“š Document Q&A System")

# Load knowledge base
rag("examples/docs")

# Query with context
manana("What are the types of AI?", "types of AI")
manana("Explain neural networks", "neural networks")

Example 3: Workflow with Mixed Languages

# workflow.sanskript
vakya("Starting AI workflow...")

# Load documents
rag("data/")

# Process with LLM
manana("Summarize the key findings", "research results")

# Post-process with JavaScript
js("console.log('Timestamp:', new Date().toISOString())")

# Optional: Python analysis
py("import json; print('Analysis complete')")

vakya("โœ… Workflow completed!")

Example 4: Streaming Responses

# Run with --stream flag for real-time output
sanskript --stream examples/streaming-demo.sanskript

API

JavaScript/Node.js SDK

import { run } from 'sanskript';

// Run Sanskript code
await run(`
  vakya("Hello from Node.js!")
  manana("What is AI?")
`, {
  provider: 'mock',
  stream: false
});

// Import individual components
import { tokenize, parse, interpret } from 'sanskript';
import { createProvider } from 'sanskript/providers';
import { createRAG } from 'sanskript/rag';

// Use providers directly
const provider = await createProvider('openai');
const response = await provider.complete("Hello!");

// Stream responses
for await (const chunk of provider.stream("Tell me a story")) {
  process.stdout.write(chunk);
}

// Use RAG system directly
const rag = createRAG();
await rag.loadDocuments('./docs');
const results = await rag.query("machine learning", 5);

Python SDK

from sanskript import run_sanskript, run_file

# Run Sanskript code
result = run_sanskript('''
    vakya("Hello from Python!")
    manana("What is AI?")
''', provider='mock')

# Run a file
result = run_file('examples/hello.sanskript')

# With configuration
from sanskript import SanskriptConfig
config = SanskriptConfig()
config.set('provider', 'openai')
config.save()

TypeScript Support

Sanskript includes TypeScript definitions:

import { run, RunOptions, tokenize, parse } from 'sanskript';

const options: RunOptions = {
  provider: 'openai',
  stream: true
};

await run('vakya("Hello")', options);

Architecture

┌─────────────────────────────────────────┐
│              CLI / SDK                  │
├─────────────────────────────────────────┤
│  Tokenizer → Parser → Interpreter       │
├─────────────────────────────────────────┤
│     Provider System    │   RAG System   │
│  ┌─────────────────┐   │  ┌───────────┐ │
│  │ OpenAI          │   │  │Embeddings │ │
│  │ Ollama          │   │  │VectorStore│ │
│  │ Mock            │   │  │Loader     │ │
│  └─────────────────┘   │  └───────────┘ │
└─────────────────────────────────────────┘
  • Tokenizer: Lexical analysis
  • Parser: Syntax analysis (AST)
  • Interpreter: Execution engine
  • Providers: Pluggable LLM backends
  • RAG: Document loading, chunking, vector search
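
These stages are also exported by the JavaScript SDK (see the API section). The snippet below is a sketch of walking the pipeline by hand; the exact signatures of tokenize, parse, and interpret are assumptions, so treat it as illustrative rather than definitive.

// Pipeline sketch: assumed signatures, shown for illustration only.
import { tokenize, parse, interpret } from 'sanskript';

const source = 'vakya("Hello")\nmanana("What is AI?")';

const tokens = tokenize(source);              // lexical analysis
const ast = parse(tokens);                    // syntax analysis (AST)
await interpret(ast, { provider: 'mock' });   // execution with the mock provider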

LLM Providers

Mock Provider (Default)

Perfect for testing without API keys:

sanskript --provider mock demo.sanskript

OpenAI Provider

Requires API key:

export OPENAI_API_KEY=sk-...
sanskript --provider openai demo.sanskript

Ollama Provider (Local LLMs)

Run LLMs completely locally:

# Install Ollama from https://ollama.ai
ollama pull llama2
sanskript --provider ollama demo.sanskript

Auto Provider (Smart Selection)

Automatically tries: OpenAI → Ollama → Mock

sanskript demo.sanskript  # Uses auto by default
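
If you need the same fallback order from the SDK, it can be approximated as below. This is an illustrative sketch, not the library's internal selection logic, and it assumes createProvider throws when a backend is unavailable (for example, a missing API key or an unreachable Ollama server).

import { createProvider } from 'sanskript/providers';

// Approximate the auto order: OpenAI, then Ollama, then the mock provider.
async function pickProvider() {
  for (const name of ['openai', 'ollama']) {
    try {
      return await createProvider(name);
    } catch {
      // Backend unavailable; try the next one.
    }
  }
  return createProvider('mock');
}

const provider = await pickProvider();
const response = await provider.complete('Hello!');
console.log(response);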

Advanced Topics

Custom Providers

Extend BaseProvider to add new LLM providers:

import { BaseProvider } from 'sanskript/providers';

class CustomProvider extends BaseProvider {
  async complete(prompt, options) {
    // Your implementation
  }
  
  async *stream(prompt, options) {
    // Your streaming implementation
  }
}
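
A provider written this way can be exercised directly through the same complete/stream interface shown in the SDK section. How (or whether) it can be registered for use from the CLI isn't documented here, so the snippet below only shows direct use; the constructor arguments are assumed to be optional.

// Direct use of the custom provider; constructor arguments (if any) are omitted.
const custom = new CustomProvider();

const answer = await custom.complete('Explain RAG in one sentence');
console.log(answer);

for await (const chunk of custom.stream('Tell me a story')) {
  process.stdout.write(chunk);
}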

RAG Configuration

Customize chunking and embedding:

const rag = createRAG({
  dimensions: 384,      // Embedding size
  chunkSize: 500,      // Characters per chunk
  overlap: 50,         // Overlap between chunks
  recursive: true,     // Recursive folder loading
  extensions: ['.txt', '.md', '.pdf']
});
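
Once configured, the instance is used exactly as in the SDK example above: load a folder of documents and ask for the top matches. The ./docs path and the query string are placeholders.

// Load a placeholder docs folder and fetch the top 5 matching chunks.
await rag.loadDocuments('./docs');
const results = await rag.query('machine learning', 5);
console.log(results);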

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Development Setup

git clone https://github.com/codermoderSD/sanskript.git
cd sanskript
npm install
npm link

# Run examples
sanskript examples/hello.sanskript

# Debug mode
DEBUG=1 sanskript examples/demo.sanskript

Roadmap

✅ Completed

  • [x] Custom tokenizer, parser, interpreter
  • [x] Real RAG with vector search
  • [x] Multiple LLM providers (OpenAI, Ollama, Mock)
  • [x] Streaming responses
  • [x] Production-ready packaging
  • [x] Configuration system
  • [x] TypeScript definitions
  • [x] Python SDK

🚧 Planned

  • [ ] Variables and state management
  • [ ] Control flow (if/else, loops)
  • [ ] User-defined functions
  • [ ] REPL mode
  • [ ] More LLM providers (Anthropic, Cohere)
  • [ ] Advanced embeddings (real ML models)
  • [ ] Persistent vector storage
  • [ ] Web UI
  • [ ] VS Code extension

License

MIT License - see LICENSE file.

Acknowledgments

  • Inspired by Sanskrit, India's ancient language of knowledge 🇮🇳
  • Built with modern web technologies
  • Powered by the open-source community