Sanskript 🇮🇳
What is Sanskript?
Sanskript is a minimal, local-first AI scripting language designed for building AI workflows, chatbots, and RAG (Retrieval-Augmented Generation) applications. Write readable AI workflows in a simple DSL and run them locally or in production.
Why Sanskript?
- Production-Ready: npm package, SDK, cross-platform CLI
- Local-First: Run completely offline with Ollama or mock provider
- Real RAG: Built-in vector search and document processing
- Pluggable LLMs: OpenAI, Ollama, or custom providers
- Streaming: Real-time response streaming
- Simple Syntax: Readable DSL inspired by Sanskrit (India's ancient language)
- Multi-Language: JavaScript and Python SDKs
Quick Start
Install
# Via npm (recommended)
npm install -g sanskript
# Or use without installing
npx sanskript --version
Hello World
Create hello.sanskript:
vakya("Namaste! Welcome to Sanskript ๐ฎ๐ณ")
manana("What is artificial intelligence?")Run it:
sanskript hello.sanskript
That's it!
Features
Core Capabilities
5 Simple Commands
- vakya(message) - Print output
- manana(prompt) - Call LLM
- manana(prompt, context) - RAG-enhanced LLM call
- rag(path) - Load documents for RAG
- js(code) / py(code) - Execute JavaScript/Python
Real RAG System
- Automatic document chunking
- Lightweight vector embeddings
- In-memory vector search
- Recursive folder loading
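The loader splits each document into fixed-size chunks with a small overlap so that sentences spanning a chunk boundary are not lost. A minimal JavaScript sketch of that idea (illustrative only, not Sanskript's actual source; sizes are in characters, matching the rag settings in the configuration file shown later):
// Step forward by chunkSize minus overlap so consecutive chunks share text
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}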
Multiple LLM Providers
- OpenAI (GPT-3.5, GPT-4)
- Ollama (local Llama2, Mistral, etc.)
- Mock (for testing)
- Auto-detection with fallback
Production Features
- Streaming responses
- Configuration files
- Environment variables
- Cross-platform support
- TypeScript definitions
- Python SDK
Installation
Prerequisites
- Node.js 16 or higher
- Python 3.7+ (optional, for Python SDK or py command)
- Ollama (optional, for local LLMs)
Install Sanskript
# Global installation (recommended)
npm install -g sanskript
# Local installation
npm install sanskript
# Development
git clone https://github.com/codermoderSD/sanskript.git
cd sanskript
npm install
npm link
Verify Installation
sanskript --version
sanskript --help
Build a Document Chatbot in 60 Seconds
Let's build a document Q&A system in just 3 steps:
Step 1: Create Your Documents
mkdir my-docs
echo "Artificial Intelligence (AI) is intelligence demonstrated by machines." > my-docs/ai.txt
echo "Machine Learning is a subset of AI that learns from data." > my-docs/ml.txtStep 2: Create chatbot.sanskript
# Load documents into RAG system
vakya("๐ Loading knowledge base...")
rag("my-docs")
# Ask questions with context
vakya("\n๐ค Answering questions...")
manana("What is AI?", "artificial intelligence")
manana("What is machine learning?", "machine learning")
manana("How are AI and ML related?", "AI ML relationship")Step 3: Run It
# With mock provider (no API needed)
sanskript chatbot.sanskript
# With OpenAI
export OPENAI_API_KEY=your-key
sanskript --provider openai chatbot.sanskript
# With local Ollama
ollama pull llama2
sanskript --provider ollama chatbot.sanskript
That's it! You have a working document chatbot!
Documentation
Command Reference
vakya(message)
Print messages to console.
vakya("Hello, World!")
vakya("Multiple", "arguments", "supported")
vakya() # Empty line
manana(prompt [, context])
Call LLM with optional RAG context.
# Basic LLM call
manana("Explain quantum computing")
# With RAG context
rag("docs/")
manana("What is quantum entanglement?", "quantum")rag(path)
Load documents for RAG.
# Single file
rag("document.txt")
# Entire folder (recursive)
rag("docs/")js(code) / py(code)
Execute JavaScript or Python.
js("console.log('From JavaScript')")
py("print('From Python')")CLI Usage
# Basic usage
sanskript <file.sanskript>
# With options
sanskript --stream --provider openai demo.sanskript
sanskript --config custom.json workflow.sanskript
# Get help
sanskript --help
sanskript --version
CLI Flags
- --stream - Enable streaming mode
- --provider <name> - Set LLM provider (auto/mock/openai/ollama)
- --config <path> - Use custom config file
- --help - Show help
- --version - Show version
Configuration
Create .sanskript.config.json in your project:
{
"provider": "auto",
"stream": false,
"providers": {
"openai": {
"model": "gpt-3.5-turbo",
"temperature": 0.7,
"maxTokens": 1000
},
"ollama": {
"baseURL": "http://localhost:11434",
"model": "llama2"
}
},
"rag": {
"chunkSize": 500,
"overlap": 50,
"extensions": [".txt", ".md", ".json"]
}
}
Environment Variables
# Provider settings
SANSKRIPT_PROVIDER=openai # Set default provider
SANSKRIPT_STREAM=true # Enable streaming
# OpenAI settings
OPENAI_API_KEY=sk-... # Your API key
OPENAI_MODEL=gpt-4 # Model name
# Ollama settings
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama2
# Debugging
DEBUG=1                    # Show debug info
Examples
Example 1: Simple Chatbot
# hello.sanskript
vakya("๐ค Sanskript Chatbot")
manana("What is machine learning?")
manana("Give me a simple example")Example 2: Document Q&A
# qa.sanskript
vakya("๐ Document Q&A System")
# Load knowledge base
rag("examples/docs")
# Query with context
manana("What are the types of AI?", "types of AI")
manana("Explain neural networks", "neural networks")Example 3: Workflow with Mixed Languages
# workflow.sanskript
vakya("Starting AI workflow...")
# Load documents
rag("data/")
# Process with LLM
manana("Summarize the key findings", "research results")
# Post-process with JavaScript
js("console.log('Timestamp:', new Date().toISOString())")
# Optional: Python analysis
py("import json; print('Analysis complete')")
vakya("โ
Workflow completed!")Example 4: Streaming Responses
# Run with --stream flag for real-time output
sanskript --stream examples/streaming-demo.sanskript
API
JavaScript/Node.js SDK
import { run } from 'sanskript';
// Run Sanskript code
await run(`
vakya("Hello from Node.js!")
manana("What is AI?")
`, {
provider: 'mock',
stream: false
});
// Import individual components
import { tokenize, parse, interpret } from 'sanskript';
import { createProvider } from 'sanskript/providers';
import { createRAG } from 'sanskript/rag';
// Use providers directly
const provider = await createProvider('openai');
const response = await provider.complete("Hello!");
// Stream responses
for await (const chunk of provider.stream("Tell me a story")) {
process.stdout.write(chunk);
}
// Use RAG system directly
const rag = createRAG();
await rag.loadDocuments('./docs');
const results = await rag.query("machine learning", 5);
Python SDK
from sanskript import run_sanskript, run_file
# Run Sanskript code
result = run_sanskript('''
vakya("Hello from Python!")
manana("What is AI?")
''', provider='mock')
# Run a file
result = run_file('examples/hello.sanskript')
# With configuration
from sanskript import SanskriptConfig
config = SanskriptConfig()
config.set('provider', 'openai')
config.save()
TypeScript Support
Sanskript includes TypeScript definitions:
import { run, RunOptions, tokenize, parse } from 'sanskript';
const options: RunOptions = {
provider: 'openai',
stream: true
};
await run('vakya("Hello")', options);
Architecture
┌─────────────────────────────────────────┐
│                CLI / SDK                │
├─────────────────────────────────────────┤
│   Tokenizer  │   Parser  │  Interpreter │
├─────────────────────────────────────────┤
│   Provider System   │    RAG System     │
│  ┌───────────────┐  │  ┌─────────────┐  │
│  │ OpenAI        │  │  │ Embeddings  │  │
│  │ Ollama        │  │  │ VectorStore │  │
│  │ Mock          │  │  │ Loader      │  │
│  └───────────────┘  │  └─────────────┘  │
└─────────────────────────────────────────┘
- Tokenizer: Lexical analysis
- Parser: Syntax analysis (AST)
- Interpreter: Execution engine
- Providers: Pluggable LLM backends
- RAG: Document loading, chunking, vector search
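The tokenize, parse, and interpret stages are also exported from the SDK, so the pipeline can be driven step by step. A rough sketch (the exact signatures are assumptions; run() shown in the API section above is the documented entry point):
import { tokenize, parse, interpret } from 'sanskript';
const tokens = tokenize('vakya("Namaste")'); // Tokenizer: lexical analysis
const ast = parse(tokens);                   // Parser: builds the AST
await interpret(ast, { provider: 'mock' });  // Interpreter: executes it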
LLM Providers
Mock Provider (Default)
Perfect for testing without API keys:
sanskript --provider mock demo.sanskript
OpenAI Provider
Requires API key:
export OPENAI_API_KEY=sk-...
sanskript --provider openai demo.sanskript
Ollama Provider (Local LLMs)
Run LLMs completely locally:
# Install Ollama from https://ollama.ai
ollama pull llama2
sanskript --provider ollama demo.sanskript
Auto Provider (Smart Selection)
Automatically tries: OpenAI → Ollama → Mock
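Under the hood the selection amounts to probing each backend in order. A rough JavaScript sketch of that idea (an assumption about the logic, not the actual source; createProvider is the SDK export shown above, and /api/tags is a standard Ollama endpoint):
import { createProvider } from 'sanskript/providers';
// Illustrative fallback chain: OpenAI -> Ollama -> Mock
async function pickProvider() {
  if (process.env.OPENAI_API_KEY) return createProvider('openai');
  try {
    await fetch('http://localhost:11434/api/tags'); // local Ollama running?
    return createProvider('ollama');
  } catch {
    return createProvider('mock'); // always available, no API key needed
  }
}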
sanskript demo.sanskript # Uses auto by default
Advanced Topics
Custom Providers
Extend BaseProvider to add new LLM providers:
import { BaseProvider } from 'sanskript/providers';
class CustomProvider extends BaseProvider {
  async complete(prompt, options) {
    // Call your backend here and return the full completion text.
    // This echo is illustrative only.
    return `You said: ${prompt}`;
  }
  async *stream(prompt, options) {
    // Yield chunks as they arrive; this word-by-word echo is illustrative.
    for (const word of `You said: ${prompt}`.split(' ')) {
      yield word + ' ';
    }
  }
}
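A quick check that the provider honors the same contract as the built-ins (hypothetical usage; Sanskript may also expose a registration hook, which is not shown here):
const provider = new CustomProvider();
console.log(await provider.complete('Namaste'));
for await (const chunk of provider.stream('Namaste')) {
  process.stdout.write(chunk);
}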
RAG Configuration
Customize chunking and embedding:
const rag = createRAG({
dimensions: 384, // Embedding size
chunkSize: 500, // Characters per chunk
overlap: 50, // Overlap between chunks
recursive: true, // Recursive folder loading
extensions: ['.txt', '.md', '.pdf']
});
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Development Setup
git clone https://github.com/codermoderSD/sanskript.git
cd sanskript
npm install
npm link
# Run examples
sanskript examples/hello.sanskript
# Debug mode
DEBUG=1 sanskript examples/demo.sanskript
Roadmap
Completed
- [x] Custom tokenizer, parser, interpreter
- [x] Real RAG with vector search
- [x] Multiple LLM providers (OpenAI, Ollama, Mock)
- [x] Streaming responses
- [x] Production-ready packaging
- [x] Configuration system
- [x] TypeScript definitions
- [x] Python SDK
Planned
- [ ] Variables and state management
- [ ] Control flow (if/else, loops)
- [ ] User-defined functions
- [ ] REPL mode
- [ ] More LLM providers (Anthropic, Cohere)
- [ ] Advanced embeddings (real ML models)
- [ ] Persistent vector storage
- [ ] Web UI
- [ ] VS Code extension
License
MIT License - see LICENSE file.
Acknowledgments
- Inspired by Sanskrit, India's ancient language of knowledge 🇮🇳
- Built with Node.js and modern web technologies
- Powered by the open-source community
