TyTorch
TypeScript bindings for PyTorch via libtorch. Brings the full power of PyTorch to Node.js with complete type safety.
Features
- 🔥 Full TypeScript support with complete type definitions
- ⚡ Direct bindings to libtorch (PyTorch C++ API)
- 🎯 Familiar PyTorch-style API
- 🖥️ Support for CPU, CUDA, and Apple Silicon (MPS) devices
- 🚀 Zero-copy operations where possible
- 🧪 Comprehensive test coverage (500+ tests across unit/cpu/mps suites)
- 📦 ES Module support
Installation
Step 1: Install PyTorch (libtorch)
TyTorch requires PyTorch's C++ library (libtorch) to be installed on your system. This is a critical prerequisite.
macOS (Homebrew + Python)
# Install Python and PyTorch via pip
brew install python@3.11
pip3 install torch torchvision torchaudio
# Set environment variables (add to your shell config)
# For bash/zsh (~/.bashrc or ~/.zshrc):
export LIBTORCH="/opt/homebrew/lib/python3.11/site-packages/torch"
export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH"
# For nushell (~/.config/nushell/env.nu):
$env.LIBTORCH = "/opt/homebrew/lib/python3.11/site-packages/torch"
$env.DYLD_LIBRARY_PATH = ($env.LIBTORCH | path join "lib")
Linux
# Install PyTorch via pip
pip3 install torch torchvision torchaudio
# Set environment variables (add to ~/.bashrc or ~/.zshrc)
export LIBTORCH="$(python3 -c 'import torch; print(torch.__path__[0])')"
export LD_LIBRARY_PATH="$LIBTORCH/lib:$LD_LIBRARY_PATH"
Windows
# Install PyTorch via pip
pip install torch torchvision torchaudio
# Set environment variables (PowerShell)
$env:LIBTORCH = python -c "import torch; print(torch.__path__[0])"
$env:PATH = "$env:LIBTORCH\lib;$env:PATH"
Note: Adjust the Python version (3.11) to match your installation. Verify with python3 --version.
Step 2: Install TyTorch
npm install tytorch
# or
pnpm install tytorch
This will automatically build the native addon using node-gyp.
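To verify the installation, a quick smoke test using the same calls as the Quick Start below should create a tensor and report its shape, dtype, and device:
import { torch } from 'tytorch';

const t = torch.ones([2, 3]);
console.log(t.shape);  // [2, 3]
console.log(t.dtype);  // 'float32'
console.log(t.device); // 'cpu'
If the module fails to load, double-check the LIBTORCH and DYLD_LIBRARY_PATH / LD_LIBRARY_PATH settings from Step 1.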
Requirements
- Node.js >= 24.0.0
- PyTorch (libtorch) installed (see Step 1 above)
- C++17 compatible compiler
- Environment variables set correctly (LIBTORCH, DYLD_LIBRARY_PATH / LD_LIBRARY_PATH)
Platform Support
- macOS (Apple Silicon and Intel) - Fully tested ✅
- Linux (x86_64 with CUDA support) - Should work ⚠️
- Windows - Experimental, may be broken ⚠️
Note: This is a prototype package currently tested only on macOS. Linux and Windows support is in progress.
Quick Start
import { torch } from 'tytorch';
// Create tensors
const a = torch.ones([2, 3]);
const b = torch.zeros([2, 3]);
const c = torch.randn([2, 3]);
// Basic operations
const sum = a.add(b);
const product = a.mul(c);
const matmul = a.matmul(b.transpose());
// Functional API
const result = torch.add(a, b);
// Properties
console.log(a.shape); // [2, 3]
console.log(a.dtype); // 'float32'
console.log(a.device); // 'cpu'
// Device management
if (torch.cuda.is_available()) {
const gpu_tensor = a.cuda();
}
// Apple Silicon MPS support
if (torch.mps.is_available()) {
const mps_tensor = a.mps();
}
Documentation
- TODO.md - Development roadmap and feature status
- CLAUDE.md - Detailed project architecture and development guide
Current Status
Phase 1: Core Refactoring ✅ COMPLETE
- Modular C++ operations (one file per operation)
- Test suite organized into unit/cpu/mps suites
- 500+ tests passing
- Clean, maintainable architecture
Phase 2: Essential ML Methods 🚧 IN PROGRESS
- ✅ Shape operations (reshape, transpose, squeeze, unsqueeze, permute, flatten)
- ✅ Autograd support (backward, gradients, requires_grad)
- ✅ Activation functions (relu, sigmoid, tanh, softmax, log_softmax)
- ✅ Loss functions (mse_loss, cross_entropy, nll_loss, binary_cross_entropy)
- 🚧 Next: Indexing, slicing, concatenation
- See TODO.md for detailed roadmap
API Reference
Tensor Creation
torch.zeros(shape: number[], options?: TensorOptions): Tensor
torch.ones(shape: number[], options?: TensorOptions): Tensor
torch.randn(shape: number[], options?: TensorOptions): Tensor
torch.tensor(data: number[], options?: TensorOptions): Tensor
TensorOptions:
interface TensorOptions {
dtype?: 'float32' | 'float64' | 'int32' | 'int64';
device?: 'cpu' | 'cuda' | 'mps';
}
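For example, options can be combined with any of the creation functions above (a small sketch):
const zeros64 = torch.zeros([2, 2], { dtype: 'float64' });
const onesCpu = torch.ones([3], { device: 'cpu' });
console.log(zeros64.dtype);  // 'float64'
console.log(onesCpu.device); // 'cpu'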
Arithmetic Operations
// Element-wise operations
tensor.add(other: Tensor | number): Tensor
tensor.sub(other: Tensor | number): Tensor
tensor.mul(other: Tensor | number): Tensor
tensor.div(other: Tensor | number): Tensor
// In-place variants (modify tensor in place)
tensor.add_(other: Tensor | number): Tensor
tensor.sub_(other: Tensor | number): Tensor
tensor.mul_(other: Tensor | number): Tensor
tensor.div_(other: Tensor | number): Tensor
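Both forms accept either a tensor or a plain number; the underscore variants mutate the receiver instead of returning a new tensor. A brief sketch:
const a = torch.ones([2, 2]);
const b = torch.randn([2, 2]);
const c = a.add(b).mul(2); // out-of-place: a and b are unchanged
a.add_(1);                 // in-place: a itself is updated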
Matrix Operations
tensor.matmul(other: Tensor): Tensor // Matrix multiplication
Reductions
tensor.sum(): Tensor // Sum all elements to scalar
tensor.mean(): Tensor // Mean of all elements
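Both reductions collapse the entire tensor to a scalar tensor, for example:
const x = torch.randn([3, 4]);
const total = x.sum();  // sum of all 12 elements
const avg = x.mean();   // mean of all elements
console.log(avg.toString());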
Device Management
tensor.cpu(): Tensor // Move to CPU
tensor.cuda(): Tensor // Move to CUDA (if available)
tensor.mps(): Tensor // Move to MPS (Apple Silicon)
tensor.to(device: string, dtype?: string): Tensor // Generic conversion
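to() can move and convert in a single call; a short sketch guarded by the availability check from the Quick Start:
const t = torch.randn([2, 2]);
const asDouble = t.to('cpu', 'float64'); // stay on CPU, change dtype
if (torch.mps.is_available()) {
  const onMps = t.to('mps');             // move to the Apple Silicon GPU
}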
Dtype Conversions
tensor.float(): Tensor // Convert to float32
tensor.double(): Tensor // Convert to float64
tensor.int(): Tensor // Convert to int32
tensor.long(): Tensor // Convert to int64
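Each helper returns a converted tensor; for instance:
const f = torch.ones([2]);     // 'float32' by default (see Quick Start)
console.log(f.long().dtype);   // 'int64'
console.log(f.double().dtype); // 'float64'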
Properties
tensor.shape: number[] // Tensor shape
tensor.dtype: string // Data type ('float32', 'float64', 'int32', etc.)
tensor.device: string // Device ('cpu', 'cuda', 'mps')
Utilities
tensor.toArray(): number[] // Convert to JavaScript array
tensor.toString(): string // String representation (PyTorch format)
Functional API
Most operations are available in functional form:
torch.add(a: Tensor, b: Tensor | number): Tensor
torch.sub(a: Tensor, b: Tensor | number): Tensor
torch.mul(a: Tensor, b: Tensor | number): Tensor
torch.div(a: Tensor, b: Tensor | number): Tensor
torch.matmul(a: Tensor, b: Tensor): Tensor
Development
Build Commands
# Build native addon
pnpm build:native
# Build TypeScript
pnpm build:ts
# Build everything
pnpm build
Testing
# Run all tests
pnpm test
# Run unit tests only (no device requirements)
pnpm test:unit
# Run CPU tests only
pnpm test:cpu
# Run MPS tests only (requires Apple Silicon)
pnpm test:mps
# Watch mode
pnpm test:watch
# UI mode
pnpm test:ui
Test Organization
- Unit Tests (test/unit/) - Device-agnostic core functionality
- CPU Tests (test/cpu/) - CPU-specific tests
- MPS Tests (test/mps/) - Apple Silicon GPU tests
All three test suites run sequentially via pnpm test. Each operation typically has three test files (one per suite).
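The test:watch and test:ui scripts suggest a Vitest setup; assuming that (the framework and file name below are assumptions, not documented), a CPU-suite test for an operation might look roughly like:
// test/cpu/add.test.ts (hypothetical file name)
import { describe, it, expect } from 'vitest';
import { torch } from 'tytorch';

describe('add (cpu)', () => {
  it('adds two tensors element-wise', () => {
    const a = torch.ones([2, 3]);
    const b = torch.ones([2, 3]);
    const c = a.add(b);
    expect(c.shape).toEqual([2, 3]);
    expect(c.device).toBe('cpu');
  });
});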
Project Structure
tytorch/
├── src/
│ ├── index.ts # TypeScript entry point
│ └── native/
│ ├── addon.cpp # N-API initialization
│ ├── tensor.h/cpp # Tensor class
│ ├── utils.h/cpp # Utilities
│ └── ops/ # Modular operations (one file per op)
│ ├── add.h/cpp
│ ├── mul.h/cpp
│ ├── matmul.h/cpp
│ └── ...
├── test/
│ ├── unit/ # Unit tests
│ ├── cpu/ # CPU tests
│ └── mps/ # MPS tests
├── binding.gyp # Node-gyp configuration
└── package.json
Roadmap
See TODO.md for the complete development roadmap.
Recently Completed:
- ✅ Tensor shape operations (reshape, transpose, squeeze, unsqueeze, permute, flatten)
- ✅ Autograd support (backward, gradients, requires_grad, detach, zero_grad, no_grad)
- ✅ Activation functions (relu, sigmoid, tanh, softmax, log_softmax)
- ✅ Loss functions (mse_loss, cross_entropy, nll_loss, binary_cross_entropy)
Coming Soon:
- Indexing and slicing (slice, index_select, [] operator)
- Concatenation and stacking (cat, stack, split, chunk)
- Advanced reductions (max, min, argmax, argmin with dimensions)
- Element-wise math operations (pow, sqrt, exp, log, abs)
- Comparison operations (eq, ne, gt, lt)
- More tensor creation utilities (clone, randint, arange, linspace, eye)
Performance
TyTorch uses libtorch (PyTorch's C++ API) directly, providing:
- Zero-copy operations where possible
- Full access to PyTorch's optimized kernels
- Native CUDA and MPS acceleration
- Minimal JavaScript/C++ boundary overhead
Contributing
See CLAUDE.md for detailed development guidelines and architecture documentation.
License
Apache 2.0
Copyright (C) 2025 Identellica LLC
Acknowledgments
Built on top of PyTorch via libtorch.
