TyTorch

TypeScript bindings for PyTorch via libtorch. Brings the full power of PyTorch to Node.js with complete type safety.

Features

  • 🔥 Full TypeScript support with complete type definitions
  • ⚡ Direct bindings to libtorch (PyTorch C++ API)
  • 🎯 Familiar PyTorch-style API
  • 🖥️ Support for CPU, CUDA, and Apple Silicon (MPS) devices
  • 🚀 Zero-copy operations where possible
  • 🧪 Comprehensive test coverage (500+ tests across unit/cpu/mps suites)
  • 📦 ES Module support

Installation

Step 1: Install PyTorch (libtorch)

TyTorch requires PyTorch's C++ library (libtorch) to be installed on your system. This is a critical prerequisite.

macOS (Homebrew + Python)

# Install Python and PyTorch via pip
brew install python@3.11
pip3 install torch torchvision torchaudio

# Set environment variables (add to your shell config)
# For bash/zsh (~/.bashrc or ~/.zshrc):
export LIBTORCH="/opt/homebrew/lib/python3.11/site-packages/torch"
export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH"

# For nushell (~/.config/nushell/env.nu):
$env.LIBTORCH = "/opt/homebrew/lib/python3.11/site-packages/torch"
$env.DYLD_LIBRARY_PATH = ($env.LIBTORCH | path join "lib")

Linux

# Install PyTorch via pip
pip3 install torch torchvision torchaudio

# Set environment variables (add to ~/.bashrc or ~/.zshrc)
export LIBTORCH="$(python3 -c 'import torch; print(torch.__path__[0])')"
export LD_LIBRARY_PATH="$LIBTORCH/lib:$LD_LIBRARY_PATH"

Windows

# Install PyTorch via pip
pip install torch torchvision torchaudio

# Set environment variables (PowerShell)
$env:LIBTORCH = python -c "import torch; print(torch.__path__[0])"
$env:PATH = "$env:LIBTORCH\lib;$env:PATH"

Note: Adjust Python version (3.11) to match your installation. Verify with python3 --version.
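
Before building, it can help to confirm that LIBTORCH actually points at a libtorch install. A small ad-hoc Node/TypeScript check (a convenience sketch, not part of TyTorch) could look like:

import { existsSync } from 'node:fs';
import { join } from 'node:path';

// LIBTORCH should point at the torch package directory, which contains lib/
const libtorch = process.env.LIBTORCH;
if (!libtorch || !existsSync(join(libtorch, 'lib'))) {
  throw new Error('LIBTORCH is not set or does not contain a lib/ directory');
}
console.log(`libtorch found at ${libtorch}`);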

Step 2: Install TyTorch

npm install tytorch
# or
pnpm install tytorch

This will automatically build the native addon using node-gyp.
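
Once the install completes, a quick smoke test using the API shown in Quick Start below confirms that the addon was built and linked against libtorch:

import { torch } from 'tytorch';

const t = torch.ones([2, 3]);
console.log(t.shape);      // [2, 3]
console.log(t.toString()); // PyTorch-style printout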

Requirements

  • Node.js >= 24.0.0
  • PyTorch (libtorch) installed (see Step 1 above)
  • C++17 compatible compiler
  • Environment variables set correctly (LIBTORCH, DYLD_LIBRARY_PATH / LD_LIBRARY_PATH)

Platform Support

  • macOS (Apple Silicon and Intel) - Fully tested ✅
  • Linux (x86_64) - Should work ⚠️
  • Windows - Experimental, may be broken ⚠️

Note: This is a prototype package currently tested only on macOS. Linux and Windows support is in progress.

Quick Start

import { torch } from 'tytorch';

// Create tensors
const a = torch.ones([2, 3]);
const b = torch.zeros([2, 3]);
const c = torch.randn([2, 3]);

// Basic operations
const sum = a.add(b);
const product = a.mul(c);
const matmul = a.matmul(b.transpose());

// Functional API
const result = torch.add(a, b);

// Properties
console.log(a.shape);   // [2, 3]
console.log(a.dtype);   // 'float32'
console.log(a.device);  // 'cpu'

// Device management
if (torch.cuda.is_available()) {
  const gpu_tensor = a.cuda();
}

// Apple Silicon MPS support
if (torch.mps.is_available()) {
  const mps_tensor = a.mps();
}

Documentation

  • TODO.md - Development roadmap and feature status
  • CLAUDE.md - Detailed project architecture and development guide

Current Status

Phase 1: Core Refactoring ✅ COMPLETE

  • Modular C++ operations (one file per operation)
  • Test suite organized into unit/cpu/mps suites
  • 500+ tests passing
  • Clean, maintainable architecture

Phase 2: Essential ML Methods 🚧 IN PROGRESS

  • ✅ Shape operations (reshape, transpose, squeeze, unsqueeze, permute, flatten)
  • ✅ Autograd support (backward, gradients, requires_grad)
  • ✅ Activation functions (relu, sigmoid, tanh, softmax, log_softmax)
  • ✅ Loss functions (mse_loss, cross_entropy, nll_loss, binary_cross_entropy)
  • 🚧 Next: Indexing, slicing, concatenation
  • See TODO.md for detailed roadmap

API Reference

Tensor Creation

torch.zeros(shape: number[], options?: TensorOptions): Tensor
torch.ones(shape: number[], options?: TensorOptions): Tensor
torch.randn(shape: number[], options?: TensorOptions): Tensor
torch.tensor(data: number[], options?: TensorOptions): Tensor

Options:

interface TensorOptions {
  dtype?: 'float32' | 'float64' | 'int32' | 'int64';
  device?: 'cpu' | 'cuda' | 'mps';
}
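
Options can be passed to any of the creation functions above, for example:

import { torch } from 'tytorch';

const f64 = torch.zeros([2, 3], { dtype: 'float64' });
const device = torch.cuda.is_available() ? 'cuda' : 'cpu';
const onDevice = torch.ones([2, 3], { device });

console.log(f64.dtype);       // 'float64'
console.log(onDevice.device); // 'cuda' or 'cpu'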

Arithmetic Operations

// Element-wise operations
tensor.add(other: Tensor | number): Tensor
tensor.sub(other: Tensor | number): Tensor
tensor.mul(other: Tensor | number): Tensor
tensor.div(other: Tensor | number): Tensor

// In-place variants (modify tensor in place)
tensor.add_(other: Tensor | number): Tensor
tensor.sub_(other: Tensor | number): Tensor
tensor.mul_(other: Tensor | number): Tensor
tensor.div_(other: Tensor | number): Tensor
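
The trailing underscore follows the PyTorch naming convention: add returns a new tensor, while add_ mutates the receiver. A short illustration:

import { torch } from 'tytorch';

const a = torch.ones([2, 2]);
const b = a.add(1);        // out-of-place: a is unchanged
a.add_(1);                 // in-place: a itself now holds 2s

console.log(b.toArray());  // [2, 2, 2, 2]
console.log(a.toArray());  // [2, 2, 2, 2]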

Matrix Operations

tensor.matmul(other: Tensor): Tensor  // Matrix multiplication
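
A [m, k] tensor multiplied by a [k, n] tensor yields an [m, n] result:

import { torch } from 'tytorch';

const a = torch.randn([2, 3]);
const b = torch.randn([3, 4]);
const c = a.matmul(b);
console.log(c.shape);  // [2, 4]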

Reductions

tensor.sum(): Tensor   // Sum all elements to scalar
tensor.mean(): Tensor  // Mean of all elements
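
Both reductions collapse the entire tensor into a scalar tensor, for example:

import { torch } from 'tytorch';

const t = torch.ones([2, 3]);
console.log(t.sum().toString());   // scalar tensor holding 6
console.log(t.mean().toString());  // scalar tensor holding 1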

Device Management

tensor.cpu(): Tensor    // Move to CPU
tensor.cuda(): Tensor   // Move to CUDA (if available)
tensor.mps(): Tensor    // Move to MPS (Apple Silicon)
tensor.to(device: string, dtype?: string): Tensor  // Generic conversion
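
A common pattern is to pick the best available accelerator at runtime and fall back to CPU:

import { torch } from 'tytorch';

const x = torch.randn([4, 4]);
const accelerated = torch.cuda.is_available()
  ? x.cuda()
  : torch.mps.is_available()
    ? x.mps()
    : x;

// Generic form: device (and optionally dtype) in a single call
const y = x.to('cpu', 'float64');
console.log(accelerated.device, y.dtype);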

Dtype Conversions

tensor.float(): Tensor   // Convert to float32
tensor.double(): Tensor  // Convert to float64
tensor.int(): Tensor     // Convert to int32
tensor.long(): Tensor    // Convert to int64
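
For example:

import { torch } from 'tytorch';

const t = torch.tensor([1, 2, 3]);
console.log(t.float().dtype);   // 'float32'
console.log(t.double().dtype);  // 'float64'
console.log(t.long().dtype);    // 'int64'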

Properties

tensor.shape: number[]   // Tensor shape
tensor.dtype: string     // Data type ('float32', 'float64', 'int32', etc.)
tensor.device: string    // Device ('cpu', 'cuda', 'mps')

Utilities

tensor.toArray(): number[]    // Convert to JavaScript array
tensor.toString(): string     // String representation (PyTorch format)
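
Round-tripping back to plain JavaScript values:

import { torch } from 'tytorch';

const t = torch.tensor([1, 2, 3]);
console.log(t.toArray());   // [1, 2, 3]
console.log(t.toString());  // PyTorch-style representation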

Functional API

Most operations are available in functional form:

torch.add(a: Tensor, b: Tensor | number): Tensor
torch.sub(a: Tensor, b: Tensor | number): Tensor
torch.mul(a: Tensor, b: Tensor | number): Tensor
torch.div(a: Tensor, b: Tensor | number): Tensor
torch.matmul(a: Tensor, b: Tensor): Tensor
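
The functional and method forms are interchangeable:

import { torch } from 'tytorch';

const a = torch.ones([2, 2]);
const b = torch.ones([2, 2]);

const viaMethod = a.add(b);
const viaFunction = torch.add(a, b);
console.log(viaMethod.toArray());   // [2, 2, 2, 2]
console.log(viaFunction.toArray()); // [2, 2, 2, 2]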

Development

Build Commands

# Build native addon
pnpm build:native

# Build TypeScript
pnpm build:ts

# Build everything
pnpm build

Testing

# Run all tests
pnpm test

# Run unit tests only (no device requirements)
pnpm test:unit

# Run CPU tests only
pnpm test:cpu

# Run MPS tests only (requires Apple Silicon)
pnpm test:mps

# Watch mode
pnpm test:watch

# UI mode
pnpm test:ui

Test Organization

  • Unit Tests (test/unit/) - Device-agnostic core functionality
  • CPU Tests (test/cpu/) - CPU-specific tests
  • MPS Tests (test/mps/) - Apple Silicon GPU tests

All three test suites run sequentially via pnpm test. Each operation typically has three test files (one per suite).
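
As a rough sketch of the layout (assuming the suite uses Vitest, which the test:watch and test:ui scripts suggest; the exact imports and assertions are an assumption, so check the files under test/unit/ for the real structure), a unit test for add might look like:

import { describe, it, expect } from 'vitest';  // assumption: the test runner is Vitest
import { torch } from 'tytorch';

describe('add', () => {
  it('adds two tensors element-wise', () => {
    const result = torch.ones([2, 2]).add(torch.ones([2, 2]));
    expect(result.shape).toEqual([2, 2]);
    expect(result.toArray()).toEqual([2, 2, 2, 2]);
  });
});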

Project Structure

tytorch/
├── src/
│   ├── index.ts              # TypeScript entry point
│   └── native/
│       ├── addon.cpp         # N-API initialization
│       ├── tensor.h/cpp      # Tensor class
│       ├── utils.h/cpp       # Utilities
│       └── ops/              # Modular operations (one file per op)
│           ├── add.h/cpp
│           ├── mul.h/cpp
│           ├── matmul.h/cpp
│           └── ...
├── test/
│   ├── unit/                 # Unit tests
│   ├── cpu/                  # CPU tests
│   └── mps/                  # MPS tests
├── binding.gyp               # Node-gyp configuration
└── package.json

Roadmap

See TODO.md for the complete development roadmap.

Recently Completed:

  • ✅ Tensor shape operations (reshape, transpose, squeeze, unsqueeze, permute, flatten)
  • ✅ Autograd support (backward, gradients, requires_grad, detach, zero_grad, no_grad)
  • ✅ Activation functions (relu, sigmoid, tanh, softmax, log_softmax)
  • ✅ Loss functions (mse_loss, cross_entropy, nll_loss, binary_cross_entropy)

Coming Soon:

  • Indexing and slicing (slice, index_select, [] operator)
  • Concatenation and stacking (cat, stack, split, chunk)
  • Advanced reductions (max, min, argmax, argmin with dimensions)
  • Element-wise math operations (pow, sqrt, exp, log, abs)
  • Comparison operations (eq, ne, gt, lt)
  • More tensor creation utilities (clone, randint, arange, linspace, eye)

Performance

TyTorch uses libtorch (PyTorch's C++ API) directly, providing:

  • Zero-copy operations where possible
  • Full access to PyTorch's optimized kernels
  • Native CUDA and MPS acceleration
  • Minimal JavaScript/C++ boundary overhead

Contributing

See CLAUDE.md for detailed development guidelines and architecture documentation.

License

Apache 2.0

Copyright (C) 2025 Identellica LLC

Acknowledgments

Built on top of PyTorch via libtorch.