@aumiqx/gesture
v0.1.1
Pure-math gesture recognition for JavaScript. No DOM, no events, no framework, zero dependencies.
~6KB | TypeScript | Works everywhere: Browser, Node.js, Deno, Bun, React Native, Canvas, WebGL
Install
```sh
npm install @aumiqx/gesture
```

Quick Start
```js
import { recognize } from "@aumiqx/gesture";

// Feed it raw coordinates from any source
const gesture = recognize([
  { x: 100, y: 200, t: 0 },
  { x: 250, y: 195, t: 32 },
  { x: 400, y: 201, t: 64 },
]);

console.log(gesture);
// {
//   type: "swipe",
//   direction: "right",
//   velocity: 4.69,
//   confidence: 0.92,
//   distance: 300,
//   duration: 64,
//   curvature: 0.01,
//   points: 3,
//   startPoint: { x: 100, y: 200 },
//   endPoint: { x: 400, y: 201 },
//   predictedEnd: { x: 520, y: 203 }
// }
```

Why?
Most gesture libraries are event-driven and DOM-dependent: they hook into browser events, maintain internal state, and only work inside a browser.
A gesture is just a mathematical pattern in a sequence of (x, y, t) coordinates. This library does the math. That's it.
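For instance, the swipe velocity reported in the Quick Start is nothing more than straight-line displacement over elapsed time. A standalone sketch of that arithmetic (a re-derivation for illustration, not the library's internals):

```typescript
type Point = { x: number; y: number; t: number };

// Straight-line displacement from first to last sample, over elapsed time (px/ms).
function straightLineVelocity(points: Point[]): number {
  const first = points[0];
  const last = points[points.length - 1];
  const dist = Math.hypot(last.x - first.x, last.y - first.y);
  return dist / (last.t - first.t);
}

const v = straightLineVelocity([
  { x: 100, y: 200, t: 0 },
  { x: 250, y: 195, t: 32 },
  { x: 400, y: 201, t: 64 },
]);
console.log(v.toFixed(2)); // "4.69" px/ms, matching the Quick Start output
```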
| Feature | Hammer.js | use-gesture | @aumiqx/gesture |
|---------|-----------|-------------|-----------------|
| DOM required | Yes | Yes | No |
| Framework | None | React | Any / none |
| Server-side | No | No | Yes |
| Canvas/WebGL | No | No | Yes |
| Mid-gesture prediction | No | No | Yes |
| Maintained | No (deprecated) | Yes | Yes |
API
recognize(points, options?)
Classify a completed gesture from a sequence of points.
```js
import { recognize } from "@aumiqx/gesture";

const result = recognize(points);
// result.type: "tap" | "long-press" | "swipe" | "flick" | "pan" | "unknown"
// result.confidence: 0-1
// result.duration: ms
// result.distance: px
```

predict(points, options?)
Classify a gesture while it's still happening. Call mid-gesture for snappy UI.
```js
import { predict } from "@aumiqx/gesture";

const { likely, alternatives } = predict(partialPoints);
// likely: most probable gesture so far
// alternatives: sorted by confidence
```

recognizeDoubleTap(first, second, options?)
Detect double-tap from two separate gesture sequences.
```js
import { recognizeDoubleTap } from "@aumiqx/gesture";

const result = recognizeDoubleTap(firstTapPoints, secondTapPoints);
// result: DoubleTapResult | null
```

Gesture Types
| Type | Description | Detection |
|------|-------------|-----------|
| tap | Quick touch, minimal movement | duration < 300ms, drift < 10px |
| long-press | Touch and hold | duration > 500ms, drift < 15px |
| swipe | Directional drag | distance > 30px, velocity > 0.3 px/ms |
| flick | Ultra-fast short gesture | velocity > 1.5 px/ms, duration < 200ms |
| pan | Slow drag | movement > 5px, doesn't qualify as swipe |
| unknown | Ambiguous input | doesn't match any pattern |
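The detection rules above read as a decision chain. A sketch of those thresholds in TypeScript — the inputs (distance, duration, velocity, drift) are assumed to be computed upstream, and the check order here is an assumption; the library's actual precedence may differ:

```typescript
type GestureType = "tap" | "long-press" | "swipe" | "flick" | "pan" | "unknown";

// Illustrative classifier over the table's thresholds (defaults shown in
// the Configuration section). Most-specific checks come first.
function classify(
  distancePx: number,   // straight-line distance travelled
  durationMs: number,   // elapsed time
  velocityPxMs: number, // px per ms
  driftPx: number       // max distance from the gesture's centroid
): GestureType {
  if (durationMs < 300 && driftPx < 10) return "tap";
  if (durationMs > 500 && driftPx < 15) return "long-press";
  if (velocityPxMs > 1.5 && durationMs < 200) return "flick";
  if (distancePx > 30 && velocityPxMs > 0.3) return "swipe";
  if (distancePx > 5) return "pan";
  return "unknown";
}
```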
Configuration
All thresholds are configurable:
```js
const result = recognize(points, {
  tapMaxDistance: 10,        // max drift for tap (px)
  tapMaxDuration: 300,       // max duration for tap (ms)
  longPressMinDuration: 500, // min hold time (ms)
  longPressMaxDrift: 15,     // max drift during hold (px)
  swipeMinDistance: 30,      // min distance for swipe (px)
  swipeMinVelocity: 0.3,     // min velocity for swipe (px/ms)
  flickMinVelocity: 1.5,     // min velocity for flick (px/ms)
});
```

Math Utilities
All internal math functions are exported for custom use:
```js
import {
  distance,             // Euclidean distance between two points
  straightLineDistance, // Distance from first to last point
  totalPathLength,      // Sum of all segment distances
  velocity,             // distance / time
  curvature,            // path length / straight distance - 1
  maxDrift,             // max distance from centroid
  centroid,             // average position
  angle,                // direction in radians
  predictEnd,           // extrapolate endpoint from velocity
  swipeDirection,       // "up" | "down" | "left" | "right"
} from "@aumiqx/gesture";
```

Use Cases
- Canvas/WebGL games - gesture controls without DOM elements
- Session replay analytics - classify gestures from logged pointer data on a server
- Gesture prediction - start UI responses before the gesture completes
- Accessibility tools - classify imprecise input from users with motor impairments
- Cross-platform - same recognition on web, mobile, desktop
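Several of the helpers listed under Math Utilities are one-liners over the documented formulas. As a standalone illustration (re-implemented here for clarity, not imported from the package), curvature is total path length over straight-line distance, minus one — zero for a perfectly straight path, growing as the path bends:

```typescript
type Pt = { x: number; y: number };

const dist = (a: Pt, b: Pt): number => Math.hypot(b.x - a.x, b.y - a.y);

// curvature = totalPathLength / straightLineDistance - 1
function curvature(points: Pt[]): number {
  let path = 0;
  for (let i = 1; i < points.length; i++) path += dist(points[i - 1], points[i]);
  const straight = dist(points[0], points[points.length - 1]);
  return path / straight - 1;
}

// Right-angle path: 200 px travelled, ~141.42 px straight-line → ~0.414
console.log(curvature([{ x: 0, y: 0 }, { x: 100, y: 0 }, { x: 100, y: 100 }]));
```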
License
MIT
