tanstack-filesystem-collection
v0.1.3
Filesystem collection adapter for TanStack DB
# TanStack Filesystem Collection
A filesystem-based collection adapter for TanStack DB. Stores your data as JSON or CSV files on disk.
> ⚠️ **Heads up:** This is a proof-of-concept. The sync engine is a bit flaky and definitely not production-ready. Great for testing, CLI tools, and local dev, though.
## Why?
TanStack DB is runtime-agnostic, so why not use the filesystem as a backend? This lets you:
- Persist data to JSON/CSV files
- Use TanStack DB in Node.js or Bun environments
- Build CLI apps with reactive data (think OpenTUI or similar React-in-terminal renderers)
- Prototype without setting up a database
> **Note:** This only works in Node.js and Bun. No browser support (obviously), and Deno isn't planned.
## Install
```bash
npm install tanstack-filesystem-collection @tanstack/db
# or
bun add tanstack-filesystem-collection @tanstack/db
```

## Quick Start
```ts
import { createCollection } from "@tanstack/db"
import { filesystemCollectionOptions } from "tanstack-filesystem-collection"
import { z } from "zod"

const todoSchema = z.object({
  id: z.string(),
  text: z.string(),
  done: z.boolean(),
})

const todos = createCollection(
  filesystemCollectionOptions({
    id: "todos",
    schema: todoSchema,
    getKey: (item) => item.id,
  })
)
```

This creates a `todos.json` file in `.tanstack-collection-cache/` and keeps it in sync with your collection.
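From there, you mutate the collection through the standard TanStack DB methods and the adapter writes each change back to the file. A minimal sketch (assuming the usual `insert`/`update`/`delete` signatures from the TanStack DB collection API):

```ts
// Insert a row; the adapter persists it to todos.json
todos.insert({ id: "1", text: "Write docs", done: false })

// Update by key with a draft mutator
todos.update("1", (draft) => {
  draft.done = true
})

// Delete by key
todos.delete("1")
```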
## Config Options
| Option | Default | What it does |
|--------|---------|--------------|
| `id` | required | Collection name (also the filename) |
| `getKey` | required | Function to get a unique key from an item |
| `schema` | - | Zod/Standard Schema for validation & types |
| `format` | `"json"` | `"json"` or `"csv"` |
| `cacheDir` | `".tanstack-collection-cache"` | Where to store files |
| `rowUpdateMode` | `"partial"` | `"partial"` merges changes, `"full"` replaces |
| `runtime` | auto-detected | Force `"bun"` or `"node"` |
| `codec` | - | Transform data on read/write |
| `enableFileWatch` | `false` | Watch the file for external changes |
| `fileWatchDebounce` | `100` | Debounce in ms for file watching |
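For instance, a collection that replaces whole rows on update and picks up edits made to the file by other tools might look like this (option names are from the table above; a sketch, not a tested configuration, given the proof-of-concept status):

```ts
const notes = createCollection(
  filesystemCollectionOptions({
    id: "notes",
    getKey: (item) => item.id,
    rowUpdateMode: "full",  // replace the whole row instead of merging changes
    enableFileWatch: true,  // reload when the file changes externally
    fileWatchDebounce: 250, // wait 250 ms after the last change before reloading
  })
)
```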
## Persistence Handler Options
You can hook into mutations for backend sync:
| Option | Default | What it does |
|--------|---------|--------------|
| `onInsert` | - | Called after the filesystem write on insert |
| `onUpdate` | - | Called after the filesystem write on update |
| `onDelete` | - | Called after the filesystem write on delete |
| `awaitPersistence` | `false` | Wait for handlers to complete |
| `persistenceTimeoutMs` | `5000` | Timeout in ms for handlers |
| `swallowPersistenceErrors` | `true` | Log errors instead of throwing |
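A hedged sketch of wiring a handler up to a backend: the handler argument shape is assumed to follow TanStack DB's mutation handlers (a `transaction` with a `mutations` array), and `api.example.com` is a placeholder endpoint.

```ts
const tasks = createCollection(
  filesystemCollectionOptions({
    id: "tasks",
    getKey: (item) => item.id,
    // Called after the row has been written to disk. The argument shape
    // here is an assumption based on TanStack DB's mutation handlers.
    onInsert: async ({ transaction }) => {
      await fetch("https://api.example.com/tasks", {
        method: "POST",
        body: JSON.stringify(transaction.mutations.map((m) => m.modified)),
      })
    },
    awaitPersistence: true,          // block the mutation until the handler resolves
    persistenceTimeoutMs: 3000,      // give up after 3 s
    swallowPersistenceErrors: false, // surface handler failures instead of logging
  })
)
```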
## Examples
### CSV Format
```ts
const logs = createCollection(
  filesystemCollectionOptions({
    id: "logs",
    format: "csv",
    getKey: (item) => item.timestamp,
  })
)
```

### Custom Directory
```ts
const config = createCollection(
  filesystemCollectionOptions({
    id: "config",
    cacheDir: "./data",
    getKey: (item) => item.key,
  })
)
```

### Data Transformation
```ts
const events = createCollection(
  filesystemCollectionOptions({
    id: "events",
    getKey: (item) => item.id,
    codec: {
      parse: (raw) => ({ ...raw, date: new Date(raw.date) }),
      serialize: (item) => ({ ...item, date: item.date.toISOString() }),
    },
  })
)
```

## Utility Methods
The collection exposes some handy utils via `collection.utils`:

```ts
// Get the file path
todos.utils.getFilePath() // ".tanstack-collection-cache/todos.json"

// Clear the cache file
await todos.utils.clearCache()

// Local operations (bypass user handlers)
await todos.utils.insertLocally(item)
await todos.utils.updateLocally(id, item)
await todos.utils.deleteLocally(id)

// Bulk operations
await todos.utils.bulkInsertLocally(items)
await todos.utils.bulkUpdateLocally(items)
await todos.utils.bulkDeleteLocally(ids)
```

## Using with React (CLI)
Works great with `@tanstack/react-db` in custom React renderers like OpenTUI:

```tsx
import { useLiveQuery } from "@tanstack/react-db"

function TodoList() {
  const { data: todos } = useLiveQuery(todosCollection)
  return (
    <box>
      {todos.map(todo => (
        <text key={todo.id}>{todo.text}</text>
      ))}
    </box>
  )
}
```

## Limitations
- **Node/Bun only**: no browser, no Deno
- **Sync is experimental**: file watching works but can be flaky
- **Not for production**: this is a proof-of-concept
- **Single process**: no multi-process locking (yet)
## License
MIT
