arc-vcs
v0.3.1
Arc - Checkpoint-based version control system
Arc
Checkpoint-based version control. Full project snapshots instead of diffs, content-addressed storage, and a backend abstraction that runs against filesystems or databases.
Install
npm install -g arc-vcs
Quick Start
arc init
echo "hello" > main.stn
arc checkpoint "initial"
Initialized arc in .arc/
cp-a1b2c3d4e5f6 "initial" (1 files)
arc log
cp-a1b2c3d4e5f6 25/02/2026, 14:30:00 "initial" (1 files) <- HEAD (main)
Concepts
Checkpoint -- An immutable snapshot of every file in the project. IDs are content-addressed (SHA-256 hash of the manifest), so identical project states always produce the same ID.
Manifest -- A map of file paths to content hashes. Every checkpoint stores a full manifest — no deltas, no chain resolution. Each checkpoint is self-contained and independently transferable.
Branch -- A mutable pointer to a checkpoint ID. Creating a checkpoint on a branch advances the pointer.
Tag -- An immutable label pointing to a specific checkpoint.
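The content-addressing idea can be sketched in a few lines. This is illustrative only -- Arc's exact manifest serialization and ID truncation may differ:

```javascript
// Illustrative sketch of content-addressed checkpoint IDs (not Arc's internals).
import { createHash } from 'node:crypto';

function checkpointId(manifest) {
  // Sort keys so the same set of (path, hash) pairs always serializes identically.
  const canonical = JSON.stringify(
    Object.fromEntries(Object.entries(manifest).sort(([a], [b]) => (a < b ? -1 : 1)))
  );
  return 'cp-' + createHash('sha256').update(canonical).digest('hex').slice(0, 12);
}

const a = checkpointId({ 'src/main.stn': 'hash1', 'README.md': 'hash2' });
const b = checkpointId({ 'README.md': 'hash2', 'src/main.stn': 'hash1' });
// Identical project states hash to the same ID regardless of insertion order.
```

Because IDs derive purely from content, re-creating an identical project state reproduces the same checkpoint ID.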
CLI Reference
Initialization
arc init
Creates a .arc/ directory with an empty config. Idempotent.
Config
# Set author for this project
arc config user.name "Ada Lovelace"
arc config user.email "[email protected]"
# Set author globally (~/.arcconfig)
arc config --global user.name "Ada Lovelace"
arc config --global user.email "[email protected]"
# Read a config value
arc config user.name
Author is resolved in order: --author flag > repo config > global config (~/.arcconfig).
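That precedence amounts to a simple fallback chain. `resolveAuthor` below is a hypothetical helper for illustration, not part of Arc's API:

```javascript
// Hypothetical illustration of Arc's author precedence:
// --author flag > repo config > global config (~/.arcconfig).
function resolveAuthor(flagAuthor, repoConfig, globalConfig) {
  return flagAuthor ?? repoConfig?.user?.name ?? globalConfig?.user?.name ?? null;
}
```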
Checkpoints
# Create a checkpoint
arc checkpoint "save point"
arc checkpoint "feature done" -d "Implemented the new parser"
# Override author for a single checkpoint
arc checkpoint "patch" --author "Bob <[email protected]>"
# Create a checkpoint on a new branch
arc checkpoint "experiment" -b experiment
# Shorthand
arc cp "quick save"
# View history
arc log # Current branch
arc log --all # All branches
# Show changes since last checkpoint
arc status
# Compare two checkpoints
arc diff <cp-a> <cp-b>
# Restore to a checkpoint (creates a new checkpoint recording the restore)
arc restore <cp-id>
# Visit a checkpoint (detached HEAD, read-only time travel)
arc visit <cp-id>
Undo & Squash
# Undo the last checkpoint
arc undo
# Undo the last 3 checkpoints
arc undo 3
# Squash the last 4 checkpoints into one
arc squash 4 "consolidated"
# Squash with a description
arc squash 3 "refactor" -d "Combined three refactoring steps"
Branches
# Create a branch at HEAD
arc branch feature-x
# Create a branch at a specific checkpoint
arc branch hotfix cp-a1b2c3d4e5f6
# List branches
arc branches
# Switch to a branch (syncs working directory)
arc switch feature-x
# Rename a branch
arc branch rename old-name new-name
# Delete a branch
arc branch -d feature-x
Merging
# Merge a branch into the current branch
arc merge feature-x
# Resolve all conflicts as current branch
arc merge feature-x --ours
# Resolve all conflicts as incoming branch
arc merge feature-x --theirs
# Continue after resolving conflicts manually
arc merge --continue
# Abort an in-progress merge
arc merge --abort
Arc uses three-way merging with the Myers diff algorithm -- the same algorithm Git uses. It finds the common ancestor, auto-resolves files changed in only one branch, and performs line-level three-way merge for text files changed in both. Overlapping edits produce Git-style conflict markers (<<<<<<<, =======, >>>>>>>). Binary files that differ in both branches are file-level conflicts resolved with --ours or --theirs.
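The per-file decision behind that auto-resolution can be sketched over content hashes. `resolveFile` is a hypothetical helper for illustration; for text files that changed on both sides, Arc goes further and merges line by line:

```javascript
// Sketch of file-level three-way resolution over content hashes.
// base = common ancestor, ours = current branch, theirs = incoming branch.
function resolveFile(base, ours, theirs) {
  if (ours === theirs) return { content: ours };  // same result on both sides
  if (ours === base) return { content: theirs };  // changed only in theirs
  if (theirs === base) return { content: ours };  // changed only in ours
  return { conflict: true };                      // changed in both: line-level
}                                                 // merge, else conflict markers
```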
Stash
# Stash dirty working directory
arc stash -m "work in progress"
# List stashes
arc stash list
# Apply and remove the latest stash
arc stash pop
# Apply without removing
arc stash apply 0
# Drop a stash
arc stash drop 0
Cherry-Pick
# Apply a checkpoint's changes onto the current branch
arc cherry-pick <cp-id>
# Continue after resolving conflicts
arc cherry-pick --continue
# Abort
arc cherry-pick --abort
Cherry-pick computes the diff from the checkpoint's parent to the checkpoint, then three-way merges those changes onto HEAD.
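At the manifest level the operation looks roughly like this. `cherryPickSketch` is a hypothetical name for illustration; in the real tool, conflicting text files go through line-level merge rather than being reported wholesale:

```javascript
// Sketch: cherry-pick at manifest level. Replay the files the picked
// checkpoint changed (relative to its parent) onto the current HEAD.
function cherryPickSketch(parentManifest, cpManifest, headManifest) {
  const result = { ...headManifest };
  const conflicts = [];
  const paths = new Set([...Object.keys(parentManifest), ...Object.keys(cpManifest)]);
  for (const path of paths) {
    const base = parentManifest[path];
    const picked = cpManifest[path];
    if (picked === base) continue;                    // untouched by the picked checkpoint
    const head = headManifest[path];
    if (head === base) {
      if (picked === undefined) delete result[path];  // picked checkpoint deleted it
      else result[path] = picked;                     // apply the picked change
    } else if (head !== picked) {
      conflicts.push(path);                           // both sides changed: needs merging
    }
  }
  return { result, conflicts };
}

const picked = cherryPickSketch(
  { 'lib.js': 'h1' },                   // parent of the picked checkpoint
  { 'lib.js': 'h2', 'new.js': 'h3' },   // the picked checkpoint
  { 'lib.js': 'h1', 'app.js': 'h4' }    // current HEAD
);
// picked.result carries both changes onto HEAD; picked.conflicts is empty here.
```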
Blame
# Per-line history attribution
arc blame <file>
arc blame <file> <cp-id>
Walks backward through checkpoint history using Myers diff to track which checkpoint last modified each line.
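A naive sketch of that backward walk, using exact whole-line matching (the real implementation uses Myers diff, which also copes with moved and duplicated lines):

```javascript
// Naive blame sketch: attribute each line of the newest version to the most
// recent checkpoint in which it first appeared. Illustrative only.
function naiveBlame(versions) {
  // versions: oldest -> newest, each { id, lines }
  const newest = versions[versions.length - 1];
  return newest.lines.map((line) => {
    for (let i = versions.length - 1; i > 0; i--) {
      const appeared =
        versions[i].lines.includes(line) && !versions[i - 1].lines.includes(line);
      if (appeared) return { line, checkpoint: versions[i].id };
    }
    return { line, checkpoint: versions[0].id }; // present since the first checkpoint
  });
}

const history = [
  { id: 'cp-one', lines: ['alpha'] },
  { id: 'cp-two', lines: ['alpha', 'beta'] },
];
const attributed = naiveBlame(history);
// 'alpha' is attributed to cp-one, 'beta' to cp-two.
```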
Tags
# Tag the current HEAD
arc tag v1.0.0
# Tag a specific checkpoint
arc tag release cp-a1b2c3d4e5f6
# List tags
arc tags
# Delete a tag
arc tag -d v1.0.0
Garbage Collection
# Delete orphaned blobs not referenced by any checkpoint
arc gc
# Preview what would be deleted
arc gc --dry-run
Remote & Sync
# Add a remote
arc remote add origin http://localhost:3000 --token arc_...
# List remotes
arc remote list
# Remove a remote
arc remote remove origin
# Sync with remote (fetch then push)
arc sync
# Fetch only (download, no push)
arc fetch
arc fetch --depth 5
# Clone a remote project
arc clone http://localhost:3000 my-project
arc clone http://localhost:3000 my-project --depth 10
Arc syncs checkpoints and blobs to a remote server. sync = fetch then push. If the remote has diverged (e.g. a checkpoint was undone locally but exists on the remote), sync reports the conflict so you can resolve it.
For hosting, see arc-host — a lightweight zero-dependency server.
Prune
# Delete old checkpoints, keep the latest 10
arc prune --keep 10
# Delete checkpoints older than a date
arc prune --before 2026-01-01
# Preview what would be deleted
arc prune --dry-run
Prune deletes old checkpoints and runs garbage collection to remove unreferenced blobs. Pruned checkpoints are gone — parent pointers that reference them become dangling (shallow history), which is handled gracefully by log, blame, and undo.
Update
# Update arc to the latest version from npm
arc update
Programmatic API
Arc exports pure functions that take a backend as their first argument. No classes, no instantiation.
import {
createCheckpoint,
restoreCheckpoint,
listCheckpoints,
createBranch,
switchBranch,
computeMerge,
executeMerge,
diffCheckpoints,
createTag,
buildManifest,
} from 'arc-vcs';
import { createDbBackend } from 'arc-vcs/db';
import { createFsAdapter } from 'arc-vcs/fsAdapter';
import { createSqliteStorage } from 'arc-vcs/sqliteStorage';
const backend = createDbBackend(
createFsAdapter('/path/to/project'),
createSqliteStorage('/path/to/project/.arc/arc.db')
);
// Create a checkpoint
const cp = await createCheckpoint(backend, 'initial', 'First save');
console.log(cp.id); // "cp-a1b2c3d4e5f6"
// Branch and switch
await createBranch(backend, 'experiment');
await switchBranch(backend, 'experiment');
// Make changes, checkpoint, then merge back
await createCheckpoint(backend, 'experiment done');
await switchBranch(backend, 'main');
const result = await executeMerge(backend, 'experiment');
Full API
| Module | Functions |
|--------|-----------|
| checkpoint | createCheckpoint, restoreCheckpoint, loadCheckpoint, loadCheckpointResolved, listCheckpoints, listAllCheckpoints, syncWorkingDir, getHead, readConfig, writeConfig, resolveRef, resolveManifest, generateId, detectRenames |
| branches | createBranch, deleteBranch, listBranches, switchBranch, getCurrentBranch, renameBranch, getBranchHead |
| merge | findCommonAncestor, computeMerge, executeMerge |
| diff | diffCheckpoints, diffManifests, diffFile |
| lineDiff | myersDiff, diffLines, formatUnifiedDiff, splitLines, isBinaryContent |
| merge3 | merge3 |
| tags | createTag, deleteTag, listTags, resolveTag |
| stash | stash, stashPop, stashApply, stashList, stashDrop |
| cherryPick | cherryPick, cherryPickContinue, cherryPickAbort |
| blame | blame |
| undo | undo, squash, visit, restore |
| store | storeBlob, readBlob, hasBlob |
| manifest | buildManifest |
| gc | gc |
| remote | addRemote, removeRemote, listRemotes, getRemote |
| sync | arcFetch, arcPush, sync |
| clone | clone |
| prune | prune |
| transport | fetchRefs, fetchCheckpoints, fetchBlobs, pushCheckpoints, pushBlobs, casBranchTip, pushTag, haveCheckpoints, haveBlobs, createLocalTransport |
All functions take an ArcBackend as their first argument.
Custom Backends
Arc uses createDbBackend(fileAdapter, arcStorage) to compose a backend from two interfaces. The default setup uses createFsAdapter (filesystem) + createSqliteStorage (SQLite). For web apps, databases, or cloud storage, swap in your own implementations.
Architecture
Arc Core (checkpoint, merge, branch, diff...)
│
▼
ArcBackend (8 methods)
│
┌────┴────┐
│ Composer │ ← createDbBackend(fileAdapter, arcStorage)
└──┬───┬──┘
│ │
▼ ▼
FileAdapter ArcStorage
(your files) (arc data)
FileAdapter
Implement 6 methods to map Arc's path-based operations onto your file storage:
const fileAdapter = {
read: async (path) => Uint8Array, // Read file content
write: async (path, data) => void, // Write/create file
remove: async (path) => void, // Delete file
list: async (dirPath) => ['a.txt', ...], // List directory entries
exists: async (path) => true/false, // Check existence
isDir: async (path) => true/false, // Check if directory
// Optional -- no-op default for flat-path systems
mkdir: async (path) => void,
};
ArcStorage
Implement 9 methods to store Arc's internal data (blobs, checkpoints, config):
const arcStorage = {
// Blobs -- content-addressable file storage
readBlob: async (hash) => Uint8Array,
writeBlob: async (hash, content) => void,
hasBlob: async (hash) => true/false,
// Checkpoints -- JSON-encoded as Uint8Array
readCheckpoint: async (cpId) => Uint8Array,
writeCheckpoint: async (cpId, data) => void,
listCheckpoints: async () => ['cp-xxx.json', ...],
// Config -- JSON-encoded as Uint8Array
readConfig: async () => Uint8Array,
writeConfig: async (data) => void,
configExists: async () => true/false,
};
Composing a Backend
import { createDbBackend } from 'arc-vcs/db';
const backend = createDbBackend(fileAdapter, arcStorage);
// Now use it exactly like the filesystem backend
await createCheckpoint(backend, 'saved to database');
The composer routes .arc/* paths to your ArcStorage and everything else to your FileAdapter. Hashing is always SHA-256, handled internally.
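For example, a fully in-memory backend (handy for tests or browser demos) needs nothing more than a few Maps. This is an illustrative sketch of the two interfaces, not shipped code:

```javascript
// Illustrative in-memory FileAdapter and ArcStorage, backed by plain Maps.
const fileMap = new Map(); // path -> Uint8Array
const blobMap = new Map(); // hash -> Uint8Array
const cpMap = new Map();   // checkpoint id -> Uint8Array (JSON)
let configData = null;     // Uint8Array (JSON) or null

const memoryFiles = {
  read: async (p) => {
    if (!fileMap.has(p)) throw new Error(`Not found: ${p}`);
    return fileMap.get(p);
  },
  write: async (p, data) => { fileMap.set(p, data); },
  remove: async (p) => { fileMap.delete(p); },
  list: async (dir) => {
    const prefix = dir ? dir + '/' : '';
    const names = new Set();
    for (const p of fileMap.keys()) {
      if (p.startsWith(prefix)) names.add(p.slice(prefix.length).split('/')[0]);
    }
    return [...names];
  },
  exists: async (p) => fileMap.has(p),
  isDir: async (p) => [...fileMap.keys()].some((k) => k.startsWith(p + '/')),
  mkdir: async () => {}, // no-op: paths are flat keys
};

const memoryStore = {
  readBlob: async (h) => blobMap.get(h),
  writeBlob: async (h, content) => { blobMap.set(h, content); },
  hasBlob: async (h) => blobMap.has(h),
  readCheckpoint: async (id) => cpMap.get(id),
  writeCheckpoint: async (id, data) => { cpMap.set(id, data); },
  listCheckpoints: async () => [...cpMap.keys()].map((id) => `${id}.json`),
  readConfig: async () => configData,
  writeConfig: async (data) => { configData = data; },
  configExists: async () => configData !== null,
};
```

`createDbBackend(memoryFiles, memoryStore)` would then compose these exactly like the filesystem/SQLite pair.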
Example: PostgreSQL
import { createDbBackend } from 'arc-vcs/db';
function createPostgresFileAdapter(pool, projectId) {
return {
read: async (path) => {
const res = await pool.query(
'SELECT content FROM files WHERE project_id = $1 AND path = $2',
[projectId, path]
);
if (!res.rows.length) throw new Error(`Not found: ${path}`);
return new TextEncoder().encode(res.rows[0].content);
},
write: async (path, data) => {
const content = new TextDecoder().decode(data);
await pool.query(
`INSERT INTO files (project_id, path, content)
VALUES ($1, $2, $3)
ON CONFLICT (project_id, path)
DO UPDATE SET content = $3`,
[projectId, path, content]
);
},
remove: async (path) => {
await pool.query(
'DELETE FROM files WHERE project_id = $1 AND path = $2',
[projectId, path]
);
},
list: async (dirPath) => {
const prefix = dirPath ? dirPath + '/' : '';
const res = await pool.query(
`SELECT DISTINCT split_part(substring(path FROM char_length($2) + 1), '/', 1) AS name
FROM files WHERE project_id = $1 AND path LIKE $2 || '%'`,
[projectId, prefix]
);
return res.rows.map(r => r.name);
},
exists: async (path) => {
const res = await pool.query(
'SELECT 1 FROM files WHERE project_id = $1 AND path = $2',
[projectId, path]
);
return res.rows.length > 0;
},
isDir: async (path) => {
const prefix = path + '/';
const res = await pool.query(
'SELECT 1 FROM files WHERE project_id = $1 AND path LIKE $2 || \'%\' LIMIT 1',
[projectId, prefix]
);
return res.rows.length > 0;
},
};
}
function createPostgresArcStorage(pool, projectId) {
return {
readBlob: async (hash) => {
const res = await pool.query('SELECT content FROM arc_blobs WHERE hash = $1', [hash]);
if (!res.rows.length) throw new Error(`Blob not found: ${hash}`);
return new Uint8Array(res.rows[0].content);
},
writeBlob: async (hash, content) => {
await pool.query(
`INSERT INTO arc_blobs (hash, content, size) VALUES ($1, $2, $3)
ON CONFLICT (hash) DO NOTHING`,
[hash, Buffer.from(content), content.length]
);
},
hasBlob: async (hash) => {
const res = await pool.query('SELECT 1 FROM arc_blobs WHERE hash = $1', [hash]);
return res.rows.length > 0;
},
readCheckpoint: async (cpId) => {
const res = await pool.query(
'SELECT data FROM arc_checkpoints WHERE id = $1 AND project_id = $2',
[cpId, projectId]
);
if (!res.rows.length) throw new Error(`Checkpoint not found: ${cpId}`);
// node-postgres parses JSONB columns into objects; re-serialize to bytes
return new TextEncoder().encode(JSON.stringify(res.rows[0].data));
},
writeCheckpoint: async (cpId, data) => {
const json = new TextDecoder().decode(data);
await pool.query(
`INSERT INTO arc_checkpoints (id, project_id, data) VALUES ($1, $2, $3)
ON CONFLICT (id, project_id) DO UPDATE SET data = $3`,
[cpId, projectId, json]
);
},
listCheckpoints: async () => {
const res = await pool.query(
"SELECT id FROM arc_checkpoints WHERE project_id = $1 AND id != '__config__'",
[projectId]
);
return res.rows.map(r => `${r.id}.json`);
},
readConfig: async () => {
const res = await pool.query(
"SELECT data FROM arc_checkpoints WHERE id = '__config__' AND project_id = $1",
[projectId]
);
if (!res.rows.length) throw new Error('Arc not initialized');
// node-postgres parses JSONB columns into objects; re-serialize to bytes
return new TextEncoder().encode(JSON.stringify(res.rows[0].data));
},
writeConfig: async (data) => {
const json = new TextDecoder().decode(data);
await pool.query(
`INSERT INTO arc_checkpoints (id, project_id, data)
VALUES ('__config__', $1, $2)
ON CONFLICT (id, project_id) DO UPDATE SET data = $2`,
[projectId, json]
);
},
configExists: async () => {
const res = await pool.query(
"SELECT 1 FROM arc_checkpoints WHERE id = '__config__' AND project_id = $1",
[projectId]
);
return res.rows.length > 0;
},
};
}
// Usage
const fileAdapter = createPostgresFileAdapter(pool, 'project-123');
const arcStorage = createPostgresArcStorage(pool, 'project-123');
const backend = createDbBackend(fileAdapter, arcStorage);
await createCheckpoint(backend, 'stored in postgres');
SQL Schema
Minimal tables for the PostgreSQL example above:
CREATE TABLE files (
project_id TEXT NOT NULL,
path TEXT NOT NULL,
content TEXT,
PRIMARY KEY (project_id, path)
);
CREATE TABLE arc_blobs (
hash TEXT PRIMARY KEY,
content BYTEA NOT NULL,
size INTEGER NOT NULL
);
CREATE TABLE arc_checkpoints (
id TEXT NOT NULL,
project_id TEXT NOT NULL,
data JSONB NOT NULL,
PRIMARY KEY (id, project_id)
);
Storage Layout
On the filesystem, Arc stores data in .arc/:
project/
.arc/
config.json # HEAD, branches, tags
store/
a1/a1b2c3d4e5f6... # Content-addressable blobs
f7/f7e8d9c0b1a2...
checkpoints/
cp-a1b2c3d4e5f6.json # Checkpoint metadata + manifest
cp-b2c3d4e5f6a7.json
.arcignore # Patterns to exclude (like .gitignore)
src/
main.stn
Config (config.json)
{
"version": 3,
"created": 1708234567890,
"head": "cp-a1b2c3d4e5f6",
"branch": "main",
"branches": {
"main": "cp-a1b2c3d4e5f6",
"experiment": "cp-b2c3d4e5f6a7"
},
"tags": {
"v1.0.0": "cp-a1b2c3d4e5f6"
},
"remotes": {
"origin": { "url": "http://localhost:3000", "token": "arc_..." }
},
"user": {
"name": "Ada Lovelace",
"email": "[email protected]"
}
}
Checkpoint
{
"id": "cp-a1b2c3d4e5f6",
"parent": "cp-previous123",
"name": "Add parser",
"description": "Implemented the new expression parser",
"timestamp": 1708234567890,
"author": { "name": "Ada Lovelace", "email": "[email protected]" },
"manifest": {
"src/main.stn": "hash-of-file-content",
"README.md": "hash-of-readme"
},
"renames": [
{ "from": "old/path.stn", "to": "new/path.stn" }
]
}
Every checkpoint stores a full manifest — no deltas, no chain resolution. Identical blobs are deduplicated via content-addressing, so storage remains efficient despite full manifests.
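The deduplication claim is easy to see in miniature. A sketch, not Arc's actual blob store:

```javascript
// Why full manifests stay cheap: blobs are stored by content hash,
// so a file that doesn't change costs nothing in later checkpoints.
import { createHash } from 'node:crypto';

const blobs = new Map(); // hash -> content

function storeBlobSketch(content) {
  const hash = createHash('sha256').update(content).digest('hex');
  if (!blobs.has(hash)) blobs.set(hash, content); // one copy per unique content
  return hash;
}

// Two checkpoints snapshot the same unchanged file:
const cp1Hash = storeBlobSketch('console.log("hello")');
const cp2Hash = storeBlobSketch('console.log("hello")');
// Both manifests reference the same hash; only one blob exists.
```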
How It Differs from Git
| | Arc | Git |
|---|---|---|
| Unit of change | Full project snapshot | Commit (tree of blobs with parent pointers) |
| IDs | Content-addressed (SHA-256) | Content-addressed (SHA-1) |
| Merge | Three-way line-level (Myers diff, conflict markers) | Three-way line-level (conflict markers) |
| Storage | SQLite by default, pluggable backends | Filesystem object database |
| Backend | Pluggable (filesystem, SQLite, PostgreSQL, custom) | Filesystem only |
| Dependencies | Zero npm deps (Node.js built-ins only) | Many |
Arc trades Git's rebase-style history rewriting (rebase, amend) for simplicity, determinism, and backend flexibility. It keeps the history and merge features that matter -- three-way line-level merge, cherry-pick, squash, blame, and stash are all built in.
License
MIT
