# agentlog

agentel v0.2.8

Local-first archive and recall layer for agent coding sessions.
Core capabilities:

- raw source-file backup before normalization, so provider history can be re-parsed later
- markdown-primary, redacted local archive under `~/.agentlog/data/agentlog/`
- canonical event JSONL alongside each transcript for provider-independent search
- canonical repo keying from git remotes, first commits, or path hashes
- Codex CLI, Codex Desktop, Codex SDK jobs, ChatGPT export, Claude Code CLI, Claude Code Desktop, Claude Workspace, Claude.ai export, Gemini CLI, Antigravity, Devin CLI, and Cursor imports
- event-first `agentlog history` search with markdown/transcript fallback
- `agentlog-recall` MCP stdio server exposing `search_past_sessions`
- installable recall commands, workflows, skills, and MCP hooks for common coding agents
- local web viewer for full conversation history inspection
- device-scoped S3-compatible upload sync for backup and future multi-device recall
- lifecycle, config, watcher login, and MCP config helper commands
- local update/reset commands for rebuilding archives after package changes
The local HTTP OTLP collector, S3-compatible remote sync, and team deployment surfaces are configurable extension points for environments that install those services.
## Install

Install the CLI globally so agent-facing recall commands can call `agentlog` from `PATH`:

```
npm install -g agentel
agentlog init
```

You can also install directly from the GitHub repository. Use a tag or commit ref for repeatable installs:

```
npm install -g brianlzhou/agentlog
# or
npm install -g brianlzhou/agentlog#v0.2.8
agentlog init
```

You can also install it into a project and run it through npm/npx:

```
npm install agentel
npx agentel init
```

Or run the published package without keeping a local install:

```
npx agentel init
```

Requirements:

- Node.js 20 or newer
- `sqlite3` for Codex, Cursor, and Devin local database imports
- `rg` (ripgrep) for faster history search
- `unzip` for ZIP web exports
- `zstd` or `unzstd` for compressed Codex session files
Run agentlog doctor after install to check optional tools and configured
sources.
## Try From Source

```
npm install
npm test
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js init --yes --skip-import --no-autostart --no-claude --no-recall --no-telemetry
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source codex-cli --since 30d
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source codex-desktop --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import chatgpt
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import claude-web
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import chatgpt ~/Downloads/chatgpt-export.zip --username [email protected]
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import chatgpt "~/Downloads/OpenAI-export/User Online Activity" --username [email protected]
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import claude-web ~/Downloads/claude-export --username you --display-name "Personal Claude"
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source claude --since 30d
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source claude-code-desktop --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source claude-workspace --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source gemini-cli --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source antigravity --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source devin-cli --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source cursor --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source cline --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source opencode --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js import --source aider --since all
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js history "database migration issue"
AGENTLOG_HOME=/tmp/agentlog-demo node ./bin/agentlog.js reset --yes
```

Use `AGENTLOG_HOME=/tmp/agentlog-demo` to test without touching the default
`~/.agentlog` directory. The `--no-autostart --no-claude --no-recall --no-telemetry`
flags avoid editing launch-agent, Claude, Codex, Gemini, or Cline settings while
running from source.

Install the source checkout globally while developing:

```
npm install -g .
# or, while developing from this repo:
npm link
```

Then run:

```
agentlog init
```

Before publishing a release, run:

```
npm run check
npm test
npm run pack:dry
npm run smoke:pack
```

## Recall
```
agentlog integrations add-to codex
agentlog integrations add-to claude
agentlog integrations add-to gemini
agentlog integrations add-to antigravity
agentlog integrations add-to devin
agentlog integrations add-to cursor
agentlog integrations add-to cline
agentlog integrations add-to opencode
agentlog integrations add-to aider
```

`integrations add-to codex` installs the MCP server plus a Codex recall skill.
`integrations add-to claude` installs the MCP server plus a Claude `/recall`
command plus a Claude skill under `~/.claude/skills/agentlog-recall/SKILL.md`.
`integrations add-to gemini` installs the MCP server plus a Gemini `/recall` command
under `~/.gemini/commands/recall.toml`.
Additional native surfaces are installed where the client exposes one:
| Agent | Recall surface |
| --- | --- |
| Antigravity | MCP config in ~/.gemini/antigravity/mcp_config.json plus an Agent Skill at ~/.gemini/antigravity/skills/recall/SKILL.md |
| Devin | MCP config in ~/.config/devin/config.json plus /recall skill at ~/.config/devin/skills/recall/SKILL.md |
| Cursor | MCP config in ~/.cursor/mcp.json plus project files .cursor/commands/recall.md and .cursor/rules/agentlog-recall.mdc |
| Cline | MCP config in ~/.cline/data/settings/cline_mcp_settings.json plus /recall.md workflow at ~/Documents/Cline/Workflows/recall.md |
| OpenCode | MCP config in ~/.config/opencode/opencode.json plus /recall command at ~/.config/opencode/commands/recall.md |
| Aider | Loadable recall instructions under ~/.agentlog/aider/; Aider does not expose a documented custom slash-command registry, so load them with /load ~/.agentlog/aider/load-recall.aider or configure the generated markdown as read: in .aider.conf.yml |
The generated recall files use a skill-style layout with command tables,
workflow steps, query-selection guidance, archive layout notes, and
troubleshooting. They ask the agent to infer a focused search query, then call
agentlog history and agentlog show, so agents do not need direct filesystem
access to `~/.agentlog`. Users can ask for context naturally, such as `/recall
the migration bug and update the test`.
Skill/command files can also be installed directly:
```
agentlog recall install-skill codex
agentlog recall install-skill claude
agentlog recall install-skill gemini
agentlog recall install-skill antigravity
agentlog recall install-skill devin
agentlog recall install-skill cursor
agentlog recall install-skill cline
agentlog recall install-skill opencode
agentlog recall install-skill aider
```

The standalone MCP binary remains available for clients that only need the tool server:

```
agentlog-recall
```

## Daily CLI
agentlog status, agentlog history, agentlog show, agentlog import, agentlog index, and
agentlog watcher login use styled human-readable output by default. Use --json
on status, history, and import status/results for script-friendly
structured output. agentlog status includes the most recently archived
conversation threads and the sources currently monitored by the supervisor;
agentlog history is for search and recall.
Useful history commands:
```
agentlog history
agentlog history "codebase explanation" --provider codex-cli --limit 10
agentlog history --repo github.com/acme/widgets --since 90d
agentlog show <session-id>
agentlog show <session-id> --path
agentlog web
agentlog web --no-open
```

`agentlog web` starts the local conversation viewer and opens it in
your default browser. Use --no-open when you only want to print the local URL
and keep the server running. The left rail is a
repo tree sorted by the latest updated session, each folder pages through
sessions with a load-more control, search results reuse the same session
reader, and the transcript pane can switch between readable chat bubbles and
the raw markdown archive. Readable chat bubbles load from pre-baked
*.view.json payloads; raw markdown is fetched only when source view is
opened. Search runs as you type with a short debounce. The web endpoint uses a
warm, compatible index when available; it will not synchronously parse or
rebuild an obsolete index during an interactive query, nor fall back to
scanning every rendered conversation. Rebuilds are left to `agentlog index rebuild` or
the supervisor. The static viewer uses shadcn/ui-style design tokens
and compact button/input/select/sidebar patterns without requiring a frontend
build step. Archives still keep stable path:<hash> keys for folders without
git identity, but the UI displays the local path.
Provider filters use one stable order: OpenAI (codex-cli, codex-desktop,
codex-sdk, chatgpt), Anthropic (claude, claude-code-desktop, claude-workspace,
claude-web, claude-sdk), Google (gemini-cli, antigravity), Cognition
(devin-cli), then other local tools (cursor, cline, opencode,
aider).
The supervisor is agentlog's local background watcher. When it is running, it
polls the watcher source list chosen during init every 30 seconds using the
configured import window, defaulting to the last 30 days. If you opt out of
starting it at login, agentlog does not install a login item. During setup,
uncheck Start watcher at login or run agentlog init --no-autostart; you can
still run agentlog import --source all for a one-time catch-up,
agentlog watcher start to watch for the current session, or
agentlog watcher login enable later. The default
watcher choices are Codex CLI, Codex Desktop, Claude Code CLI, Claude Code
Desktop, Claude Workspace, Gemini CLI, Antigravity, Devin CLI, Cursor, Cline,
OpenCode CLI, OpenCode Desktop, OpenCode Web, and Aider. New configs still support
imports.autoDiscoverSources=true, but init records the chosen watcher list
exactly by setting imports.autoDiscoverSources=false.
Cursor raw SQLite recovery is intentionally left to explicit imports such as
agentlog import --source cursor --since all; the supervisor handles
incremental Cursor logs going forward and prunes duplicate transcript snapshots.
The detailed watcher/import contract, including sourcePath replacement rules for
multi-session stores such as Cursor SQLite and Devin sessions.db, lives in
docs/history-source-handling.md.
Remote sync can be configured during init or from the CLI. Local storage remains canonical. Normal sync is upload-only: this machine pushes changed archive objects to the remote target and never deletes objects from the bucket just because they are missing locally.
```
agentlog sync configure
agentlog sync
```

`agentlog sync configure` opens a picker in a terminal. Choose an existing
configured/env remote, or choose the last option to configure a new R2, S3,
S3-compatible, or local-folder target. After a target is selected, the CLI asks
what to do next: sync now, create a snapshot, change autosync cadence, or exit.
Non-interactive scripts can still pass flags such as `--endpoint`, `--bucket`,
and `--access-key-id`.
The same values can be supplied with AGENTLOG_REMOTE_ENDPOINT,
AGENTLOG_REMOTE_BUCKET, AGENTLOG_REMOTE_ACCESS_KEY_ID, and
AGENTLOG_REMOTE_SECRET_ACCESS_KEY. R2 also accepts R2_ENDPOINT,
R2_BUCKET, R2_ACCESS_KEY_ID, and R2_SECRET_ACCESS_KEY.
The bucket should already exist before running sync. For Cloudflare R2, copy the
S3 API URL from the bucket's Settings > General page. Cloudflare may include the
bucket in that URL, such as
https://<account-id>.r2.cloudflarestorage.com/<bucket>; agentlog accepts that
and derives the bucket automatically. Create an R2 API token from the R2 Object
Storage overview with Manage API Tokens, scoped to the agentlog bucket with
read, write, and list permissions. For AWS S3, use an IAM access key scoped to
the bucket or agentlog/ prefix with ListBucket and object read/write
permissions.
During agentlog init, a configured cloud destination is uploaded immediately
after the initial backfill. If the watcher is allowed to start at login, init
also asks for a cloud autosync cadence; the supervisor then uploads new archive
changes on that interval whenever it is running.
Remote objects are namespaced by device so multiple machines can upload into one bucket without overwriting each other's archive roots:
```
s3://<bucket>/agentlog/
  devices/
    work-laptop/
      sessions/...
      raw-sources/...
      indexes/...
  snapshots/
    20260504T173000Z/
      work-laptop/
        sessions/...
```

In a terminal, `agentlog sync` asks you to choose the remote target, previews
the upload-only plan, and requires a confirmation phrase before writing. This is
intentional even when only one remote is configured, because multiple remotes
can exist in config/env and a normal sync can overwrite same-key objects for the
selected device namespace. Use --dry-run to preview in scripts, or --yes to
skip the guided confirmation.
Use `agentlog sync snapshot` to upload a redundant point-in-time copy under
`agentlog/snapshots/<timestamp>/<device>/...`. In a terminal it asks you to
choose the remote, lists existing snapshots, asks for the snapshot name, previews
the write, and then confirms before uploading:

```
agentlog sync snapshot
```

If the remote copy has stale objects after a local reset, cleanup, or full
reimport, use the explicit replace path. In a terminal it previews the selected
remote, shows how many remote objects will be deleted, and requires typed
confirmation before it deletes this device namespace under
`agentlog/devices/<device>/` and uploads the current local archive.

```
agentlog sync replace
agentlog sync wipe
```

`agentlog sync wipe` is delete-only. It asks which remote to use, asks for the
scope (device, one snapshot, all snapshots, prefix, or bucket),
previews the exact target and prefix, then requires a two-step typed
confirmation. device leaves snapshots untouched; snapshots leaves normal
device uploads untouched. Run agentlog sync afterward to upload this device
again. Scripts can use --dry-run or --yes.
Receive-only and two-way sync are intentionally not active yet. The intended v1 shape is to read other device namespaces and merge normalized session metadata without treating absence on one machine as a deletion. Remote deletes should be an explicit garbage-collection command, not a side effect of sync. For extra protection, enable provider-native versioning, retention, or lifecycle rules where available so accidental overwrites remain recoverable outside agentlog too.
Archived sessions are stored as readable markdown first, with metadata, raw transcript JSONL, normalized canonical event JSONL, and a raw source-file folder alongside it:
```
~/.agentlog/data/agentlog/
  sessions/
    repo=<encoded_repo>/
      provider=<provider>/
        year=2026/
          month=04/
            day=27/
              session=<id>.conversation.md
              session=<id>.metadata.json
              session=<id>.transcript.jsonl
              session=<id>.events.jsonl
              session=<id>.raw/
                manifest.json
                001-<original-file-name>
```

For large multi-session stores such as Cursor SQLite, the per-session raw
manifest may reference one shared copy under raw-sources/ instead of copying
the same database into every session folder.
Web chat imports may also reference a shared raw export archive; ChatGPT
attachments remain preserved there and fresh imports render image/file cards in
the readable transcript when the export includes the file bytes.
`events.jsonl` uses the local `agentlog.events.v2` canonical event shape:
`session.started`, `prompt.submitted`, `response.generated`, `tool.called`, and
`tool.completed`; completed tool events link back to the matching call when the
source exposes stable ids or matching names. Parser versions are stamped by
source type so importer output changes can trigger reimport with a new
fingerprint. Recall/search builds a keyword index over event text first and
falls back to transcript/markdown for legacy archives without events. The local
search index stores compact term postings for CLI compatibility plus a SQLite
FTS5 sidecar for fast web queries. When either index format changes,
`agentlog history` and `agentlog index` rebuild it from archived
transcripts/events without a full source
reimport. The web viewer avoids doing that rebuild on a keystroke so a large
old index, or a full-archive Markdown fallback, cannot block interactive
search.
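As a rough illustration of consuming `events.jsonl`, the snippet below filters one canonical event type. The five `type` values come from the text above; every other field name in the sample lines is an assumption for the example, not a documented schema:

```python
import json

# Hypothetical sample in the JSONL shape described above; field names
# other than "type" are illustrative assumptions.
sample = """\
{"type": "session.started", "sessionId": "s1"}
{"type": "prompt.submitted", "text": "fix the migration"}
{"type": "tool.called", "name": "bash", "callId": "c1"}
{"type": "tool.completed", "callId": "c1"}
{"type": "response.generated", "text": "done"}
"""

def events_of_type(jsonl: str, event_type: str) -> list[dict]:
    """Parse JSONL and keep events matching one canonical type."""
    events = [json.loads(line) for line in jsonl.splitlines() if line.strip()]
    return [e for e in events if e.get("type") == event_type]

print([e["type"] for e in events_of_type(sample, "tool.called")])  # ['tool.called']
```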
Stats are import-time metadata, not viewer-time transcript repair. Archive
metadata stores message counts, user-message counts, token usage, and models for
each session, and the web stats view reads those fields directly. Token totals
include cache-read/cache-write tokens when providers report them, while the
stats payload and UI also keep input, output, cache, and reasoning sub-counts
separately when available. Codex imports preserve threads.tokens_used as the
provider total and split rollout token_count events into fresh input, cache
read, output, and reasoning metadata. Codex SDK and
Claude SDK batch jobs are kept out of primary activity totals, streaks, folder
rankings, and provider/model charts; the stats payload and web view expose them
as a separate SDK jobs section so high-volume automation does not drown out
interactive work. Cursor sessions
without provider-reported token usage can also carry separately labeled
estimatedUsage, which the stats view includes while reporting estimated token
coverage. ChatGPT and Claude.ai exports without provider usage get estimated
metadata.usage on their native chat messages, split into non-assistant input,
assistant output, and Claude thinking output where the export provides separate
parts. During pre-v1 development, if those stats fields or parser semantics
change, rebuild the local archive with
agentlog update --yes --since all.
ChatGPT and Claude.ai are manual export providers. Run agentlog import chatgpt
or agentlog import claude-web for current export instructions; after the
provider emails a download link, pass the official .zip, unzipped export
folder, or direct JSON file back to agentlog. These imports are stored as local
scoped web-chat archives and displayed through virtual conversation roots such
as [chatgpt]conversations/<account-id> and
[claude]conversations/<account-id>/<project>. The importer records account
metadata in ~/.agentlog/state/web-accounts.json; use
agentlog import accounts list to inspect mappings and
agentlog import accounts rename <provider> <account-id-or-username> --display-name <name>
to change the viewer display name.
For newer OpenAI privacy exports named OpenAI-export, unzip the download and
import the User Online Activity folder. Running agentlog import chatgpt
without a path starts a walkthrough that asks for export paths one at a time,
then account username/email and display name. ChatGPT
conversations may be split across multiple
Conversations__...chatgpt...part-000N ZIPs or folders; passing the parent
folder is best, but the walkthrough can also collect the split part folders
individually and preserve chat.html, manifests, ZIPs, and attached files in
the shared raw export archive. Claude.ai exports preserve conversation summaries
and split structured
thinking parts from visible assistant answers when the export includes that
detail. Repeated manual uploads are incremental: unchanged conversations are
skipped, and updated conversations replace the stable session for that
provider/account/conversation id. Existing malformed pre-v1 web-chat archives
are not migrated automatically; reimport from the original export after a reset
or cleanup.
Tool calls and tool results are normalized before archive write where provider
data is available. For example, Devin tool calls live in
message.metadata.toolCalls[] and render as tool cards in the viewer without
being appended to assistant prose as synthetic Grep(...) text. Canonical
tool events also carry viewer-facing category, categoryLabel, icon,
inputPreview, and target fields so the web viewer can render Bash, edit,
read, search, web, task, skill, and MCP calls consistently across providers.
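A canonical tool event carrying those viewer-facing fields might look like the sketch below. Only the field names (`category`, `categoryLabel`, `icon`, `inputPreview`, `target`) come from the text above; the values and the event's other keys are illustrative assumptions:

```python
# Hypothetical tool.called event with the documented viewer-facing fields.
tool_event = {
    "type": "tool.called",
    "name": "grep",
    "category": "search",              # normalized bucket for the viewer
    "categoryLabel": "Search",         # human-readable card title
    "icon": "search",                  # icon hint for the card
    "inputPreview": "pattern=applyMigration",
    "target": "src/db/",
}

# The viewer can render a consistent card from these fields alone,
# regardless of which provider produced the event.
print(f"[{tool_event['categoryLabel']}] {tool_event['inputPreview']} in {tool_event['target']}")
```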
Use agentlog reset to stop agentlog, disable autostart, and remove agentlog's
local home, config, state, cache, logs, and archive objects. Source application
histories and MCP configuration files are not changed. Add --keep-autostart
to leave the login item in place.
Use agentlog update after installing a newer npm package when you want the new
importer/parser logic to rebuild the local archive without redoing setup:
```
npm install -g agentel@latest
agentlog update --yes
```

`agentlog update` preserves config.json, redaction settings, web account
labels, manually imported ChatGPT/Claude.ai archives, source histories, and
recall integrations. It removes derived local archive, import, index, cache, and
sync bookkeeping, then reimports configured local sources from the stored
preferences. The rebuild window comes from the initial backfill or an explicit
all-source import such as agentlog import --source all --since all; the
fallback for legacy configs is all. The watcher's rolling
imports.defaultSinceDays is not used by agentlog update. It does not touch
remote sync objects by default; use agentlog sync replace when the remote
should match the rebuilt local archive.
Use agentlog config to change ~/.agentlog/config.json without rerunning the
init wizard:
```
agentlog config path
agentlog config setup
agentlog config setup --watch-sources codex-cli,cursor --default-since-days 90
agentlog config setup --sync-interval-minutes manual --no-autostart
agentlog config set imports.defaultSinceDays 90
agentlog config set sync.intervalMinutes manual
agentlog config sources edit
agentlog config sources set codex-cli,cursor,cline
agentlog config sources add gemini-cli
agentlog config sources remove claude-workspace
```

`agentlog config setup` reopens the preferences parts of init, including what
the watcher polls, the rolling import window, autosync cadence, and login
startup. It does not run an import.
agentlog config sources edit opens just the watcher source picker from init,
with the current config preselected.
## Import Windows
agentlog init starts with interactive setup: choose archive destinations,
choose the full local archive/cache path, choose whether the local watcher starts
at login, and install recall commands or skills, then discover sources and
optionally backfill history.
After backfill, init asks which sources the background watcher should keep
polling, then offers local OTel bridges for Claude Code, Gemini CLI, and Cline.
Local archive storage is always enabled; R2, S3, and custom remote sync targets
can be added as upload-only optional destinations with a device name for the
remote namespace. Discovery and import phases show progress bars while they scan
local stores.
After discovery, init offers a checkbox-style source picker. Rows marked [x]
are selected; type one or more row numbers, such as 1 3 8, to toggle sources
on or off, then press Enter with no input to accept the current selection.
Codex SDK jobs and Claude SDK jobs are shown as separate opt-in sources because
batch SDK traffic can exceed interactive sessions. The selected sources are
saved in config and used by later agentlog import --source all runs unless
--sources is provided explicitly.
Default init sources:
- Codex CLI sessions and Codex Desktop sessions from Codex state, shown as separate toggles, including linked Codex subagent child sessions when `thread_spawn_edges` metadata is present; Codex SDK jobs are available as an opt-in batch source
- Claude Code CLI transcripts from `~/.claude/projects`, including subagent definition snapshots and `subagents/*.jsonl` runs imported as child sessions
- Claude Code Desktop metadata and Claude Workspace/local-agent sessions from the Claude app data, shown as separate toggles
- Gemini CLI saved chats/checkpoints under `~/.gemini/tmp`, plus session/export JSONL stores with tool, usage, and checkpoint metadata
- Antigravity task/plan/walkthrough artifacts under `~/.gemini/antigravity/brain`, plus partial trajectory summaries from Antigravity app state when no readable artifacts exist
- Devin for Terminal sessions from `~/.local/share/devin/cli/sessions.db`
- Cursor chats from older workspace `state.vscdb` SQLite stores and global `cursorDiskKV` Composer/Agent rows, including `aiService` prompt/generation fallbacks, raw SQLite salvage from Cursor global/workspace backups and WAL files, conservative matching of raw assistant/tool companion fragments back to same-project workspace sessions, duplicate prefix pruning, and newer `~/.cursor/projects/<project>/agent-transcripts` files
- Cline task folders from VS Code/JetBrains globalStorage, including checkpoint diffs when present
- OpenCode CLI/core SQLite and project JSON storage under `~/.local/share/opencode`, plus OpenCode Desktop app storage and Web sessions when present
- Aider repo-local `.aider.chat.history.md` transcripts, with `.aider.llm.history` model/usage enrichment, `.aider.input.history` backups, and matching auto-commit diffs
The Claude Code desktop registry mostly stores metadata pointing back to the standard Claude Code transcript, so agentlog imports the transcript when it exists and only imports Claude Workspace sessions that contain actual prompt content.
Windsurf local cache scanning is disabled for now. Current Cascade transcripts
are encrypted binary stores, so agentlog can detect that a session exists but
cannot archive readable conversation text from that cache. Use Windsurf's
"Download trajectory" Markdown export with agentlog import windsurf <path> to
archive readable Cascade output. If you bulk-export multiple trajectories to a
folder, import the folder directly, for example
agentlog import windsurf ~/windsurf-cascade-export.
For the full source-by-source implementation map, see History Source Handling. For the current module and function map, see Code Reference.
agentlog init also includes a numbered history-import window picker:
1. last 30 days
2. last 90 days
3. last 180 days
4. everything
5. custom interval such as `7d`, `12h`, `60m`, or `all`
6. skip
The same choices can be run directly:
```
agentlog import --source all --since all
agentlog import --sources codex-cli,codex-desktop,claude,claude-code-desktop,claude-workspace,gemini-cli,antigravity,devin-cli,cursor,cline,opencode-cli,opencode-desktop,opencode-web,aider --since all
agentlog import --source codex-desktop --since 90d
agentlog import --source codex-cli --since 30d
agentlog import --source codex-sdk --since all
agentlog import chatgpt
agentlog import claude-web
agentlog import chatgpt ~/Downloads/chatgpt-export.zip --username [email protected]
agentlog import chatgpt "~/Downloads/OpenAI-export/User Online Activity" --username [email protected]
agentlog import claude-web ~/Downloads/claude-export --username you --display-name "Personal Claude"
agentlog import --source claude --since 30d
agentlog import --source claude-code-desktop --since all
agentlog import --source claude-workspace --since all
agentlog import --source gemini-cli --since all
agentlog import --source antigravity --since all
agentlog import --source devin-cli --since all
agentlog import --source cursor --since all
agentlog import --source cursor --since all --explain-skips
agentlog import --source cline --since all
agentlog import --source opencode --since all
agentlog import --source aider --since all
agentlog import --source claude-sdk --since all
agentlog import accounts list
agentlog import accounts rename claude-web you --display-name "Personal Claude"
```