facult
v2.7.0
Manage canonical AI capabilities, sync surfaces, and evolution state.
fclt
fclt is a CLI for building and evolving AI faculties across tools, users, and projects.
Most AI tooling manages files. fclt manages faculties: the instructions, snippets, templates, skills, agents, rules, and learning loops that should compound, improve, and survive the next session.
It helps you:
- turn repeated friction into reusable capability
- preserve learning through writeback and evolve canonical assets over time
- consolidate AI behavior into one canonical store
- compose prompts, agents, skills, and tool outputs from reusable snippets and templates
- discover what exists, what depends on what, and what should change next
- sync managed outputs into Codex, Cursor, and Claude
- review trust/security before installing remote content
- keep that operating layer in a git-backed store under ~/.ai and repo-local .ai/
What fclt Is
If your agent setup feels scattered, fclt gives it memory, structure, and a way to improve.
A faculty is a reusable piece of AI behavior: an instruction, snippet, template, skill, agent, rule, or learned improvement that you want to keep around and make better.
That matters because a lot of useful AI behavior is compositional. You want small reusable blocks, a clean way to assemble them into bigger prompts and operating layers, and a safe way to render the final tool-native outputs without losing the source structure.
Think of it as:
- a canonical home for your AI faculties
- a composition system for snippets, templates, and rendered AI behavior
- a sync layer for projecting them into real tools
- a discovery graph for seeing what exists and what depends on what
- a writeback/evolution loop for turning repeated friction into durable improvements
- an inventory and trust boundary for the assets you let into the system
What fclt Does
fclt is not a skills folder with a nicer CLI.
It works as five connected layers:
- Canonical source
- global capability in ~/.ai
- project capability in <repo>/.ai
- optional built-in Facult capability packs for bootstrap and defaults
- Discovery
- inventory across skills, agents, snippets, instructions, MCP, and rendered surfaces
- merged views across builtin, global, and project provenance
- explicit dependency graph queries
- Sync
- managed tool outputs for Codex, Claude, Cursor, and other file-backed surfaces
- rendered docs, agents, skills, MCP, config, and rules
- Automation
- background autosync for local propagation
- optional git autosync for the canonical store
- Evolution
- writeback capture
- proposal drafting and review
- controlled apply back into canonical assets
Default Operating Model
fclt ships with a built-in operating model for learning, writeback, and capability evolution. That pack includes default:
- instructions for evolution, integration, and project capability
- specialist agents such as writeback-curator, evolution-planner, and scope-promoter
- skills such as capability-evolution and project-operating-layer-design
When managed sync is enabled, these built-in assets are available by default even if you never copy them into ~/.ai.
That means:
- builtin skills sync into managed tool skill directories by default
- builtin agents sync into tool agent directories when the tool supports agents
- if you do not author your own AGENTS.global.md, fclt renders a builtin global baseline doc into tool-native global docs
The activation point is managed mode:
- until you run fclt manage <tool>, the builtin operating-model layer is just packaged capability
- once a tool is managed, the default operating-model layer becomes live for that tool automatically
- for Codex, Claude, and Cursor, that means the core global doc surface plus the bundled writeback/evolution agents and skills are what agents actually see on disk
- this is why the normal setup step is to manage the tools you care about first, then sync
This is intentionally virtual at the canonical level:
- builtin defaults remain part of the packaged tool
- your personal ~/.ai stays clean unless you explicitly vendor or override something
- the live tool output on disk still contains the rendered defaults, so users and agents can read them directly
In practice, this means the system is meant to learn by default. The CLI is there when you want to operate it directly, but the default skills, agents, and global docs are supposed to make writeback and evolution available without ceremony.
More concretely:
- the normal path is not a human manually typing fclt ai ... after every task
- the bundled operating-model layer is meant to instruct synced agents and skills to notice reusable signal, preserve it, and push it toward writeback/evolution
- the CLI remains the explicit operator surface for inspection, review, cleanup, and controlled apply
- the generated state under .ai/.facult/ gives those agents a durable thread of what was learned, when it was learned, what asset it pointed at, and what proposals or reviews happened afterward
If you want to disable the builtin default layer for a specific global or project canonical root:
version = 1

[builtin]
sync_defaults = false

Put that in config.toml or config.local.toml under the active canonical root.
Core Concepts
Canonical vs rendered
fclt separates source-of-truth from tool-native output.
- canonical source lives in ~/.ai or <repo>/.ai
- rendered outputs live in tool homes like ~/.codex, <repo>/.codex, ~/.claude, or ~/.cursor
- generated Facult-owned state lives in ~/.ai/.facult or <repo>/.ai/.facult
This keeps authored capability portable and reviewable while still producing the exact files each tool expects.
Global vs project capability
Use global ~/.ai for reusable personal defaults:
- cross-project instructions
- reusable specialist agents
- shared skills
- default tool config and rules
Use project .ai/ for repo-owned capability:
- project-specific instructions and snippets
- local architecture/testing doctrine
- project agents and skills that should travel with the codebase
- repo-local rendered outputs for teammates
Project capability is allowed to extend or shadow global capability in merged views, but it does not silently mutate the global source of truth.
The capability graph
fclt builds a generated graph of explicit relationships between canonical assets and rendered outputs.
That graph tracks things like:
- snippet markers
- @ai/... and @project/... refs
- ${refs.*} symbolic refs
- rendered-target edges from canonical source to live tool files
This makes it possible to answer:
- what capability do I already have?
- what instructions or snippets does this agent depend on?
- what rendered files change if I update this canonical asset?
- what project asset is shadowing a global asset?
Writeback and evolution
fclt treats repeated failures, weak loops, missing context, and reusable patterns as signal worth preserving.
Writeback is the act of recording that signal in a structured way. Evolution is the act of grouping that signal into reviewable proposals and applying it back into canonical assets.
The intended workflow is agent-driven by default:
- synced global docs, agents, and skills should push your tooling toward creating writebacks when something important was learned
- specialist agents such as writeback-curator, evolution-planner, and scope-promoter are there to help turn that signal into cleaner proposals and scope decisions
- the CLI is what you use when you want to inspect, override, review, reject, apply, or otherwise operate the system directly
- the point is not a new UI. The point is that the operating layer itself can accumulate memory and context across tasks, sessions, and tools
This matters because otherwise the same problems repeat in chat without ever improving the actual operating layer. With fclt, you can:
- record a weak verification pattern
- group repeated writebacks around an instruction or agent
- draft a proposal to tighten that canonical asset
- review and apply the change in a controlled way
The point is not just better storage. The point is that your AI setup can change shape as it learns.
That is the core idea behind fclt: not just syncing skills, but growing faculties.
Quick Start
1. Install fclt
Recommended global install:
brew tap hack-dance/tap
brew install hack-dance/tap/fclt
fclt --help

Package-manager install:
npm install -g facult
# or
bun add -g facult
fclt --help

The npm package name stays facult for registry compatibility. The installed command is still fclt.
One-off usage without global install:
npx --yes -p facult fclt --help

Direct binary install from GitHub Releases (macOS/Linux):
curl -fsSL https://github.com/hack-dance/fclt/releases/latest/download/fclt-install.sh | bash

Windows and manual installs can download the correct binary from each release page:
fclt-<version>-<platform>-<arch>.
Update later with:
fclt self-update
# or
fclt update --self

Pin to a specific version:
fclt self-update --version 0.0.1

2. Start with a read-only inventory (recommended first)
fclt scan --show-duplicates
# optional machine-readable output
fclt scan --json

scan is read-only. It inspects local configs and reports what facult found without changing files.
3. Import existing skills/configs
fclt consolidate --auto keep-current --from ~/.codex/skills --from ~/.agents/skills
fclt index

Why keep-current: it is deterministic and non-interactive for duplicate sources.
Canonical source root: ~/.ai for global work, or <repo>/.ai for project-local work.
Generated AI state that belongs with the canonical root lives inside that root:
- global: ~/.ai/.facult/ai/...
- project: <repo>/.ai/.facult/ai/...
Machine-local operational state lives outside the canonical root:
- macOS state: ~/Library/Application Support/fclt/...
- macOS cache: ~/Library/Caches/fclt/...
- Linux/other state: ${XDG_STATE_HOME:-~/.local/state}/fclt/...
- Linux/other cache: ${XDG_CACHE_HOME:-~/.cache}/fclt/...
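The platform split above can be sketched as follows. This is a minimal illustration of the documented path rules, not fclt's actual implementation; fclt_state_dir and fclt_cache_dir are hypothetical helper names:

```python
def fclt_state_dir(platform: str, env: dict) -> str:
    """Machine-local state root per the documented layout (sketch)."""
    home = env.get("HOME", "~")
    if platform == "darwin":
        return f"{home}/Library/Application Support/fclt"
    # Linux/other: XDG_STATE_HOME with the documented ~/.local/state fallback
    xdg_state = env.get("XDG_STATE_HOME", f"{home}/.local/state")
    return f"{xdg_state}/fclt"

def fclt_cache_dir(platform: str, env: dict) -> str:
    """Machine-local cache root per the documented layout (sketch)."""
    home = env.get("HOME", "~")
    if platform == "darwin":
        return f"{home}/Library/Caches/fclt"
    xdg_cache = env.get("XDG_CACHE_HOME", f"{home}/.cache")
    return f"{xdg_cache}/fclt"
```

The key point either way: none of these directories live inside the canonical root, so deleting machine-local state never touches authored capability.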
3b. Bootstrap a repo-local .ai
cd /path/to/repo
bunx fclt templates init project-ai
bunx fclt index

This seeds <repo>/.ai from the built-in Facult operating-model pack and writes a merged project index/graph under <repo>/.ai/.facult/ai/.
Wide learning-review automations should use this same bootstrap when they hit a local writable repo with durable project-local signal but no repo-local .ai yet.
4. Inspect what you have
fclt list skills
fclt list instructions
fclt list mcp
fclt show requesting-code-review
fclt show instruction:WRITING
fclt show mcp:github
fclt find verification
fclt graph show instruction:WRITING
fclt graph deps AGENTS.global.md
fclt graph dependents @ai/instructions/WRITING.md
fclt ai writeback add --kind weak_verification --summary "Checks were too shallow" --asset instruction:VERIFICATION
fclt ai evolve propose
fclt ai evolve draft EV-00001
fclt ai evolve accept EV-00001
fclt ai evolve apply EV-00001

Context controls:
fclt list instructions --global
fclt list instructions --project
fclt find verification --scope merged --source project
fclt sync codex --project
fclt autosync status --global
fclt list agents --root /path/to/repo/.ai

5. Enable managed mode for your tools
fclt manage codex --dry-run
fclt manage codex --adopt-existing
fclt sync codex --builtin-conflicts overwrite
fclt manage cursor
fclt manage claude
fclt enable requesting-code-review receiving-code-review brainstorming systematic-debugging --for codex,cursor,claude
fclt sync

At this point, your selected skills are actively synced to all managed tools.
This is also the point where the default operating-model layer becomes active for those tools. If you manage Codex or Claude, the bundled learning/writeback/evolution guidance is no longer just discoverable in fclt; it is rendered into the managed global doc surface and synced alongside the bundled agents and skills.
If you run these commands from inside a repo that has <repo>/.ai, facult targets the project-local canonical store and repo-local tool outputs by default.
On first entry to managed mode, use --dry-run first if the live tool already has local content. facult will show what it would adopt into the active canonical store across skills, agents, docs, rules, config, and MCP, plus any conflicts. Then rerun with --adopt-existing; if names or files collide, add --existing-conflicts keep-canonical or --existing-conflicts keep-existing.
For builtin-backed rendered defaults, facult now tracks the last managed render hash. If a user edits the generated target locally, normal sync warns and preserves that local edit instead of silently overwriting it. To replace the local edit with the latest packaged builtin default, rerun sync with --builtin-conflicts overwrite.
6. Turn on background autosync
fclt autosync install --git-remote origin --git-branch main --git-interval-minutes 60
fclt autosync status

This installs a macOS LaunchAgent that:
- watches the active canonical root (~/.ai or <repo>/.ai) for local changes and syncs managed tool outputs automatically
- tracks dirty state for the canonical repo
- runs a slower git autosync loop that batches changes, auto-commits them, rebases on the configured remote branch, and pushes on success
If the repo hits a rebase conflict, remote autosync stops and reports the blocked state, but local tool sync continues.
7. Turn on source trust and strict install flow
fclt sources list
fclt verify-source skills.sh --json
fclt sources trust skills.sh --note "reviewed"
fclt install skills.sh:code-review --as code-review-skills-sh --strict-source-trust

Use fclt from your agents
facult is CLI-first. The practical setup is:
- Install facult globally so any agent runtime can execute it.
- Put allowed facult workflows in your agent instructions/skills.
- Optionally scaffold MCP wrappers if you want an MCP entry that delegates to facult.
# Scaffold reusable templates in the canonical store
fclt templates init agents
fclt templates init claude
fclt templates init skill facult-manager
# Enable that skill for managed tools
fclt manage codex
fclt manage cursor
fclt manage claude
fclt enable facult-manager --for codex,cursor,claude
fclt sync

Optional MCP scaffold:
fclt templates init mcp facult-cli
fclt enable mcp:facult-cli --for codex,cursor,claude
fclt sync

Note: templates init mcp ... is a scaffold, not a running server by itself.
The .ai Model
facult treats both ~/.ai and <repo>/.ai as canonical AI stores. The global store is for personal reusable capability; the project store is for repo-owned capability that should travel with the codebase.
Typical layout:
~/.ai/
AGENTS.global.md
AGENTS.override.global.md
config.toml
config.local.toml
instructions/
snippets/
agents/
skills/
mcp/
templates/
tools/
codex/
config.toml
rules/
<repo>/
.ai/
config.toml
instructions/
snippets/
agents/
skills/
tools/
.facult/
ai/
index.json
graph.json
.codex/
.claude/

Important split:
- .ai/ is canonical source
- .ai/.facult/ai/ is generated AI state that belongs with the canonical root
- machine-local Facult state such as managed-tool state, autosync runtime/config, install metadata, and launcher caches lives outside .ai/
- tool homes such as .codex/ and .claude/ are rendered outputs
- the generated capability graph lives at .ai/.facult/ai/graph.json
Asset types
The canonical store can contain several distinct asset classes:
- instructions/: reusable doctrine and deeper conceptual guidance
- snippets/: small composable blocks that can be inserted into rendered markdown
- agents/: role-specific agent manifests
- skills/: workflow-specific capability folders
- mcp/: canonical MCP server definitions
- mcp/servers.local.json or mcp/mcp.local.json: ignored machine-local MCP secret overlay
- tools/<tool>/config.toml: canonical tool config
- tools/<tool>/config.local.toml: machine-local tool config overlay
- tools/<tool>/rules/*.rules: canonical tool rules
- global docs such as AGENTS.global.md and AGENTS.override.global.md
Not every asset syncs directly to a tool. Some exist primarily to support rendered outputs or to be discovered and reused by other canonical assets.
Canonical conventions
- Use instructions/ for reusable markdown documents
- Use snippets/ for composable partial blocks injected into markdown templates
- Use tools/codex/rules/*.rules for actual Codex approval-policy rules
- Use logical refs such as @ai/instructions/WRITING.md in tracked source
- Use @builtin/facult-operating-model/... for packaged Facult defaults
- Use @project/... when a tracked ref must resolve inside a repo-local .ai
- Use config-backed refs in prompts where you want stable named references such as ${refs.writing_rule}
Config and env layering
Canonical render context is layered explicitly:
- built-ins injected by facult
- active canonical root config.toml
- active canonical root config.local.toml
- explicit runtime overrides
Built-ins currently include:
- AI_ROOT
- HOME
- PROJECT_ROOT
- PROJECT_SLUG
- TARGET_TOOL
- TARGET_PATH
Recommended split:
- ~/.ai/config.toml or <repo>/.ai/config.toml: tracked, portable, non-secret refs/defaults
- ~/.ai/config.local.toml or <repo>/.ai/config.local.toml: ignored, machine-local paths and secrets
- ~/.ai/mcp/servers.json or <repo>/.ai/mcp/servers.json: tracked canonical MCP definitions
- ~/.ai/mcp/servers.local.json or <repo>/.ai/mcp/servers.local.json: ignored machine-local MCP env overlay for secrets and per-machine auth
- ~/.ai/tools/<tool>/config.toml or <repo>/.ai/tools/<tool>/config.toml: tracked tool defaults
- ~/.ai/tools/<tool>/config.local.toml or <repo>/.ai/tools/<tool>/config.local.toml: ignored, machine-local tool overrides merged after tracked tool config during sync
- [builtin].sync_defaults = false: disable builtin default sync/materialization for this root
- fclt sync --builtin-conflicts overwrite: allow packaged builtin defaults to overwrite locally modified generated targets
- fclt audit fix ...: move inline MCP secrets from tracked canonical config into the local MCP overlay and re-sync managed tool configs
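The layering order can be modeled as a simple "later layer wins" merge. This is a toy sketch of the documented precedence, not fclt's real merge logic (which may merge nested tables rather than flat keys):

```python
def render_context(builtins: dict, config: dict,
                   config_local: dict, overrides: dict) -> dict:
    """Merge render-context layers in documented order; later layers win."""
    ctx: dict = {}
    for layer in (builtins, config, config_local, overrides):
        ctx.update(layer)
    return ctx
```

For example, a ref set in config.local.toml shadows the same ref from the tracked config.toml, while injected built-ins like AI_ROOT survive unless explicitly overridden.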
Snippets
Snippets use HTML comment markers:
<!-- fclty:global/codex/baseline -->
<!-- /fclty:global/codex/baseline -->

Resolution rules:
- an unscoped marker codingstyle prefers snippets/projects/<project>/codingstyle.md, then falls back to snippets/global/codingstyle.md
- an explicit marker global/codex/baseline resolves directly to snippets/global/codex/baseline.md
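The two resolution rules above can be sketched like this. resolve_snippet is a hypothetical helper written only to illustrate the documented precedence; fclt's actual resolver may differ:

```python
def resolve_snippet(marker: str, project: str, existing: set) -> str:
    """Map a snippet marker to a canonical path (sketch of documented rules)."""
    if "/" in marker:
        # explicit marker, e.g. global/codex/baseline -> direct path
        return f"snippets/{marker}.md"
    # unscoped marker: prefer the project snippet, then fall back to global
    project_path = f"snippets/projects/{project}/{marker}.md"
    if project_path in existing:
        return project_path
    return f"snippets/global/{marker}.md"
```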
Commands:
fclt snippets list
fclt snippets show global/codex/baseline
fclt snippets sync [--dry-run] [file...]

Snippets are already used during global Codex AGENTS.md rendering.
Graph inspection
The generated graph in .ai/.facult/ai/graph.json is queryable directly:
fclt graph show instruction:WRITING
fclt graph deps AGENTS.global.md
fclt graph dependents @project/instructions/TESTING.md

This is the explicit dependency layer for:
- snippet markers like <!-- fclty:... -->
- config-backed refs like ${refs.*}
- canonical refs like @ai/...
- project refs like @project/...
- rendered outputs such as managed agents, docs, MCP configs, tool configs, and tool rules
Writeback and evolution
facult also has a local writeback/evolution substrate built on top of the graph:
fclt ai writeback add \
--kind weak_verification \
--summary "Verification guidance did not distinguish shallow checks from meaningful proof." \
--asset instruction:VERIFICATION \
--tag verification \
--tag false-positive
fclt ai writeback list
fclt ai writeback show WB-00001
fclt ai writeback group --by asset
fclt ai writeback summarize --by kind
fclt ai evolve propose
fclt ai evolve list
fclt ai evolve show EV-00001
fclt ai evolve draft EV-00001
fclt ai evolve review EV-00001
fclt ai evolve accept EV-00001
fclt ai evolve reject EV-00001 --reason "Needs a tighter draft"
fclt ai evolve supersede EV-00001 --by EV-00002
fclt ai evolve apply EV-00001
fclt ai evolve promote EV-00003 --to global --project

Runtime state stays generated and local inside the active canonical root:
- global writeback state: ~/.ai/.facult/ai/global/...
- project writeback state: <repo>/.ai/.facult/ai/project/...
That split is intentional:
- canonical source remains in ~/.ai or <repo>/.ai
- writeback queues, journals, proposal records, trust state, autosync state, and other Facult-owned runtime/config state stay inside .ai/.facult/ rather than inside the tool homes
- those records create a historical thread agents can inspect over time: what changed, what triggered it, which asset it pointed at, what proposal was drafted, how it was reviewed, and whether it was applied or rejected
Use writeback when:
- a task exposed a weak or misleading verification loop
- an instruction or agent was missing key context
- a pattern proved reusable enough to become doctrine
- a project-local pattern deserves promotion toward global capability
Do not think of writeback as “taking notes.” Think of it as preserving signal that should change the system, not just the current conversation.
For many users, the normal entrypoint is not the CLI directly. The builtin operating-model layer is designed so synced agents, skills, and global docs can push the system toward writeback and evolution by default, while the fclt ai ... commands remain the explicit operator surface when you want direct control.
In other words:
- agents should be the ones noticing friction and capturing it
- skills should be the ones teaching when writeback or evolution is warranted
- proposal history should give future agents enough context to understand why a rule, instruction, or prompt changed
- you drop to the CLI when you want to inspect the thread, steer it, or make the final call
Current apply semantics are intentionally policy-bound:
- targets are resolved through the generated graph when possible and fall back to canonical ref resolution for missing assets
- apply is limited to markdown canonical assets
- proposals must be drafted before they can be applied; higher-risk proposals still require explicit acceptance
- supported proposal kinds currently include create_instruction, update_instruction, create_agent, update_agent, update_asset, create_asset, extract_snippet, add_skill, and promote_asset
- low-risk project-scoped additive proposals such as create_instruction can be applied directly after drafting, while global and higher-risk proposals still require review/acceptance
Current review/draft semantics:
- writeback group and writeback summarize expose recurring patterns across asset, kind, and domain without mutating canonical assets
- drafted proposals emit both a human-readable markdown draft and a patch artifact under generated state
- rerunning evolve draft <id> --append ... revises the draft and records draft history
- evolve promote --to global creates a new high-risk global proposal from a project-scoped proposal; that promoted proposal can then be drafted, reviewed, and applied into ~/.ai
Scope and source selection
Most inventory and sync commands support explicit canonical-root selection:
- --global to force ~/.ai
- --project to force the nearest repo-local .ai
- --root /path/to/.ai to point at a specific canonical root
- --scope merged|global|project for discovery views
- --source builtin|global|project to filter provenance in list/find/show/graph flows
Security and Trust
facult has two trust layers:
- Item trust: fclt trust <name> / fclt untrust <name>
- Source trust: fclt sources ... with levels trusted, review, and blocked
facult also supports an interactive audit workflow plus two non-interactive audit modes:
- Interactive audit workflow:
fclt audit
- Static audit rules (deterministic pattern checks):
fclt audit --non-interactive --severity high
fclt audit --non-interactive mcp:github --severity medium --json
- Agent-based audit (Claude/Codex review pass):
fclt audit --non-interactive --with claude --max-items 50
fclt audit --non-interactive --with codex --max-items all --json

Recommended security flow:
- run fclt verify-source <source> first
- run fclt sources trust <source> only after review
- use --strict-source-trust for install/update
- run both static and agent audits on a schedule
Comprehensive Reference
Capability categories
- Inventory: discover local skills, MCP configs, hooks, and instruction files
- Management: consolidate, index, manage/unmanage tools, enable/disable entries, manage canonical AI config
- Security: static audit, agent audit, item trust, source trust, source verification
- Distribution: search/install/update from catalogs and verified manifests
- DX: scaffold templates and sync snippets into instruction/config files
- Automation: background autosync for local tool propagation and canonical repo git sync
Command categories
- Inventory and discovery
fclt scan [--from <path>] [--json] [--show-duplicates]
fclt list [skills|mcp|agents|snippets|instructions] [--enabled-for <tool>] [--untrusted] [--flagged] [--pending]
fclt show <name>
fclt show instruction:<name>
fclt show mcp:<name> [--show-secrets]
fclt find <query> [--json]

- Canonical store and migration
fclt consolidate [--auto keep-current|keep-incoming|keep-newest] [--from <path> ...]
fclt index [--force]
fclt migrate [--from <path>] [--dry-run] [--move] [--write-config]

- Managed mode and rollout
fclt manage <tool> [--dry-run] [--adopt-existing] [--existing-conflicts keep-canonical|keep-existing]
fclt unmanage <tool>
fclt managed
fclt enable <name> [--for <tool1,tool2,...>]
fclt enable mcp:<name> [--for <tool1,tool2,...>]
fclt disable <name> [--for <tool1,tool2,...>]
fclt sync [tool] [--dry-run] [--builtin-conflicts overwrite]
fclt autosync install [tool] [--git-remote <name>] [--git-branch <name>] [--git-interval-minutes <n>] [--git-disable]
fclt autosync status [tool]
fclt autosync restart [tool]
fclt autosync uninstall [tool]

- Remote catalogs and policies
fclt search <query> [--index <name>] [--limit <n>]
fclt install <index:item> [--as <name>] [--force] [--strict-source-trust]
fclt update [--apply] [--strict-source-trust]
fclt verify-source <name> [--json]
fclt sources list
fclt sources trust <source> [--note <text>]
fclt sources review <source> [--note <text>]
fclt sources block <source> [--note <text>]
fclt sources clear <source>

- Templates and snippets
fclt templates list
fclt templates init project-ai
fclt templates init skill <name>
fclt templates init mcp <name>
fclt templates init snippet <marker>
fclt templates init agents
fclt templates init claude
fclt templates init automation <template-id> --scope global|project|wide [--name <name>] [--project-root <path>] [--cwds <path1,path2>] [--rrule <RRULE>] [--status PAUSED|ACTIVE]
fclt snippets list
fclt snippets show <marker>
fclt snippets create <marker>
fclt snippets edit <marker>
fclt snippets sync [--dry-run] [file...]

Codex automations
templates init automation can scaffold three Codex automation forms:
- --scope project (single repo): set --project-root (or infer it from the current working directory)
- --scope wide|global (multiple repos): set --cwds explicitly; if omitted, the created automation has no cwds by default
- If you run it interactively without --scope, fclt prompts for scope and, where possible, known workspaces (git worktrees, configured scan roots, and existing Codex automation paths).
- Built-in automation templates are opinionated: they reference the global Codex operating model, point at relevant Codex skills, and tell Codex when to use focused subagents for bounded review work.
Recommended topology:
- Use learning-review --scope project for repo-local writeback and evolution. This keeps review state, verification, and follow-up scoped to the repo that actually produced the evidence.
- Use evolution-review on a slower cadence, usually weekly, to triage open proposals and proposal-worthy clusters and suggest the next operator action (draft, review, accept, reject, promote, or apply).
- Use a separate wide/global automation only for cross-repo or shared-surface review, such as global doctrine, shared skills, or repeated tool/agent patterns across repos.
- If you do use a wide learning review, keep the cwds list intentionally small and related. The prompt is designed to partition by cwd first, not to blur unrelated repos together.
- A practical default is daily learning-review plus weekly evolution-review. The first finds and records durable signal; the second keeps proposal review from stalling.
Files are written to:
- ~/.codex/automations/<name>/automation.toml
- ~/.codex/automations/<name>/memory.md
When Codex is in managed mode, canonical automation sources live under:
- ~/.ai/automations/<name>/... for global automation state
- <repo>/.ai/automations/<name>/... for project-scoped canonical state
Managed sync renders those canonical automation directories into the shared live Codex automation store at ~/.codex/automations/ and only removes automation files that were previously rendered by the same canonical root.
Example project automation:
fclt templates init automation tool-call-audit \
--scope project \
--project-root /path/to/repo \
--name project-tool-audit \
--status ACTIVE

Example global automation:
fclt templates init automation learning-review \
--scope wide \
--cwds /path/to/repo-a,/path/to/repo-b \
--status PAUSED

Example weekly evolution automation:
fclt templates init automation evolution-review \
--scope wide \
--cwds /path/to/repo-a,/path/to/repo-b \
--name weekly-evolution-review \
--status PAUSED

Interactive prompt example:
fclt templates init automation learning-review
# prompts for scope, then lets you select known workspaces or add custom paths

For full flags and exact usage:
fclt --help
fclt <command> --help

Root resolution
facult resolves the canonical root in this order:
1. FACULT_ROOT_DIR
2. nearest project .ai from the current working directory for CLI-facing commands
3. ~/.ai/.facult/config.json (rootDir)
4. ~/.ai
5. ~/agents/.facult (or a detected legacy store under ~/agents/)
Runtime env vars
- FACULT_ROOT_DIR: override canonical store location
- FACULT_VERSION: version selector for scripts/install.sh (latest by default)
- FACULT_INSTALL_DIR: install target dir for scripts/install.sh (~/.ai/.facult/bin by default)
- FACULT_INSTALL_PM: force package manager detection for the npm bootstrap launcher (npm or bun)
State and report files
Under canonical generated AI state (~/.ai/.facult/ or <repo>/.ai/.facult/):
- sources.json (latest inventory scan state)
- consolidated.json (consolidation state)
- ai/index.json (generated canonical AI inventory)
- audit/static-latest.json (latest static audit report)
- audit/agent-latest.json (latest agent audit report)
- trust/sources.json (source trust policy state)
Under machine-local Facult state:
- install.json (machine-local install metadata)
- global/managed.json or projects/<slug-hash>/managed.json (managed tool state)
- .../autosync/services/*.json (autosync service configs)
- .../autosync/state/*.json (autosync runtime state)
- .../autosync/logs/* (autosync service logs)
- runtime/<version>/<platform-arch>/... under the machine-local cache root (npm launcher binary cache)
Config reference
~/.ai/.facult/config.json supports:
- rootDir
- scanFrom
- scanFromIgnore
- scanFromNoDefaultIgnore
- scanFromMaxVisits
- scanFromMaxResults
scanFrom* settings are used by scan/audit unless --no-config-from is passed.
Example:
{
"rootDir": "~/.ai",
"scanFrom": ["~/dev", "~/work"],
"scanFromIgnore": ["vendor", ".venv"],
"scanFromNoDefaultIgnore": false,
"scanFromMaxVisits": 20000,
"scanFromMaxResults": 5000
}

Source aliases and custom indices
Default source aliases:
- facult (builtin templates)
- smithery
- glama
- skills.sh
- clawhub
Custom remote sources can be defined in ~/.ai/.facult/indices.json (manifest URL, optional integrity, optional signature keys/signature verification settings).
Local Install Modes
For local CLI setup (outside npm global install), use:
bun run install:dev
bun run install:bin
bun run install:status

Default install path is ~/.ai/.facult/bin/fclt. You can pass a custom target dir via --dir=/path.
Autosync
fclt autosync is the background propagation layer for managed installs.
Current v1 behavior:
- macOS LaunchAgent-backed
- immediate local managed-tool sync on the configured canonical root
- periodic git autosync for the canonical repo
- automatic autosync commits with source-tagged commit messages such as:
chore(facult-autosync): sync canonical ai changes from <host> [service:all]
Recommended usage:
fclt autosync install
fclt autosync status

Project-local usage:
cd /path/to/repo
fclt autosync install codex
fclt autosync status codex

Tool-scoped service:
fclt autosync install codexOne-shot runner for verification/debugging:
fclt autosync run --service all --once

Remote git policy:
- do not sync on every file event
- mark the canonical repo dirty on local changes
- on the configured timer, fetch, auto-commit local canonical changes if needed, pull --rebase, then push
- if rebase conflicts occur, remote autosync is blocked and reported, but local managed-tool sync keeps running
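The policy above can be modeled as a small state machine: file events only set a dirty flag, the timer does the git work, and a rebase conflict blocks only the remote side. This is a toy model of the documented behavior, not fclt's implementation:

```python
class GitAutosync:
    """Toy model of the documented remote git autosync policy."""

    def __init__(self):
        self.dirty = False    # canonical repo has unpushed local changes
        self.blocked = False  # a rebase conflict stopped remote autosync

    def on_local_change(self):
        # never sync per file event; just mark the canonical repo dirty
        self.dirty = True

    def on_timer(self, rebase_ok: bool) -> str:
        if self.blocked or not self.dirty:
            return "skip"
        # fetch, auto-commit, pull --rebase, push would happen here
        if not rebase_ok:
            self.blocked = True  # remote autosync stops and reports
            return "blocked"
        self.dirty = False
        return "pushed"
```

Note what is not modeled: local managed-tool sync, which keeps running regardless of the blocked flag.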
CI and Release Automation
- CI workflow: .github/workflows/ci.yml
- Release workflow: .github/workflows/release.yml
- Semantic-release config: .releaserc.json
Release behavior:
- Every push to main runs full checks.
- semantic-release creates the version/tag and GitHub release (npm publish is disabled in this phase).
- The same release workflow then builds platform binaries and uploads them to that GitHub release.
- npm publish runs only after binary asset upload succeeds (publish-npm depends on publish-assets).
- Published release assets include platform binaries, fclt-install.sh, facult-install.sh, and SHA256SUMS.
- When HOMEBREW_TAP_TOKEN is configured, the release workflow also updates the Homebrew tap at hack-dance/homebrew-tap.
- The npm package launcher resolves your platform, downloads the matching release binary, caches it under the machine-local cache root (~/Library/Caches/fclt/runtime/... on macOS or ${XDG_CACHE_HOME:-~/.cache}/fclt/runtime/... elsewhere), and runs it.
Current prebuilt binary targets:
- darwin-x64
- darwin-arm64
- linux-x64
- windows-x64
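Combining these targets with the release asset naming shown earlier (fclt-<version>-<platform>-<arch>), a launcher-style lookup can be sketched like this. release_asset is a hypothetical helper illustrating the naming scheme, not the real launcher code:

```python
# Prebuilt targets listed in the release docs
SUPPORTED = {"darwin-x64", "darwin-arm64", "linux-x64", "windows-x64"}

def release_asset(version: str, platform: str, arch: str) -> str:
    """Compose the documented asset name, rejecting unlisted targets."""
    target = f"{platform}-{arch}"
    if target not in SUPPORTED:
        raise ValueError(f"no prebuilt binary for {target}")
    return f"fclt-{version}-{platform}-{arch}"
```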
Self-update behavior:
- npm/bun global install: updates via the package manager (npm install -g facult@... or bun add -g facult@...).
- Direct binary install (release script/local binary path): downloads and replaces the binary in place.
- Use fclt self-update (or fclt update --self).
Required secrets for publish:
- NPM_TOKEN
- HOMEBREW_TAP_TOKEN (fine-grained token with contents write access to hack-dance/homebrew-tap)
Local semantic-release dry-runs require a supported Node runtime (>=24.10).
Recommended one-time bootstrap before first auto release:
git tag v0.0.0
git push origin v0.0.0

This makes the first semantic-release increment land at 0.0.1 for patch-level changes.
Commit Hygiene
Some MCP config files can contain secrets. Keep local generated artifacts and secret-bearing config files ignored and out of commits.
Local Development
bun run install:status
bun run install:dev
bun run install:bin
bun run build
bun run build:verify
bun run type-check
bun run test:ci
bun test
bun run check
bun run fix
bun run pack:dry-run
bun run release:dry-run

FAQ
Does fclt run its own MCP server today?
Not as a first-party fclt mcp serve runtime.
facult currently focuses on inventory, trust/audit, install/update, and managed sync of skills/MCP configs.
Does fclt now manage global AI config, not just skills and MCP?
Yes. The core model now includes:
- canonical personal AI source in ~/.ai
- rendered managed outputs in tool homes such as ~/.codex
- global instruction docs such as AGENTS.global.md, rendered by default into ~/.codex/AGENTS.md, ~/.claude/CLAUDE.md, and ~/.cursor/AGENTS.md
- tool-native configs such as ~/.codex/config.toml
- tool-native rule files such as ~/.codex/rules/*.rules
Do I still need to run fclt sync manually?
If autosync is not installed, yes.
If autosync is installed, local changes under ~/.ai propagate automatically to managed tools. Manual fclt sync is still useful for explicit repair, dry-runs, and non-daemon workflows.
