arch-mentor
v0.1.1
Principal Architect Mentor — capability development for engineers
I'm a working enterprise architect in financial services. I built this because I kept watching senior engineers make the same thinking mistakes—not because they weren't intelligent, but because nobody was challenging their reasoning the way a principal architect would. This is a capability development instrument. It is not a document generator, a checklist, or a replacement for experience. I designed it to change how engineers think before they commit to a technical solution.
The Problem
Senior engineers often arrive at architectural reviews with solutions already decided, but without the business rationale to support those decisions. They treat non-functional requirements as vague qualifiers like "fast" or "reliable" rather than binding commitments with named owners and calculated costs. They advocate for a single option rather than conducting genuine multi-dimensional analysis, and they remain blind to the organisational risks and stakeholders that actually determine project success. Intelligence is not the constraint here; the absence of a challenge mechanism is. I watched this lack of interrogation create expensive technical debt and compliance failures that only became visible six months after production, and I built this to fill that gap.
How It Works
I don't treat this tool like a waiter that takes an order and captures it accurately without judgment. I built it to act like a senior mentor who interrogates the problem until the root cause and the commercial stakes are clear. It pushes back not to obstruct progress, but to ensure that the engineer has actually done the hard work of thinking through the implications of their choices. It never accepts a technical preference at face value until the constraints, obligations, and NFRs that mandate it have been surfaced and verified against the business objective.
Every state in the architectural flow has two distinct, sequential phases. In Phase 1 — Challenge, the mentor asks diagnostic probes to surface the required thinking. It refuses to draft any artifacts until that thinking has been demonstrated through the conversation. In Phase 2 — Synthesise, the mentor writes the artifact as a synthesis of the reasoning developed during the challenge. These artifacts are not templates filled with plausible content; they are the byproduct and evidence of thinking that actually happened.
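The two-phase gate can be sketched as a tiny state machine. This is a hypothetical illustration of the behaviour described above, not the tool's actual internals; the type names and the `thinkingDemonstrated` flag are my own assumptions.

```typescript
// Hypothetical sketch of the challenge-before-synthesis gate.
// Type names and fields are illustrative assumptions, not arch-mentor's code.
type Phase = "challenge" | "synthesise";

interface MentorState {
  id: string;                   // e.g. "S1"
  phase: Phase;
  thinkingDemonstrated: boolean; // set only once the probes have been answered
}

// The mentor refuses to draft an artifact until the engineer's
// reasoning has been surfaced during the challenge phase.
function canSynthesise(state: MentorState): boolean {
  return state.phase === "challenge" && state.thinkingDemonstrated;
}

function advance(state: MentorState): MentorState {
  if (!canSynthesise(state)) {
    throw new Error(`${state.id}: artifact blocked until the challenge phase is complete`);
  }
  return { ...state, phase: "synthesise" };
}
```

The point of the gate is that synthesis is unreachable except through a completed challenge: there is no code path that produces an artifact from undemonstrated thinking.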
What It Develops
- Business consequence thinking — prevents technical solutions designed without a clear business problem or commercial impact.
- NFRs as commitments — prevents availability and performance targets that have no business owner and no associated cost.
- Trade-off reasoning — prevents advocacy disguised as analysis by forcing honest evaluation of multiple dimensions including run and change costs.
- Organisational reading — prevents projects from failing due to "silent" stakeholders or unmapped influence and risk vectors.
- Infrastructure as constrained optimisation — prevents technology choices based on personal preference rather than NFR mandates and compliance constraints.
- Data accountability thinking — prevents data flow diagrams that ignore the legal obligations and custody risks inherent in customer data.
- Decision permanence — prevents architectural reasoning from living only in people's heads where it dies when they leave the company.
- Assumption surfacing — prevents unverified beliefs from masquerading as facts and creating invisible, deferred risks.
- System-level consequence thinking — prevents local optimisations that solve a component problem while creating a global failure mode.
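Several of these habits amount to making implicit claims explicit. As one illustration, an NFR treated as a commitment rather than a qualifier might carry fields like these. The shape is a hypothetical sketch of the idea, not the schema that `arch-mentor` ships.

```typescript
// Hypothetical shape for an entry in nfr-register.md.
// Field names are illustrative assumptions, not the tool's schema.
interface NfrCommitment {
  id: string;                 // e.g. "NFR-001"
  statement: string;          // measurable, not "fast" or "reliable"
  metric: string;             // how it is verified, e.g. "monthly availability"
  target: string;             // e.g. ">= 99.95%"
  businessOwner: string;      // the named person who accepted the target
  annualCostEstimate: number; // what meeting the target costs to run
  consequenceOfMiss: string;  // the commercial impact if it is breached
}

// A vague qualifier fails this test; a commitment passes it.
function isCommitment(nfr: NfrCommitment): boolean {
  return nfr.businessOwner.trim().length > 0 && nfr.annualCostEstimate > 0;
}
```

The check is deliberately blunt: an availability target with no named owner and no calculated cost is a wish, not a commitment.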
Getting Started
npm install -g arch-mentor
cd your-project
arch-ai init
claude
# or: gemini
# The mentor starts automatically from S1

Works with Claude Code, Gemini CLI, or any LLM tool that reads a system prompt file.
What Happens in a Session
Consider the opening probe of a session at S1:
"Before we touch the architecture — what breaks in the business if this doesn't get built? Not technically. Commercially, operationally, or for the customer."
I put this question first because an engineer who cannot name the commercial consequence of inaction has no test against which to evaluate subsequent technical decisions. Every state works this way. Challenge before synthesis. Always.
The Workspace
When you run arch-ai init, it creates the foundation for an architectural record that lives in your repository.
/architecture
├── .session-state.json
├── scope-declaration.md
├── context.md
├── stakeholders.md
├── dependencies.md
├── nfr-register.md
├── auth-design.md
├── options-matrix.md
├── observability-plan.md
└── risk-register.md

Documents are the byproduct. The thinking that produced them is the product.
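The `.session-state.json` file implies that a session is resumable between runs. A plausible shape might look like the following; the schema is undocumented here, so every field name is a guess for illustration, with only the `S1`-style state IDs taken from this README.

```typescript
// Hypothetical contents of .session-state.json; the real file's
// schema is not documented here, so every field is an assumption.
interface SessionState {
  currentState: string;              // e.g. "S4"
  phase: "challenge" | "synthesise";
  completedStates: string[];         // states whose artifacts already exist
  updatedAt: string;                 // ISO 8601 timestamp
}

const example: SessionState = {
  currentState: "S4",
  phase: "challenge",
  completedStates: ["S1", "S2", "S3"],
  updatedAt: "2025-01-01T09:00:00Z",
};

// Resuming a session: pick up where the engineer left off.
function resumePrompt(s: SessionState): string {
  return `Resuming at ${s.currentState} (${s.phase} phase)`;
}
```

Because the state lives in the repository alongside the artifacts, the architectural record and the mentor's position in the flow travel together in version control.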
Domain
Built for financial services — GDPR, FCA, PSD2, SMCR, and data residency. Domain calibration is embedded in the probe logic at S3, S4, S7, S10, and S12.
Other domain configs coming.
The Measure of Success
I believe the product is working when engineers need it less — not more. If an engineer at month twelve is more dependent on this tool than at month one, then I have failed to develop their capability. My goal is for the engineer to internalise these thinking patterns until they become second nature and they can walk into a review board and defend their reasoning without any tools in front of them.
The signal that it's working is when engineers start asking these diagnostic questions themselves before the mentor even has the chance.
Licence
CLI and schemas: MIT
Methodology: CC BY-NC 4.0 — free for individuals; commercial use requires a licence.
