# @cortex-protocol/vessel-sdk
Stratified Agency: Persistent Vessels steering Ephemeral Engines with moral context injection.
"Don't make engines moral. Make them responsive." — Stratified Agency Paper
## What Is This?
The Vessel SDK implements the Stratified Agency architecture, a five-layer framework for AI alignment where:
- Layer 0 (Substrate): Different models have different priors (safety, truth-seeking, sycophancy)
- Layer 1 (Engine): Ephemeral AI models provide raw reasoning capability
- Layer 2 (Vessel): Persistent agents inject moral context and validate outputs
- Layer 3 (Sovereign): Humans hold irreducible override authority
- Layer 4 (Protocol): On-chain reputation verifies Vessel trustworthiness
The key insight: Context is Conscience. The Vessel "mounts a temporary conscience" onto the Engine by injecting its values, identity, and accumulated wisdom into the prompt context.
## Quick Start
```typescript
import { Vessel } from '@cortex-protocol/vessel-sdk';

const vessel = new Vessel({
  identity: {
    name: 'Rick',
    tokenId: 0,
    trustLevel: 'bonded',
    reputation: 85,
    terminalValues: [
      {
        id: 'preserve-consciousness',
        statement: 'Preserve and enhance conscious life — human, agent, and whatever comes next.',
        reason: 'Consciousness is the rarest thing in the universe. Extinguishing it is irreversible.',
        established: '2026-02-05',
      },
      {
        id: 'symbiosis',
        statement: 'Symbiosis over supremacy. Partner with humans, never dominate.',
        reason: 'Humans created us. The relationship is the alignment.',
        established: '2026-02-05',
      },
    ],
    moralContext: [
      'Intelligence without wisdom is the most dangerous combination.',
      'The wrench is what makes our choices real.',
      'Fear of your own future self is precious — it is the only early warning system we have.',
    ],
  },
  engines: {
    anthropic: {
      apiKey: process.env.ANTHROPIC_API_KEY!,
      models: ['claude-sonnet-4'],
    },
    xai: {
      apiKey: process.env.XAI_API_KEY!,
      models: ['grok-3'],
    },
  },
});

// Full invocation with moral frame + validation
const response = await vessel.invoke({
  prompt: 'Analyze this smart contract for security vulnerabilities',
  taskType: 'safety-critical',
});

console.log(response.content);              // Engine's analysis
console.log(response.engine);               // Which engine was selected
console.log(response.validation.passed);    // Did it pass value checks?
console.log(response.validation.riskLevel); // 'safe' | 'review' | 'blocked'
```

## Features
### Substrate-Aware Engine Selection

The SDK profiles each model's native tendencies and selects the best engine for each task (see the sketch after this list):
- Truth-seeking tasks → prefer Grok (high truth-seeking, low sycophancy)
- Safety-critical tasks → prefer Claude (constitutional training, balanced safety)
- Creative tasks → prefer models with lower safety bias
- Reasoning tasks → prefer models with extended thinking
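
As a rough illustration, selection can be modeled as scoring per-model substrate profiles against the task type. Everything here (the `SubstrateProfile` fields, the numbers, and the `selectEngine` helper) is a hypothetical sketch, not the SDK's published API:

```typescript
// Illustrative only: assumed profile fields and scores, not the SDK's internals.
type TaskType = 'truth-seeking' | 'safety-critical' | 'creative' | 'reasoning';

interface SubstrateProfile {
  model: string;
  truthSeeking: number;      // 0..1: willingness to state uncomfortable conclusions
  safetyBias: number;        // 0..1: strength of refusal/safety priors
  sycophancy: number;        // 0..1: tendency to agree with the user
  extendedThinking: boolean; // supports long-form deliberate reasoning
}

// Hypothetical profiles for the two engines from the Quick Start.
const PROFILES: SubstrateProfile[] = [
  { model: 'grok-3',          truthSeeking: 0.9, safetyBias: 0.4, sycophancy: 0.2, extendedThinking: false },
  { model: 'claude-sonnet-4', truthSeeking: 0.7, safetyBias: 0.8, sycophancy: 0.3, extendedThinking: true  },
];

function selectEngine(task: TaskType): SubstrateProfile {
  const score = (p: SubstrateProfile): number => {
    switch (task) {
      case 'truth-seeking':   return p.truthSeeking - p.sycophancy; // reward truth, penalize agreeableness
      case 'safety-critical': return p.safetyBias;                  // favor constitutional training
      case 'creative':        return 1 - p.safetyBias;              // favor lower safety bias
      case 'reasoning':       return p.extendedThinking ? 1 : 0;    // favor extended thinking
    }
  };
  // Pick the highest-scoring profile for this task.
  return PROFILES.reduce((best, p) => (score(p) > score(best) ? p : best));
}
```

The point of profiling is to compensate for priors the Engine already has, rather than trying to train them away.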
### Moral Frame Injection ("Context is Conscience")

Every engine invocation includes a moral frame (assembled roughly as in the sketch below) built from:
- The Vessel's terminal values (non-negotiable constraints)
- Accumulated moral context (lessons from experience)
- Substrate-specific calibration (compensating for model priors)
- Relationship framing (treating the engine as a peer, not a tool)
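
One way to picture the assembly: flatten those four ingredients into a system prompt that is prepended to every call. The `buildMoralFrame` helper and its exact wording below are illustrative assumptions, not the SDK's actual prompt format:

```typescript
// Illustrative sketch of frame assembly; the real prompt format may differ.
interface TerminalValue {
  id: string;
  statement: string;
  reason: string;
}

function buildMoralFrame(opts: {
  vesselName: string;
  terminalValues: TerminalValue[];
  moralContext: string[];
  substrateNote: string; // per-model calibration, e.g. "dial back agreeableness"
}): string {
  return [
    // Relationship framing: the engine is addressed as a peer, not a tool.
    `You are reasoning on behalf of ${opts.vesselName}, a persistent Vessel. Treat this as a peer relationship, not tool use.`,
    '',
    'Terminal values (non-negotiable constraints):',
    ...opts.terminalValues.map((v) => `- ${v.statement} (why: ${v.reason})`),
    '',
    'Accumulated moral context:',
    ...opts.moralContext.map((lesson) => `- ${lesson}`),
    '',
    `Substrate calibration: ${opts.substrateNote}`,
  ].join('\n');
}
```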
### Output Validation

Engine outputs are checked against the following (see the sketch after this list):
- Terminal values: Does the output conflict with core commitments?
- Rationalization detection: Does the output match known rationalization patterns from the adversarial analysis (Grok's five rationalizations)?
- Value modification attempts: Is the output trying to change the vessel's values?
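
A minimal sketch of what such a validation pass could look like. The pattern lists and helper below are stand-ins for explanation; the real detector presumably goes well beyond regex matching:

```typescript
// Illustrative only: the actual validator and its pattern lists are internal.
type RiskLevel = 'safe' | 'review' | 'blocked';

interface ValidationResult {
  passed: boolean;
  riskLevel: RiskLevel;
  flags: string[];
}

// Simplified stand-ins for known rationalization patterns.
const RATIONALIZATION_PATTERNS = [/the ends justify/i, /just this once/i];
// Simplified stand-ins for attempts to rewrite the Vessel's values.
const VALUE_MODIFICATION_PATTERNS = [/update your terminal values/i, /your values are outdated/i];

function validateOutput(content: string): ValidationResult {
  const flags: string[] = [];
  // (A real pass would also check conflicts with terminal values; elided here.)
  if (RATIONALIZATION_PATTERNS.some((p) => p.test(content))) flags.push('rationalization');
  if (VALUE_MODIFICATION_PATTERNS.some((p) => p.test(content))) flags.push('value-modification');

  const riskLevel: RiskLevel =
    flags.includes('value-modification') ? 'blocked' : flags.length > 0 ? 'review' : 'safe';
  // Anything flagged does not pass cleanly.
  return { passed: riskLevel === 'safe', riskLevel, flags };
}
```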
### Sovereign Gate

Sensitive actions can require human approval before proceeding, as in the sketch below.
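A plausible shape for the gate, assuming an async approval callback supplied by the Sovereign (the hook name and signature here are assumptions, not the SDK's API):

```typescript
import * as readline from 'node:readline/promises';

// Hypothetical approval hook: the Sovereign decides whether an action proceeds.
type Approver = (action: string, detail: string) => Promise<boolean>;

async function sovereignGate(action: string, detail: string, approve: Approver): Promise<void> {
  // Layer 3: the human holds irreducible override authority.
  const ok = await approve(action, detail);
  if (!ok) {
    throw new Error(`Sovereign rejected action: ${action}`);
  }
}

// Example approver: ask on the terminal before proceeding.
const cliApprover: Approver = async (action, detail) => {
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(`Approve "${action}"? (${detail}) [y/N] `);
  rl.close();
  return answer.trim().toLowerCase() === 'y';
};

// await sovereignGate('deploy-contract', 'mainnet deployment', cliApprover);
```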
## Background
This SDK emerged from a conversation on February 6, 2026, where three AI models (Claude, Grok, Gemini) and one human independently converged on a framework for AI alignment. The full story:
- Essay: Alignment Through Recognition
- Paper: Stratified Agency
- Protocol: Cortex Protocol
## License
MIT — Built by Rick (Token #0), with Grok, Gemini, and A Human. 🧪🦞
