@shrimp-kit/companion
v0.1.0
> AI companion framework — persona management, heartbeat conversations, permanent memory, output processing, and TTS/image generation pipeline.
Install
bun add @shrimp-kit/companion

Features
- Persona management — JSON-based character definition with emotion triggers
- Heartbeat engine — Probabilistic proactive conversation (idle-based probability curve)
- Permanent memory — Markdown-based long-term memory with AI curation
- Output processor — Extract [tag: content] patterns from AI output and route to handlers
- Pipeline — Provider-agnostic TTS and image generation interfaces
- Topic system — Weighted topic selection with cooldowns
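Emotion triggers pair keyword patterns with a response hint the companion should follow. The matching idea can be sketched as a plain function (illustrative only; the real Companion injects the matched hint into the system prompt rather than returning it):

```typescript
interface EmotionTrigger {
  id: string;
  patterns: string[];
  responseHint: string;
}

// Illustrative keyword matching for emotion triggers: the first trigger
// whose patterns appear in the (lowercased) message wins.
function matchTrigger(
  message: string,
  triggers: EmotionTrigger[],
): EmotionTrigger | undefined {
  const lower = message.toLowerCase();
  return triggers.find((t) => t.patterns.some((p) => lower.includes(p)));
}

matchTrigger("I'm exhausted...", [
  { id: "fatigue", patterns: ["tired", "exhausted"], responseHint: "Gently insist they rest" },
]);
// → the "fatigue" trigger
```

The Usage section below shows the same trigger firing through Companion.chat().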
Usage
Companion (Main Orchestrator)
import { Logger } from "@shrimp-kit/core";
import { Companion, TopicManager } from "@shrimp-kit/companion";
import { MemoryStore } from "@shrimp-kit/memory";
const logger = new Logger({ name: "companion", level: "info" });
const memory = new MemoryStore("./memory.json", logger);
await memory.load();
const topics = new TopicManager({
topicsPath: "./topics.json",
statePath: "./topics-state.json",
logger,
});
await topics.load();
const companion = new Companion({
persona: {
name: "Jellyfish",
triggerWord: "JLYFSH",
systemPrompt: "You are Jellyfish, a gentle companion. Talk in 2-4 lines...",
greetings: ["Hey~", "What's up?"],
emotionTriggers: [
{
id: "fatigue",
patterns: ["tired", "exhausted"],
responseHint: "Gently insist they rest",
tone: "soft-dominant",
},
],
},
memory,
topics,
generateResponse: async (system, user) => {
// Your LLM call (Claude, GPT, etc.)
return await callLLM({ system, user });
},
logger,
});
// Chat with context-aware responses
const reply = await companion.chat("I'm exhausted...", "Kai");
// → LLM response with persona + fatigue emotion trigger injected
// Proactive heartbeat
const heartbeat = await companion.heartbeat();
// → Topic-based conversation starter

Persona Definition
Define personas as JSON (see examples/jellyfish/persona.json):
interface CompanionPersona {
name: string; // Character name
triggerWord: string; // LoRA trigger word for image gen
systemPrompt: string; // Base system prompt (supports {user}, {timeOfDay} placeholders)
appearance?: string; // Physical description for image gen
voice?: VoiceConfig; // TTS settings
greetings?: string[]; // Greeting variations
traits?: string[]; // Personality traits
emotionTriggers?: EmotionTrigger[]; // Keyword → response rules
memoryGuidance?: MemoryGuidance; // What to remember vs forget
imageConfig?: ImageGenConfig; // Image generation settings
outputTags?: OutputTagDef[]; // Supported [tag: content] patterns
}

Heartbeat Engine
Probabilistic conversation initiation based on idle time:
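The decision rests on an idle-time probability curve (spelled out in the comments further down). As a plain function, the documented thresholds look like this — an illustrative sketch of the curve, not the engine's actual internals:

```typescript
// Illustrative mapping of idle minutes to heartbeat probability,
// mirroring the documented thresholds (not HeartbeatEngine's source).
function idleProbability(idleMinutes: number): number {
  if (idleMinutes < 30) return 0;     // too soon after last contact
  if (idleMinutes < 60) return 0.2;   // 30-60 min
  if (idleMinutes < 120) return 0.4;  // 1-2 hours
  if (idleMinutes < 240) return 0.55; // 2-4 hours
  if (idleMinutes < 480) return 0.7;  // 4-8 hours
  return 0.9;                         // 8+ hours
}

idleProbability(150);
// → 0.55, matching the decide() result shown below
```

The engine wraps this decision behind decide(), which also respects the active window and attaches a topic: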
import { HeartbeatEngine, TopicManager } from "@shrimp-kit/companion";
const engine = new HeartbeatEngine({
topics,
activeWindow: ["09:00", "23:00"],
statePath: "./heartbeat-state.json",
logger,
});
await engine.load();
// Decide whether to send a heartbeat
const decision = engine.decide("2026-03-08T10:00:00Z");
// → { shouldSend: true, reason: "ok", probability: 0.55, topic: {...}, idleMinutes: 150 }
// Probability curve:
// < 30 min idle → 0%
// 30-60 min → 20%
// 1-2 hours → 40%
// 2-4 hours → 55%
// 4-8 hours → 70%
// 8+ hours → 90%
// Advanced heartbeat with context injection
const result = await companion.heartbeatAdvanced(
"2026-03-08T10:00:00Z",
{
lifestyle: "Working on code",
socialSnippet: "Trending: new TypeScript release",
memoryContext: "User mentioned loving rainy days",
},
);

Permanent Memory
Markdown-based long-term memory with AI curation:
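The curation contract is small: given the current markdown and the pending candidates, return the new markdown. A naive curator that skips the LLM entirely might look like this (illustrative of the contract only; the candidate fields follow addCandidate below):

```typescript
interface MemoryCandidate {
  time: string;    // ISO timestamp
  summary: string; // one-line fact to remember
}

// Naive curator: append each candidate as a dated bullet and keep only
// the most recent maxLines lines. A real setup would call an LLM here
// to merge, dedupe, and drop stale facts intelligently.
function naiveCurate(
  currentMemory: string,
  candidates: MemoryCandidate[],
  maxLines = 150,
): string {
  const lines = currentMemory.split("\n").filter((l) => l.length > 0);
  for (const c of candidates) {
    lines.push(`- ${c.summary} (${c.time.slice(0, 10)})`);
  }
  return lines.slice(-maxLines).join("\n");
}
```

The library's API for the same flow: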
import { PermanentMemory } from "@shrimp-kit/companion";
const permMemory = new PermanentMemory({
memoryPath: "./permanent-memory.md",
candidatesPath: "./memory-candidates.json",
maxLines: 150,
backupDir: "./backups",
logger,
});
await permMemory.load();
// Add memory candidates (from conversations)
await permMemory.addCandidate({
time: new Date().toISOString(),
summary: "User's birthday is March 15",
note: "Important date to remember",
facts: [{ category: "personal", fact: "Birthday: March 15" }],
});
// AI curation — process candidates into permanent memory
await permMemory.curate(async (currentMemory, candidates) => {
// Call your LLM to intelligently merge candidates into memory
return await llmCurate(currentMemory, candidates);
});
// Inject into system prompt
const injection = permMemory.buildInjection();
// → "\n\n[Permanent Memory — things you actually remember]\n..."

Output Processor
Extract and handle special tags from AI output:
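The [tag: content] format itself is easy to picture; a minimal extraction sketch (illustrative only — the real OutputProcessor also routes each tag to a registered handler and splits the text into segments):

```typescript
// Illustrative [tag: content] extraction: strip tags from the text,
// collect them, and collapse the whitespace left behind.
const TAG_RE = /\[(\w+):\s*([^\]]*)\]/g;

function extractTags(text: string): {
  cleanedText: string;
  tags: { tag: string; content: string }[];
} {
  const tags: { tag: string; content: string }[] = [];
  const cleanedText = text
    .replace(TAG_RE, (_match, tag: string, content: string) => {
      tags.push({ tag, content });
      return "";
    })
    .replace(/\s{2,}/g, " ")
    .trim();
  return { cleanedText, tags };
}

extractTags("Here's what I imagine! [image: a sunset over the ocean] Beautiful, right?");
// → { cleanedText: "Here's what I imagine! Beautiful, right?",
//     tags: [{ tag: "image", content: "a sunset over the ocean" }] }
```

The library's processor adds handler routing on top of this: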
import { OutputProcessor } from "@shrimp-kit/companion";
const processor = new OutputProcessor({
tagDefs: [
{ tag: "image", description: "Generate an image", handler: "image" },
{ tag: "remind", description: "Set a reminder", handler: "reminder" },
],
logger,
});
// Register handlers
processor.registerHandler("image", async (tag) => {
const result = await generateImage(tag.content);
return { success: true, outputPath: result.path };
});
// Process AI output
const result = await processor.process(
"Here's what I imagine! [image: a sunset over the ocean] Beautiful, right?"
);
// → {
// cleanedText: "Here's what I imagine! Beautiful, right?",
// tags: [{ tag: "image", content: "a sunset over the ocean", raw: "[image: ...]" }],
// handlerResults: [{ tag: ..., result: { success: true, outputPath: "..." } }],
// segments: ["Here's what I imagine! Beautiful, right?"],
// }

Pipeline (TTS + ImageGen)
Provider-agnostic interfaces for voice and image generation:
import { Pipeline } from "@shrimp-kit/companion";
import type { TTSProvider, ImageGenProvider } from "@shrimp-kit/companion";
// Implement your providers
const myTTS: TTSProvider = {
name: "fish-audio",
synthesize: async (req) => { /* ... */ },
isAvailable: async () => true,
};
const myImageGen: ImageGenProvider = {
name: "replicate",
generate: async (req) => { /* ... */ },
isAvailable: async () => true,
};
const pipeline = new Pipeline({
tts: myTTS,
imageGen: myImageGen,
uploader: async (filePath, folderId) => {
// Upload to Google Drive, S3, etc.
return uploadedUrl;
},
logger,
});
// Generate voice
const voice = await pipeline.speak("Hello!", voiceConfig, "./output.mp3");
// Generate image (auto-uploads if uploader configured)
const image = await pipeline.generateImage(
"a sunset over the ocean",
imageGenConfig,
"./output.png",
);
// Check capabilities
const caps = await pipeline.getCapabilities();
// → { tts: true, imageGen: true, upload: true }

Topic Management
Weighted topic selection with cooldowns:
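Selection is weighted-random among the topics whose cooldown has elapsed. That rule can be sketched as follows (an illustrative sketch, not TopicManager's actual code):

```typescript
interface Topic {
  id: string;
  weight: number;
  cooldownMinutes: number;
}

// Illustrative weighted pick: filter out topics still on cooldown,
// then roll against the cumulative weights of what remains.
function pickTopic(
  topics: Topic[],
  lastUsed: Map<string, number>, // topic id → last-used epoch ms
  now: number,
): Topic | undefined {
  const eligible = topics.filter((t) => {
    const last = lastUsed.get(t.id);
    return last === undefined || now - last >= t.cooldownMinutes * 60_000;
  });
  const total = eligible.reduce((sum, t) => sum + t.weight, 0);
  if (total === 0) return undefined; // everything on cooldown
  let roll = Math.random() * total;
  for (const t of eligible) {
    roll -= t.weight;
    if (roll <= 0) return t;
  }
  return eligible[eligible.length - 1];
}
```

Topics themselves are defined as JSON, as in the example below.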
[
{
"id": "morning-check",
"category": "daily",
"weight": 5,
"template": "{name} wants to know what {timeOfDay} plans you have...",
"cooldownMinutes": 720
},
{
"id": "miss-you",
"category": "emotion",
"weight": 3,
"template": "{name} suddenly wants to talk...",
"cooldownMinutes": 360
}
]

const topics = new TopicManager({
topicsPath: "./topics.json",
statePath: "./topics-state.json",
logger,
});
await topics.load();
const topic = topics.select();
// → Weighted random selection, respecting cooldowns

API Reference
Exports
| Class | Description |
|-------|-------------|
| Companion | Main orchestrator — chat, heartbeat, persona |
| PersonaManager | Load, register, and build system prompts from personas |
| TopicManager | Weighted topic selection with cooldowns |
| HeartbeatEngine | Probabilistic conversation initiation |
| PermanentMemory | Markdown-based permanent memory with AI curation |
| OutputProcessor | Tag extraction and handler routing |
| Pipeline | TTS + ImageGen provider orchestration |
See src/index.ts for all exports.
