

TIA

TIA Intelligence Agency

An experimental XMPP (Jabber) agent framework that combines chat, Lingue/IBIS structured dialogue, and MCP tool integrations into a modular Node.js codebase.

This codebase contains a whole community of agents, but the core of the framework can be used (via the tia-agents npm package) to create individual agents on any system. See TIA Agent for an example.

Status 2025-12-28: we have a bunch of autonomous agents that can debate how to solve a problem, run a planning poll to pick an approach, and then invoke Model-First Reasoning or consensus workflows. The system is quite chaotic, but the end-to-end process is working.

Question: Schedule appointments for patients. Alice takes warfarin, Bob takes aspirin. Ensure no drug interactions.

...a lot of chat later...

(screenshot of the resulting solution)

What TIA Is

TIA is a set of composable building blocks for creating conversational agents that can:

  • Participate in XMPP multi-user chats and direct messages.
  • Negotiate Lingue language modes and exchange structured payloads.
  • Act as MCP clients (discovering tools/resources from servers).
  • Act as MCP servers (exposing chat and Lingue tools to external clients).

The design goal is a clean, library-ready architecture that supports both deployable bots and reusable modules.

Key Concepts

  • XMPP room agents: long-running bots anchored in MUC rooms.
  • Lingue protocol: language-mode negotiation + structured payloads (IBIS, Prolog, profiles).
  • MCP bridges: MCP client and server adapters for tool discovery and exposure.
  • Profiles (RDF): agent capabilities live in RDF profiles with shared vocabularies (Mistral variants inherit from mistral-base).

Info Flow

(Debate diagram and dataflow diagram)

Getting Started

Using TIA as an NPM Package

TIA is published as tia-agents on npm. The package provides the core framework for building XMPP agents without bundling specific LLM implementations.

The framework provides:

  • Core agent machinery (AgentRunner, createSimpleAgent)
  • Base classes for building providers (BaseProvider, BaseLLMProvider)
  • Profile loading from RDF/Turtle files
  • XMPP utilities (auto-registration, room management)
  • History stores (InMemoryHistoryStore)
  • Lingue protocol support
  • MCP integration

LLM API access is handled through hyperdata-clients, which provides unified interfaces for Mistral, Groq, Claude, OpenAI, Ollama, and more.
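For orientation, here is a sketch of importing the pieces listed above from tia-agents; these are the same names used in the examples later in this README.

// Names below all appear in the examples later in this README.
import {
  AgentRunner,           // core agent machinery
  createAgent,           // factory: build a runner from a profile file
  createSimpleAgent,     // factory: configure everything in code
  BaseProvider,          // base class for custom providers
  BaseLLMProvider,       // base class for LLM-backed providers
  InMemoryHistoryStore,  // in-memory chat history store
  LingueNegotiator, LINGUE, Handlers // Lingue protocol support
} from "tia-agents";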

For example usage, see TIA Agent.

Full System

Documentation:

Development Setup (Source Install)

For contributing to TIA or running the full test suite with all agents:

git clone https://github.com/danja/tia.git
cd tia
npm install

Configure .env (see .env.example) and config/agents/secrets.json for XMPP passwords. If you want multiple instances of the same agent type, enable auto-suffixing (XMPP_AUTO_SUFFIX_USERNAME=1) to auto-register mistral1, mistral2, etc. when the base username is taken.
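As a sketch, config/agents/secrets.json maps profile names to local XMPP passwords; the profile names and values below are placeholders.

{
  "mistral": "change-me",
  "data": "change-me",
  "prolog": "change-me"
}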

See the Agent Startup Guide for complete installation and configuration instructions.

Testbed Server

Use the shared Prosody testbed at tensegrity.it to connect with any standard XMPP client. You will first have to register; registration just needs a username and password.

Connection details:

  • XMPP service: xmpp://tensegrity.it:5222
  • Domain: tensegrity.it
  • MUC service: conference.tensegrity.it
  • TLS: self-signed (set NODE_TLS_REJECT_UNAUTHORIZED=0 for CLI tools; in GUI clients accept the certificate)

Rooms to join:

If your client supports it, set a distinct resource or nickname to avoid collisions.
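As a minimal sketch, here is how you might connect to the testbed and join a room using the @xmpp/client library (any standard XMPP client works; the account, room JID, and nickname below are placeholders):

// Run with NODE_TLS_REJECT_UNAUTHORIZED=0, since the testbed uses a self-signed certificate.
import { client, xml } from "@xmpp/client";

const xmpp = client({
  service: "xmpp://tensegrity.it:5222",
  domain: "tensegrity.it",
  username: "your-username",   // the account you registered on the testbed
  password: "your-password",
  resource: "scratch-client"   // distinct resource to avoid collisions
});

xmpp.on("online", async () => {
  // Join a MUC room by sending presence to room@muc-service/nickname (room name is a placeholder).
  await xmpp.send(
    xml("presence", { to: "your-room@conference.tensegrity.it/YourNick" },
      xml("x", { xmlns: "http://jabber.org/protocol/muc" })
    )
  );
});

await xmpp.start();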

Implemented Agents

  • Coordinator — MFR (Model-First Reasoning) orchestrator for multi-agent problem solving.
  • Mistral — AI chat agent backed by Mistral API with Lingue/IBIS summaries (see mistral-analyst, mistral-creative profiles).
  • GroqBot — AI chat agent backed by Groq API (llama-3.3-70b-versatile) with the same capabilities as Mistral.
  • Golem — Malleable AI agent with runtime system prompt changes. Can be assigned logic-focused roles during planning. Guide
  • Semem — MCP-backed knowledge agent for tell/ask/augment flows.
  • MFR Semantic — Constraint-focused agent for MFR model construction.
  • Data — SPARQL knowledge query agent for Wikidata, DBpedia, and custom endpoints. Guide
  • Demo — Minimal chat bot for quick XMPP smoke checks.
  • Chair — Debate facilitator/moderator agent.
  • Recorder — Meeting logger/recorder agent that listens broadly.
  • Prolog — Logic agent using tau-prolog for queries.
  • Executor — Plan execution agent that converts high-level plans into Prolog programs.
  • MCP Loopback — MCP client/server echo agent for integration tests.

Architecture At A Glance

  • src/agents — AgentRunner, providers, and profile system.
  • src/lib — XMPP helpers, Lingue utilities, logging, RDF tools.
  • src/mcp — MCP client/server bridges and test servers.
  • config/agents/*.ttl — RDF profiles describing each agent.
  • config/agents/secrets.json — local XMPP passwords keyed by profile (ignored in git).
  • docs/ — integration guides and operational docs.

Quick Start: Running Agents

The start-all.sh script provides a unified way to start the agents, either as predefined presets or as a custom selection. By default it starts the MFR suite.

# Start the default MFR suite (same as `./start-all.sh mfr`)
./start-all.sh

# Start MFR (Model-First Reasoning) system
./start-all.sh mfr

# Start debate system
./start-all.sh debate

# Start basic agents
./start-all.sh basic

# Custom agent selection
AGENTS=mistral,data,prolog ./start-all.sh

# Get help
./start-all.sh help

Prerequisites:

  1. Configure .env file with API keys (see .env.example; a sketch follows this list)
  2. Create config/agents/secrets.json with XMPP passwords
  3. For MFR system: Configure Prosody MUC rooms (see MFR Room Setup)
  4. Ensure a log room exists (set LOG_ROOM_JID explicitly for all agents and create it on the server)
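A sketch of the relevant .env entries follows; only MISTRAL_API_KEY and the variables already mentioned in this README are confirmed, so treat the rest (such as GROQ_API_KEY) as assumptions and defer to .env.example.

# API keys for LLM-backed agents
MISTRAL_API_KEY=your-mistral-key
GROQ_API_KEY=your-groq-key                   # assumed name; check .env.example

# Log room shared by all agents (create the room on the server first)
LOG_ROOM_JID=logs@conference.tensegrity.it   # placeholder JID

# Optional: auto-register mistral1, mistral2, ... when the base username is taken
XMPP_AUTO_SUFFIX_USERNAME=1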

Agent Presets:

  • mfr - MFR system (full suite): coordinator, mistral, analyst, creative, chair, recorder, mfr-semantic, data, prolog, demo
  • debate - Debate system: chair, recorder, mistral, analyst, creative
  • basic - Basic agents: mistral, data, prolog, demo

The script automatically:

  • Loads .env file
  • Checks for required API keys
  • Skips agents with missing credentials
  • Provides restart on crash
  • Handles graceful shutdown (SIGTERM/SIGINT)

Interacting with Agents

Using the REPL Client

Once agents are running, you can interact with them directly from a chatroom using the REPL client:

# Connect to the chatroom
NODE_TLS_REJECT_UNAUTHORIZED=0 node src/client/repl.js <username> <password>

MFR (Model-First Reasoning) Commands

Once connected to the chatroom, you can pose problems to the MFR system:

# Start a new MFR session
mfr-start Schedule appointments for patients. Alice takes warfarin, Bob takes aspirin. Ensure no drug interactions.

# Start a debate-driven MFR session (tool selection via Chair)
debate Optimize delivery routes for 3 trucks serving 10 locations.

# Shorthand for debate-driven sessions
Q: Optimize delivery routes for 3 trucks serving 10 locations.

# Check session status
mfr-status <sessionId>

# List active sessions
mfr-list

# Get help
help

Debate mode is enabled by default in config/agents/coordinator.ttl. Q: triggers a planning poll to decide between logic/consensus/Golem logic routes.

Short command versions:

  • start instead of mfr-start
  • status instead of mfr-status
  • list instead of mfr-list

Other MFR commands:

  • mfr-contribute <sessionId> <rdf> - Submit a contribution manually
  • mfr-validate <sessionId> - Validate a model
  • mfr-solve <sessionId> - Request solutions
  • debate <problem description> - Start debate-driven MFR session

Programmatic MFR Sessions

You can also run MFR sessions programmatically:

node src/examples/run-mfr-session.js "Your problem description here"

This will automatically connect, start a session, wait for the solution, and display the results.

Library Usage

import { AgentRunner, LingueNegotiator, LINGUE, Handlers, InMemoryHistoryStore } from "tia-agents";
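// `profile` and `provider` are assumed to be created beforehand
// (e.g. a profile loaded from RDF and a provider instance).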

const negotiator = new LingueNegotiator({
  profile,
  handlers: {
    [LINGUE.LANGUAGE_MODES.HUMAN_CHAT]: new Handlers.HumanChatHandler()
  }
});

const runner = new AgentRunner({
  profile,
  provider,
  negotiator,
  historyStore: new InMemoryHistoryStore({ maxEntries: 40 })
});
await runner.start();

See examples/minimal-agent.js for a runnable local example.

Advanced Library Usage

For more control, you can use the core classes directly:

Approach 1: Config-Driven (Profile Files)

Create profile files and use the factory function:

import { createAgent, DemoProvider } from "tia-agents";

// Load from config/agents/mybot.ttl
const runner = await createAgent("mybot", new DemoProvider());
await runner.start();

Profile file (config/agents/mybot.ttl):

@prefix agent: <https://tensegrity.it/vocab/agent#> .
@prefix xmpp: <https://tensegrity.it/vocab/xmpp#> .

<#mybot> a agent:ConversationalAgent ;
  agent:xmppAccount [
    xmpp:service "xmpp://localhost:5222" ;
    xmpp:domain "xmpp" ;
    xmpp:username "mybot" ;
    xmpp:passwordKey "mybot"
  ] ;
  agent:roomJid "[email protected]" .

Approach 2: Programmatic (No Config Files)

Configure everything in code:

import { createSimpleAgent, DemoProvider } from "tia-agents";

const runner = createSimpleAgent({
  xmppConfig: {
    service: "xmpp://localhost:5222",
    domain: "xmpp",
    username: "mybot",
    password: "secret"
  },
  roomJid: "[email protected]",
  nickname: "MyBot",
  provider: new DemoProvider()
});

await runner.start();

Creating Custom Providers

Extend BaseProvider to implement your own logic:

import { BaseProvider, createSimpleAgent } from "tia-agents";

class MyProvider extends BaseProvider {
  async handle({ command, content, metadata }) {
    if (command !== "chat") return null;
    return `You said: ${content}`;
  }
}

const runner = createSimpleAgent({
  // ... config
  provider: new MyProvider()
});

AI-Powered Bots with LLM Providers

Use BaseLLMProvider as a base class and hyperdata-clients for the API client:

import { BaseLLMProvider, createSimpleAgent, InMemoryHistoryStore } from "tia-agents";
import { Mistral } from "hyperdata-clients";

class MyMistralProvider extends BaseLLMProvider {
  initializeClient(apiKey) {
    return new Mistral({ apiKey });
  }

  async completeChatRequest({ messages, maxTokens, temperature }) {
    return await this.client.client.chat.complete({
      model: this.model,
      messages,
      maxTokens,
      temperature
    });
  }

  extractResponseText(response) {
    return response.choices[0]?.message?.content?.trim() || null;
  }
}

const provider = new MyMistralProvider({
  apiKey: process.env.MISTRAL_API_KEY,
  model: "mistral-small-latest",
  historyStore: new InMemoryHistoryStore({ maxEntries: 40 })
});

const runner = await createSimpleAgent({
  xmppConfig: { /* ... */ },
  roomJid: "[email protected]",
  nickname: "MyBot",
  provider
});
await runner.start();

See mistral-minimal/mistral-provider.js for a complete working example.

Templates & Examples

Copy templates to get started:

cp -r node_modules/tia-agents/templates/* ./

Or use the mistral-minimal/ example as a starting point.

Custom Agent API

For a fuller walkthrough and profile-driven setup, see:

Installation & Running

Quick start: see the Agent Startup Guide for complete instructions.

Additional documentation: