Sign Assistant
An AI-powered assistant for Abaxx Sign built with Next.js, featuring MCP (Model Context Protocol) integration and comprehensive attachment support including images, text files, and PDFs.
This project also exports the @dwn-protocol/sign-assistant-sdk package, which provides reusable React components and server utilities for integrating AI chat functionality into your own applications. See SDK_README.md for SDK documentation.
Features
- 🤖 AI Chat Interface - Custom chat modal with floating action button (FAB) design
- 🔧 MCP Tool Integration - Connect to Abaxx Sign and other MCP servers dynamically
- 📎 Rich Attachments - Support for images, text files, and PDFs with drag & drop
- 🖼️ Vision Capabilities - Send images to vision-capable models like GPT-4o
- 🎨 Modern UI - Beautiful landing page with modal chat interface
- ⚡ Streaming Responses - Real-time AI responses with Vercel AI SDK
- 📄 Smart Tool Loading - MCP tools load only for text-only queries; attachments are processed directly
Prerequisites
- Node.js 18+ (managed with n)
- npm
- OpenAI API key
Getting Started
1. Clone and Install
```bash
cd sign-assistant
npm install
```
2. Configure Environment Variables
Create a .env.local file in the root directory:
```bash
cp .env.example .env.local
```
Edit .env.local and add your OpenAI API key:
```
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4o
```
Note: This demo app uses environment variables for convenience. The SDK (@dwn-protocol/sign-assistant-sdk) supports dynamic API keys passed at runtime, allowing you to get API keys from user settings or request bodies instead of environment variables. See SDK_README.md for details.
3. Configure MCP Servers (Optional)
For Abaxx Sign Integration
The project includes a custom MCP server for Abaxx Sign. To configure it, add to your .env.local:
MCP_SERVERS={"sign":{"command":"node","args":["path/to/sign-mcp/build/index.js"],"env":{"SIGN_BASE_URL":"http://localhost:3000","SIGN_API_TOKEN":"your_api_token"}}}For Other MCP Servers
You can add multiple MCP servers as a JSON object:
MCP_SERVERS={"filesystem":{"command":"npx","args":["-y","@modelcontextprotocol/server-filesystem","/Users/yourusername/allowed-directory"]},"sign":{"command":"node","args":["path/to/sign-mcp/build/index.js"],"env":{"SIGN_BASE_URL":"http://localhost:3000","SIGN_API_TOKEN":"your_token"}}}Example MCP servers you can use:
sign-mcp- Custom Abaxx Sign integration (21 tools for document management)@modelcontextprotocol/server-filesystem- File system access@modelcontextprotocol/server-github- GitHub integration@modelcontextprotocol/server-slack- Slack integration- Custom MCP servers
MCP Configuration Example
For easier configuration, see mcp-config.example.json for a formatted example.
4. Run Development Server
```bash
npm run dev
```
Open http://localhost:3008 in your browser.
User Interface
The application features a modern design with:
- Landing Page: Beautiful gradient background with feature highlights
- Floating Action Button (FAB): Fixed button in the bottom-right corner to open chat
- Modal Chat Interface: Full-screen modal with:
  - Header with app title
  - Scrollable message thread
  - File attachment button (images, text, PDFs)
  - Text input with send button
  - Welcome message with capabilities overview
Attachment Support
Sign Assistant supports multiple file types with full processing:
Images ✅
- Formats: JPEG, PNG, WebP, GIF
- Size limit: 20MB
- Features: GPT-4o vision analysis - AI can see and describe images
- Status: Fully working!
Text Files ✅
- Formats: Plain text, HTML, Markdown, CSV
- Size limit: 5MB
- Features: Full text extraction and processing
- Status: Fully working!
PDF Documents ✅
- Formats: PDF
- Size limit: 10MB
- Features: Automatic text extraction using the unpdf library (serverless-compatible!)
- Status: Fully working!
Project Structure
```
sign-assistant/
├── src/
│ ├── app/
│ │ ├── api/chat/ # Chat API endpoint with MCP integration
│ │ ├── layout.tsx # Root layout with metadata
│ │ └── page.tsx # Landing page with ChatModal
│ ├── components/
│ │ ├── assistant-ui/
│ │ │ ├── attachment.tsx # Legacy attachment components
│ │ │ ├── thread.tsx # Legacy chat thread UI
│ │ │ └── runtime-provider.tsx # Legacy runtime configuration
│ │ ├── chat-interface.tsx # Custom chat UI (main component)
│ │ └── chat-modal.tsx # Modal wrapper with FAB
│ └── lib/
│ ├── attachment-adapters.ts # Attachment processing logic
│ └── mcp-client.ts # MCP client manager
├── .env.local # Environment variables (not in repo)
├── .env.example # Environment variable template
├── mcp-config.example.json # MCP configuration example
├── package.json
└── README.md
```
Architecture
Custom Chat Interface
The project uses a custom chat interface (ChatInterface component) instead of assistant-ui runtime components to avoid infinite loop issues and provide better control over the UI. The interface includes:
- Custom message rendering with markdown support
- File attachment handling with visual previews
- Streaming response handling
- Tool invocation display
MCP Integration
The MCP client manager (src/lib/mcp-client.ts) connects to configured MCP servers and exposes their tools to the AI:
- Server Connection: Connects to MCP servers via stdio transport on first request
- Tool Discovery: Lists all available tools from connected servers
- Schema Conversion: Converts MCP tool schemas (JSON Schema) to Zod schemas for AI SDK
- Tool Execution: Handles tool invocations and returns results to the AI
- Smart Loading: Only loads MCP tools for text-only queries (not when processing attachments)
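A minimal sketch of that connection step, using the stdio client from the official MCP TypeScript SDK; connectServer is an illustrative name, not an actual export of src/lib/mcp-client.ts:
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Illustrative helper: spawn one configured MCP server and list its tools.
export async function connectServer(
  name: string,
  command: string,
  args: string[],
  env?: Record<string, string>,
) {
  const transport = new StdioClientTransport({ command, args, env });
  const client = new Client({ name: `sign-assistant-${name}`, version: "0.1.0" }, { capabilities: {} });
  await client.connect(transport);
  const { tools } = await client.listTools(); // each tool has a name, description, and inputSchema (JSON Schema)
  return { client, tools };
}
```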
Known Issues
- Schema Compatibility: Some MCP tools return schemas with type: "None", which causes OpenAI API validation errors. The code attempts to normalize these to type: "object", but may need further refinement for certain edge cases.
Attachment Processing
Each file type has a dedicated adapter in src/lib/attachment-adapters.ts:
- VisionImageAdapter - Converts images to base64 data URLs for vision models
- SimpleTextAttachmentAdapter - Wraps text content for processing
- PDFAttachmentAdapter - Extracts text from PDFs using unpdf (see the sketch after this list)
- CompositeAttachmentAdapter - Routes files to the appropriate adapter based on MIME type
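As a rough illustration of the PDF path, text extraction with unpdf can be as small as the sketch below; extractPdfText is a hypothetical helper name, not the adapter's actual API:
```typescript
import { extractText, getDocumentProxy } from "unpdf";

// Hypothetical helper mirroring what PDFAttachmentAdapter does internally.
export async function extractPdfText(file: File): Promise<string> {
  const data = new Uint8Array(await file.arrayBuffer());
  const pdf = await getDocumentProxy(data);
  const { text } = await extractText(pdf, { mergePages: true }); // single string for the whole document
  return text;
}
```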
Chat API Flow
The /api/chat endpoint (src/app/api/chat/route.ts):
- Receives messages from the frontend
- Detects if any attachments are present
- If attachments: processes them directly without MCP tools
- If no attachments: initializes MCP, loads tools, and enables tool calling
- Calls OpenAI with generateText from the Vercel AI SDK (uses the environment variable for the API key)
- Returns a streaming text response
Note: This demo app uses environment variables for the OpenAI API key. The SDK (@dwn-protocol/sign-assistant-sdk) supports dynamic API keys via the apiKey parameter, allowing per-user or per-request API keys. See SDK_README.md for SDK usage examples.
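A rough sketch of that branching, written against an AI SDK v4-style streamText call; getMcpTools is an illustrative stand-in for the real tool loader, and the actual route may use generateText as described above:
```typescript
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

// Illustrative stand-in for the real MCP tool loader in src/lib/mcp-client.ts.
async function getMcpTools(): Promise<Record<string, any> | undefined> {
  return undefined; // real code converts discovered MCP tools into AI SDK tools
}

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Attachments arrive as non-text content parts (images, extracted text, PDFs).
  const hasAttachments = messages.some(
    (m: any) => Array.isArray(m.content) && m.content.some((p: any) => p.type !== "text"),
  );

  // Smart tool loading: only wire up MCP tools for text-only queries.
  const tools = hasAttachments ? undefined : await getMcpTools();

  const result = streamText({
    model: openai(process.env.OPENAI_MODEL ?? "gpt-4o"),
    messages,
    tools,
  });
  return result.toDataStreamResponse();
}
```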
Customization
Adding New MCP Tools
To add new MCP capabilities:
- Create or install an MCP server
- Add the server configuration to MCP_SERVERS in .env.local
- Restart the development server
- Tools will be automatically discovered and made available to the AI
Adding New Attachment Types
Create a new adapter in src/lib/attachment-adapters.ts:
```typescript
// Adapter types are assumed to come from @assistant-ui/react; adjust to your setup.
import type { AttachmentAdapter, PendingAttachment, CompleteAttachment } from "@assistant-ui/react";

const MAX_SIZE = 10 * 1024 * 1024; // example limit (10MB)

export class CustomAttachmentAdapter implements AttachmentAdapter {
  accept = "application/custom";

  async add({ file }: { file: File }): Promise<PendingAttachment> {
    // Validation logic
    if (file.size > MAX_SIZE) {
      throw new Error("File too large");
    }
    // Return pending attachment
    return {
      id: crypto.randomUUID(),
      type: "file",
      name: file.name,
      contentType: file.type,
      file,
      status: { type: "requires-action", reason: "composer-send" },
    };
  }

  async send(attachment: PendingAttachment): Promise<CompleteAttachment> {
    // Processing logic - convert the file to a format the AI can understand
    const content = await processFile(attachment.file); // processFile: your own extraction logic
    return { ...attachment, status: { type: "complete" }, content: [{ type: "text", text: content }] };
  }

  async remove(attachment: PendingAttachment): Promise<void> {
    // Cleanup logic if needed
  }
}
```
Then add it to the CompositeAttachmentAdapter in your chat interface, as sketched below.
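For example, a minimal composition might look like the following; the import path and exported names are assumptions based on the project structure above:
```typescript
// Hypothetical wiring; assumes these adapters (including the new one) are
// exported from src/lib/attachment-adapters.ts.
import {
  CompositeAttachmentAdapter,
  VisionImageAdapter,
  SimpleTextAttachmentAdapter,
  PDFAttachmentAdapter,
  CustomAttachmentAdapter,
} from "@/lib/attachment-adapters";

const attachmentAdapter = new CompositeAttachmentAdapter([
  new VisionImageAdapter(),
  new SimpleTextAttachmentAdapter(),
  new PDFAttachmentAdapter(),
  new CustomAttachmentAdapter(), // routes application/custom files to the new adapter
]);
```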
Configuring MCP Servers
Add servers to the MCP_SERVERS environment variable as a JSON object:
```json
{
"server-name": {
"command": "command-to-run",
"args": ["arg1", "arg2"],
"env": {
"ENV_VAR": "value"
}
}
}
```
Available MCP Tools (sign-mcp)
When connected to the Abaxx Sign MCP server, the following tools are available:
Documents
- get_documents - List all documents
- get_document - Get specific document details
- create_document - Create a new document
- update_document - Update document
- delete_document - Delete document
- send_document - Send document for signing
- resend_document - Resend document
- download_document - Download signed document
Recipients
- get_recipient - Get recipient details
- create_recipient - Add recipient
- update_recipient - Update recipient
- delete_recipient - Remove recipient
Fields
- get_field - Get form field details
- create_field - Add form field
- update_field - Update form field
- delete_field - Remove form field
Templates
- get_templates - List all templates
- get_template - Get template details
- create_template - Create template
- delete_template - Delete template
- generate_document_from_template - Create document from template
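For ad-hoc testing outside the chat UI, a sign-mcp tool can also be invoked directly with the MCP SDK client, reusing the hypothetical connectServer helper from the MCP Integration sketch above; argument shapes come from the server's own tool schemas:
```typescript
const { client } = await connectServer("sign", "node", ["path/to/sign-mcp/build/index.js"], {
  SIGN_BASE_URL: "http://localhost:3000",
  SIGN_API_TOKEN: "your_api_token",
});
const documents = await client.callTool({ name: "get_documents", arguments: {} });
console.log(documents.content); // MCP content blocks returned by the tool
```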
Tech Stack
- Framework: Next.js 16 - React framework with App Router
- AI SDK: Vercel AI SDK - Streaming AI responses
- AI Provider: OpenAI GPT-4o - Vision and language model
- MCP: Model Context Protocol SDK - Tool integration
- Styling: Tailwind CSS 4 - Utility-first CSS
- Icons: Lucide React - Icon library
- Language: TypeScript - Type-safe JavaScript
- PDF Processing: unpdf - Serverless PDF text extraction
- Schema Validation: Zod - TypeScript-first schema validation
Troubleshooting
MCP Schema Errors
If you see errors like Invalid schema for function 'X': schema must be a JSON Schema of 'type: "object"', got 'type: "None"':
- This is a known compatibility issue with certain MCP tool schemas
- The code attempts to normalize these schemas automatically
- For persistent issues, check your MCP server's tool schema definitions
- Test your MCP server with MCP Inspector to verify it works correctly
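As an illustration of that normalization, a fallback along these lines coerces such schemas into valid objects before they reach the AI SDK; the actual handling in src/lib/mcp-client.ts may differ:
```typescript
// Hypothetical fallback: coerce non-object tool schemas into a valid JSON Schema object.
function normalizeToolSchema(schema: Record<string, any> | undefined) {
  if (!schema || schema.type === "None" || schema.type === undefined) {
    return { type: "object", properties: schema?.properties ?? {}, required: schema?.required ?? [] };
  }
  return schema;
}
```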
Attachments Not Working
- Check file size limits (images: 20MB, text: 5MB, PDFs: 10MB)
- Verify file MIME types are supported
- Check browser console for errors
- Ensure OpenAI API key is configured
Port Already in Use
The dev server runs on port 3008. If it's already in use:
```bash
# Find and kill the process
lsof -i :3008
kill -9 <PID>

# Or use a different port
npm run dev -- --port 3009
```
Resources
- Vercel AI SDK Documentation
- Model Context Protocol
- MCP Inspector - Test MCP servers
- Next.js Documentation
- OpenAI API Documentation
SDK Package
This project exports @dwn-protocol/sign-assistant-sdk, a reusable SDK for integrating AI chat functionality into your applications. The SDK includes:
- React Components: ChatModal and ChatInterface components
- Server Utilities: handleChatStream, handleChatRequest, and parsePDF functions
- Dynamic API Keys: Support for per-user API keys (no environment variables required!)
- MCP Integration: Built-in MCP server connection and tool management
See SDK_README.md for complete SDK documentation and usage examples.
Package Name: @dwn-protocol/sign-assistant-sdk (private package)
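As a sketch of what the dynamic-API-key flow can look like, a host app's route might forward a per-request key to the SDK's server utility; the import path and handleChatStream signature below are assumptions, so check SDK_README.md for the real usage:
```typescript
import { handleChatStream } from "@dwn-protocol/sign-assistant-sdk";

// Hypothetical route: the apiKey comes from the request body (e.g. user settings),
// not from OPENAI_API_KEY in the environment.
export async function POST(req: Request) {
  const { messages, apiKey } = await req.json();
  return handleChatStream({ messages, apiKey });
}
```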
Development Notes
- Custom Chat UI: Uses a custom ChatInterface component instead of the assistant-ui runtime to avoid infinite loop bugs
- Smart Tool Loading: MCP tools only load when no attachments are present to prevent conflicts
- Streaming: All AI responses stream in real-time for better UX
- Error Handling: Comprehensive error logging for debugging MCP and attachment issues
- SDK API Keys: The SDK supports dynamic API keys via the apiKey parameter, allowing integration with user settings and per-request authentication
License
MIT
Support
For issues and questions:
- Check the Troubleshooting section
- Review terminal logs for detailed error messages
- Test MCP servers independently with MCP Inspector
- Open an issue on GitHub
