<p align="center"> <a href="http://nestjs.com/" target="blank"><img src="https://nestjs.com/img/logo-small.svg" width="120" alt="Nest Logo" /></a> </p>
# Xynapse Backend

This is the NestJS backend for the Xynapse application.
## Description

The backend provides a RESTful API for the Xynapse application, handling database operations, authentication, and business logic.
## Prerequisites

- Node.js (v18 or later)
- PostgreSQL (v14 or later)
- npm or yarn
## Installation

```bash
# Install dependencies
$ npm install
```

## Database Setup
### Option 1: Local PostgreSQL Installation

- Create a PostgreSQL database for the application
- Update the `.env` file with your database credentials:

```env
DATABASE_URL=postgresql://username:password@localhost:5432/xynapse
DATABASE_HOST=localhost
DATABASE_PORT=5432
DATABASE_USERNAME=username
DATABASE_PASSWORD=password
DATABASE_NAME=xynapse
```

### Option 2: Docker Compose (Recommended)
A Docker Compose configuration is provided to easily set up a PostgreSQL database for development:
```bash
# Start the PostgreSQL database
$ docker-compose up -d

# Check the status of the database
$ docker-compose ps

# View database logs
$ docker-compose logs -f postgres

# Stop the database
$ docker-compose down

# Stop the database and remove volumes (will delete all data)
$ docker-compose down -v
```

The Docker Compose setup uses the following default credentials:
- Username: postgres
- Password: postgres
- Database: xynapse
- Port: 5432
These match the default values in the .env file, so the application should connect without additional configuration.
The PostgreSQL instance includes the pgvector extension, which will be needed for vector similarity search in the future. Currently, the application uses a JSON column for embeddings, but it's prepared for migration to pgvector.
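Until that migration happens, similarity search over JSON-stored embeddings has to be done in application code. The sketch below is illustrative only — the function names are invented, not part of the Xynapse codebase — and shows the comparison that pgvector would later push down into SQL:

```typescript
// Illustrative only: cosine similarity over embeddings stored as JSON arrays.
// After the pgvector migration, this comparison would move into SQL; nothing
// here is actual Xynapse code.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error("Embeddings must have the same dimensionality");
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  if (normA === 0 || normB === 0) return 0;
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank rows (already loaded from the JSON column) against a query vector.
function topK(
  rows: { id: string; embedding: number[] }[],
  query: number[],
  k: number,
): { id: string; score: number }[] {
  return rows
    .map((r) => ({ id: r.id, score: cosineSimilarity(r.embedding, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

Scanning every row in application code is O(n) per query, which is exactly the cost pgvector's indexes are meant to remove.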
## Running the app

```bash
# Development mode
$ npm run start:dev

# Production mode
$ npm run build
$ npm run start:prod
```

## Database Migrations
The application uses TypeORM for database operations. In development mode, the database schema is automatically synchronized with the entity definitions.
However, you should still create migration files, because the staging and production databases are not automatically synchronized.
Generate a migration, replacing `MigrationName` with a short summary of your changes (e.g. `AddFieldXInTableZ`):

```bash
$ npm run migration:generate ./src/migrations/MigrationName
```

This generates a migration automatically based on your changes. If you want to write it yourself, run `npm run migration:create ./src/migrations/MigrationName` instead, then open the generated migration file and complete the `up()` and `down()` functions using the TypeORM migration API. Feel free to take inspiration from past migrations in the `src/migrations/` directory.

Running migrations is automatic and does not require any manual command.
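The `up()`/`down()` pair is a mirrored set of DDL statements. The sketch below mimics TypeORM's migration shape; `QueryRunnerLike` is a local stand-in for TypeORM's `QueryRunner` so the example is self-contained, and the table and column names are invented, not real Xynapse schema:

```typescript
// Minimal stand-in for TypeORM's QueryRunner, so this example runs
// without the typeorm package installed.
interface QueryRunnerLike {
  query(sql: string): Promise<void>;
}

// Shaped like a TypeORM migration: up() applies the change and
// down() reverses it exactly. Names here are invented examples.
class AddFieldXInTableZ1700000000000 {
  async up(queryRunner: QueryRunnerLike): Promise<void> {
    await queryRunner.query(`ALTER TABLE "table_z" ADD "field_x" varchar`);
  }

  async down(queryRunner: QueryRunnerLike): Promise<void> {
    await queryRunner.query(`ALTER TABLE "table_z" DROP COLUMN "field_x"`);
  }
}
```

Keeping `down()` an exact inverse of `up()` is what makes a bad deploy reversible on staging and production.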
## API Documentation

Once the application is running, you can access the API documentation at http://localhost:3001/api/docs.

## Authentication
The application uses JWT (JSON Web Token) based authentication. The authentication module provides the following features:
### Authentication Endpoints

- `POST /auth/sign-up`: Register a new user
  - Request body: `{ "name": "string", "email": "string", "password": "string" }`
  - Response: `{ "user": User, "accessToken": "string" }`
- `POST /auth/sign-in`: Authenticate a user
  - Request body: `{ "email": "string", "password": "string" }`
  - Response: `{ "user": User, "accessToken": "string" }`
- `POST /auth/forgot-password`: Request a password reset
  - Request body: `{ "email": "string" }`
  - Response: `{ "success": boolean }`
- `POST /auth/reset-password`: Reset password with token
  - Request body: `{ "token": "string", "password": "string" }`
  - Response: `{ "success": boolean }`
- `GET /auth/me`: Get the current authenticated user
  - Headers: `Authorization: Bearer <token>`
  - Response: `{ "user": User, "organization": Organization, "member": Member }`
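The contracts above can be written down as plain TypeScript types. This is an illustration of the documented request/response shapes, not the actual DTO classes (which in NestJS would typically be classes with class-validator decorators):

```typescript
// Illustrative contracts mirroring the documented auth endpoint shapes.
// The real DTOs are NestJS classes with class-validator decorators.
interface SignUpRequest {
  name: string;
  email: string;
  password: string;
}

interface AuthResponse {
  user: unknown; // the User shape is defined by the backend's entities
  accessToken: string;
}

// A tiny structural check a client might run before posting to /auth/sign-up.
function isSignUpRequest(body: unknown): body is SignUpRequest {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    typeof b.name === "string" &&
    typeof b.email === "string" &&
    typeof b.password === "string"
  );
}
```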
### JWT Authentication

The application uses JWT for authentication with the following configuration:

- Token expiration: 1 day (configurable via the `JWT_EXPIRES_IN` environment variable)
- Token secret: configurable via the `JWT_SECRET` environment variable
- Token extraction: Bearer token from the Authorization header
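Bearer-token extraction amounts to stripping the `Bearer ` scheme from the Authorization header, the same job passport-jwt's `ExtractJwt.fromAuthHeaderAsBearerToken()` performs. A simplified hand-rolled sketch (real extractors also match the scheme case-insensitively):

```typescript
// Simplified sketch of Bearer-token extraction from an Authorization header.
// Returns the raw JWT, or null when the header is absent or malformed.
function extractBearerToken(authHeader: string | undefined): string | null {
  if (!authHeader) return null;
  const [scheme, token] = authHeader.split(" ");
  if (scheme !== "Bearer" || !token) return null;
  return token;
}
```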
### Password Handling
Passwords are securely hashed using bcrypt before storage. The authentication flow includes:
- Password validation during registration and login
- Secure password reset mechanism with unique tokens
- Protection against common security vulnerabilities
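The backend uses bcrypt; the sketch below swaps in Node's built-in scrypt so it runs without third-party packages, but the flow it illustrates is the same: hash with a random salt at registration, re-derive and compare in constant time at login.

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Illustrative hash-then-verify flow. The real backend uses bcrypt;
// scrypt stands in here only so the example is dependency-free.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex"); // fresh salt per password
  const derived = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${derived}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [salt, derived] = stored.split(":");
  const candidate = scryptSync(password, salt, 64).toString("hex");
  // Constant-time comparison to avoid timing side channels.
  return timingSafeEqual(
    Buffer.from(derived, "hex"),
    Buffer.from(candidate, "hex"),
  );
}
```

Because each hash embeds its own random salt, two users with the same password still store different values.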
### Environment Variables

Add the following to your `.env` file for authentication:

```env
JWT_SECRET=your-secret-key
JWT_EXPIRES_IN=1d
```

## Project Structure
- `src/` - Source code
  - `common/` - Common utilities and base classes
  - `config/` - Configuration module and environment variables
  - `database/` - Database configuration and seeders
    - `seeder/` - Database seeding services
  - `modules/` - Feature modules
    - `auth/` - Authentication module
      - `controllers/` - Auth controller
      - `dto/` - Data transfer objects for auth operations
      - `services/` - Auth services
      - `strategies/` - Passport strategies (JWT)
    - `chat/` - Chat functionality and message handling
      - `controllers/` - Chat controllers (chat, conversation, transcript, id)
      - `dto/` - Data transfer objects for chat operations
      - `entities/` - Chat-related entities
      - `repositories/` - Chat-related repositories
      - `services/` - Chat services (chat, tools)
    - `common/` - Shared utilities across modules
    - `config/` - Configuration services
    - `database/` - Database connections and repositories
    - `organizations/` - Organization management
    - `projects/` - Project management
    - `users/` - User management
  - `scripts/` - Utility scripts
    - `init-db.ts` - Database initialization script
- `pgvector-init/` - SQL scripts for pgvector extension initialization
- `infra/` - Infrastructure code
## Chat Module
The Chat Module provides chat functionality with AI-powered capabilities using OpenAI. It includes:
### Chat Features
- Real-time chat with streaming responses
- Message history management
- Different chat types (conversation, transcript)
- Chat by ID functionality
### AI Tools Integration
The Chat Module includes a tools system that allows the AI to perform actions:
- Tools are defined in the `ToolsService` and categorized by user role (admin, member)
- Tools can be executed server-side or client-side
- The system integrates with OpenAI's function calling to enable AI-powered tools
- Tools are automatically provided to the AI based on the user's role
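A role-gated tool registry in the style described above can be sketched as follows. The tool names and registry shape are invented for illustration; the real definitions live in the `ToolsService`:

```typescript
// Illustrative role-gated tool registry. Tool names and the registry
// shape are invented; the real definitions live in ToolsService.
type Role = "admin" | "member";

interface ToolDefinition {
  name: string;
  description: string;
  roles: Role[];                  // which roles may see and call the tool
  execution: "server" | "client"; // where the tool actually runs
}

const TOOL_REGISTRY: ToolDefinition[] = [
  { name: "create_project", description: "Create a new project", roles: ["admin"], execution: "server" },
  { name: "list_projects", description: "List visible projects", roles: ["admin", "member"], execution: "server" },
  { name: "open_panel", description: "Open a UI panel", roles: ["admin", "member"], execution: "client" },
];

// Only the caller's role-appropriate tools are handed to the model,
// mirroring how OpenAI function-calling tool lists are assembled per request.
function toolsForRole(role: Role): ToolDefinition[] {
  return TOOL_REGISTRY.filter((t) => t.roles.includes(role));
}
```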
### Chat Endpoints

- `POST /chat/stream`: Stream a chat response
- `POST /chat/conversation`: Stream a conversation chat response
- `POST /chat/transcript`: Stream a transcript chat response
- `POST /chat/:id/stream`: Stream a chat response for a specific chat ID
- `POST /chat/tool`: Call a tool
- `GET /chat/tools`: Get available tools for the current user
For more details, see the Chat Module Documentation.
## Database Initialization
The application includes a database initialization script that can be used to set up the database for development or testing.
### Running the Database Initialization

```bash
# Initialize the database
$ npm run db:init
```

### What the Initialization Does
- Database Creation: Checks if the database exists and creates it if it doesn't
- Schema Creation: Creates tables using TypeORM's schema synchronization
- Data Seeding: Seeds the database with initial data:
- Currencies (USD, EUR, GBP, JPY)
- A demo organization
- Sample users with credentials:
- Admin: [email protected] / admin123
- User: [email protected] / user123
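The seed data above can be sketched as plain constants. This is illustrative only — the real seeder lives under `src/database/seeder/`, the email addresses here are placeholders, and passwords would be bcrypt-hashed before insert:

```typescript
// Illustrative seed-data shape; example.com addresses are placeholders,
// not the actual seeded accounts.
interface SeedUser {
  email: string;
  role: "admin" | "user";
}

const SEED_CURRENCIES = ["USD", "EUR", "GBP", "JPY"];

const SEED_USERS: SeedUser[] = [
  { email: "admin@example.com", role: "admin" },
  { email: "user@example.com", role: "user" },
];

// Seeding should be idempotent: only insert rows that don't exist yet,
// so re-running db:init never duplicates data.
function missingCurrencies(existing: string[]): string[] {
  return SEED_CURRENCIES.filter((c) => !existing.includes(c));
}
```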
## pgvector Extension
The PostgreSQL database includes the pgvector extension for vector similarity search. This is initialized automatically when using Docker Compose. The extension is used for:
- Storing and querying vector embeddings
- Enabling similarity search for AI features
The application is prepared to use pgvector, with initialization scripts located in the pgvector-init/ directory.
## Infrastructure

See the Infra README.
## License
This project is MIT licensed.
