@stellartech/table-migration
Directus Table Migration Extension
Migrate selected database tables from one Directus instance to another with ease, using this streamlined extension.
Features
- Selective Table Migration: Choose exactly which tables to migrate
- User Tables Only: Only shows user-created tables (excludes Directus system tables)
- Real-time Progress: Live updates during migration process
- Dry Run Mode: Test migrations without making changes
- File Support: Automatically includes files referenced by selected tables
- Schema Validation: Ensures destination compatibility
Usage
Getting Started
- Install the Extension: Install from the Directus Marketplace or manually
- Navigate to Migration: Go to /admin/table-migration or use the navigation menu
- Configure Destination: Enter the destination URL and admin token
- Sync Schemas: Synchronize source and destination schemas
- Select Tables: Choose which tables you want to migrate
- Run Migration: Use dry run to test, then execute the actual migration
Step-by-Step Process
Destination Setup
- Enter the destination Directus instance URL
- Provide an admin token from the destination instance
- Click "Check" to verify compatibility
Sync Schemas
- Press "Sync Schema" button in the dry-run mode
- If you see that the schemas are not in sync, run the process without dry-run
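The same comparison can be run directly against the API. A minimal curl sketch, assuming the source instance at http://localhost:8055 and placeholder tokens; set dryRun to false to apply the changes:
# Compare source and destination schemas without applying changes
curl -X POST http://localhost:8055/migration/schema-sync \
  -H "Authorization: Bearer SOURCE_ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"baseURL": "https://destination.directus.app", "token": "DESTINATION_ADMIN_TOKEN", "dryRun": true, "scope": {"force": false}}'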
Table Selection
- The extension automatically loads all user-created tables
- Each table shows its name, type (regular/singleton), and item count
- Select individual tables or use "Select All" / "Deselect All"
Migration Options
- Dry Run: Test the migration without making changes
- Force: Override schema compatibility warnings (use with caution)
Execute Migration
- Click "Start Migration" to begin
- Monitor real-time progress in the interface
- Migration files are saved to the source instance for reference
Migration Process
The extension follows this sequence:
- Schema Snapshot: Creates a backup of the current schema
- Schema Application: Applies necessary schema changes to destination
- Table Data Extraction: Extracts data from selected tables only
- File Migration: Migrates files referenced by selected tables
- Data Migration: Transfers table data to destination instance
Requirements
- Source Instance: Directus 11.5.0+ with admin privileges
- Destination Instance: Same version and database platform as source
- Admin Access: Admin user and token on both instances
- Network Access: Destination instance must be accessible from source
Permissions
This extension requires admin privileges on both instances to ensure:
- Full access to all user tables
- Ability to create schema changes
- Permission to upload files
- Access to system information
Files and Tables
Supported Table Types
- Regular Collections: Standard database tables with multiple records
- Singleton Collections: Single-record collections (settings, pages, etc.)
Excluded Tables
- All Directus system tables (starting with directus_)
- System collections used for user management, permissions, etc.
- Extensions, flows, and other system-level data
Migration Files
The extension creates timestamped migration files in the source instance:
- schema.json: Complete database schema
- items_full_data.json: Selected table data
- items_singleton.json: Singleton collection data
- files.json: File metadata for referenced files
Error Handling
- Failed Migrations: Can be safely re-run (existing data is skipped)
- Compatibility Issues: Force option available for version mismatches
- Network Issues: Built-in retry logic with exponential backoff
- Rate Limiting: Automatic rate limiting prevents overwhelming destination
Best Practices
- Test First: Always run a dry run before actual migration
- Backup Data: Create backups of both source and destination
- Check Compatibility: Ensure versions and database platforms match
- Monitor Progress: Watch the real-time progress for any issues
- Selective Migration: Only migrate tables you actually need
Troubleshooting
Common Issues
- No Tables Available: Check if you have user-created collections
- Connection Failed: Verify destination URL and admin token
- Migration Stuck: Check network connectivity and server resources
- Schema Mismatch: Use force option or align instance versions
S3 Storage Issues
Error: s3:PutObject permission denied
- Solution: Set MIGRATION_STORAGE_LOCATION=local in your environment variables
Error: s3:DeleteObject permission denied
- Solution: The extension now uses timestamped filenames to avoid delete operations. Update to version 1.0.6+ and use local storage.
Error: Service "files" is unavailable
- Cause: S3 permissions blocking file operations
- Solution: Add MIGRATION_STORAGE_LOCATION=local to bypass S3 entirely
Files not appearing in the expected S3 bucket
- Check: The extension uses local storage by default
- Location: Files are saved to the local storage directory instead of S3
- Access: Via Directus Admin → Files → Migration folders
Performance Tips
- Large Datasets: Migration time increases with data volume
- File-Heavy Tables: File transfers can be slow on limited bandwidth
- Concurrent Users: Avoid heavy usage during migration
Installation
From Marketplace
- Go to Settings → Extensions
- Browse Marketplace for "Table Migration"
- Install and enable the extension
Manual Installation
Prerequisites
- Node.js (v18 or higher)
- npm or yarn package manager
- Directus 11.5.0+ instance
Build and Installation Steps
Install Dependencies
npm install
Build the Extension
npm run build
This creates the dist/ folder with:
- api.js - Server-side endpoint logic
- app.js - Client-side module interface
Install in Directus
Option A: Copy to Extensions Directory
# Copy the entire extension folder to your Directus extensions directory
cp -r . /path/to/directus/extensions/table-migration
# Or copy just the built files if you have the package structure
mkdir -p /path/to/directus/extensions/table-migration
cp -r dist/* /path/to/directus/extensions/table-migration/
cp package.json /path/to/directus/extensions/table-migration/
Option B: Use the Link Command (Development)
# Link the extension for development
npm run link
Restart Directus
# Restart your Directus instance
pm2 restart directus
# or
systemctl restart directus
# or restart via your deployment method
Enable the Extension
- Go to Settings → Extensions in your Directus admin panel
- Find "Migration Bundle" in the list
- Enable the extension
Development Commands
- npm run dev - Build with watch mode and no minification
- npm run build - Production build
- npm run validate - Validate extension structure
- npm run add - Add extension to Directus (interactive)
Local Deployment Script
Use the included script to build and deploy the extension to your local Directus instances defined in tmp/directus/docker-compose.yaml.
cms/extensions/table-migration/deploy-local.sh
What it does:
- Builds the extension (npm run build)
- Stages dist/* and package.json
- Syncs files to tmp/directus/v1/extensions/table-migration and tmp/directus/v2/extensions/table-migration
- Restarts directus-v1 and directus-v2 via Docker Compose
Environment overrides (optional):
- COMPOSE_FILE: path to the docker-compose file (default: tmp/directus/docker-compose.yaml)
- EXTENSION_NAME: extension folder name (default: table-migration)
- TARGET_V1_DIR and TARGET_V2_DIR: target extension paths
After running, test:
GET http://localhost:8055/migration/test
GET http://localhost:8056/migration/test
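A quick smoke test from the shell (assuming the ports used by the default docker-compose file):
# Each instance should respond with the test JSON payload
curl http://localhost:8055/migration/test
curl http://localhost:8056/migration/test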
Directory Structure
After installation, your Directus extensions directory should contain:
extensions/
└── table-migration/
├── dist/
│ ├── api.js # Server-side endpoints
│ └── app.js # Client-side module
├── package.json # Extension metadata
└── node_modules/ # Dependencies (if copied)
API Endpoints
The extension provides these endpoints:
- GET /migration/test - Test endpoint for verifying the extension is working
- POST /migration/tables - Get available tables for migration
- POST /migration/check - Validate destination compatibility
- POST /migration/schema-sync - Compare and synchronize schema only (supports dry run)
- POST /migration/dry-run - Test data migration without changes (requires schema to match)
- POST /migration/run - Execute data and files migration (requires schema to match)
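The endpoints run on the source instance; in the flow examples later in this document, a source admin token is sent in the Authorization header, while the destination URL and token go in the JSON body. A minimal curl sketch for /migration/tables, assuming the source runs at http://localhost:8055 and placeholder tokens:
# List user tables available for migration (placeholders: adjust URL and tokens)
curl -X POST http://localhost:8055/migration/tables \
  -H "Authorization: Bearer SOURCE_ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"baseURL": "https://destination.directus.app", "token": "DESTINATION_ADMIN_TOKEN", "scope": {}}'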
Endpoint Details
GET /migration/test
Simple test endpoint to verify the extension is installed and working.
Response:
{
"message": "Test endpoint working!",
"timestamp": "2024-01-01T00:00:00.000Z"
}
POST /migration/tables
Returns all available user-created tables that can be migrated.
Request Body:
{
"baseURL": "https://destination.directus.app",
"token": "your-admin-token",
"scope": {}
}
Response:
[
{
"name": "articles",
"type": "table",
"count": 150,
"schema": {...}
},
{
"name": "settings",
"type": "singleton",
"count": 1,
"schema": {...}
}
]
POST /migration/check
Validates if the destination instance is compatible for migration.
Request Body:
{
"baseURL": "https://destination.directus.app",
"token": "your-admin-token",
"scope": {}
}
Response:
{
"status": "success",
"icon": "check",
"message": "This instance is compatible for migration."
}
POST /migration/schema-sync
Compares source schema to destination and applies differences. Use to align schemas before migrating any data/files.
Request Body:
{
"baseURL": "https://destination.directus.app",
"token": "your-admin-token",
"dryRun": true,
"scope": { "force": false }
}
- When dryRun is true, only compares and validates without applying.
- When dryRun is false (or omitted), applies the schema changes. Use scope.force to force apply when needed.
Response: Streaming text with comparison and apply results.
POST /migration/dry-run
Tests the migration process without making any changes to the destination.
Request Body:
{
"baseURL": "https://destination.directus.app",
"token": "your-admin-token",
"scope": {
"selectedTables": ["articles", "categories"],
"updatedAfter": "2025-07-01T00:00:00.000Z",
"force": false
}
}
Alternatively, you can provide a blacklist of tables using excludedTables to migrate everything else (including Directus system tables). When excludedTables is provided, selectedTables is ignored:
{
"baseURL": "https://destination.directus.app",
"token": "your-admin-token",
"scope": {
"excludedTables": ["temporary_data", "logs"],
"force": false
}
}
Response: Streaming text response with migration progress.
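Because the response is streamed as text, a command-line client should disable output buffering to see progress as it is produced. A curl sketch with -N, using a placeholder host and tokens:
# Stream dry-run progress to the terminal as it arrives
curl -N -X POST http://localhost:8055/migration/dry-run \
  -H "Authorization: Bearer SOURCE_ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"baseURL": "https://destination.directus.app", "token": "DESTINATION_ADMIN_TOKEN", "scope": {"selectedTables": ["articles", "categories"], "force": false}}'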
POST /migration/run
Executes the actual migration process.
- The endpoint first compares schemas and responds with HTTP 409 if they differ, asking you to run /migration/schema-sync first.
Request Body:
{
"baseURL": "https://destination.directus.app",
"token": "your-admin-token",
"scope": {
"selectedTables": ["articles", "categories"],
"force": false
},
"callbackUrl": "https://your-callback-endpoint.com/webhook" // Optional
}
Or with excludedTables (blacklist mode):
{
"baseURL": "https://destination.directus.app",
"token": "your-admin-token",
"scope": {
"excludedTables": ["drafts", "logs"],
"updatedAfter": "2025-07-01T00:00:00.000Z",
"force": false
},
"callbackUrl": "https://your-callback-endpoint.com/webhook"
}
Parameters:
- baseURL (required): Destination Directus instance URL
- token (required): Admin token for the destination instance
- scope (required): Migration configuration object
  - selectedTables (required unless excludedTables is provided): Array of table names to migrate
  - excludedTables (optional): Array of table names to exclude; when provided, all existing tables (including Directus system tables) will be migrated except those listed, and selectedTables will be ignored
  - updatedAfter (optional): ISO datetime string. Only records with date_updated >= updatedAfter are migrated. If date_updated is null or not present, date_created >= updatedAfter is used. If neither field exists, the record is included.
  - force (optional): Override compatibility warnings
- callbackUrl (optional): URL to call when the migration completes
Response Options:
1. Synchronous Response (no callbackUrl): Streaming text response with migration progress and results.
2. Asynchronous Response (with callbackUrl):
{
"status": "started",
"processId": "migration_1672531200000_abc123def456",
"message": "Migration started in background",
"timestamp": "2024-01-01T00:00:00.000Z"
}
Callback Payload: When migration completes, the callback URL receives:
{
"processId": "migration_1672531200000_abc123def456",
"status": "completed", // or "failed"
"message": "Migration completed successfully",
"timestamp": "2024-01-01T00:00:00.000Z",
"details": {
"selectedTables": ["articles", "categories"],
"tablesMigrated": 2,
"filesMigrated": 15,
"folderId": "folder-uuid",
"isDryRun": false
}
}
Error Callback Payload:
{
"processId": "migration_1672531200000_abc123def456",
"status": "failed",
"message": "Schema migration failed: Connection timeout",
"timestamp": "2024-01-01T00:00:00.000Z",
"details": {
"selectedTables": ["articles", "categories"],
"isDryRun": false,
"error": "Error: Schema migration failed: Connection timeout"
}
}
Example 409 response:
{ "status": "danger", "icon": "error", "message": "Schema differences detected. Please run /migration/schema-sync first." }
Flow Integration
The migration extension can be integrated with Directus flows for automated migration processes. However, there are important limitations to consider when using the endpoints in flows.
Step-by-Step Flow Setup Guide
Prerequisites
- Migration Extension Installed: Ensure the migration bundle extension is installed and enabled
- Admin Access: You need admin privileges on both source and destination instances
- Destination Token: Admin token from the destination Directus instance
- Allow flow requests to localhost: set the IMPORT_IP_DENY_LIST variable to "169.254.169.254"
- Set your public URL in the environment: set the PUBLIC_URL variable to "http://localhost:8055" (or your real Directus URL)
- Set your admin token in the environment: set the ADMIN_TOKEN variable to your admin token
- Allow flows to use the required variables: set the FLOWS_ENV_ALLOW_LIST variable to "PUBLIC_URL,ADMIN_TOKEN"
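Taken together, these prerequisites translate into environment variables on the source instance. A sketch of the relevant .env entries, with placeholder values to adjust for your deployment:
# Source instance .env additions for flow-based migrations
PUBLIC_URL=http://localhost:8055
ADMIN_TOKEN=your-admin-token
FLOWS_ENV_ALLOW_LIST=PUBLIC_URL,ADMIN_TOKEN
IMPORT_IP_DENY_LIST=169.254.169.254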
Step 1: Create a New Flow
- Navigate to Settings → Flows in your Directus admin panel
- Click "Create Flow"
- Configure basic settings:
- Name: "Migration Workflow"
- Status: Active
- Trigger: Manual (or any trigger you need)
- Requires Selection: Disable
Step 2: Add Flow Operations
Operation 1: Test Connection
Purpose: Verify the migration extension is working
- Add Operation: Request URL
- Configuration:
Name: Test Migration Extension
Method: GET
URL: {{$env.PUBLIC_URL}}/migration/test
- Expected Response:
{
"message": "Test endpoint working!",
"timestamp": "2025-07-16T15:21:21.460Z"
}
Operation 2: Validate Destination
Purpose: Check if destination instance is compatible
- Add Operation: Request URL
- Configuration:
Name: Check Destination Compatibility
Method: POST
URL: {{$env.PUBLIC_URL}}/migration/check
Headers:
Authorization: Bearer {{$env.ADMIN_TOKEN}}
Content-Type: application/json
Body:
{
"baseURL": "https://your-destination.directus.app",
"token": "destination-admin-token-here",
"scope": {}
}
- Expected Responses:
Success:
{
"status": "success",
"icon": "check",
"message": "This instance is compatible for migration."
}
Error:
{
"status": "danger",
"icon": "error",
"message": "Version mismatch or connection failed"
}
Operation 3: Conditional Check
Purpose: Only proceed if destination is compatible
- Add Operation: Condition
- Configuration:
Name: Check Compatibility Status
Rule:
{
"check_destination_compatibility": {
"status": {
"_eq": 200
}
}
}
Operation 4: Get Available Tables
Purpose: Retrieve a list of tables that can be migrated
- Add Operation: Request URL
- Configuration:
Name: Get Available Tables
Method: POST
URL: {{$env.PUBLIC_URL}}/migration/tables
Headers:
Authorization: Bearer {{$env.ADMIN_TOKEN}}
Content-Type: application/json
Body:
{
"baseURL": "https://your-destination.directus.app",
"token": "destination-admin-token-here",
"scope": {}
}
- Expected Response:
[
{
"name": "FirstTable",
"collection": "FirstTable",
"icon": "folder",
"itemCount": 10,
"selected": false,
"singleton": false
},
{
"name": "SecondTable",
"collection": "SecondTable",
"icon": "table",
"itemCount": 20,
"selected": false,
"singleton": true
}
]
Operation 5: Align schemas using POST /migration/schema-sync.
Operation 6: Execute Migration (With Limitations)
Purpose: Run the actual migration
IMPORTANT LIMITATIONS: The /migration/run endpoint has significant limitations when used in flows:
Name: Execute Migration
Method: POST
URL: {{$env.PUBLIC_URL}}/migration/run
Headers:
Authorization: Bearer {{$env.ADMIN_TOKEN}}
Content-Type: application/json
Body:
{
"baseURL": "https://your-destination.directus.app",
"token": "destination-admin-token-here",
"scope": {
"selectedTables": ["articles", "categories", "products"],
"force": false
}
}
Migration Endpoint Limitations in Flows
Traditional Limitations (Synchronous Mode):
Streaming Text Response:
- Returns Content-Type: text/plain with Transfer-Encoding: chunked
- Data streams in real-time as the migration progresses
- Flows expect JSON responses, making parsing difficult
Long Execution Time:
- Migration can take minutes to hours depending on data volume
- Flow operations may time out before completion
- No way to get progress updates mid-execution
Limited Error Handling:
- Errors are embedded in the text stream, not HTTP status codes
- Hard to programmatically detect failure vs. success
- Need to parse text response for error keywords
Asynchronous Mode with Callback Support
To overcome these limitations, use the callbackUrl parameter:
Benefits:
- Instant Response: Immediate JSON response with process ID
- No Timeouts: Migration runs in background
- Proper Error Handling: Structured JSON status reports
- Flow-Friendly: Works seamlessly with Directus flows
Implementation Example:
{
"baseURL": "https://destination.directus.app",
"token": "destination-admin-token",
"scope": {
"selectedTables": ["articles", "categories"],
"force": false
},
"callbackUrl": "https://your-directus.app/flows/webhook/migration-complete"
}
Flow gets immediate response:
{
"status": "started",
"processId": "migration_1672531200000_abc123def456",
"message": "Migration started in background"
}
Callback webhook receives completion status:
{
"processId": "migration_1672531200000_abc123def456",
"status": "completed",
"details": {
"tablesMigrated": 2,
"filesMigrated": 15
}
}
Recommended Flow Structure
Option 1: Asynchronous with Callback (Recommended)
1. Test Extension →
2. Check Destination →
3. Condition Check →
4. Get Tables →
5. Start Migration (with callback) →
6. Send "Started" Notification
[Separate Flow for Callback]
7. Webhook Trigger →
8. Process Migration Results →
9. Send Completion Notification
Option 2: Synchronous (Legacy)
1. Test Extension →
2. Check Destination →
3. Condition Check →
4. Get Tables →
5. Trigger Migration (HTTP-only) →
6. Send Notification
Sample Configuration:
Asynchronous Migration (Recommended):
{
"name": "Start Migration with Callback",
"method": "POST",
"url": "{{$env.PUBLIC_URL}}/migration/run",
"headers": {
"Authorization": "Bearer {{$env.ADMIN_TOKEN}}",
"Content-Type": "application/json"
},
"body": {
"baseURL": "https://destination.directus.app",
"token": "dest-token",
"scope": {
"selectedTables": ["articles", "categories"],
"force": false
},
"callbackUrl": "{{$env.PUBLIC_URL}}/flows/webhook/migration-complete"
}
}
Callback Flow Setup:
- Create a new Flow with Webhook trigger
- Set the webhook key to migration-complete
- Add operations to process the callback data:
{
"name": "Process Migration Results",
"condition": {
"webhook": {
"status": {
"_eq": "completed"
}
}
},
"operations": [
{
"type": "notification",
"recipient": "[email protected]",
"subject": "Migration Completed",
"message": "Migration {{webhook.processId}} completed successfully. Tables migrated: {{webhook.details.tablesMigrated}}"
}
]
}
Synchronous Migration (Legacy):
{
"name": "Trigger Migration Only",
"method": "POST",
"url": "{{$env.PUBLIC_URL}}/migration/run",
"headers": {
"Authorization": "Bearer {{$env.ADMIN_TOKEN}}",
"Content-Type": "application/json"
},
"body": {
"baseURL": "https://destination.directus.app",
"token": "dest-token",
"scope": {
"selectedTables": ["{{selected_tables}}"],
"force": false
}
},
"timeout": 30000
}
Environment Variables Setup
Add these to your .env file:
# Source instance
PUBLIC_URL=https://your-source.directus.app
ADMIN_TOKEN=your-admin-token-here
# Destination instance
DEST_URL=https://your-destination.directus.app
DEST_TOKEN=destination-admin-token
# Storage configuration (optional)
MIGRATION_STORAGE_LOCATION=local
Storage Configuration
The extension supports configurable storage locations for migration files to work around S3 permission restrictions.
Default Behavior
By default, the extension uses local storage for all migration files to avoid S3 permission issues:
- Schema snapshots
- Table data exports
- System configuration files
- File metadata
Storage Options
Option 1: Local Storage (Default)
# Uses local file storage (bypasses S3 permissions)
MIGRATION_STORAGE_LOCATION=local
Option 2: Alternative S3 Storage
# Uses a different S3 storage location with proper permissions
MIGRATION_STORAGE_LOCATION=s3-backup
Option 3: Auto-detect Alternative Storage
# Remove or comment out to use the first non-default storage location
# MIGRATION_STORAGE_LOCATION=
S3 Permission Requirements
If using S3 storage, your IAM role needs these permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::your-bucket-name/*",
"arn:aws:s3:::your-bucket-name"
]
}
]
}
File Naming
All migration files use timestamped filenames to prevent conflicts and avoid delete operations:
- schema_2024-01-15T10-30-45-123Z.json
- items_full_data_2024-01-15T10-30-45-124Z.json
- users_2024-01-15T10-30-45-125Z.json
This ensures no existing files are overwritten and eliminates the need for S3 delete permissions.
Flow Best Practices
1. Use Dry Run First
Always test with /migration/dry-run before actual migration:
{
"baseURL": "https://destination.directus.app",
"token": "token",
"scope": {
"selectedTables": ["test_table"],
"force": false
}
}
2. Add Error Handling
Include error notification operations:
Name: Migration Error Alert
Recipient: Admin User
Subject: Migration Flow Failed
Message: Error in migration process: {{$last.error}}
3. Security Considerations
- Store tokens in environment variables
- Use HTTPS for all requests
- Limit flow execution to admin users only
- Audit flow execution logs regularly
4. Alternative: Webhook-Based Approach
For better reliability, consider setting up webhooks:
- Flow triggers migration start
- Extension posts completion status to webhook
- Webhook triggers completion flow
This avoids timeout and streaming response issues while providing proper completion notifications.
Common Flow Issues
- 401 Unauthorized: Check admin token and accountability
- Timeout: Reduce operation timeout or use async approach
- JSON Parse Error: Verify Content-Type headers
- Network Error: Check URL formatting and network connectivity
Migration File Monitoring
After migration completion, check the file library for migration artifacts:
- schema.json
- items_full_data.json
- items_singleton.json
- files.json
