NFT Static Data Uploader
A comprehensive tool suite for scraping, storing, and managing NFT metadata from Hedera networks. This project works in conjunction with the Lazy dApp and SecureTrade Marketplace to enable faster NFT operations by pre-caching static metadata.
Features
- Multi-schema support - Works with TokenStaticData (Lazy dApp) and SecureTradeMetadata (Marketplace)
- Secure credentials - OS keychain integration for secure credential storage
- IPFS pinning - Automatic pinning to Filebase with retry logic
- Gateway rotation - Multiple IPFS/Arweave gateways with smart failover
- Progress tracking - Resume interrupted uploads, real-time progress bars
- Error analysis - Categorized error tracking with root cause analysis tools
- CI/CD ready - GitHub Actions workflow for automated linting and testing
- Comprehensive test suite - 100+ unit tests with Jest for regression prevention
Table of Contents
- Overview
- Prerequisites
- Installation
- Environment Setup
- Scripts & Usage
- upload.js - Single Collection Upload
- bulkUpload.js - Multiple Collections Upload
- uploadEligibleNFTs.js - Register Eligible Collections
- validatePins.js - Verify IPFS Pins
- checkFileBaseStatus.js - Monitor Filebase
- analyzeErrors.js - Error Analysis
- getStaticData.js - Query Metadata
- getPost.js - Test Database Connection
- manageCredentials.js - Credential Management
- Configuration
- Architecture
- Utilities
- Workflow
- Development
- Troubleshooting
- Extending & Forking
- Using as a Library
- Secure Credential Storage
Overview
This project solves the problem of slow NFT metadata retrieval by:
- Scraping NFT metadata from Hedera mirror nodes
- Storing metadata in a Directus database for fast access
- Pinning IPFS content to Filebase for reliable availability
- Managing eligible NFT collections for the Lazy dApp
Why This Exists
NFT metadata is typically stored on IPFS or other decentralized storage. Fetching this metadata on-demand can be slow and unreliable due to:
- Gateway timeouts
- Rate limiting
- Network congestion
- Missing or unpinned content
By pre-fetching and storing this data, the Lazy dApp can provide instant access to NFT metadata for staking, farming, and other features.
Prerequisites
- Node.js (v16 or higher)
- Hedera Network Access (mainnet/testnet)
- Directus Database with proper collections configured
- Filebase Account with pinning service API access
- Environment Variables configured (see below)
Installation
npm install
Dependencies
- @directus/sdk - Database operations
- @hashgraph/sdk - Hedera network interaction
- axios - HTTP requests
- cross-fetch - Universal fetch API
- dotenv - Environment variable management
- readline-sync - Interactive CLI prompts
Environment Setup
Create a .env file in the project root:
# Directus Database
DIRECTUS_DB_URL=https://your-directus-instance.com
DIRECTUS_TOKEN=your-static-token-here
# Filebase IPFS Pinning
FILEBASE_PINNING_SERVICE=https://api.filebase.io/v1/ipfs/pins
FILEBASE_PINNING_API_KEY=your-filebase-api-key-here
# Optional: Schema Selection (default: TokenStaticData)
# Use 'SecureTradeMetadata' for marketplace integration
DB_SCHEMA=TokenStaticData
Credential Security
Credentials are validated at startup. Use the --verbose flag to display masked credential values (shows first 2 and last 2 characters only):
node upload.js 0.0.1234567 --verbose
Example output:
=== Loaded Credentials ===
[+] Directus Database URL
DIRECTUS_DB_URL: ht********om
[+] Directus API Token
DIRECTUS_TOKEN: ab********yz
[+] Filebase Pinning Service URL
FILEBASE_PINNING_SERVICE: ht********ns
[+] Filebase API Key
FILEBASE_PINNING_API_KEY: sk********23
==========================
Required Directus Collections
Your Directus instance needs these collections:
Schema: TokenStaticData (Default - Lazy dApp)
- TokenStaticData
  - uid (string) - Unique identifier (tokenId!serial)
  - address (string) - Token address (0.0.XXXX)
  - serial (integer) - NFT serial number
  - metadata (text) - Original metadata string
  - rawMetadata (json) - Parsed JSON metadata
  - image (string) - Image URL/CID
  - attributes (json) - NFT attributes
  - nftName (string) - NFT name
  - collection (string) - Collection name
  - environment (string) - Network (mainnet/testnet)
Schema: SecureTradeMetadata (Marketplace)
Set DB_SCHEMA=SecureTradeMetadata to use this schema:
- SecureTradeMetadata
  - uid (string) - Unique identifier (tokenId-serial)
  - token_id (string) - Token address (0.0.XXXX)
  - serial_number (integer) - NFT serial number
  - name (string) - NFT name
  - collection (string) - Collection name
  - cid (string) - Metadata CID
  - image (string) - Image URL/CID
  - downloaded_to_file (boolean) - Whether image is cached locally
  - fully_enriched (boolean) - Whether all metadata is complete
  - rawMetadataJson (text) - Full JSON metadata
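For orientation, a single record under each schema might look like this (illustrative values only; note the differing uid formats):
// Hypothetical TokenStaticData record (Lazy dApp schema)
const exampleTokenStaticData = {
  uid: '0.0.1234567!42',            // tokenId!serial
  address: '0.0.1234567',
  serial: 42,
  nftName: 'Example NFT #42',
  collection: 'ExampleCollection',
  image: 'ipfs://bafy.../42.png',
  attributes: [{ trait_type: 'Background', value: 'Blue' }],
  environment: 'mainnet',
};

// Hypothetical SecureTradeMetadata record (Marketplace schema)
const exampleSecureTradeMetadata = {
  uid: '0.0.1234567-42',            // tokenId-serial
  token_id: '0.0.1234567',
  serial_number: 42,
  name: 'Example NFT #42',
  collection: 'ExampleCollection',
  cid: 'bafy...',
  image: 'ipfs://bafy.../42.png',
  downloaded_to_file: false,
  fully_enriched: true,
};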
Common Collections (Both Schemas)
eligibleNfts
- tokenId (string) - Token address
- evmTokenId (string) - EVM-compatible address
- NiceName (string) - Display name
- type (array) - Allowed types (staking, mission_req, etc.)
- Environment (array) - Networks where eligible
cidDB
- cid (string, primary key) - IPFS CID
- pin_confirmed (boolean) - Whether pinned to Filebase
post (used for testing)
Scripts & Usage
upload.js - Single Collection Upload
Purpose: Upload metadata for a single NFT collection to the database.
Usage:
node upload.js <tokenAddress> [options]
Options:
- --dry-run, -d - Simulate the upload without making changes
- --resume, -r - Resume from last saved progress
- --verbose, -v - Show masked credential values
Example:
# Basic upload
node upload.js 0.0.1234567
# Dry run to preview changes
node upload.js 0.0.1234567 --dry-run
# Resume interrupted upload
node upload.js 0.0.1234567 --resume
# Show credentials and upload
node upload.js 0.0.1234567 --verbose
Interactive Prompts:
- Choose environment (MAIN/TEST)
- Optionally resume from previous progress (if --resume)
- Confirm or customize collection name (defaults to token symbol)
- Confirm to proceed with upload
What It Does:
- Validates environment credentials
- Preloads CID cache from database (reduces lookups)
- Fetches token details from Hedera mirror node
- Retrieves metadata for all NFT serials
- Parses and normalizes metadata
- Pins IPFS content to Filebase
- Stores in Directus database (using configured schema)
- Saves progress for resume capability
When to Use:
- Adding a new collection to the database
- Re-scraping a collection with updates
- Initial setup for a single NFT project
bulkUpload.js - Multiple Collections Upload
Purpose: Upload metadata for multiple NFT collections in a single run.
Usage:
node bulkUpload.js <address1>,<address2>,<address3> [options]
Options:
- --dry-run, -d - Simulate the upload without making changes
Example:
# Process multiple collections
node bulkUpload.js 0.0.1234567,0.0.7654321,0.0.9999999
# Dry run to preview
node bulkUpload.js 0.0.1234567,0.0.7654321 --dry-run
Interactive Prompts:
- Choose environment (MAIN/TEST)
- Enable/disable interactive mode
- If interactive: confirm each collection name and upload
Interactive vs Non-Interactive:
- Interactive: Prompts for confirmation before each collection
- Non-Interactive: Uses default names and processes all automatically
What It Does:
- Validates all addresses before processing
- Preloads CID cache from database (reduces lookups)
- Processes multiple collections sequentially
- Uses token symbol as default collection name
- Same metadata fetching and storage as single upload
When to Use:
- Onboarding multiple collections at once
- Batch processing for efficiency
- Automated scheduled updates (non-interactive mode)
uploadEligibleNFTs.js - Register Eligible Collections
Purpose: Mark NFT collections as eligible for use in the Lazy dApp (staking, missions, etc.).
Usage:
node uploadEligibleNFTs.js <address1>,<address2>
Example:
node uploadEligibleNFTs.js 0.0.1234567,0.0.7654321
Interactive Prompts:
- Choose environment (mainnet/testnet/previewnet)
- For each new collection:
- Select allowed types (staking, staking_boost, mission_req, gem_boost)
- Confirm selections
Allowed Types:
- staking - Can be staked for rewards
- staking_boost - Provides boosted staking rewards
- mission_req - Required for missions
- gem_boost - Boosts gem earnings
- NULL - No special functionality
What It Does:
- Checks which collections are already registered
- Filters out duplicates
- Fetches collection details from mirror node
- Prompts for type selection
- Writes to eligibleNfts table
When to Use:
- Enabling a collection for staking/farming
- Adding collections to mission requirements
- Updating collection capabilities
validatePins.js - Verify IPFS Pins
Purpose: Confirm that IPFS content has been successfully pinned to Filebase.
Usage:
node validatePins.js [-force]
Options:
- No flag: Check and mark confirmed pins
- -force: Re-pin failed or unconfirmed CIDs
What It Does:
- Queries database for unconfirmed pins
- Checks Filebase pinning status
- Updates pin_confirmed flag in database
- Optionally forces re-pinning
Processing:
- Batch size: 20 concurrent checks
- Continues until all unconfirmed pins processed
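The batch pattern above might look roughly like the following sketch, assuming the documented confirmPin(cid) helper from utils/tokenStaticDataHelper.js (the script's actual internals may differ):
const { confirmPin } = require('./utils/tokenStaticDataHelper');

// Illustrative sketch: verify pins in batches of 20 concurrent checks
async function checkPinsInBatches(cids, batchSize = 20) {
  const confirmed = [];
  for (let i = 0; i < cids.length; i += batchSize) {
    const batch = cids.slice(i, i + batchSize);
    const results = await Promise.allSettled(batch.map((cid) => confirmPin(cid)));
    results.forEach((result, idx) => {
      // Only CIDs whose check resolved truthy get their pin_confirmed flag set later
      if (result.status === 'fulfilled' && result.value) confirmed.push(batch[idx]);
    });
  }
  return confirmed;
}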
When to Use:
- After bulk uploads to verify pins
- Troubleshooting missing content
- Maintenance task to ensure availability
- Recovery after Filebase issues
Recommended Schedule:
- Run daily as a maintenance task
- Run immediately after large uploads
- Run with -force to recover failed pins
checkFileBaseStatus.js - Monitor Filebase
Purpose: Query and manage Filebase pinning status interactively or via CLI flags.
Usage:
# Interactive mode
node checkFileBaseStatus.js
# CLI mode with flags
node checkFileBaseStatus.js --status queued
node checkFileBaseStatus.js --failed
node checkFileBaseStatus.js --retry ./errors/errors-2025-01-12.json
node checkFileBaseStatus.js --cleanup
CLI Flags:
- --status <status> - Query pins by status (queued, pinning, pinned, failed)
- --failed - Shorthand for --status failed
- --retry <file> - Retry failed CIDs from an error export JSON file
- --cleanup - Delete all failed pin requests
Interactive Options:
- queued - Show pins waiting to be processed
- pinning - Show pins currently being pinned
- pinned - Show successfully pinned content
- failed - Show failed pins
- cid - Look up specific CID status
- delete - Remove a specific pin request
- delete all failed - Bulk remove all failed pins
What It Does:
- Queries Filebase API for pin status
- Displays request IDs and CIDs
- Allows cleanup of failed pins
- Helps monitor pinning progress
- Retries pins from error export files
When to Use:
- Debugging pinning issues
- Monitoring large batch uploads
- Cleaning up failed pin requests
- Investigating specific CID problems
- Recovering from errors using exported error files
Limits:
- Returns up to 100 results per query
- Delete operations are permanent
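For scripted monitoring outside the interactive tool, the same information can be pulled straight from the pinning endpoint. A rough sketch, assuming Filebase exposes the standard IPFS Pinning Service API (GET /pins with a Bearer token) at the FILEBASE_PINNING_SERVICE URL configured above:
require('dotenv').config();
const axios = require('axios');

// List failed pin requests (subject to the 100-result query limit noted above)
async function listFailedPins() {
  const res = await axios.get(process.env.FILEBASE_PINNING_SERVICE, {
    headers: { Authorization: `Bearer ${process.env.FILEBASE_PINNING_API_KEY}` },
    params: { status: 'failed', limit: 100 },
  });
  // Pinning Service API responses carry a requestid, a status, and the pin's CID
  return res.data.results.map((r) => ({ requestId: r.requestid, cid: r.pin.cid, status: r.status }));
}

listFailedPins().then((pins) => console.log(`Found ${pins.length} failed pins`));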
analyzeErrors.js - Error Analysis
Purpose: Analyze error patterns and identify root causes from processing logs and exports.
Usage:
# Analyze recent winston logs
node analyzeErrors.js
# Analyze specific error export file
node analyzeErrors.js --file ./errors/errors-2025-01-12.json
# Show detailed breakdown
node analyzeErrors.js --verbose
Options:
- --file <path> - Analyze a specific error export JSON file
- --verbose - Show detailed error breakdown by category
Error Categories:
- fetchMetadata - Failed to retrieve metadata from gateways
- pinMetadata - Failed to pin metadata CID to Filebase
- pinImage - Failed to pin image CID to Filebase
- databaseWrite - Failed to write to Directus database
- gatewayTimeout - Gateway timeout errors
- invalidCID - Invalid or malformed CID detected
Output Includes:
- Error count by category
- Most affected token IDs
- Gateway failure rates
- Root cause recommendations
- Actionable remediation steps
What It Does:
- Parses winston log files (logs/error.log)
- Reads error export JSON files from ProcessingContext
- Correlates errors with Filebase pin status
- Generates root cause recommendations
- Identifies patterns (e.g., specific gateways failing)
When to Use:
- After processing completes with errors
- To identify systematic issues
- Before running retry operations
- To understand why CIDs are failing
getStaticData.js - Query Metadata
Purpose: Retrieve stored metadata for specific NFT serials (testing/debugging).
Usage:
node getStaticData.js <tokenAddress> <serial1,serial2,serial3>
Example:
node getStaticData.js 0.0.1234567 1,5,10,25
Output:
- Displays full metadata objects
- Shows count of items found
When to Use:
- Verifying data was stored correctly
- Testing database queries
- Debugging metadata issues
- Checking specific NFT details
getPost.js - Test Database Connection
Purpose: Simple test to verify Directus connection.
Usage:
node getPost.js
What It Does:
- Queries the post collection
- Displays results
When to Use:
- Testing Directus credentials
- Verifying database connectivity
- Troubleshooting connection issues
manageCredentials.js - Credential Management
Purpose: Securely manage credentials using OS keychain.
Usage:
node manageCredentials.js <command> [args]
Commands:
- status - Show credential status and keychain availability
- migrate - Migrate .env credentials to OS keychain
- set <name> - Set a credential in keychain
- delete <name> - Remove credential from keychain
Examples:
# Check credential status
node manageCredentials.js status
# Migrate sensitive credentials to keychain
node manageCredentials.js migrate
# Set a specific credential
node manageCredentials.js set DIRECTUS_TOKEN
# Remove a credential
node manageCredentials.js delete DIRECTUS_TOKEN
What It Does:
- Stores credentials in OS keychain (Windows Credential Manager, macOS Keychain, Linux Secret Service)
- Allows removing sensitive values from the .env file
- Provides masked display of credential values
Requirements:
- Optional keytar package: npm install keytar
- Works without keytar (falls back to .env)
When to Use:
- Setting up secure local development
- Migrating from plaintext .env to keychain
- Managing credentials across multiple projects
Configuration
Schema Selection
The tool supports multiple database schemas for different use cases:
| Schema | Use Case | UID Format | Environment Variable |
|--------|----------|------------|---------------------|
| TokenStaticData | Lazy dApp (default) | tokenId!serial | DB_SCHEMA=TokenStaticData |
| SecureTradeMetadata | Marketplace | tokenId-serial | DB_SCHEMA=SecureTradeMetadata |
Switching Schemas:
# Use TokenStaticData (default)
node upload.js 0.0.1234567
# Use SecureTradeMetadata
DB_SCHEMA=SecureTradeMetadata node upload.js 0.0.1234567
Or set permanently in your .env file:
DB_SCHEMA=SecureTradeMetadata
CID Cache
The tool maintains a local cache of known CIDs to reduce database lookups. The cache is:
- Loaded from file on startup (./cache/cid-cache.json)
- Preloaded from database before processing (automatic)
- Saved on exit to persist across sessions
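A short sketch of how the cache helpers exported by utils/tokenStaticDataHelper.js (listed under Utilities below) could be combined around a processing run; the ordering is the point here, not the exact code:
const {
  loadCIDCache,
  preloadCIDCacheFromDB,
  saveCIDCache,
  getCIDCacheSize,
} = require('./utils/tokenStaticDataHelper');

async function withCidCache(processFn) {
  await loadCIDCache();               // restore ./cache/cid-cache.json from disk
  await preloadCIDCacheFromDB();      // merge CIDs already known to the database
  console.log(`CID cache primed with ${getCIDCacheSize()} entries`);
  try {
    await processFn();                // run the upload / validation work
  } finally {
    await saveCIDCache();             // persist the cache for the next session
  }
}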
Cache location can be configured in config.js:
cache: {
cidCacheFile: './cache/cid-cache.json',
progressStateDir: './state',
}
Processing Configuration
Edit config.js to customize:
module.exports = {
processing: {
maxRetries: 18, // Retry attempts per CID
concurrentRequests: 10, // Parallel fetch requests
timeoutMs: 30000, // Request timeout
},
database: {
writeBatchSize: 50, // Records per write batch
queryLimit: 100, // Records per query
schema: 'TokenStaticData' // or 'SecureTradeMetadata'
},
// ... other settings
};
Architecture
Data Flow
Hedera Mirror Node → Metadata Scraper → IPFS Gateways
                            ↓
                   Metadata Validation
                            ↓
             ┌──────────────┴──────────────┐
             ↓                             ↓
     Directus Database             Filebase Pinning
     (Schema Adapter)              (IPFS Storage)
             ↓                             ↓
     Lazy dApp /                   Filebase Gateway
     Marketplace                   (Reliable Retrieval)
Key Components
Mirror Node Helpers (hederaMirrorHelpers.js)
- Interfaces with Hedera mirror nodes
- Fetches token details, NFT metadata
- Supports MAIN, TEST, PREVIEW networks
Metadata Scraper (metadataScrapeHelper.js)
- Retrieves metadata from multiple IPFS gateways
- Handles retries and failover
- Supports IPFS, Arweave, HCS storage
- Extracts and validates CIDs
- Uses ProcessingContext for isolated state per job
Processing Context (ProcessingContext.js)
- Encapsulates all processing state per job
- Enables concurrent processing without state collision
- Supports resume capability via serialization
- Manages gateway rotation and statistics
Schema Adapter (schemaAdapter.js, schemaWriter.js)
- Normalizes metadata between different database schemas
- Supports TokenStaticData and SecureTradeMetadata
- Provides schema-aware database operations
- Enables code reuse across different deployments
Static Data Manager (tokenStaticDataHelper.js)
- Manages Directus database operations
- Handles IPFS pinning via Filebase
- Tracks CIDs and pin status
- Provides CID cache preloading from database
Credential Manager (credentialManager.js)
- Validates required credentials at startup
- Provides masked display (first 2 / last 2 characters)
- Supports interactive credential prompts
Filebase Helper (filebaseHelper.js)
- Checks pin status via HTTP
- Interfaces with Filebase API
- Maintains CID cache
Utilities
hederaMirrorHelpers.js
Core functions for Hedera network interaction:
- getBaseURL(env) - Get mirror node URL for environment
- getTokenDetails(env, tokenId) - Fetch token metadata
- checkMirrorBalance(env, userId, tokenId) - Check token balance
- checkMirrorAllowance(env, userId, tokenId, spenderId) - Check allowances
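A quick illustration of the two most commonly used helpers (signatures as documented above; the shape of the returned token details is an assumption):
const { getBaseURL, getTokenDetails } = require('./utils/hederaMirrorHelpers');

async function inspectToken() {
  console.log('Mirror node:', getBaseURL('MAIN'));          // mainnet mirror REST endpoint
  const token = await getTokenDetails('MAIN', '0.0.1234567');
  console.log(token);                                        // typically symbol, name, supply, etc.
}

inspectToken().catch(console.error);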
metadataScrapeHelper.js
Metadata retrieval with resilience:
Multiple IPFS Gateways:
- cloudflare-ipfs.com
- ipfs.eth.aragon.network
- ipfs.io
- ipfs.eternum.io
- dweb.link
Arweave Support:
- arweave.net
- ar-io.dev
- permagate.io
Retry Logic:
- Max 18 attempts per CID
- Gateway rotation
- Exponential backoff with jitter
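The retry behaviour is roughly equivalent to this sketch (the constants mirror the processing config; the function itself is illustrative, not the module's actual implementation):
const fetch = require('cross-fetch');

// Illustrative only: rotate gateways and back off exponentially with jitter
async function fetchJsonWithRetries(cid, gateways, maxRetries = 18) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const gateway = gateways[attempt % gateways.length];   // gateway rotation
    try {
      const res = await fetch(`${gateway}${cid}`);
      if (res.ok) return await res.json();
    } catch (err) {
      // swallow and fall through to the backoff below
    }
    const base = Math.min(1000 * 2 ** attempt, 30000);     // exponential, capped at the 30s timeout
    const jitter = Math.random() * base * 0.5;             // jitter avoids synchronized retries
    await new Promise((resolve) => setTimeout(resolve, base + jitter));
  }
  throw new Error(`Failed to fetch ${cid} after ${maxRetries} attempts`);
}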
tokenStaticDataHelper.js
Database and pinning operations:
- getStaticData(address, serials) - Query specific NFTs
- writeStaticData(dataList) - Bulk insert metadata
- pinIPFS(cid, name, isImage) - Pin to Filebase
- confirmPin(cid) - Verify pin status
- isValidCID(cid) - Validate IPFS CID format
- preloadCIDCacheFromDB() - Preload CID cache from database
- getCIDCacheSize() - Get current cache size
- loadCIDCache() / saveCIDCache() - File-based cache persistence
schemaAdapter.js
Schema abstraction layer:
- createAdapter(schemaName) - Create adapter for schema
- NormalizedMetadata - Schema-agnostic metadata class
- getAvailableSchemas() - List supported schemas
- Field mapping between TokenStaticData and SecureTradeMetadata
schemaWriter.js
Schema-aware database operations:
- createWriter(schemaName) - Create writer for schema
- getExistingSerials(tokenId) - Query existing records
- writeMetadata(dataList) - Write normalized metadata
- deleteToken(tokenId) - Delete all records for token
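A sketch of how these writer functions might fit together; whether getExistingSerials and writeMetadata are methods on the created writer or module-level functions is an assumption here, so treat this as a shape rather than a recipe:
const { createWriter } = require('./utils/schemaWriter');

async function writeCollection(schemaName, tokenId, normalizedRecords) {
  const writer = createWriter(schemaName);                   // schema-aware Directus operations
  const existing = await writer.getExistingSerials(tokenId); // skip serials already stored
  const fresh = normalizedRecords.filter((r) => !existing.includes(r.serial));
  await writer.writeMetadata(fresh);                         // batched writes to the schema's table
  return fresh.length;
}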
credentialManager.js
Credential handling utilities:
- maskCredential(value) - Mask sensitive values (show first/last 2 chars)
- validateCredentials() - Validate all required credentials
- displayCredentialStatus() - Show masked credential summary
- ensureCredentials() - Validate with optional interactive prompts
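The masked format used throughout this README (first 2 and last 2 characters visible) could be reproduced with something like the following; the real maskCredential may handle edge cases differently:
// Illustrative re-implementation of the first-2/last-2 masking format
function maskValue(value) {
  if (!value || value.length <= 4) return '****';
  return `${value.slice(0, 2)}${'*'.repeat(8)}${value.slice(-2)}`;
}

console.log(maskValue('https://your-directus-instance.com')); // ht********om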
envValidator.js
Environment validation:
- validateEnvironment() - Check required env vars
- displayMaskedCredentials() - Show credentials with masking
filebaseHelper.js
Filebase API integration:
- checkPinHttp(cid) - HTTP availability check
- checkPinStatus(cid) - Query Filebase API
- CID caching for performance
Workflow
Adding a New Collection
1. Verify Token Address and Upload
   # Get token details first
   node upload.js 0.0.XXXXXX
   # Follow prompts to upload metadata
2. Make Collection Eligible
   node uploadEligibleNFTs.js 0.0.XXXXXX
   # Select allowed types (staking, missions, etc.)
3. Verify Pins
   node validatePins.js
Bulk Processing
1. Prepare Address List
   - Collect token addresses
   - Verify they're valid (0.0.XXXXX format)
2. Run Bulk Upload
   node bulkUpload.js 0.0.111111,0.0.222222,0.0.333333
   # Choose non-interactive for automation
3. Register as Eligible
   node uploadEligibleNFTs.js 0.0.111111,0.0.222222,0.0.333333
4. Validate
   node validatePins.js -force
Maintenance
Daily Tasks:
# Check for unconfirmed pins
node validatePins.js
# Monitor Filebase status
node checkFileBaseStatus.js
# Select: queued/pinning to see progress
Weekly Tasks:
# Clean up failed pins
node checkFileBaseStatus.js
# Select: delete all failed
# Force re-pin unconfirmed
node validatePins.js -force
Development
Running Tests
The project uses Jest for unit testing with 100+ tests covering core functionality.
# Run all tests
npm test
# Run tests with coverage report
npm run test:coverage
# Run tests in watch mode during development
npm run test:watch
Test Coverage
Tests cover the following modules:
| Module | What's Tested | Coverage Area |
|--------|---------------|---------------|
| ProcessingContext | State management, serialization, error tracking | Core job state |
| metadataScrapeHelper | CID extraction, URL parsing | IPFS/Arweave URL formats |
| tokenStaticDataHelper | CID validation, data structures | CID format validation |
| analyzeErrors | Error categorization, recommendations | Error analysis |
| checkFileBaseStatus | API mocking, status queries | Filebase integration |
Linting
# Run ESLint
npm run lint
# Auto-fix issues
npm run lint:fix
CI/CD
The project uses GitHub Actions for continuous integration:
- Lint job: Runs ESLint on all JavaScript files
- Test job: Runs Jest test suite with coverage
- Build job: Verifies imports work on Node.js 18, 20, and 22
CI runs automatically on:
- Push to main branch
- Pull requests targeting main
Adding New Tests
Test files are located in __tests__/ directory. Follow existing patterns:
// __tests__/myModule.test.js
const { myFunction } = require('../utils/myModule');
describe('myModule', () => {
describe('myFunction', () => {
it('should handle valid input', () => {
expect(myFunction('input')).toBe('expected');
});
it('should handle null input', () => {
expect(myFunction(null)).toBeNull();
});
});
});
Troubleshooting
Common Issues
"Invalid address" Error
- Ensure format is 0.0.XXXXXX (no spaces)
- Check token exists on the chosen network
- Verify network selection (MAIN vs TEST)
"No NFTs found" Error
- Token might not be an NFT (check if it's fungible)
- Token might be on wrong network
- Mirror node might be temporarily unavailable
Metadata Fetch Timeouts
- Normal for large collections (retries are automatic)
- Check IPFS gateway availability
- Consider running during off-peak hours
- Some NFTs may have unpinned/missing metadata
Pin Verification Failures
- Run validatePins.js -force to retry
- Check Filebase account status/quota
- Verify API key is valid
- Some CIDs may be permanently unavailable
Database Connection Issues
- Verify .env file exists and is configured
- Test connection: node getPost.js
- Check Directus token permissions
- Ensure collections are properly configured
Error Codes
Mirror Node Status:
- 200-299: Success
- 400-499: Client error (check address/parameters)
- 500-599: Server error (retry later)
Filebase Pin Status:
- queued: Waiting to be processed
- pinning: Currently being pinned
- pinned: Successfully pinned
- failed: Pin failed (will retry or needs manual intervention)
Performance Tips
Batch Processing:
- Use bulkUpload.js for multiple collections
- Non-interactive mode for automation
- Process during off-peak hours
Network Issues:
- Script automatically retries with multiple gateways
- Failed serials are logged for review
- Re-run script to catch missed items
Database Optimization:
- Script checks for existing data before writing
- Uses batch operations where possible
- Filters duplicate entries
Extending & Forking
If you want to customize this tool for your own use case, here's where to look:
Adding a New Database Schema
1. Define the schema in utils/schemaAdapter.js:
   // Add to SCHEMAS object
   MyCustomSchema: {
     tableName: 'MyCustomTable',
     primaryKey: 'uid',
     fields: {
       uid: { type: 'string', required: true },
       // ... your fields
     },
     createUid: (tokenId, serial) => `${tokenId}_${serial}`,
   }
2. Add field mappings in FIELD_MAPPINGS:
   MyCustomSchema: {
     tokenId: 'my_token_field',
     serial: 'my_serial_field',
     // ... map normalized fields to your schema
   }
3. Set via environment:
   DB_SCHEMA=MyCustomSchema
Adding New IPFS Gateways
Edit config.js:
ipfs: {
gateways: [
'https://your-gateway.com/ipfs/',
// ... existing gateways
],
}
Adding New Storage Backends (Arweave, etc.)
- Add gateway config in config.js
- Update fetchIPFSJson() in metadataScrapeHelper.js to detect and handle the new protocol (see the sketch below)
- Add CID validation in tokenStaticDataHelper.js if needed
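The detection step boils down to branching on the URI scheme. A hypothetical sketch of such a branch (the helper name and gateway choices are illustrative, not the module's actual code):
// Hypothetical scheme detection for a fetch helper
function resolveMetadataUrl(uri, config) {
  if (uri.startsWith('ipfs://')) {
    return config.ipfs.gateways[0] + uri.replace('ipfs://', '');
  }
  if (uri.startsWith('ar://')) {
    // New backend: map the identifier onto an Arweave gateway
    return 'https://arweave.net/' + uri.replace('ar://', '');
  }
  return uri; // already an http(s) URL
}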
Customizing Processing Logic
Key extension points:
| File | Function | Purpose |
|------|----------|---------|
| metadataScrapeHelper.js | processNFT() | Per-NFT processing logic |
| metadataScrapeHelper.js | fetchIPFSJson() | Gateway selection & retry |
| ProcessingContext.js | Constructor | Add custom state tracking |
| schemaAdapter.js | NormalizedMetadata | Add custom metadata fields |
Adding New CLI Commands
- Create a new file (e.g., myCommand.js)
- Import utilities from utils/
- Use validateEnvironment() for credential checks
- Use preloadCIDCacheFromDB() before processing (see the skeleton below)
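A minimal skeleton for such a command, wired the same way as the existing scripts (it uses the documented helpers; the argument handling is illustrative and myCommand.js is a placeholder name):
// myCommand.js - hypothetical new CLI entry point
require('dotenv').config();
const { validateEnvironment } = require('./utils/envValidator');
const { preloadCIDCacheFromDB } = require('./utils/tokenStaticDataHelper');

async function main() {
  await validateEnvironment();        // fail fast if credentials are missing
  await preloadCIDCacheFromDB();      // avoid redundant CID lookups

  const [tokenId] = process.argv.slice(2);
  if (!tokenId) {
    console.error('Usage: node myCommand.js <tokenAddress>');
    process.exit(1);
  }
  // ... your processing logic here
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});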
Key Files Overview
├── upload.js                    # Single collection CLI
├── bulkUpload.js                # Multi-collection CLI
├── config.js                    # All configuration ← START HERE
└── utils/
    ├── metadataScrapeHelper.js  # Core scraping logic
    ├── ProcessingContext.js     # Job state management
    ├── schemaAdapter.js         # Schema definitions
    ├── schemaWriter.js          # Database operations
    ├── tokenStaticDataHelper.js # Directus + pinning
    ├── credentialManager.js     # Credential handling
    └── gatewayManager.js        # Gateway rotation
Using as a Library
This package can be used programmatically in your own code:
const { getStaticDataViaMirrors, ProcessingContext } = require('@lazysuperheroes/nft-static-data');
const { createAdapter, NormalizedMetadata } = require('@lazysuperheroes/nft-static-data/utils/schemaAdapter');
const { preloadCIDCacheFromDB } = require('@lazysuperheroes/nft-static-data/utils/tokenStaticDataHelper');
// Preload cache
await preloadCIDCacheFromDB();
// Process a collection
const ctx = await getStaticDataViaMirrors(
'MAIN', // environment
'0.0.1234567', // tokenId
'MyCollection', // collection name
null, // existing serials (null = fetch from DB)
null, // routeUrl (internal)
false, // dryRun
(completed, total, errors) => {
console.log(`Progress: ${completed}/${total}`);
}
);
console.log('Results:', ctx.getSummary());
Secure Credential Storage
The tool supports multiple credential sources that work together. Credentials are loaded in this priority order:
- Environment variables (from .env file or shell)
- OS Keychain (if keytar is installed and credentials are stored)
This means you can:
- Use .env only (simple setup)
- Use keychain only (most secure)
- Use both (keychain for sensitive values, .env for URLs)
Option 1: Environment File Only (Simple)
Create a .env file with all credentials:
DIRECTUS_DB_URL=https://your-directus-instance.com
DIRECTUS_TOKEN=your-secret-token
FILEBASE_PINNING_SERVICE=https://api.filebase.io/v1/ipfs/pins
FILEBASE_PINNING_API_KEY=your-api-key
This works out of the box with no additional setup.
Option 2: OS Keychain (Recommended for Security)
Store sensitive credentials in your OS keychain (Windows Credential Manager, macOS Keychain, or Linux Secret Service):
# Check current status
node manageCredentials.js status
# Migrate sensitive credentials from .env to keychain
node manageCredentials.js migrate
# Or set credentials individually
node manageCredentials.js set DIRECTUS_TOKEN
node manageCredentials.js set FILEBASE_PINNING_API_KEY
After migration, update your .env to only contain non-sensitive values:
# .env - safe to have less sensitive values here
DIRECTUS_DB_URL=https://your-directus-instance.com
FILEBASE_PINNING_SERVICE=https://api.filebase.io/v1/ipfs/pins
# Sensitive values now in OS keychain - remove these lines:
# DIRECTUS_TOKEN=...
# FILEBASE_PINNING_API_KEY=...
Benefits of keychain storage:
- Credentials encrypted by OS
- Not stored in plaintext files
- Survives .env file deletion
- Works across terminal sessions
Option 3: Hybrid Approach (Recommended for Teams)
Use .env for non-sensitive configuration and keychain for secrets:
# .env - committed to repo or shared
DIRECTUS_DB_URL=https://your-directus-instance.com
FILEBASE_PINNING_SERVICE=https://api.filebase.io/v1/ipfs/pins
DB_SCHEMA=TokenStaticData
# Each developer sets up their own secrets locally
node manageCredentials.js set DIRECTUS_TOKEN
node manageCredentials.js set FILEBASE_PINNING_API_KEY
Option 4: Encrypted Environment Files (CI/CD)
For automated deployments, use dotenv-vault or sops:
# Encrypt your .env
npx dotenv-vault encrypt
# In CI/CD, use the encrypted vault
DOTENV_KEY=your-vault-key node upload.js 0.0.1234567
Option 5: Cloud Secret Managers (Production)
For production deployments, integrate with cloud providers:
// AWS Secrets Manager example
const { SecretsManager } = require('@aws-sdk/client-secrets-manager');
async function loadFromAWS() {
const client = new SecretsManager({ region: 'us-east-1' });
const secret = await client.getSecretValue({ SecretId: 'nft-scraper-creds' });
const creds = JSON.parse(secret.SecretString);
process.env.DIRECTUS_TOKEN = creds.DIRECTUS_TOKEN;
process.env.FILEBASE_PINNING_API_KEY = creds.FILEBASE_API_KEY;
}
Credential Priority & Loading
When the application starts, credentials are loaded in this order:
1. .env file loaded via dotenv
2. OS keychain checked for any missing credentials
3. Environment variables from shell (highest priority)
This means:
- A shell export DIRECTUS_TOKEN=xxx overrides everything
- .env values are used next, if set
- Keychain fills in any gaps
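Conceptually, resolution behaves like the sketch below (the keytar service name is illustrative and the real credentialManager logic may differ):
// Simplified view of credential resolution - not the exact implementation
require('dotenv').config();                        // .env fills process.env without overriding shell vars

async function resolveCredential(name) {
  if (process.env[name]) return process.env[name]; // shell or .env value wins
  try {
    const keytar = require('keytar');              // optional dependency
    const stored = await keytar.getPassword('nft-static-data', name); // service name is illustrative
    if (stored) return stored;                     // keychain fills the gap
  } catch (e) {
    // keytar not installed: .env / shell only
  }
  return null;                                     // still missing: prompt or fail at validation
}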
Security Best Practices
- Never commit .env files - Already in .gitignore
- Use masked display - --verbose shows the ab****yz format
- Prefer keychain for secrets - Use node manageCredentials.js migrate
- Rotate credentials - Especially after team changes
- Use least privilege - Directus tokens should have minimal permissions
- Audit access - Enable logging in Directus for API calls
Support
For issues or questions:
- Check this README first
- Review error messages and troubleshooting section
- Verify environment configuration
- Check Hedera mirror node status
- Test database connectivity
License
MIT License - See LICENSE file for details.
