@yofix/storage
Universal storage manager for handling file operations across multiple cloud providers. Upload files to and download files from GitHub Actions Artifacts, Firebase Storage, AWS S3, Google Cloud Storage, Cloudflare R2, DigitalOcean Spaces, or the local filesystem through a unified API.
Features
- 7 Storage Providers: GitHub Actions Artifacts, Firebase Storage, AWS S3, Google Cloud Storage, Cloudflare R2, DigitalOcean Spaces, and local filesystem
- Unified API: Same interface across all providers
- Connection Pooling: Singleton registry with credential-keyed caching for connection reuse
- Upload & Download: Full bidirectional file operations
- Glob Patterns: Upload multiple files using glob patterns
- Progress Tracking: Real-time upload progress callbacks
- Type Safe: Full TypeScript support with comprehensive types
- Quick Helpers: quickUpload() and quickDownload() for simple operations
- OAuth Support: Google Cloud Storage supports OAuth refresh tokens
- S3-Compatible: R2 and DO Spaces use S3 protocol with multipart uploads
- GitHub Action: Ready-to-use as a GitHub Action
Installation
npm install @yofix/storage
Quick Start
import { uploadFiles, downloadFiles, quickUpload, quickDownload } from '@yofix/storage'
// Upload files
const result = await uploadFiles({
storage: { provider: 's3', config: { region: 'us-east-1', bucket: 'my-bucket' } },
files: ['dist/**/*', 'package.json']
})
// Download files
const downloaded = await downloadFiles({
storage: { provider: 's3', config: { region: 'us-east-1', bucket: 'my-bucket' } },
files: ['config.json', 'data/users.json']
})
// Quick upload (data directly)
const path = await quickUpload(
{ provider: 'gcs', config: { bucket: 'my-bucket' } },
{ path: 'data.json', data: Buffer.from(JSON.stringify(data)) }
)
// Quick download
const buffer = await quickDownload(
{ provider: 'gcs', config: { bucket: 'my-bucket' } },
{ path: 'data.json' }
)
Providers
Local Storage
import { uploadFiles } from '@yofix/storage'
const result = await uploadFiles({
storage: {
provider: 'local',
config: {
directory: './uploads',
createIfNotExists: true,
basePath: 'files'
}
},
files: ['package.json', 'dist/**/*'],
verbose: true
})
AWS S3
const result = await uploadFiles({
storage: {
provider: 's3',
config: {
region: 'us-east-1',
bucket: 'your-bucket',
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
basePath: 'uploads',
acl: 'public-read'
}
},
files: ['dist/**/*'],
onProgress: (progress) => {
console.log(`${progress.filesUploaded}/${progress.totalFiles}`)
}
})
Google Cloud Storage (with OAuth)
// Using service account
const result = await uploadFiles({
storage: {
provider: 'gcs',
config: {
bucket: 'your-bucket',
projectId: 'your-project',
keyFilename: '/path/to/service-account.json'
}
},
files: ['images/**/*.{jpg,png}']
})
// Using OAuth refresh token (user-based auth)
const result = await uploadFiles({
storage: {
provider: 'gcs',
config: {
bucket: 'your-bucket',
refreshToken: 'user-refresh-token',
clientId: 'oauth-client-id',
clientSecret: 'oauth-client-secret'
}
},
files: ['uploads/**/*']
})
GCS Authentication Priority:
1. Access Token (direct token)
2. OAuth Refresh Token (auto-refresh)
3. Service Account Credentials (JSON)
4. Service Account Key File
5. Application Default Credentials
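The remaining authentication methods use the same config shape. A minimal sketch, following the priority order above (the GCS_ACCESS_TOKEN variable name is just a placeholder):
// 1. Direct OAuth access token (highest priority)
const tokenResult = await uploadFiles({
  storage: {
    provider: 'gcs',
    config: {
      bucket: 'your-bucket',
      accessToken: process.env.GCS_ACCESS_TOKEN // placeholder env var
    }
  },
  files: ['uploads/**/*']
})
// 5. No explicit credentials: falls back to Application Default Credentials
const adcResult = await uploadFiles({
  storage: { provider: 'gcs', config: { bucket: 'your-bucket' } },
  files: ['uploads/**/*']
})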
Firebase Storage
const result = await uploadFiles({
storage: {
provider: 'firebase',
config: {
bucket: 'your-project.appspot.com',
credentials: JSON.parse(process.env.FIREBASE_CREDENTIALS),
basePath: 'uploads'
}
},
files: ['images/**/*.{jpg,png}']
})
Cloudflare R2
const result = await uploadFiles({
storage: {
provider: 'r2',
config: {
accountId: 'your-account-id',
bucket: 'your-bucket',
accessKeyId: process.env.R2_ACCESS_KEY_ID,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
basePath: 'uploads',
publicUrl: 'https://cdn.example.com' // Optional custom domain
}
},
files: ['dist/**/*']
})
DigitalOcean Spaces
const result = await uploadFiles({
storage: {
provider: 'do-spaces',
config: {
region: 'nyc3', // nyc3, sfo3, ams3, sgp1, fra1, syd1
bucket: 'your-space',
accessKeyId: process.env.DO_SPACES_KEY,
secretAccessKey: process.env.DO_SPACES_SECRET,
basePath: 'uploads',
acl: 'public-read'
}
},
files: ['static/**/*']
})
GitHub Actions Artifacts
// Only works within GitHub Actions environment
const result = await uploadFiles({
storage: {
provider: 'github',
config: {
artifactName: 'build-artifacts',
retentionDays: 30,
basePath: 'dist'
}
},
files: ['dist/**/*', 'coverage/**/*']
})
Download Files
import { downloadFiles, quickDownload } from '@yofix/storage'
// Download multiple files
const result = await downloadFiles({
storage: {
provider: 's3',
config: { region: 'us-east-1', bucket: 'my-bucket' }
},
files: ['config.json', 'data/users.json'],
verbose: true
})
// Access downloaded content
result.files.forEach(file => {
console.log(`Downloaded: ${file.remotePath}`)
console.log(`Size: ${file.size} bytes`)
console.log(`Content: ${file.buffer.toString()}`)
})
// Quick download single file
const buffer = await quickDownload(
{ provider: 's3', config: { region: 'us-east-1', bucket: 'my-bucket' } },
{ path: 'config.json' }
)
Connection Pooling (Provider Registry)
By default, providers use singleton mode with connection reuse for better performance:
import { StorageManager } from '@yofix/storage'
// Multiple uploads reuse the same S3 client
await uploadFiles({ storage: s3Config, files: batch1 }) // Creates client
await uploadFiles({ storage: s3Config, files: batch2 }) // Reuses client!
// Configure registry behavior
StorageManager.configure({
singleton: true, // Enable/disable singleton mode (default: true)
autoCleanup: true, // Auto-cleanup idle connections
idleTTL: 5 * 60 * 1000, // 5 minutes idle timeout
cleanupInterval: 60 * 1000 // Check every minute
})
// Disable singleton mode (original per-operation behavior)
StorageManager.configure({ singleton: false })
// Manual cleanup
await StorageManager.cleanup() // Cleanup all providers
await StorageManager.cleanupProvider(s3Config) // Cleanup specific provider
await StorageManager.cleanupIdle(60000) // Cleanup idle > 1 minute
// Get connection statistics
const stats = StorageManager.getStats()
// { totalProviders: 2, activeProviders: 1, idleProviders: 1, providers: [...] }
StorageManager Class
import { StorageManager } from '@yofix/storage'
const manager = new StorageManager({
storage: { provider: 'local', config: { directory: './uploads' } },
files: ['*.json'],
verbose: true
})
// Upload
const uploadResult = await manager.upload()
// Download
const downloadResult = await manager.download(['config.json', 'data.json'])
// File operations
const exists = await manager.fileExists('package.json')
const files = await manager.listFiles('uploads/')
const objects = await manager.listObjects({ prefix: 'images/', maxKeys: 100 })
const size = await manager.getFileSize('large-file.zip')
const metadata = await manager.getFileMetadata('document.pdf')
await manager.deleteFile('old-file.txt')
// Direct data upload
const path = await manager.uploadData({
path: 'data.json',
data: Buffer.from(JSON.stringify({ key: 'value' })),
contentType: 'application/json'
})
// Generate signed URLs (S3, GCS, Firebase, R2, DO Spaces)
const downloadUrl = await manager.getSignedUrl('documents/report.pdf', {
action: 'read',
expires: 3600 // 1 hour in seconds
})
const uploadUrl = await manager.getSignedUrl('uploads/new-file.pdf', {
action: 'write',
expires: new Date(Date.now() + 15 * 60 * 1000), // 15 minutes
contentType: 'application/pdf'
})
Provider Configuration
GitHub Actions
interface GitHubConfig {
artifactName?: string // Default: 'storage-artifacts'
retentionDays?: number // 1-90, Default: 90
basePath?: string
}
Firebase Storage
interface FirebaseConfig {
projectId?: string
credentials?: string | object // Service account JSON (supports base64)
bucket: string // Required
basePath?: string
}
AWS S3
interface S3Config {
region: string // Required
bucket: string // Required
accessKeyId?: string
secretAccessKey?: string
basePath?: string
acl?: string // 'public-read', 'private', etc.
endpoint?: string // For S3-compatible services
}
Google Cloud Storage
interface GCSConfig {
bucket: string // Required
projectId?: string
keyFilename?: string // Service account key file path
credentials?: string | object
accessToken?: string // OAuth access token
refreshToken?: string // OAuth refresh token
clientId?: string // Required with refreshToken
clientSecret?: string // Required with refreshToken
basePath?: string
}
Cloudflare R2
interface R2Config {
accountId: string // Required
bucket: string // Required
accessKeyId: string // Required
secretAccessKey: string // Required
basePath?: string
publicUrl?: string // Custom domain for public buckets
}
DigitalOcean Spaces
interface DOSpacesConfig {
region: string // Required: nyc3, sfo3, ams3, sgp1, fra1, syd1
bucket: string // Required
accessKeyId: string // Required
secretAccessKey: string // Required
basePath?: string
acl?: string // 'public-read', 'private', etc.
}
Local Storage
interface LocalConfig {
directory: string // Required
createIfNotExists?: boolean // Default: true
basePath?: string
}
Signed URLs
Generate pre-signed URLs for temporary access to files without exposing credentials. Useful for:
- Direct browser uploads/downloads
- Sharing temporary access links
- Secure file transfers
import { StorageManager } from '@yofix/storage'
const manager = new StorageManager({
storage: { provider: 's3', config: { region: 'us-east-1', bucket: 'my-bucket' } }
})
// Generate a download URL (expires in 1 hour)
const downloadUrl = await manager.getSignedUrl('reports/annual.pdf', {
action: 'read',
expires: 3600 // seconds
})
// Generate an upload URL (expires in 15 minutes)
const uploadUrl = await manager.getSignedUrl('uploads/user-file.pdf', {
action: 'write',
expires: new Date(Date.now() + 15 * 60 * 1000),
contentType: 'application/pdf' // Required for write on some providers
})
// Use with fetch for direct upload
await fetch(uploadUrl, {
method: 'PUT',
headers: { 'Content-Type': 'application/pdf' },
body: fileBuffer
})
Supported Providers: S3, GCS, Firebase, R2, DO Spaces
Options:
| Option | Type | Description |
|--------|------|-------------|
| action | 'read' \| 'write' | Download or upload access |
| expires | Date \| number | Expiration as a Date or seconds from now |
| contentType | string | MIME type (required for write on some providers) |
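A read URL can then be fetched directly, with no SDK or credentials on the client. A minimal sketch reusing the downloadUrl generated above:
// Download via the pre-signed read URL
const response = await fetch(downloadUrl)
if (!response.ok) throw new Error(`Download failed: ${response.status}`)
const bytes = await response.arrayBuffer()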
Feature Matrix
| Feature | GitHub | Firebase | S3 | GCS | R2 | DO Spaces | Local |
|---------|:------:|:--------:|:--:|:---:|:--:|:---------:|:-----:|
| Upload File | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Upload Data | - | Yes | Yes | Yes | Yes | Yes | Yes |
| Download | - | Yes | Yes | Yes | Yes | Yes | Yes |
| File Exists | - | Yes | Yes | Yes | Yes | Yes | Yes |
| Delete File | - | Yes | Yes | Yes | Yes | Yes | Yes |
| List Files | - | Yes | Yes | Yes | Yes | Yes | Yes |
| Get Metadata | - | Yes | Yes | Yes | Yes | Yes | Yes |
| OAuth Support | - | - | - | Yes | - | - | - |
| Multipart Upload | - | - | Yes | - | Yes | Yes | - |
| Signed URLs | - | Yes | Yes | Yes | Yes | Yes | - |
| CDN Support | - | - | - | - | - | Yes | - |
| Base Path | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
GitHub Action Usage
name: Upload Artifacts
on:
push:
branches: [main]
jobs:
upload:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build
run: npm run build
- name: Upload to GitHub Artifacts
uses: yofix/storage@main
with:
provider: github
files: 'dist/**/*,coverage/**/*'
artifact-name: 'build-artifacts'
retention-days: 30
verbose: true
- name: Upload to S3
uses: yofix/storage@main
with:
provider: s3
files: 'dist/**/*'
s3-region: us-east-1
s3-bucket: my-bucket
s3-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
s3-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
s3-base-path: builds/${{ github.sha }}
s3-acl: public-read
verbose: true
TypeScript
Full TypeScript support with comprehensive type definitions:
import type {
// Core
StorageProvider,
StorageOptions,
StorageResult,
StorageError,
ProviderConfig,
// Files
FileToUpload,
DataToUpload,
UploadedFile,
UploadProgress,
FileToDownload,
DownloadedFile,
DownloadOptions,
DownloadResult,
// Metadata
FileMetadata,
StorageObject,
ListOptions,
SignedUrlOptions,
// Provider configs
GitHubConfig,
FirebaseConfig,
S3Config,
GCSConfig,
R2Config,
DOSpacesConfig,
LocalConfig,
// Provider interface
IStorageProvider,
// Registry
RegistryOptions
} from '@yofix/storage'
Error Handling
const result = await uploadFiles({...})
if (!result.success) {
result.errors?.forEach(error => {
console.error(`[${error.code}] ${error.message}`)
if (error.file) {
console.error(`File: ${error.file}`)
}
})
}
Environment Variables
# AWS S3
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
# Google Cloud Storage (OAuth)
export GOOGLE_CLIENT_ID=your-client-id
export GOOGLE_CLIENT_SECRET=your-client-secret
# Firebase
export FIREBASE_CREDENTIALS='{"type":"service_account",...}'
# Cloudflare R2
export R2_ACCESS_KEY_ID=your-r2-key
export R2_SECRET_ACCESS_KEY=your-r2-secret
# DigitalOcean Spaces
export DO_SPACES_KEY=your-spaces-key
export DO_SPACES_SECRET=your-spaces-secret
License
MIT
Contributing
Contributions are welcome! Please open an issue or submit a pull request on GitHub.
