# Flowhash Core — Verifiable Event SDK

`@flowhash/core` is a TypeScript SDK for building verifiable event systems with cryptographic proofs, Merkle trees, and pluggable storage/anchoring.
## Overview
Flowhash Core provides the foundational infrastructure for creating tamper-evident event logs with:
- Cryptographic hashing: Deterministic SHA-256 hashes using canonical JSON
- Digital signatures: ECDSA P-256 signing and verification for credentials
- Merkle trees: Balanced trees with verifiable proofs
- Pluggable storage: Filesystem, Nextcloud (WebDAV), S3, and IPFS adapters
- Blockchain anchoring: Support for anchoring Merkle roots to L2 EVM chains
- Verifiable credentials: W3C-compatible credential creation and verification
## Architecture

```text
+-------------------------------------------------------------+
|                      Application Layer                      |
|                (Flowhash Pass, Wallet, etc.)                 |
+-------------------------------------------------------------+
                               |
                               v
+-------------------------------------------------------------+
|                     @flowhash/core SDK                      |
+-----------------+------------------+------------------------+
|  Event Hashing  |   Merkle Trees   |   Credential Signing   |
|    & Storage    |     & Proofs     |     & Verification     |
+-----------------+------------------+------------------------+
                               |
                     +---------+----------+
                     v                    v
             +---------------+     +--------------+
             |    Storage    |     |    Anchor    |
             |    Adapters   |     |   Adapters   |
             +---------------+     +--------------+
             | - Filesystem  |     | - None       |
             | - Nextcloud   |     | - L2 EVM     |
             | - S3          |     |              |
             | - IPFS        |     |              |
             +---------------+     +--------------+
```

## Installation

```bash
npm install @flowhash/core
```

## Quick Start
### 1. Hash an Event
```typescript
import { hashEvent } from '@flowhash/core';

const event = {
  id: 'evt-001',
  ticketId: 'TO-123',
  timestamp: '2025-10-17T10:00:00Z',
  eventType: 'status_change',
  actor: '[email protected]',
  data: { from: 'open', to: 'closed' }
};

const hash = hashEvent(event);
console.log(hash); // Deterministic SHA-256 hash
```

### 2. Store Events
```typescript
import { FSStorageAdapter, appendEvent } from '@flowhash/core';

const storage = new FSStorageAdapter('./data');
await appendEvent('TO-123', event, storage);
// Creates: ./data/Flowhash/TO-123/events.jsonl
```

### 3. Build Daily Merkle Tree
```typescript
import { dailyMerkle, NoneAnchorAdapter } from '@flowhash/core';

const anchor = new NoneAnchorAdapter();
const record = await dailyMerkle('2025-10-17', storage, anchor);
console.log(record.root);  // Merkle root
console.log(record.count); // Number of events
// Writes: ./data/Flowhash/merkle/2025-10-17.json
```

### 4. Verify Merkle Proofs
```typescript
import { buildMerkle, getProof, verifyProof } from '@flowhash/core';

const hashes = ['hash1', 'hash2', 'hash3'];
const tree = buildMerkle(hashes);
const proof = getProof(tree, 'hash1');
const isValid = verifyProof(proof);
console.log(isValid); // true
```

### 5. Sign and Verify Credentials
```typescript
import {
  generateKeyPair,
  signCredential,
  verifyCredential
} from '@flowhash/core';

const { privateKey, publicKey } = generateKeyPair();

const credential = {
  '@context': ['https://www.w3.org/2018/credentials/v1'],
  type: ['VerifiableCredential'],
  id: 'urn:uuid:test-123',
  issuer: 'did:example:issuer',
  issuanceDate: '2025-10-17T10:00:00Z',
  credentialSubject: {
    id: 'did:example:subject',
    claim: 'value'
  }
};

const signed = await signCredential(
  credential,
  privateKey,
  'did:example:issuer#key-1'
);

const isValid = await verifyCredential(signed, publicKey);
console.log(isValid); // true
```

## Storage Adapters
### Filesystem

```typescript
import { FSStorageAdapter } from '@flowhash/core';

const storage = new FSStorageAdapter('./flowhash-data');
```

### Nextcloud (WebDAV) — Canonical v0.1 Production Backend

Nextcloud is the canonical storage backend for Flowhash v0.1 (Grupo Amigo pilot).
```typescript
import {
  NextcloudStorageAdapter,
  ConsoleStorageObserver
} from '@flowhash/core';

const storage = new NextcloudStorageAdapter({
  url: 'https://cloud.example.com/remote.php/dav/files/username',
  username: 'alice',
  password: 'secret',
  basePath: '/Flowhash', // Default: /Flowhash

  // Optional: observability for production monitoring
  observer: new ConsoleStorageObserver(false), // or implement a custom observer

  // Optional: actor identification for audit logging
  actor: '[email protected]',

  // Optional: file size limits (defaults shown)
  maxFileSizeBytes: 100 * 1024 * 1024,  // 100 MB max file size
  maxAppendSizeBytes: 10 * 1024 * 1024, // 10 MB max per append
});
```

**Canonical v0.1 Directory Layout:**

- Base path: `/Flowhash` (default)
- Event logs: `/Flowhash/{ticketId}/events.jsonl`
- Daily Merkle records: `/Flowhash/merkle/{YYYY-MM-DD}.json`
**Health Check:**

Before deploying to production, validate your Nextcloud configuration:

```bash
npx flowhash-doctor nextcloud \
  --url https://cloud.example.com/remote.php/dav/files/username \
  --username alice \
  --password secret123
```

Or use environment variables:

```bash
export FLOWHASH_NC_URL=https://cloud.example.com/remote.php/dav/files/username
export FLOWHASH_NC_USERNAME=alice
export FLOWHASH_NC_PASSWORD=secret123
npx flowhash-doctor nextcloud
```

The doctor performs:

- ✅ Connectivity and authentication check
- ✅ Write/read cycle test
- ✅ Daily Merkle generation test
- ✅ End-to-end pipeline verification
**Error Handling:**

The adapter provides typed error classes for robust error handling:

- `StorageConnectionError` - Network/connection failures
- `StorageAuthError` - Authentication failures (401/403)
- `StorageWriteError` - Write operation failures
- `StorageReadError` - Read operation failures

```typescript
import {
  NextcloudStorageAdapter,
  StorageAuthError,
  StorageConnectionError
} from '@flowhash/core';

try {
  await storage.put('path/to/file', data);
} catch (error) {
  if (error instanceof StorageAuthError) {
    console.error('Authentication failed - check credentials');
  } else if (error instanceof StorageConnectionError) {
    console.error('Cannot reach Nextcloud server');
  }
}
```

**Observability & Monitoring:**
Implement a custom observer for production monitoring:

```typescript
import { StorageObserver, StorageMetrics, AuditLogEntry } from '@flowhash/core';

class ProductionObserver implements StorageObserver {
  log(level, message, context) {
    // Send to your logging service (Datadog, CloudWatch, etc.)
  }

  recordMetrics(metrics: StorageMetrics) {
    // Send to your metrics service (Prometheus, Grafana, etc.)
    // Track: operation latency, bytes read/written, success rate
  }

  recordAudit(entry: AuditLogEntry) {
    // Send to compliance/audit logging system
    // Includes: actor, operation, path, timestamp, success/failure
  }
}
```

**Performance Characteristics:**

- ⚠️ `append()` uses read-modify-write (O(n²) for many appends)
- ⚠️ `append()` is NOT atomic; concurrent appends cause race conditions
- ✅ File size limits prevent OOM (100 MB max file, 10 MB max append by default)
- ✅ All operations include metrics: duration, bytes, success/failure
- ✅ Audit logging for compliance (GDPR, data access tracking)
Best Practices:
// ✅ GOOD: Small, infrequent appends (typical JSONL event logging)
await storage.append('ticket-123/events.jsonl', Buffer.from(jsonLine + '\n'));
// ⚠️ AVOID: Many rapid appends to same file (use batching)
for (let i = 0; i < 1000; i++) {
await storage.append('file.txt', small_chunk); // Inefficient!
}
// ✅ BETTER: Batch appends into single write
const batch = chunks.join('\n') + '\n';
await storage.append('file.txt', Buffer.from(batch));
// ⚠️ AVOID: Large appends (>10MB triggers error)
await storage.append('file.txt', hugeBuffer); // Will throw error
// ✅ BETTER: Use put() for large files
await storage.put('large-file.bin', hugeBuffer);Security & Compliance:
```typescript
import {
  NextcloudStorageAdapter,
  FieldLevelSanitizer,
  HashBasedSanitizer,
  ConsoleStorageObserver
} from '@flowhash/core';

// PII/PHI protection with field-level sanitization
const storage = new NextcloudStorageAdapter({
  url: process.env.NEXTCLOUD_URL,
  username: process.env.NEXTCLOUD_USERNAME,
  password: process.env.NEXTCLOUD_PASSWORD,

  // Automatic PII masking
  dataSanitizer: new FieldLevelSanitizer([
    'email',
    'phone',
    'ssn',
    'creditCard',
    'password',
  ]),

  // Audit logging for compliance (GDPR, HIPAA)
  observer: new ProductionObserver(),
  actor: req.user.email, // Track who accessed the data
});

// Pseudonymization for data analysis
const pseudonymizedStorage = new NextcloudStorageAdapter({
  url: process.env.NEXTCLOUD_URL,
  username: process.env.NEXTCLOUD_USERNAME,
  password: process.env.NEXTCLOUD_PASSWORD,
  dataSanitizer: new HashBasedSanitizer(['userId', 'email']),
  // Allows correlation while protecting actual values
});
```

**Path Security:**

All paths are automatically sanitized to prevent traversal attacks:

```typescript
// ✅ SAFE: normal paths
await storage.put('ticket-123/events.jsonl', data); // OK

// ❌ BLOCKED: path traversal attempts
await storage.put('../../../etc/passwd', data);    // Throws
await storage.put('ticket/../admin/secret', data); // Throws
await storage.put('ticket\0hidden', data);         // Throws (null byte)
```

**Data Governance:**
See DATA-GOVERNANCE.md for:

- GDPR compliance guidance (data retention, right to erasure)
- HIPAA compliance recommendations
- Encryption-at-rest strategies
- PII/PHI filtering patterns
- Data retention policies

**Advanced Production Patterns:**

See ADVANCED-PATTERNS.md for:

- Batch append optimization (100x faster for bulk writes; see the sketch below)
- WebDAV connection pooling configuration
- Distributed locking patterns (Redis/Database)
- Client-side encryption wrapper implementation
- When non-atomic appends are acceptable (safety analysis)
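As a rough illustration of the batch-append pattern, the wrapper below buffers JSONL lines in memory and flushes them as a single `append()` call, so a burst of events costs one read-modify-write cycle instead of one per line. `BatchingAppender` and its `maxLines` threshold are hypothetical and are not part of the SDK; see ADVANCED-PATTERNS.md for the production version.

```typescript
import { NextcloudStorageAdapter } from '@flowhash/core';

// Hypothetical helper: coalesces many small JSONL lines into one append().
class BatchingAppender {
  private buffer: string[] = [];

  constructor(
    private storage: NextcloudStorageAdapter,
    private path: string,
    private maxLines = 100 // flush threshold (assumed; tune for your workload)
  ) {}

  async add(line: string): Promise<void> {
    this.buffer.push(line);
    if (this.buffer.length >= this.maxLines) {
      await this.flush();
    }
  }

  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.join('\n') + '\n';
    this.buffer = [];
    // One read-modify-write cycle instead of one per line
    await this.storage.append(this.path, Buffer.from(batch));
  }
}
```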
### S3 (Stub)

```typescript
import { S3StorageAdapter } from '@flowhash/core';

// TODO: Implement with AWS SDK v3
const storage = new S3StorageAdapter({
  bucket: 'flowhash-data',
  region: 'us-east-1',
  accessKeyId: 'ACCESS_KEY',
  secretAccessKey: 'SECRET_KEY'
});
```

### IPFS (Stub)

```typescript
import { IPFSStorageAdapter } from '@flowhash/core';

// TODO: Implement with IPFS HTTP client
const storage = new IPFSStorageAdapter({
  apiUrl: 'http://127.0.0.1:5001'
});
```

## Anchor Adapters
### None (Local Storage Only)

```typescript
import { NoneAnchorAdapter } from '@flowhash/core';

const anchor = new NoneAnchorAdapter();
// Records the Merkle root to storage without blockchain anchoring
```

### L2 EVM (Stub)

```typescript
import { L2EVMAnchorAdapter } from '@flowhash/core';

// TODO: Implement with ethers.js or viem
const anchor = new L2EVMAnchorAdapter({
  rpcUrl: 'https://rpc.l2.example.com',
  contractAddress: '0x...',
  privateKey: '0x...',
  chainId: 10
});
```

## API Reference
### Events

- `hashEvent(event: FlowhashEvent): string` - Hash an event deterministically
- `appendEvent(ticketId: string, event: FlowhashEvent, storage: StorageAdapter): Promise<string>` - Append an event to storage
- `readEvents(ticketId: string, storage: StorageAdapter): Promise<FlowhashEvent[]>` - Read all events for a ticket
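For example, a minimal sketch of reading a ticket's event log back out (continuing the Quick Start example; `storage` can be any configured adapter):

```typescript
import { readEvents, FSStorageAdapter } from '@flowhash/core';

const storage = new FSStorageAdapter('./data');

// Read back every event appended for ticket TO-123
const events = await readEvents('TO-123', storage);
console.log(events.length, events[0]?.eventType);
```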
### Merkle Trees

- `buildMerkle(hashes: string[]): MerkleTree` - Build a Merkle tree from hashes
- `verifyProof(proof: MerkleProof): boolean` - Verify a Merkle proof
- `getProof(tree: MerkleTree, leafHash: string): MerkleProof | undefined` - Get the proof for a leaf
### Anchoring

- `dailyMerkle(date: string, storage: StorageAdapter, anchor: AnchorAdapter): Promise<DailyMerkleRecord>` - Build and anchor the daily Merkle tree
- `readDailyMerkle(date: string, storage: StorageAdapter): Promise<DailyMerkleRecord | null>` - Read a daily Merkle record
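A minimal sketch of reading back an anchored record (assuming the record carries the fields shown in the `AnchorRecord` shape below; `storage` is any configured adapter):

```typescript
import { readDailyMerkle, FSStorageAdapter } from '@flowhash/core';

const storage = new FSStorageAdapter('./data');

// Returns null if no record was written for that day
const record = await readDailyMerkle('2025-10-17', storage);
if (record) {
  console.log(record.root, record.count, record.anchorType);
}
```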
### Credentials

- `generateKeyPair(): { privateKey: string; publicKey: string }` - Generate an ECDSA P-256 key pair
- `signCredential(credential: FHCredential, privateKey: string, verificationMethod: string): Promise<FHCredential>` - Sign a credential
- `verifyCredential(credential: FHCredential, publicKey: string): Promise<boolean>` - Verify a signed credential
### Hashing

- `hashObject(obj: unknown): string` - Hash any object with canonical JSON
- `hashString(str: string): string` - Hash a string
- `canonicalJSON(obj: unknown): string` - Serialize to canonical JSON
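Because hashing goes through canonical JSON, key order should not affect the digest; a quick sketch of that property:

```typescript
import { hashObject, canonicalJSON } from '@flowhash/core';

// Same logical object, different key order
const a = { from: 'open', to: 'closed' };
const b = { to: 'closed', from: 'open' };

console.log(canonicalJSON(a) === canonicalJSON(b)); // true: keys serialized in canonical order
console.log(hashObject(a) === hashObject(b));       // true: identical SHA-256 digest
```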
## Data Structures

### FlowhashEvent

```typescript
interface FlowhashEvent {
  id: string;
  ticketId: string;
  timestamp: string; // ISO 8601
  eventType: string;
  actor: string;
  data: Record<string, unknown>;
  metadata?: Record<string, unknown>;
}
```

### AnchorRecord

```typescript
interface AnchorRecord {
  date: string;      // YYYY-MM-DD
  root: string;      // Merkle root hash
  count: number;     // Number of events
  timestamp: string; // ISO 8601
  anchorType: 'none' | 'l2evm';
  txHash?: string;   // Blockchain transaction hash
  blockNumber?: number;
  chainId?: number;
}
```

### MerkleProof

```typescript
interface MerkleProof {
  leaf: string;
  path: Array<{
    hash: string;
    position: 'left' | 'right';
  }>;
  root: string;
}
```

## Directory Structure
When using the filesystem adapter, data is organized as:

```text
Flowhash/
  TO-123/
    events.jsonl      # Event log for ticket TO-123
  TO-456/
    events.jsonl      # Event log for ticket TO-456
  merkle/
    2025-10-17.json   # Daily Merkle record
    2025-10-18.json
```

## Development

### Build

```bash
npm run build
```

### Test

```bash
npm test
```

### Run Example

```bash
npm run build
node examples/usage.js
```

## Testing
The test suite includes:
- Deterministic hashing: the same input always produces the same hash (see the sketch below)
- Merkle proof verification: All proofs verify correctly
- Credential signing: Sign/verify cycle works with ECDSA keys
- Storage operations: Events append and read correctly
- Daily Merkle generation: Aggregates events and anchors root
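As an illustration of the determinism property, a framework-agnostic check using Node's built-in `assert` (a sketch, not taken from the actual suite):

```typescript
import assert from 'node:assert';
import { hashEvent } from '@flowhash/core';

const event = {
  id: 'evt-001',
  ticketId: 'TO-123',
  timestamp: '2025-10-17T10:00:00Z',
  eventType: 'status_change',
  actor: '[email protected]',
  data: { from: 'open', to: 'closed' }
};

// Hashing the same event twice must yield the identical digest
assert.strictEqual(hashEvent(event), hashEvent({ ...event }));
```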
Run tests:

```bash
npm test
```

## Acceptance Criteria
- [x] `npm run test` passes all test cases
- [x] `hashEvent()` produces a deterministic SHA-256 digest
- [x] `appendEvent()` creates `/Flowhash/TO-123/events.jsonl`
- [x] `dailyMerkle()` writes `/Flowhash/merkle/YYYY-MM-DD.json`
- [x] Sign/verify cycle works with ECDSA P-256 keys
- [x] Merkle proofs verify correctly
- [x] Example script outputs a deterministic Merkle root
## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Roadmap

- [x] ✅ Nextcloud storage adapter (v0.1 production-ready)
- [x] ✅ CLI health check tool (`flowhash-doctor`)
- [ ] Complete S3 storage adapter implementation
- [ ] Complete IPFS storage adapter implementation
- [ ] Complete L2 EVM anchor adapter with ethers.js
- [ ] Add support for multiple blockchain networks
- [ ] Add GraphQL API for querying events
## License

Apache-2.0

Copyright 2025 Flowhash Contributors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
## Support
- Documentation: https://github.com/flowhash/core
- Issues: https://github.com/flowhash/core/issues
- Discussions: https://github.com/flowhash/core/discussions
