@trillboards/edge-federated
v0.2.1
On-device federated learning, gradient upload, and VAS attestation for Trillboards Edge AI SDK
Privacy-preserving federated learning for DOOH devices. Train models on-device from audience and contextual signals, upload only sparse gradients. No raw data leaves the device.
Install
```shell
npm install @trillboards/edge-federated
```
What This Does
Enables on-device model training that improves audience prediction accuracy over time while preserving viewer privacy:
- Federated Trainer — accumulates training samples from audience sensing, computes gradients locally, and uploads only the top 2% of gradients by magnitude every 5 minutes
- Model Manager — manages local model versions, checks cloud for global model updates, handles serialization and version reconciliation
- Slice Context — training is partitioned by venue type, daypart, geography, and device profile for fine-grained model personalization
Usage
```typescript
import { FederatedTrainer, ModelManager } from '@trillboards/edge-federated';

// Initialize trainer with slice context
const trainer = new FederatedTrainer({
  modelType: 'audience_prediction',
  sliceContext: {
    venueType: 'retail',
    daypart: 'afternoon',
    geo: 'us-east',
    deviceProfile: 'tier_3'
  }
});

// Feed training samples from audience sensing
trainer.addSample({
  features: [faceCount, avgAttention, dwellTime, emotionScore],
  label: impressionEngagement
});

// Trainer auto-uploads sparse gradients every 5 minutes
// once the MIN_SAMPLES_FOR_UPLOAD (10) threshold is met
await trainer.start();

// Check for global model updates
const modelManager = new ModelManager();
const update = await modelManager.checkForUpdate('audience_prediction');
if (update.available) {
  await modelManager.download(update);
}
```
How It Works
- On-device training — model trains on local audience data (face count, attention, emotion, dwell time)
- Sparse gradient extraction — only the top 2% most significant gradients are selected (top-K sparsification)
- Gradient upload — compressed gradients sent to cloud every 5 minutes (configurable)
- Global aggregation — cloud aggregates gradients from all devices to update the global model
- Model distribution — updated global model pushed back to devices
Raw audience data never leaves the device. Only mathematical gradient values are transmitted.
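The top-K sparsification step above can be sketched as follows. This is an illustrative implementation of the general technique, not the SDK's actual internals: the `SparseGradient` shape and `topKSparsify` function are assumptions for the example.

```typescript
// Illustrative top-K sparsification: keep only the largest-magnitude
// fraction of gradient entries, as index/value pairs for upload.
interface SparseGradient {
  index: number;
  value: number;
}

function topKSparsify(gradients: number[], topKRatio: number): SparseGradient[] {
  const k = Math.max(1, Math.floor(gradients.length * topKRatio));
  return gradients
    .map((value, index) => ({ index, value }))
    .sort((a, b) => Math.abs(b.value) - Math.abs(a.value)) // largest magnitude first
    .slice(0, k)                                           // keep only top K
    .sort((a, b) => a.index - b.index);                    // restore index order
}

// With TOP_K_RATIO = 0.02, a 1,000-entry gradient vector yields only 20 entries.
const dense = Array.from({ length: 1000 }, () => Math.random() - 0.5);
const sparse = topKSparsify(dense, 0.02);
console.log(sparse.length); // 20
```

Because only index/value pairs for the most significant weights are transmitted, the upload payload shrinks by roughly 50× at the default 2% ratio.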
Configuration
| Parameter | Default | Description |
|-----------|---------|-------------|
| UPLOAD_INTERVAL_MS | 300,000 ms (5 min) | Gradient upload cadence |
| TOP_K_RATIO | 0.02 (2%) | Fraction of gradients to upload |
| MIN_SAMPLES_FOR_UPLOAD | 10 | Minimum samples before upload |
| MAX_GRADIENT_BUFFER_SIZE | 50,000 | Max buffered gradients |
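The README does not document how these defaults are overridden. A plausible sketch, assuming the `FederatedTrainer` constructor accepts camelCase options mirroring the table (these option names are hypothetical, not confirmed API):

```typescript
import { FederatedTrainer } from '@trillboards/edge-federated';

// Hypothetical config override — option names are assumptions based on the
// parameter table above, not a documented interface.
const trainer = new FederatedTrainer({
  modelType: 'audience_prediction',
  sliceContext: {
    venueType: 'retail',
    daypart: 'afternoon',
    geo: 'us-east',
    deviceProfile: 'tier_3'
  },
  uploadIntervalMs: 600_000,     // upload every 10 minutes instead of 5
  topKRatio: 0.01,               // upload only the top 1% of gradients
  minSamplesForUpload: 25,       // require more samples before uploading
  maxGradientBufferSize: 25_000  // smaller buffer for low-memory devices
});
```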
License
MIT
