citrate-js
v0.1.3
JavaScript/TypeScript SDK for the Citrate AI blockchain platform
Citrate JavaScript/TypeScript SDK
A comprehensive JavaScript/TypeScript SDK for interacting with the Citrate AI blockchain platform. Deploy AI models, execute inferences, manage encryption, and handle payments with ease.
Features
- 🚀 Full TypeScript Support: Complete type definitions for all APIs
- 🔐 End-to-End Encryption: Secure model weights and inference data
- 💰 Built-in Payments: Pay-per-use pricing and revenue sharing
- 🌐 Web3 Integration: MetaMask and WalletConnect support
- ⚡ Real-time Streaming: WebSocket support for live inference
- ⚛️ React Hooks: Optional React integration for web apps
- 🔧 Multi-format Models: CoreML, ONNX, TensorFlow, PyTorch support
Installation
npm install citrate-js
# or
yarn add citrate-js

Prerequisites
Before using the SDK, ensure you have a running Citrate node with AI models pinned:
1. Install Citrate Node
# Clone and build the node
git clone https://github.com/citrate-ai/citrate.git
cd citrate
cargo build --release

2. Setup IPFS
# Install and start IPFS
brew install ipfs # macOS
ipfs init
ipfs daemon &

3. Pin Required AI Models
Citrate requires AI models to be pinned for inference. Use the automated CLI:
# Automatically download and pin all required models (~4.7 GB total)
./target/release/citrate model auto-pin
# Verify models are pinned
./target/release/citrate model list

Models included:
- BGE-M3 (437 MB): Embeddings for semantic search
- Mistral 7B (4.3 GB): LLM for chat and completions
4. Start the Node
# Run local devnet
./target/release/citrate devnet
# Or connect to testnet
./target/release/citrate --config config/testnet.toml

Your node will be available at http://localhost:8545 for the SDK to connect to.
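Before pointing the SDK at the node, it can help to confirm the RPC endpoint is actually reachable. A minimal liveness probe, assuming the node exposes an Ethereum-style JSON-RPC endpoint on port 8545 (the `eth_blockNumber` method is an assumption, not confirmed by this README):

```typescript
// Liveness probe for a local Citrate node. Assumes an Ethereum-style
// JSON-RPC endpoint; the eth_blockNumber method name is an assumption.
const rpcRequest = { jsonrpc: '2.0', id: 1, method: 'eth_blockNumber', params: [] };

async function nodeIsUp(url: string): Promise<boolean> {
  try {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify(rpcRequest),
    });
    return res.ok; // any 2xx response means the RPC server answered
  } catch {
    return false; // connection refused: node not running
  }
}

// Usage: await nodeIsUp('http://localhost:8545')
```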
Quick Start
Basic Usage
import { CitrateClient, ModelConfig, ModelType, AccessType } from 'citrate-js';
// Connect to Citrate network
const client = new CitrateClient({
rpcUrl: 'https://mainnet.citrate.ai',
privateKey: 'your-private-key' // optional
});
// Deploy a model
const modelData = new Uint8Array(/* your model bytes */);
const config: ModelConfig = {
name: 'My AI Model',
modelType: ModelType.COREML,
accessType: AccessType.PAID,
accessPrice: 100000000000000000n, // 0.1 ETH in wei
encrypted: true
};
const deployment = await client.deployModel(modelData, config);
console.log('Model deployed:', deployment.modelId);
// Execute inference
const result = await client.inference({
modelId: deployment.modelId,
inputData: { text: 'Hello, AI!' }
});
console.log('AI Response:', result.outputData);

React Integration
import React from 'react';
import { useCitrateClient, useInference } from 'citrate-js';
function AIChat() {
const { client, isConnected } = useCitrateClient({
rpcUrl: 'https://mainnet.citrate.ai',
autoConnect: true
});
const { execute, result, isExecuting } = useInference(client);
const handleSubmit = async (input: string) => {
await execute({
modelId: 'your-model-id',
inputData: { prompt: input }
});
};
if (!isConnected) return <div>Connecting...</div>;
return (
<div>
<button
onClick={() => handleSubmit('Hello')}
disabled={isExecuting}
>
{isExecuting ? 'Processing...' : 'Send Message'}
</button>
{result && (
<div>Response: {result.outputData.text}</div>
)}
</div>
);
}

Real-time Streaming
import { WebSocketClient } from 'citrate-js';
const wsClient = new WebSocketClient({
url: 'wss://mainnet.citrate.ai/ws'
});
await wsClient.connect();
// Start streaming inference
await wsClient.startStreamingInference({
modelId: 'text-generation-model',
inputData: { prompt: 'Write a story about...' },
onPartialResult: (partial) => {
console.log('Partial:', partial.outputData);
},
onComplete: (final) => {
console.log('Complete:', final.outputData);
}
});

API Reference
CitrateClient
Constructor
new CitrateClient(config: CitrateClientConfig)

Parameters:
- config.rpcUrl - RPC endpoint URL
- config.privateKey - Optional private key for transactions
- config.timeout - Request timeout in milliseconds (default: 30000)
- config.headers - Additional HTTP headers
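Putting the parameters together, a fully specified configuration object could look like the sketch below; the `x-api-key` header is purely illustrative and not part of the SDK:

```typescript
// Example CitrateClientConfig shape, built from the parameter list above.
const clientConfig = {
  rpcUrl: 'https://mainnet.citrate.ai',
  privateKey: 'your-private-key', // optional; omit for read-only use
  timeout: 45_000,                // overrides the 30000 ms default
  headers: { 'x-api-key': 'demo' }, // extra HTTP headers (illustrative)
};
```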
Methods
deployModel()
deployModel(
modelData: ArrayBuffer | Uint8Array,
config: ModelConfig
): Promise<ModelDeployment>

Deploy an AI model to the blockchain.
inference()
inference(request: InferenceRequest): Promise<InferenceResult>

Execute inference on a deployed model.
batchInference()
batchInference(request: BatchInferenceRequest): Promise<BatchInferenceResult>

Execute batch inference for multiple inputs.
getModelInfo()
getModelInfo(modelId: string): Promise<ModelInfo>

Get detailed information about a model.
listModels()
listModels(owner?: string, limit?: number): Promise<ModelInfo[]>

List available models in the marketplace.
purchaseModelAccess()
purchaseModelAccess(modelId: string, paymentAmount: bigint): Promise<string>

Purchase access to a paid model.
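Prices throughout this README (accessPrice, paymentAmount) are expressed in wei as bigint values. A small helper for converting a decimal ETH string to wei without floating-point error; this helper is not part of the SDK:

```typescript
// Convert a decimal ETH amount (as a string) to wei as a bigint.
// Not an SDK function -- just a convenience for building accessPrice
// and paymentAmount values like 100000000000000000n (0.1 ETH).
function ethToWei(eth: string): bigint {
  const [whole, frac = ''] = eth.split('.');
  const fracPadded = (frac + '0'.repeat(18)).slice(0, 18); // right-pad to 18 decimals
  return BigInt(whole || '0') * 10n ** 18n + BigInt(fracPadded);
}

// ethToWei('0.1') -> 100000000000000000n, matching the examples above
```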
Type Definitions
ModelConfig
interface ModelConfig {
name: string;
description?: string;
modelType: ModelType;
version?: string;
accessType: AccessType;
accessPrice: bigint;
accessList?: string[];
encrypted: boolean;
encryptionConfig?: EncryptionConfig;
metadata?: Record<string, any>;
tags?: string[];
maxBatchSize?: number;
timeoutSeconds?: number;
memoryLimitMb?: number;
revenueShares?: Record<string, number>;
}

InferenceRequest
interface InferenceRequest {
modelId: string;
inputData: Record<string, any>;
encrypted?: boolean;
batchSize?: number;
timeout?: number;
timestamp?: number;
}

InferenceResult
interface InferenceResult {
modelId: string;
outputData: Record<string, any>;
gasUsed: bigint;
executionTime: number;
txHash: string;
confidence?: number;
metadata?: Record<string, any>;
}

Enums
ModelType
enum ModelType {
COREML = 'coreml',
ONNX = 'onnx',
TENSORFLOW = 'tensorflow',
PYTORCH = 'pytorch',
CUSTOM = 'custom'
}

AccessType
enum AccessType {
PUBLIC = 'public',
PRIVATE = 'private',
PAID = 'paid',
WHITELIST = 'whitelist'
}

Examples
Image Classification
// Deploy image classifier
const imageModel = await client.deployModel(modelBytes, {
name: 'Image Classifier',
modelType: ModelType.COREML,
accessType: AccessType.PAID,
accessPrice: 50000000000000000n, // 0.05 ETH
encrypted: true
});
// Classify image
const imageBytes = new Uint8Array(/* image data */);
const result = await client.inference({
modelId: imageModel.modelId,
inputData: {
image: Array.from(imageBytes),
format: 'jpg'
}
});
console.log('Classification:', result.outputData.label);
console.log('Confidence:', result.outputData.confidence);

Text Generation
// Deploy text generation model
const textModel = await client.deployModel(modelBytes, {
name: 'GPT Model',
modelType: ModelType.PYTORCH,
accessType: AccessType.PUBLIC,
encrypted: false
});
// Generate text
const result = await client.inference({
modelId: textModel.modelId,
inputData: {
prompt: 'The future of AI is',
maxTokens: 100,
temperature: 0.7
}
});
console.log('Generated text:', result.outputData.text);

Encrypted Private Model
import { KeyManager } from 'citrate-js';
const keyManager = new KeyManager();
// Deploy encrypted model
const encryptedModel = await client.deployModel(modelBytes, {
name: 'Private Medical Model',
modelType: ModelType.ONNX,
accessType: AccessType.WHITELIST,
accessList: ['0x123...', '0x456...'],
encrypted: true,
encryptionConfig: {
algorithm: 'AES-256-GCM',
keyDerivation: 'HKDF-SHA256',
accessControl: true,
thresholdShares: 3,
totalShares: 5
}
});
// Execute encrypted inference
const sensitiveResult = await client.inference({
modelId: encryptedModel.modelId,
inputData: {
patientData: { /* sensitive medical data */ }
},
encrypted: true
});

Revenue Sharing
// Deploy model with revenue sharing
const sharedModel = await client.deployModel(modelBytes, {
name: 'Collaborative Model',
modelType: ModelType.TENSORFLOW,
accessType: AccessType.PAID,
accessPrice: 200000000000000000n, // 0.2 ETH
revenueShares: {
'0xModel-Creator-Address': 0.60, // 60% to model creator
'0xData-Provider-Address': 0.30, // 30% to data provider
'0xPlatform-Address': 0.10 // 10% to platform
}
});

React Hooks
useCitrateClient
const {
client,
isConnected,
isConnecting,
error,
connect,
disconnect,
chainId,
address,
balance
} = useCitrateClient({
rpcUrl: 'https://mainnet.citrate.ai',
autoConnect: true
});

useModelDeployment
const {
deploy,
deployment,
isDeploying,
error
} = useModelDeployment(client);

useInference
const {
execute,
result,
isExecuting,
error
} = useInference(client);

useModelInfo
const {
modelInfo,
isLoading,
error,
refetch
} = useModelInfo(client, modelId);

useModelList
const {
models,
isLoading,
error,
refetch
} = useModelList(client, ownerAddress, 50);

Error Handling
import {
CitrateError,
ModelNotFoundError,
InsufficientFundsError,
ValidationError
} from 'citrate-js';
try {
const result = await client.inference({
modelId: 'invalid-model',
inputData: { test: 'data' }
});
} catch (error) {
if (error instanceof ModelNotFoundError) {
console.error('Model does not exist');
} else if (error instanceof InsufficientFundsError) {
console.error('Not enough funds for inference');
} else if (error instanceof ValidationError) {
console.error('Invalid input:', error.message);
} else if (error instanceof CitrateError) {
console.error('Citrate error:', error.message, error.code);
}
}

Configuration
Network Configuration
import { CHAIN_IDS, DEFAULT_RPC_URLS } from 'citrate-js';
const client = new CitrateClient({
rpcUrl: DEFAULT_RPC_URLS[CHAIN_IDS.MAINNET],
// or for testnet:
// rpcUrl: DEFAULT_RPC_URLS[CHAIN_IDS.TESTNET],
privateKey: process.env.PRIVATE_KEY
});

Custom Timeouts
const client = new CitrateClient({
rpcUrl: 'https://mainnet.citrate.ai',
timeout: 60000, // 60 seconds
retries: 3
});

Development
Building
npm run build

Testing
npm test

Linting
npm run lint

Formatting
npm run format

Browser Support
The SDK works in all modern browsers and supports:
- ES2020+ JavaScript environments
- WebAssembly for cryptographic operations
- WebSockets for real-time features
- Web3 wallet integration
Node.js Support
Requires Node.js 16.0.0 or higher.
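A consuming project can make this requirement explicit with the engines field in its own package.json:

```json
{
  "engines": {
    "node": ">=16.0.0"
  }
}
```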
Contributing
1. Fork the repository
2. Create your feature branch (git checkout -b feature/amazing-feature)
3. Commit your changes (git commit -m 'Add amazing feature')
4. Push to the branch (git push origin feature/amazing-feature)
5. Open a Pull Request
License
Apache License 2.0 - see LICENSE file for details.
Support
- Documentation: https://docs.citrate.ai
- GitHub: https://github.com/citrate-ai/citrate
- Discord: https://discord.gg/citrate
- Issues: https://github.com/citrate-ai/citrate/issues
