@unitypredict/unitypredict-client v1.0.0
# UnityPredict JavaScript Client
A Node.js client library for making predictions using the UnityPredict API. This client handles both simple predictions and complex workflows involving file uploads, with configurable timeouts and exponential backoff for efficient polling of long-running predictions.
## Installation
```shell
npm install unitypredictjsclient
```

Or if using locally:

```shell
npm install ./UnityPredictJSClient
```

## Usage
### Basic Setup
```javascript
const { UnityPredictClient, PredictionRequest, LocalFile } = require('unitypredictjsclient');

// Initialize the client with your API key
const client = new UnityPredictClient('your-api-key-here');

// Initialize with verbose logging enabled
const verboseClient = new UnityPredictClient('your-api-key-here', { verboseLog: true });
```

### Simple Prediction (No File Inputs)
```javascript
const request = new PredictionRequest({
  inputValues: {
    text: 'Hello world',
    temperature: 0.7
  },
  desiredOutcomes: ['output']
});

const result = await client.predict('model-id', request);
console.log('Results:', result.outcomes);
```

### Prediction with File Upload
```javascript
const localFile = new LocalFile('/path/to/audio.mp3');

const fileRequest = new PredictionRequest({
  inputValues: {
    'Audio File': localFile
  },
  desiredOutcomes: ['Transcription', 'Language']
});

// Download output files to a folder
const fileResult = await client.predict('model-id', fileRequest, '/output/folder');
console.log('Transcription:', fileResult.outcomes.Transcription);
```

### Asynchronous Predictions
For long-running predictions, you can use the async methods:
```javascript
// Start an asynchronous prediction
const asyncRequest = new PredictionRequest({
  inputValues: {
    prompt: 'Write a long story about a robot',
    max_tokens: 2000
  },
  desiredOutcomes: ['generated_text']
});

const asyncResponse = await client.asyncPredict('model-id', asyncRequest);

if (asyncResponse.status === 'Processing') {
  // Check status later
  const finalResult = await client.getRequestStatus(asyncResponse.requestId);
  console.log('Final results:', finalResult.outcomes);
}
```

### Check Request Status
```javascript
const status = await client.getRequestStatus('request-id-123');

switch (status.status?.toLowerCase()) {
  case 'processing':
    console.log('Prediction is still processing...');
    break;
  case 'completed':
    console.log('Prediction completed successfully!');
    console.log(`Results: ${Object.keys(status.outcomes || {}).join(', ')}`);
    console.log(`Compute cost: ${status.computeCost}`);
    break;
  case 'error':
    console.log(`Prediction failed: ${status.errorMessages}`);
    break;
}
```

### Long-Running Predictions with Custom Timeout
```javascript
const longRequest = new PredictionRequest({
  inputValues: {
    prompt: 'Generate a very long story',
    max_tokens: 5000
  },
  desiredOutcomes: ['generated_text']
});

// 10-minute timeout (600 seconds)
const longResult = await client.predict('text-model-id', longRequest, null, 600);
```

## API Reference
### UnityPredictClient
#### Constructor
```javascript
new UnityPredictClient(apiKey, options)
```

- `apiKey` (string, required): Your UnityPredict API key
- `options` (object, optional):
  - `verboseLog` (boolean): Enable verbose logging (default: `false`)
  - `baseUrl` (string): Custom base URL for the API (default: production URL)
#### Methods
##### `predict(modelId, request, outputFolderPath, pollingTimeoutSeconds)`
Sends a synchronous prediction request. Automatically handles polling for long-running predictions.
- `modelId` (string): The model ID to use
- `request` (PredictionRequest): The prediction request
- `outputFolderPath` (string, optional): Path to download output files
- `pollingTimeoutSeconds` (number, optional): Timeout for polling (default: 900 seconds)
Returns: `Promise<PredictionResponse>`
##### `asyncPredict(modelId, request, outputFolderPath)`
Initiates an asynchronous prediction. Returns immediately with a request ID.
- `modelId` (string): The model ID to use
- `request` (PredictionRequest): The prediction request
- `outputFolderPath` (string, optional): Path to download output files
Returns: `Promise<PredictionResponse>`
##### `getRequestStatus(requestId, outputFolderPath)`
Retrieves the status of an asynchronous prediction request.
- `requestId` (string): The request ID to check
- `outputFolderPath` (string, optional): Path to download output files
Returns: `Promise<PredictionResponse>`
### PredictionRequest
```javascript
new PredictionRequest({
  inputValues: { /* your inputs */ },
  desiredOutcomes: ['outcome1', 'outcome2'],
  contextId: 'optional-context-id',
  requestId: 'optional-request-id' // Used internally
})
```

### LocalFile
Wrapper for local files to be uploaded:
```javascript
const file = new LocalFile('/path/to/file.ext');
```

### PredictionResponse
Response object containing:
- `requestId` (string): Request tracking ID
- `status` (string): Status (`'Processing'`, `'Completed'`, `'Error'`)
- `outcomes` (object): Prediction results
- `computeCost` (number): Compute cost
- `computeTime` (string): Compute time
- `errorMessages` (string): Error messages, if any
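As a self-contained illustration, the documented fields can be consumed like this (all values below are made up; a real `PredictionResponse` comes back from the client methods):

```javascript
// Illustrative response object with the documented fields
// (values are fabricated for the example).
const response = {
  requestId: 'req-123',
  status: 'Completed',
  outcomes: { Transcription: 'Hello world', Language: 'en' },
  computeCost: 0.002,
  computeTime: '1.4s',
  errorMessages: null
};

if (response.status === 'Completed') {
  // Print each requested outcome by name
  for (const [name, value] of Object.entries(response.outcomes)) {
    console.log(`${name}: ${value}`);
  }
} else if (response.status === 'Error') {
  console.error('Prediction failed:', response.errorMessages);
}
```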
### UnityPredictException
Custom exception thrown when API operations fail.
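Because API failures surface as rejected promises, the usual pattern is a `try/catch` around client calls. The sketch below uses a local stand-in class so it runs without the package; in real code you would import `UnityPredictException` from `unitypredictjsclient` and the `instanceof` check works the same way:

```javascript
// Stand-in so this snippet is self-contained; in real code use:
// const { UnityPredictException } = require('unitypredictjsclient');
class UnityPredictException extends Error {}

// Wrap a prediction call, turning API-level failures into a null result
// while letting unrelated bugs propagate.
async function predictSafely(doPredict) {
  try {
    return await doPredict();
  } catch (err) {
    if (err instanceof UnityPredictException) {
      console.error('UnityPredict API error:', err.message);
      return null;
    }
    throw err; // not an API error: rethrow
  }
}
```

For example: `const result = await predictSafely(() => client.predict('model-id', request));`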
## Environment Configuration
The client automatically uses the development environment when `NODE_ENV` is set to `development` or `dev`:
```shell
NODE_ENV=development node your-script.js
```

## Features
- ✅ Synchronous and asynchronous prediction support
- ✅ Automatic file upload handling
- ✅ Exponential backoff polling for long-running predictions
- ✅ Automatic file download for output files
- ✅ Comprehensive error handling
- ✅ Verbose logging option
- ✅ Environment-based configuration (dev/prod)
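The exponential-backoff polling that `predict` performs internally looks roughly like this (a simplified sketch, not the client's actual code; the delay values and function names are illustrative):

```javascript
// Simplified sketch of exponential-backoff polling (illustrative only).
// checkStatus: async function returning the current status string.
async function pollWithBackoff(checkStatus, timeoutMs, initialDelayMs = 1000, maxDelayMs = 30000) {
  const deadline = Date.now() + timeoutMs;
  let delay = initialDelayMs;
  while (Date.now() < deadline) {
    const status = await checkStatus();
    if (status !== 'Processing') return status; // done (or errored)
    await new Promise((resolve) => setTimeout(resolve, delay));
    delay = Math.min(delay * 2, maxDelayMs); // double the wait, capped
  }
  throw new Error('Polling timed out');
}
```

Doubling the interval between checks keeps polling responsive for short jobs while avoiding a request flood during long-running predictions.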
## Requirements
- Node.js >= 12.0.0
## License
ISC
