# SmartClaim Client

A TypeScript/JavaScript client for interacting with the SmartClaim R&D tax claims service (https://smartclaim.uk). This client provides functionality for file uploads and AI-powered draft question answering with real-time event streaming. Please see the SmartClaim website for more information.
## Installation

```bash
pnpm run build
```

## Usage
Here's a complete example of how to use the client:
```typescript
import { SmartClaimClient, SmartClaimMessage } from 'smartclaim';

// Initialize and connect the client
const client = new SmartClaimClient(
  'your-api-url',
  'your-client-id',
  'your-client-secret',
  'your-user-id'
);
await client.connect();

// List existing files
await client.listFiles({
  moduleId: 'draft',
  onEvent: (msg: SmartClaimMessage) => {
    if (msg.key === 'data') {
      const files = msg.message;
      console.log('Files:', files);
    }
  }
});

// Upload a file
const fileRequestId = await client.uploadFile({
  filePath: './your-document.pdf',
  filename: 'your-document.pdf',
  onEvent: (msg: SmartClaimMessage) => {
    if (msg.key === 'status' && msg.message === 'Ready') {
      console.log('Upload complete');
    }
  }
});

// Ask a draft question
await client.askDraftQuestion({
  questionKey: 'q_4', // Options: q_1 to q_6
  selectedModel: 'gpt-4o-mini', // Options: gpt-4o-mini, gpt-4o, Deepseek-V3, DeepSeek-R1
  customUserInstructions: '',
  customAssistantInstructions: '',
  title: 'Project Name',
  // Pass file request IDs here to override the module's uploaded files;
  // omit this option to use the files already uploaded for the module.
  override_files: [fileRequestId],
  onEvent: (msg: SmartClaimMessage) => {
    if (msg.sub_type === 'delta') {
      process.stdout.write(msg.message); // Stream the response
    }
  }
});

// Delete a file
await client.deleteFile({
  fileRequestId: fileRequestId,
  onEvent: (msg: SmartClaimMessage) => {
    if (msg.key === 'completed' && msg.message === 'true') {
      console.log('File deleted');
    }
  }
});

// Close the connection
client.close();
```

## Features
- File management (list, upload, delete)
- AI-powered draft question answering with real-time response streaming
- Event-based architecture for real-time updates
- TypeScript support
## API Reference

### SmartClaimClient

#### Constructor

```typescript
constructor(url: string, clientId: string, clientSecret: string, userId: string)
```

#### Methods
##### connect()

Establishes a connection to the service.

##### close()

Closes the connection.

##### listFiles(options)

Lists all files in a module.

Options:
- `moduleId`: Module identifier (e.g., `"draft"`)
- `onEvent`: Callback for file list events

##### uploadFile(options)

Uploads a file with progress events.

Options:
- `fileData`: File, Buffer, or Blob data to upload
- `filePath`: Path to the file (alternative to `fileData`)
- `filename`: Name of the file (required when using `fileData`)
- `onEvent`: Callback for progress events

Note: Either `fileData` or `filePath` must be provided, but not both.
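For the `fileData` variant, here is a minimal sketch (assuming a Node `Buffer` read from disk; the status event handling mirrors the `filePath` example above):

```typescript
import { readFileSync } from 'node:fs';

// Upload from in-memory data instead of a path; filename is required here.
const buffer = readFileSync('./your-document.pdf');
const fileRequestId = await client.uploadFile({
  fileData: buffer,
  filename: 'your-document.pdf',
  onEvent: (msg: SmartClaimMessage) => {
    if (msg.key === 'status' && msg.message === 'Ready') {
      console.log('Upload complete');
    }
  }
});
```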
##### askDraftQuestion(options)

Asks an AI-powered draft question.

Options:
- `questionKey`: Question identifier (`q_1` to `q_6`)
- `selectedModel`: AI model to use (`gpt-4o-mini`, `gpt-4o`, `Deepseek-V3`, `DeepSeek-R1`)
- `customUserInstructions`: Custom instructions for the user
- `customAssistantInstructions`: Custom instructions for the assistant
- `title`: Project title
- `override_files`: Array of file request IDs that overrides the files uploaded for the module; omit this option to use the module's uploaded files
- `onEvent`: Callback for streaming events
##### deleteFile(options)

Deletes a file by its request ID.

Options:
- `fileRequestId`: ID of the file to delete
- `onEvent`: Callback for deletion events

##### listUsers(options)

Lists all users.

Options:
- `onEvent`: Callback for user list events

##### createUser(options)

Creates a new user.

Options:
- `email`: User's email address
- `name`: User's name
- `onEvent`: Callback for user creation events

This will return client credentials for the new user.

##### deleteUser(options)

Deletes a user by ID.

Options:
- `id`: ID of the user to delete
- `onEvent`: Callback for deletion events
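A brief sketch of the user-management calls. The docs above don't specify which event key carries the user list or the new user's credentials, so the `"data"` key here is an assumption mirroring `listFiles`:

```typescript
// Create a user; the returned client credentials arrive via onEvent
// (the "data" key is an assumption, mirroring listFiles above).
await client.createUser({
  email: 'jane@example.com',
  name: 'Jane Doe',
  onEvent: (msg: SmartClaimMessage) => {
    if (msg.key === 'data') {
      console.log('New user credentials:', msg.message);
    }
  }
});

// List all users
await client.listUsers({
  onEvent: (msg: SmartClaimMessage) => {
    if (msg.key === 'data') {
      console.log('Users:', msg.message);
    }
  }
});
```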
##### getMeetingInfo(options)

Retrieves information about a specific meeting by its ID.

Options:
- `meetingId`: ID of the meeting to retrieve information for

Returns:
- A Promise that resolves with the meeting information

Example usage:

```typescript
const meetingInfo = await client.getMeetingInfo({
  meetingId: 'your-meeting-id'
});
console.log(meetingInfo);
```

##### askReview(options)
Reviews specific aspects of the draft.
Options:
- `key`: Review category (one of: `overall`, `eligibility`, `baseline`, `advance`, `uncertainty`, `resolution`)
- `subKey`: Specific aspect to review. Valid subkeys depend on the category:
  - `overall`: coherence, competent_professionals
  - `eligibility`: overall_eligibility, risk_factors, baseline_statements, internet_search, feedback, uncertainty_check, qualifying_activity
  - `baseline`: comprehensiveness, focus, phrasing, grammar
  - `advance`: comprehensiveness, focus, phrasing, guideline_references, grammar
  - `uncertainty`: comprehensiveness, focus, phrasing, guideline_references, grammar
  - `resolution`: comprehensiveness, focus, phrasing, guideline_references, grammar
- `context`: Additional context for the review
- `general_assistant_instructions`: General instructions for the assistant
- `onEvent`: Callback for streaming events
Example usage:

```typescript
await client.askReview({
  key: 'overall',
  subKey: 'coherence',
  context: 'Your context text here...',
  general_assistant_instructions: '',
  onEvent: (msg: SmartClaimMessage) => {
    console.log(msg);
    if (msg.sub_type === 'delta') {
      process.stdout.write(msg.message); // Stream the response
    }
  }
});
```

## Realtime Interview Analysis
The SmartClaim client provides powerful capabilities for real-time audio transcription and analysis during interviews or meetings. This feature uses Microsoft's Azure Speech Services for high-quality speech recognition.
### Overview
Realtime Interview Analysis enables:
- Live audio capture from microphone and/or system audio
- Real-time speech-to-text transcription
- Sending transcriptions to SmartClaim's backend for analysis
- Event-based system for receiving analysis results and updates
### Key Features
- Continuous speech recognition with Azure Speech Services
- Support for both microphone and system audio capture
- Configurable language settings
- Meeting ID tracking for session management
- Pause/resume functionality
- Real-time transcript streaming
- Event-based architecture
### Usage

#### Getting Credentials
To use Realtime Interview Analysis, you need to obtain credentials from SmartClaim:
```typescript
import { SmartClaimClient, streamTranscribe } from 'smartclaim';

// Initialize the SmartClaim client
const client = new SmartClaimClient(url, clientId, clientSecret, userId);

// Get credentials for web usage, including an Azure Speech token
const credentials = await client.getWebCredentials();
// credentials contains: { smartclaim_access_token, azure_speech_token, azure_speech_region }

// Use these in streamTranscribe
const streamResult = await streamTranscribe({
  azureSpeechToken: credentials.azure_speech_token,
  region: credentials.azure_speech_region,
  smartclaimAccessToken: credentials.smartclaim_access_token,
  // ... other options
});
```

#### Complete Authentication Flow
SmartClaim authentication uses the `client.getWebCredentials()` method. The authentication flow is:

1. Backend: Get credentials from the SmartClaim API

```typescript
// Initialize the SmartClaim client
const client = new SmartClaimClient(url, clientId, clientSecret, userId);

// Get credentials, including an Azure Speech token
const credentials = await client.getWebCredentials();
// credentials contains:
// {
//   "smartclaim_access_token": "...",
//   "azure_speech_token": "...",
//   "azure_speech_region": "..."
// }
```

2. Client: Use the returned credentials with `streamTranscribe`
```typescript
// Configure streaming with the credentials
const streamConfig = {
  // Azure Speech Service configuration
  region: credentials.azure_speech_region,
  azureSpeechToken: credentials.azure_speech_token,
  language: 'en-US', // or your preferred language

  // SmartClaim configuration
  smartclaimAccessToken: credentials.smartclaim_access_token,
  smartclaimUrl: 'https://api.smartclaim.ai', // or your SmartClaim API URL

  // Audio configuration
  meetingId: meetingId, // Optional - generated if not provided
  includeSystemAudio: true, // Capture system audio (screen-sharing audio)
  microphoneDeviceId: selectedMicDeviceId, // Optional - specific microphone device
  gain: 1.5, // Microphone gain/volume boost (1.0 = normal, >1.0 = amplified)

  // Event handlers
  onTranscript: (text, isFinal) => {
    // Handle transcript updates
    console.log(`${isFinal ? 'Final' : 'Interim'}: ${text}`);
  },
  onEvent: (message) => {
    // Handle other events (analysis results, status updates, etc.)
    console.log('Event:', message);
  },
  onError: (error) => {
    console.error('Streaming error:', error);
  },
  onClose: () => {
    console.log('Stream closed');
  }
};

// Start streaming with the configuration
const streamResult = await streamTranscribe(streamConfig);
```

The backend uses `getWebCredentials()` to obtain and provide all necessary tokens for authentication.
#### Detailed Usage Example
```typescript
import { streamTranscribe } from 'smartclaim';

// Configure audio streaming
const streamResult = await streamTranscribe({
  // Authentication settings
  azureSpeechToken: 'your-azure-speech-token',
  smartclaimAccessToken: 'your-smartclaim-token',
  region: 'westeurope',

  // Session configuration
  meetingId: 'optional-meeting-id', // Generated if not provided
  language: 'en-US',
  smartclaimUrl: 'https://api.smartclaim.uk/api',

  // Audio settings
  microphoneDeviceId: 'optional-specific-mic-id',
  includeSystemAudio: true, // Capture system audio (screen-sharing audio)
  gain: 1.5, // Microphone gain/amplification (1.0 = normal, 2.0 = double volume)

  // Callbacks
  onTranscript: (text, isFinal) => {
    console.log(`Transcript ${isFinal ? 'final' : 'interim'}: ${text}`);
  },
  onEvent: (event) => {
    console.log('Event received:', event);
  },
  onError: (error) => {
    console.error('Error:', error);
  },
  onClose: () => {
    console.log('Connection closed');
  }
});

// streamResult provides controls for the audio stream
console.log(`Meeting ID: ${streamResult.meetingId}`);

// Stop the audio stream and transcription
streamResult.stop();

// Pause transcription temporarily
streamResult.pause();

// Resume transcription after pausing
streamResult.resume();

// Send custom messages through the event system
streamResult.sendMessage({ type: 'custom', data: 'your-data' });
```

### Configuration Options
| Option | Type | Required | Description |
|--------|------|----------|-------------|
| azureSpeechToken | string | Yes | Azure Speech Service access token |
| smartclaimAccessToken | string | Yes | SmartClaim API access token |
| region | string | Yes | Azure Speech Service region (e.g., 'westeurope') |
| language | string | No | Recognition language (default: 'en-US') |
| meetingId | string | No | Meeting identifier (auto-generated if not provided) |
| smartclaimUrl | string | No | SmartClaim API URL (default: 'https://api.smartclaim.ai') |
| includeSystemAudio | boolean | No | Capture system audio for screen sharing (default: false) |
| microphoneDeviceId | string | No | Specific microphone device ID |
| gain | number | No | Microphone gain/amplification factor (default: 1.0, range: 0.1-3.0) |
| onTranscript | function | Yes | Callback for transcript updates |
| onEvent | function | Yes | Callback for other events |
| onError | function | No | Callback for error handling |
| onClose | function | No | Callback when connection closes |
### Audio Settings

Microphone Gain: The `gain` parameter controls microphone volume amplification:
- `1.0` = Normal volume (no amplification)
- `1.5` = 50% volume boost (recommended for quiet environments)
- `2.0` = Double volume
- Range: `0.1` to `3.0`

System Audio: When `includeSystemAudio` is enabled, the system captures both microphone and computer audio (useful for screen-sharing scenarios).

Device Selection: Use `microphoneDeviceId` to specify a particular microphone when multiple devices are available.
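To obtain a `microphoneDeviceId` in the browser, you can enumerate audio inputs with the standard Media Devices API (a sketch; note that device labels may be empty until microphone permission has been granted):

```typescript
// Enumerate available microphones and pick one to pass as microphoneDeviceId.
const devices = await navigator.mediaDevices.enumerateDevices();
const mics = devices.filter((d) => d.kind === 'audioinput');
mics.forEach((d) => console.log(d.deviceId, d.label));

const microphoneDeviceId = mics[0]?.deviceId; // e.g., the first microphone
```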
### Event Types

The `onEvent` callback receives events with the following structure:

```typescript
interface SmartClaimMessage {
  message: any;       // The payload of the message
  request_id: string; // Usually the meeting ID, for tracking
  type: string;       // Type of message (e.g., 'response', 'info', 'interim')
  sub_type: string;   // Subtype for more specific categorization
  key: string;        // Key identifier for the message
}
```

Common event types include:
- Transcription events: Contain recognized speech text
- Status events: Indicate changes in system status
- Analysis events: Contain AI analysis of transcribed content
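A sketch of an `onEvent` handler that dispatches on these fields, using the transcription event shapes documented under Transcription Event Types below:

```typescript
const onEvent = (msg: SmartClaimMessage) => {
  if (msg.sub_type === 'transcription') {
    // Recognized speech (interim or final)
    console.log('Speech:', msg.message);
  } else if (msg.sub_type === 'transcript_processed') {
    // AI analysis of transcribed content from the backend
    console.log('Analysis:', msg.message);
  } else {
    // Status and other events
    console.log(`[${msg.type}/${msg.sub_type}/${msg.key}]`, msg.message);
  }
};
```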
### Error Handling

Errors in the audio streaming system are now emitted through the `onError` callback with structured error information using the `SmartClaimError` class:

```typescript
class SmartClaimError extends Error {
  request_id: string; // Meeting/request ID for tracking
  type: string;       // Always 'error'
  sub_type: string;   // Error category (e.g., 'authentication', 'recognition', 'system_audio')
  key: string;        // Specific error key identifier
}
```

#### Error Categories and Keys
The following error types may be emitted:
| Sub-Type | Key | Description |
|----------|-----|-------------|
| authentication | token_refresh_failed | Azure Speech token refresh failed |
| authentication | token_update_restart_failed | Failed to restart recognition with new token |
| authentication | token_update_stop_failed | Failed to stop recognition for token update |
| authentication | authentication_failed | Initial authentication failed |
| system_audio | system_audio_failed | Failed to capture system audio |
| system_audio | system_audio_source_failed | Failed to create system audio source |
| recognition | recognition_canceled | Speech recognition was canceled with error |
| recognition | recognizer_creation_failed | Failed to create speech recognizer |
| recognition | recognition_stop_failed | Failed to stop recognition |
| recognition | recognition_stop_exception | Exception while stopping recognition |
| recognition | recognition_failed | Failed to start recognition |
| transcript | transcript_send_failed | Failed to send transcript to backend |
#### Example Error Handling
```typescript
const streamResult = await streamTranscribe({
  // ... configuration options ...
  onError: (error) => {
    // Check whether it's a structured SmartClaimError
    if (error instanceof SmartClaimError) {
      console.error(`Error [${error.sub_type}/${error.key}]: ${error.message}`);
      console.error(`Meeting ID: ${error.request_id}`);

      // Handle specific error types
      switch (error.sub_type) {
        case 'authentication':
          // Handle authentication errors
          if (error.key === 'token_refresh_failed') {
            // Token refresh failed; you may need to get new credentials
          }
          break;
        case 'system_audio':
          // Handle system audio errors
          console.warn('System audio capture failed, continuing with microphone only');
          break;
        case 'recognition':
          // Handle recognition errors
          if (error.key === 'recognition_canceled') {
            // Recognition was canceled; you may need to restart
          }
          break;
      }
    } else {
      // Handle regular errors
      console.error('Error:', error.message);
    }
  }
});
```

Note: Prior versions emitted errors through the `onEvent` callback with `type: 'error'`. These are now emitted through `onError` for better error handling separation.
### Transcription Event Types
When working with audio transcription, you'll receive several types of transcription-related events:
Interim Transcriptions - Partial, non-final speech recognition results:

```typescript
{ message: "partial transcription text", request_id: "meeting-id", type: "interim", sub_type: "transcription", key: "interim_transcription" }
```

These events occur in real time as speech is being processed, before a complete utterance is recognized. They are useful for providing immediate feedback but may change.
Final Transcriptions - Completed speech recognition results:

```typescript
{ message: "final transcription text", request_id: "meeting-id", type: "response", sub_type: "transcription", key: "transcription" }
```

These events represent finalized speech segments that Azure Speech Service has confidently recognized. These transcriptions are automatically sent to the SmartClaim backend for processing.
Processed Transcripts - Results after backend analysis:

```typescript
{ message: { /* analysis results */ }, request_id: "meeting-id", type: "response", sub_type: "transcript_processed", key: "transcript_processed" }
```

These events contain the response from the SmartClaim backend after it has processed a transcription. The `message` field typically contains structured data with analysis results that can be used to provide insights or guidance based on the conversation content.
The `onTranscript` callback provides a more direct way to handle transcription events, receiving both interim and final transcriptions with a boolean flag indicating whether the transcription is final.
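For example, a minimal `onTranscript` handler that keeps a running transcript of final segments while logging interim text (a sketch built on the documented signature):

```typescript
let transcript = '';

const onTranscript = (text: string, isFinal: boolean) => {
  if (isFinal) {
    transcript += text + '\n'; // keep only finalized segments
  } else {
    console.log('interim:', text); // interim text may still change
  }
};
```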
