@gladiaio/sdk
v1.0.4
# Gladia JavaScript SDK
A TypeScript/JavaScript SDK for the Gladia API.
## Requirements
For non-browser environments, you need either Node.js 20+ or Bun.
## Installation
```sh
npm install @gladiaio/sdk
```

If you are using Node.js < 22, you also need to install the `ws` package:

```sh
npm install ws
```

## Usage
Import `GladiaClient` and create an instance.

Provide an API key with the `apiKey` option or the `GLADIA_API_KEY` environment variable. Get your API key here in under a minute.

You can also set `GLADIA_API_URL` and `GLADIA_REGION` (`eu-west` / `us-west`).
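As an alternative to passing `apiKey` in code, the same configuration can be exported as environment variables before starting your app (a minimal shell sketch; the key value is a placeholder):

```shell
# Placeholder key; replace with your real Gladia API key
export GLADIA_API_KEY="your-api-key"
# Optional: pin the region (eu-west or us-west)
export GLADIA_REGION="eu-west"
```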
### Node.js / Browser (ESM)

```js
import { GladiaClient } from '@gladiaio/sdk'

const gladiaClient = new GladiaClient({
  apiKey: 'your-api-key',
})
```

### Node.js (CommonJS)

```js
const { GladiaClient } = require('@gladiaio/sdk')

const gladiaClient = new GladiaClient({
  apiKey: 'your-api-key',
})
```

### Browser (script tag)
```html
<script src="https://unpkg.com/@gladiaio/sdk"></script>
<script>
  const gladiaClient = new Gladia.GladiaClient({
    apiKey: 'your-api-key',
  })
</script>
```

## Pre-recorded transcription

`transcribe()` accepts a file path (Node.js), an http(s) URL, a `File`, or a `Blob`. It uploads the audio when needed, then polls until the job completes.
```js
import { GladiaClient } from '@gladiaio/sdk'
import 'dotenv/config'

const gladiaClient = new GladiaClient()

const audioPath = '../data/online-meeting-example.mp4'
const result = await gladiaClient.preRecorded().transcribe(audioPath, {
  language_config: { languages: ['en'] },
})
console.log(result.result?.transcription?.full_transcript ?? '')
```

See all supported languages here!
Pass the options argument to enable features from Audio intelligence such as diarization, translation, PII redaction, and much more.
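As a sketch of what such an options object might look like: the field names below (`diarization`, `translation`, `translation_config`) are assumptions based on the Gladia pre-recorded API and should be verified against the API reference before use.

```typescript
// Hypothetical options object for transcribe(); field names are assumptions,
// not confirmed SDK types -- check the Gladia API reference.
const options = {
  language_config: { languages: ['en'] },
  diarization: true, // label who spoke when
  translation: true,
  translation_config: { target_languages: ['fr'] }, // also translate the transcript
}

// const result = await gladiaClient.preRecorded().transcribe(audioPath, options)
```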
### Async pre-recorded

If your runtime has no top-level `await`, wrap calls in an async function:

```js
async function main() {
  const gladiaClient = new GladiaClient()
  const result = await gladiaClient.preRecorded().transcribe('./audio.mp3')
  console.log(result.result?.transcription?.full_transcript ?? '')
}

main().catch(console.error)
```

## Live transcription
```js
const liveSession = gladiaClient.liveV2().startSession({
  model: 'solaria-1',
  encoding: 'wav/pcm',
  sample_rate: 16000,
  bit_depth: 16,
  channels: 1,
  language_config: {
    languages: ['en'],
  },
  messages_config: {
    receive_partial_transcripts: true,
  },
})

liveSession.on('message', (message) => {
  if (message.type === 'transcript') {
    console.log(`${message.data.is_final ? 'F' : 'P'} | ${message.data.utterance.text.trim()}`)
  }
})

liveSession.once('started', () => {
  console.log(`Session ${liveSession.sessionId} started`)
})

liveSession.on('error', (err) => {
  console.error('An error occurred during live session:', err)
})

liveSession.once('ended', () => {
  console.log(`Session ${liveSession.sessionId} ended`)
})

liveSession.sendAudio(/* <audio_chunk> */)
liveSession.stopRecording()
```

See all the supported languages here!
Pass the options argument to enable features from Audio intelligence such as diarization, translation, PII redaction, and much more.
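A session configured for `wav/pcm` at 16 kHz, 16-bit mono expects raw PCM chunks. If your capture source yields Float32 samples (as the Web Audio API does), a small conversion helper can prepare chunks for `sendAudio()`. The `floatTo16BitPCM` function below is a hypothetical helper, not part of the SDK:

```typescript
// Convert Float32 samples in [-1, 1] to 16-bit signed little-endian PCM.
// floatTo16BitPCM is a hypothetical helper, not part of the SDK.
function floatTo16BitPCM(samples: Float32Array): Uint8Array {
  const out = new DataView(new ArrayBuffer(samples.length * 2))
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])) // clamp to valid range
    out.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true) // little-endian
  }
  return new Uint8Array(out.buffer)
}

// Usage: liveSession.sendAudio(floatTo16BitPCM(chunk))
```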
### Waiting for the session to finish

To await shutdown after `stopRecording()`, wait on the `ended` event:
```ts
import { GladiaClient } from '@gladiaio/sdk'

async function runLive() {
  const gladiaClient = new GladiaClient({ apiKey: 'your-api-key' })
  const liveSession = gladiaClient.liveV2().startSession({
    language_config: { languages: ['en'] },
  })

  const ended = new Promise<void>((resolve) => {
    liveSession.once('ended', () => resolve())
  })

  liveSession.stopRecording()
  await ended
}

runLive().catch(console.error)
```

When you need the session id as soon as the backend has created the session:

```ts
const sessionId = await liveSession.getSessionId()
```

## Documentation
## License
MIT
