@llamaindex/llama-deploy (v0.0.2)
This package provides a TypeScript llama-deploy API client generated from the OpenAPI specification using @hey-api/openapi-ts.
Installation
```shell
npm install @llamaindex/llama-deploy
```

Development
Generate llama-deploy API client
To generate the llama-deploy API client from the OpenAPI specification:
```shell
npm run generate
```

This will read the OpenAPI JSON file from ../chat-ui/src/hook/openapi.json and generate TypeScript client code in src/generated/.
Build
To build the package:
```shell
npm run build
```

This will run the generator and compile the TypeScript to the dist/ directory.
Clean
To clean generated files:
```shell
npm run clean
```

Usage
```typescript
import { client, DeploymentsService } from '@llamaindex/llama-deploy'

// Configure the client
client.setConfig({
  baseUrl: 'https://your-api-base-url.com',
})

// Use the API services
const deployments = await DeploymentsService.readDeploymentsDeploymentsGet()
```

API Services
The generated client includes services for:
- DeploymentsService: Manage deployments
- TasksService: Create and manage tasks
- SessionsService: Handle sessions
- EventsService: Stream and send events
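To illustrate what `client.setConfig` does, here is a minimal, standalone sketch of the configuration-merging pattern used by @hey-api/client-fetch-style clients. This is not the generated code: `createClient`, `buildUrl`, and the shallow-merge semantics shown here are assumptions made for illustration only.

```typescript
// A standalone sketch (assumed behavior, not the generated client):
// the client holds a mutable config that setConfig shallow-merges into,
// and service calls resolve endpoint URLs against the configured baseUrl.
interface ClientConfig {
  baseUrl?: string
  headers?: Record<string, string>
}

function createClient(initial: ClientConfig = {}) {
  let config: ClientConfig = { ...initial }
  return {
    // Return a copy so callers cannot mutate internal state
    getConfig: () => ({ ...config }),
    // Shallow-merge new options over the existing configuration
    setConfig: (next: ClientConfig) => {
      config = { ...config, ...next }
      return { ...config }
    },
    // Join the configured baseUrl with an endpoint path,
    // avoiding a doubled slash at the boundary
    buildUrl: (path: string) =>
      `${(config.baseUrl ?? '').replace(/\/$/, '')}${path}`,
  }
}

const sketchClient = createClient()
sketchClient.setConfig({ baseUrl: 'https://your-api-base-url.com/' })
console.log(sketchClient.buildUrl('/deployments'))
// → https://your-api-base-url.com/deployments
```

The real client is configured once at startup with `client.setConfig`, after which all service methods share that configuration.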
Generated Files
The following files are generated and should not be edited manually:
- src/generated/client.ts: HTTP client configuration
- src/generated/services.ts: API service methods
- src/generated/types.ts: TypeScript type definitions
- src/generated/index.ts: Main exports
Configuration
The generation is configured in openapi-ts.config.ts. Key settings:
- Input: ../chat-ui/src/hook/openapi.json
- Output: ./src/generated
- Client: @hey-api/client-fetch
- Format: Prettier formatting applied
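The actual config file is not shown here, but the settings above might map onto an openapi-ts.config.ts roughly like this (a sketch only; the exact option shape varies between @hey-api/openapi-ts releases, and this assumes the flat `client`/`input`/`output` form):

```typescript
// Sketch of openapi-ts.config.ts (assumed shape, not the package's actual file)
import { defineConfig } from '@hey-api/openapi-ts'

export default defineConfig({
  // Fetch-based client implementation
  client: '@hey-api/client-fetch',
  // OpenAPI spec shared with the chat-ui package
  input: '../chat-ui/src/hook/openapi.json',
  output: {
    // Generated code lands here; do not edit by hand
    path: './src/generated',
    // Run Prettier over the generated files
    format: 'prettier',
  },
})
```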
