# @lovable.dev/sdk

TypeScript SDK for the Lovable API. Currently in preview (v0.1.1).

## Installation
```sh
npm install @lovable.dev/sdk
```

## Usage
```ts
import { LovableClient } from "@lovable.dev/sdk";
import { readFile } from "node:fs/promises";

const client = new LovableClient({
  apiKey: "lov_your-api-key",
});

// List workspaces
const workspaces = await client.listWorkspaces();

// 1. Create a project
const project = await client.createProject(workspaces[0].id, {
  description: "Best todo app",
  initialMessage: "Create a todo app with authentication",
});

// 2. Wait for the AI response and get the preview URL
const response = await client.waitForResponse(project.id);
console.log(response.content);    // AI's response text
console.log(response.messageId);  // AI message ID (for traces)
console.log(response.previewUrl); // Preview URL for the project

// 3. Send a follow-up chat message
await client.chat(project.id, {
  message: "Add a footer",
});

// 4. Send a message with file attachments
const imageData = await readFile("design.png");
await client.chat(project.id, {
  message: "Update the hero section to match this design",
  files: [{ name: "design.png", data: imageData, type: "image/png" }],
});

// 5. Publish the project and get the live URL
await client.publish(project.id);
const published = await client.waitForProjectPublished(project.id);
console.log(published.url); // Live public URL
```

### Remixing a project at a specific message
```ts
const client = new LovableClient({ apiKey: "lov_your-api-key" });

// Remix a project at the state just before a specific message
const jobId = await client.remixProject("source-project-id", {
  workspaceId: "target-workspace-id",
  messageId: "message-id-to-snapshot-at",
  // remixMode: "including", // use "including" to keep the message and its AI response
  includeHistory: true,
  includeCustomKnowledge: true,
});

// Wait for the remix to complete
const { projectId } = await client.waitForRemix("source-project-id", jobId, {
  onProgress: (status, step) => console.log(`Remix: ${status}`, step),
});
console.log(`Remixed project: ${projectId}`);

// Send a follow-up message to the remixed project
await client.chat(projectId, { message: "Add dark mode" });
const response = await client.waitForResponse(projectId);
```

### Using a custom model
You can route the main agent to a custom OpenAI-compatible endpoint for eval and RL workflows:
```ts
await client.chat(project.id, {
  message: "Add a dark mode toggle",
  customModel: {
    endpoint: "https://my-vllm.example.com/v1",
    apiKey: "sk-...",
    modelName: "meta-llama/Llama-3.3-70B-Instruct",
  },
});
```

### Fetching message traces (`_dev`)
Trace APIs are available under the `_dev` namespace. These are not part of the stable v1 surface and may change without notice.

```ts
const response = await client.waitForResponse(project.id);

// Fetch all traces for the message
const traces = await client._dev.getMessageTraces(project.id, response.messageId);
console.log(traces.spans);

// Filter by purpose (e.g. only the main agent span)
const agentTraces = await client._dev.getMessageTraces(project.id, response.messageId, {
  purposes: ["main_agent"],
});

// Batch fetch traces for multiple messages
const result = await client._dev.getMessageTracesBatch([
  { projectId: "proj-1", messageId: "msg-1" },
  { projectId: "proj-2", messageId: "msg-2" },
], { purposes: ["main_agent"], concurrency: 5 });

for (const [messageId, trace] of result.traces) {
  console.log(messageId, trace.spans.length);
}
for (const [messageId, error] of result.errors) {
  console.error(messageId, error.message);
}
```

## API Reference
### LovableClient

#### Constructor

```ts
new LovableClient(options: LovableClientOptions)
```

- `apiKey` (required): Your Lovable API key
- `baseUrl` (optional): Override the default API base URL
#### Methods

##### `listWorkspaces(): Promise<WorkspaceWithMembership[]>`

List all workspaces the authenticated user has access to.

##### `getWorkspace(workspaceId: string): Promise<WorkspaceWithMembership>`

Get a specific workspace by ID.

##### `listProjects(workspaceId: string, options?): Promise<ProjectResponse[]>`

List projects in a workspace.

Options:

- `limit` (optional): Maximum number of projects to return
- `visibility` (optional): Filter by visibility (`"all"` | `"personal"` | `"public"` | `"workspace"`)
##### `createProject(workspaceId: string, options): Promise<ProjectResponse>`

Create a new project in a workspace.

Options:

- `description` (required): Project description
- `techStack` (optional): Technology stack (e.g., `"react"`)
- `visibility` (optional): Project visibility (`"draft"` | `"private"` | `"public"`)
- `templateProjectId` (optional): ID of a template project to clone
- `initialMessage` (optional): Initial chat message to send to the AI agent
- `files` (optional): Array of files to attach (browser `File` objects or `FileInput` objects)
##### `chat(projectId: string, options): Promise<void>`

Send a chat message to a project's AI agent.

Options:

- `message` (required): The message to send
- `files` (optional): Array of files to attach (browser `File` objects or `FileInput` objects)
- `chatOnly` (optional): If true, only chat without making code changes
- `customModel` (optional): Route the main agent to a custom OpenAI-compatible endpoint (see `CustomModelConfig`)

Note: This is an asynchronous operation. The API accepts the message and processes it in the background. Use `waitForResponse()` to wait for the AI's reply.
##### `waitForResponse(projectId: string, options?): Promise<ChatResponse>`

Wait for the AI's response to a chat message. Connects to the project's message stream (SSE) and returns the full response once complete. Use this after `chat()` or after `createProject()` with `initialMessage`.

Returns:

- `content` (string): The AI's full response text
- `messageId` (string): The AI message ID (use with `_dev.getMessageTraces()`)
- `previewUrl` (string): The project's preview URL

Options:

- `timeout` (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)

Throws an error if the stream fails or the timeout is reached.
##### `getPreviewUrl(projectId: string): string`

Get the preview URL for a project. This is a synchronous method that constructs the URL from the project ID.
##### `publish(projectId: string, options?): Promise<DeploymentResponse>`

Publish (deploy) a project to make it publicly accessible. The deployment runs asynchronously; use `waitForProjectPublished()` to wait for completion.

Options:

- `name` (optional): Custom slug for the published URL

Returns:

- `status` (string): Deployment status
- `deployment_id` (string): The deployment ID
- `url` (string): The published URL (may not be available until deployment completes)
##### `getPublishedUrl(projectId: string): Promise<string | null>`

Get the published URL for a project, or `null` if not published. Fetches the latest project details to check publication status.
##### `waitForProjectReady(projectId: string, options?): Promise<ProjectResponse>`

Wait for a project to reach `"completed"` status. Projects start in `"in_progress"` status while being created/built.

Options:

- `pollInterval` (optional): Time between polls in ms (default: 2000)
- `timeout` (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)
- `onProgress` (optional): Callback for status updates

Throws an error if the project fails or the timeout is reached.
##### `waitForProjectPublished(projectId: string, options?): Promise<ProjectResponse>`

Wait for a project to be published (deployed) and have a live URL.

Options:

- `pollInterval` (optional): Time between polls in ms (default: 3000)
- `timeout` (optional): Maximum time to wait in ms (default: 600000 = 10 minutes)
- `onProgress` (optional): Callback for status updates

Throws an error if the timeout is reached.
##### `remixProject(sourceProjectId: string, options): Promise<string>`

Remix (fork) an existing project, optionally at a specific message point in time.

When `messageId` is provided, the remix captures the project state as it was just before that message was processed (default `remixMode: "before"`). Set `remixMode: "including"` to include the message and its AI response in the remix. Without `messageId`, the full current state is remixed.

Returns the remix job ID for polling with `waitForRemix()`.

Options:

- `workspaceId` (required): Target workspace for the new project
- `messageId` (optional): Message ID to snapshot at; by default the remix reflects the project state just before this message
- `remixMode` (optional): `"before"` (default) captures state before the message; `"including"` captures state after the message and its AI response
- `includeHistory` (optional, default: `false`): Whether to preserve chat history
- `includeCustomKnowledge` (optional, default: `false`): Whether to copy custom instructions/knowledge
- `initialMessage` (optional): Initial chat message to send after remix completes
- `skipInitialRemixMessage` (optional, default: `false`): When true, suppresses the default "I've successfully remixed this project" message
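The `remixMode` semantics can be pictured as choosing a cut point in the message history. A small self-contained sketch, where the `Message` shape and the `snapshotMessages` helper are illustrative only, not the SDK's internals:

```ts
interface Message {
  id: string;
  role: "user" | "assistant";
}

// Which messages survive into the remixed project for a given snapshot point.
function snapshotMessages(
  history: Message[],
  messageId?: string,
  mode: "before" | "including" = "before",
): Message[] {
  if (!messageId) return history; // no messageId: remix the full current state
  const i = history.findIndex((m) => m.id === messageId);
  if (i === -1) return history;
  if (mode === "before") return history.slice(0, i);
  // "including": keep the message and its AI response, if one follows
  const end = history[i + 1]?.role === "assistant" ? i + 2 : i + 1;
  return history.slice(0, end);
}
```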
##### `waitForRemix(sourceProjectId: string, jobId: string, options?): Promise<RemixResult>`

Wait for a remix operation to complete. Polls until the job finishes.

Returns:

- `projectId` (string): The ID of the newly created project

Options:

- `pollInterval` (optional): Time between polls in ms (default: 2000)
- `timeout` (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)
- `onProgress` (optional): Callback with `(status, step?)` for progress updates

Throws an error if the remix fails or the timeout is reached.
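The poll-until-done loop described here (and in the other `waitFor*` helpers) follows a common pattern; a minimal sketch, where the `JobStatus` shape and function name are illustrative rather than the SDK's actual implementation:

```ts
type JobStatus = { status: "pending" | "running" | "completed" | "failed"; step?: string };

// Poll `check` until the job completes, fails, or the deadline passes.
async function pollUntilDone(
  check: () => Promise<JobStatus>,
  opts: {
    pollInterval?: number;
    timeout?: number;
    onProgress?: (status: string, step?: string) => void;
  } = {},
): Promise<JobStatus> {
  const { pollInterval = 2000, timeout = 300000, onProgress } = opts;
  const deadline = Date.now() + timeout;
  for (;;) {
    const job = await check();
    onProgress?.(job.status, job.step);
    if (job.status === "completed") return job;
    if (job.status === "failed") throw new Error("job failed");
    if (Date.now() >= deadline) throw new Error(`timed out after ${timeout}ms`);
    await new Promise((resolve) => setTimeout(resolve, pollInterval));
  }
}
```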
#### `_dev` (developer/experimental APIs)

These methods are not part of the stable v1 surface and may change without notice.

##### `_dev.getMessageTraces(projectId: string, messageId: string, options?): Promise<MessageTracesResponse>`

Fetch Braintrust trace spans for a specific chat message. The `messageId` is available from the `ChatResponse` returned by `waitForResponse()`.

When a purpose has multiple spans (e.g. `main_agent` across turns), only the last span is returned; it contains the full accumulated context.

Options:

- `purposes` (optional): Filter spans by purpose (e.g. `["main_agent", "knowledge_rag"]`)

Returns:

- `message_id` (string): The message ID
- `braintrust_span_id` (string): The Braintrust span ID
- `root_span_id` (string): The root span ID
- `spans` (`TraceSpan[]`): The filtered trace spans
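The "last span per purpose" behaviour amounts to a simple reduction over the span list. A sketch, using a minimal `purpose`-only span shape for illustration:

```ts
interface SpanLike {
  purpose: string;
  // real trace spans carry more fields
}

// Keep only the last span seen for each purpose, optionally filtering first.
function lastSpanPerPurpose<T extends SpanLike>(spans: T[], purposes?: string[]): T[] {
  const byPurpose = new Map<string, T>();
  for (const span of spans) {
    if (!purposes || purposes.includes(span.purpose)) {
      byPurpose.set(span.purpose, span); // later spans overwrite earlier ones
    }
  }
  return [...byPurpose.values()];
}
```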
##### `_dev.getMessageTracesBatch(queries, options?): Promise<BatchTracesResult>`

Fetch traces for multiple messages across projects in parallel.

- `queries`: Array of `{ projectId, messageId }` to fetch
- `options.purposes` (optional): Filter spans by purpose (applied to all queries)
- `options.concurrency` (optional): Max parallel requests (default: 5)

Returns:

- `traces`: Map of messageId → `MessageTracesResponse`
- `errors`: Map of messageId → `Error` (for failed requests)
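A `concurrency` cap like this is typically implemented with a small worker pool. A self-contained sketch of the pattern (not the SDK's actual implementation):

```ts
// Run `worker` over `items` with at most `concurrency` calls in flight.
async function mapWithConcurrency<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  concurrency = 5,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  const run = async (): Promise<void> => {
    while (next < items.length) {
      const i = next++; // claim an index before the first await
      results[i] = await worker(items[i]);
    }
  };
  await Promise.all(Array.from({ length: Math.min(concurrency, items.length) }, run));
  return results;
}
```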
##### `inviteCollaborator(workspaceId: string, options): Promise<WorkspaceMembershipResponse>`

Invite a user to a workspace.

Options:

- `email` (required): Email address of the user to invite
- `role` (optional): Role to assign (`"admin"` | `"collaborator"` | `"member"` | `"viewer"`)
##### `listWorkspaceMembers(workspaceId: string): Promise<WorkspaceMembershipResponse[]>`

List all members of a workspace.

##### `removeWorkspaceMember(workspaceId: string, userId: string): Promise<void>`

Remove a member from a workspace.

##### `getProject(projectId: string): Promise<ProjectResponse>`

Get project details by ID.
## Types

The SDK exports TypeScript types for all API responses. See `src/types.ts` for the full list.

```ts
import type {
  WorkspaceWithMembership,
  ProjectResponse,
  CreateProjectOptions,
  ChatResponse,
  CustomModelConfig,
  FileInput,
  TracePurpose,
  TraceSpan,
  MessageTracesResponse,
  // ... etc
} from "@lovable.dev/sdk";
```

### FileInput
For Node.js or non-browser environments, use `FileInput` instead of the browser `File` API:

```ts
interface FileInput {
  name: string;                          // Original file name (e.g., "screenshot.png")
  data: Blob | ArrayBuffer | Uint8Array; // File contents
  type: string;                          // MIME type (e.g., "image/png")
}
```
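In Node.js a `FileInput` can be assembled from raw bytes; here is a hypothetical helper (not part of the SDK) that guesses the MIME type from the file extension, using a tiny illustrative lookup table:

```ts
// Mirrors the SDK's FileInput shape.
interface FileInput {
  name: string;
  data: Blob | ArrayBuffer | Uint8Array;
  type: string;
}

// Illustrative subset; extend as needed.
const MIME_BY_EXT: Record<string, string> = {
  png: "image/png",
  jpg: "image/jpeg",
  jpeg: "image/jpeg",
  gif: "image/gif",
  pdf: "application/pdf",
};

// Hypothetical helper: wrap bytes as a FileInput, inferring the MIME type.
function fileInputFromBytes(name: string, data: Uint8Array): FileInput {
  const ext = name.split(".").pop()?.toLowerCase() ?? "";
  return { name, data, type: MIME_BY_EXT[ext] ?? "application/octet-stream" };
}
```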
### CustomModelConfig

Configuration for routing the main agent to a custom OpenAI-compatible endpoint:

```ts
interface CustomModelConfig {
  endpoint: string;  // Base URL (e.g., "https://my-vllm.example.com/v1")
  apiKey: string;    // API key for the endpoint
  modelName: string; // Model identifier (e.g., "meta-llama/Llama-3.3-70B-Instruct")
}
```

### TracePurpose
Available trace span purposes:

```ts
type TracePurpose = "main_agent" | "codebase_rag" | "knowledge_rag" | "review";
```
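When purpose strings cross an untyped boundary (e.g. parsed JSON), a type-predicate guard can narrow them back to `TracePurpose`. A small sketch, with the constant array and guard being illustrative additions rather than SDK exports:

```ts
type TracePurpose = "main_agent" | "codebase_rag" | "knowledge_rag" | "review";

const TRACE_PURPOSES: readonly TracePurpose[] = [
  "main_agent",
  "codebase_rag",
  "knowledge_rag",
  "review",
];

// Narrow an arbitrary string to TracePurpose.
function isTracePurpose(value: string): value is TracePurpose {
  return (TRACE_PURPOSES as readonly string[]).includes(value);
}
```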