
@lovable.dev/sdk

v0.1.1

TypeScript SDK for the Lovable API.

Currently in preview.

Installation

npm install @lovable.dev/sdk

Usage

import { LovableClient } from "@lovable.dev/sdk";

const client = new LovableClient({
  apiKey: "lov_your-api-key",
});

// List workspaces
const workspaces = await client.listWorkspaces();

// 1. Create a project
const project = await client.createProject(workspaces[0].id, {
  description: "Best todo app",
  initialMessage: "Create a todo app with authentication"
});

// 2. Wait for the AI response and get the preview URL
const response = await client.waitForResponse(project.id);
console.log(response.content);    // AI's response text
console.log(response.messageId);  // AI message ID (for traces)
console.log(response.previewUrl); // Preview URL for the project

// 3. Send a follow-up chat message
await client.chat(project.id, {
  message: "Add a footer",
});

// 4. Send a message with file attachments
import { readFile } from "fs/promises";
const imageData = await readFile("design.png");
await client.chat(project.id, {
  message: "Update the hero section to match this design",
  files: [{ name: "design.png", data: imageData, type: "image/png" }],
});

// 5. Publish the project and get the live URL
await client.publish(project.id);
const published = await client.waitForProjectPublished(project.id);
console.log(published.url); // Live public URL

Remixing a project at a specific message

const client = new LovableClient({ apiKey: "lov_your-api-key" });

// Remix a project at the state just before a specific message
const jobId = await client.remixProject("source-project-id", {
  workspaceId: "target-workspace-id",
  messageId: "message-id-to-snapshot-at",
  // remixMode: "including", // use "including" to keep the message and its AI response
  includeHistory: true,
  includeCustomKnowledge: true,
});

// Wait for the remix to complete
const { projectId } = await client.waitForRemix("source-project-id", jobId, {
  onProgress: (status, step) => console.log(`Remix: ${status}`, step),
});

console.log(`Remixed project: ${projectId}`);

// Send a follow-up message to the remixed project
await client.chat(projectId, { message: "Add dark mode" });
const response = await client.waitForResponse(projectId);

Using a custom model

You can route the main agent to a custom OpenAI-compatible endpoint for eval and RL workflows:

await client.chat(project.id, {
  message: "Add a dark mode toggle",
  customModel: {
    endpoint: "https://my-vllm.example.com/v1",
    apiKey: "sk-...",
    modelName: "meta-llama/Llama-3.3-70B-Instruct",
  },
});

Fetching message traces (_dev)

Trace APIs are available under the _dev namespace. These are not part of the stable v1 surface and may change without notice.

const response = await client.waitForResponse(project.id);

// Fetch all traces for the message
const traces = await client._dev.getMessageTraces(project.id, response.messageId);
console.log(traces.spans);

// Filter by purpose (e.g. only the main agent span)
const agentTraces = await client._dev.getMessageTraces(project.id, response.messageId, {
  purposes: ["main_agent"],
});

// Batch fetch traces for multiple messages
const result = await client._dev.getMessageTracesBatch([
  { projectId: "proj-1", messageId: "msg-1" },
  { projectId: "proj-2", messageId: "msg-2" },
], { purposes: ["main_agent"], concurrency: 5 });

for (const [messageId, trace] of result.traces) {
  console.log(messageId, trace.spans.length);
}
for (const [messageId, error] of result.errors) {
  console.error(messageId, error.message);
}

API Reference

LovableClient

Constructor

new LovableClient(options: LovableClientOptions)
  • apiKey (required): Your Lovable API key
  • baseUrl (optional): Override the default API base URL

Methods

listWorkspaces(): Promise<WorkspaceWithMembership[]>

List all workspaces the authenticated user has access to.

getWorkspace(workspaceId: string): Promise<WorkspaceWithMembership>

Get a specific workspace by ID.

listProjects(workspaceId: string, options?): Promise<ProjectResponse[]>

List projects in a workspace.

Options:

  • limit (optional): Maximum number of projects to return
  • visibility (optional): Filter by visibility ("all" | "personal" | "public" | "workspace")

createProject(workspaceId: string, options): Promise<ProjectResponse>

Create a new project in a workspace.

Options:

  • description (required): Project description
  • techStack (optional): Technology stack (e.g., "react")
  • visibility (optional): Project visibility ("draft" | "private" | "public")
  • templateProjectId (optional): ID of a template project to clone
  • initialMessage (optional): Initial chat message to send to the AI agent
  • files (optional): Array of files to attach (browser File objects or FileInput objects)

chat(projectId: string, options): Promise<void>

Send a chat message to a project's AI agent.

Options:

  • message (required): The message to send
  • files (optional): Array of files to attach (browser File objects or FileInput objects)
  • chatOnly (optional): If true, only chat without making code changes
  • customModel (optional): Route the main agent to a custom OpenAI-compatible endpoint (see CustomModelConfig)

Note: This is an asynchronous operation. The API accepts the message and processes it in the background. Use waitForResponse() to wait for the AI's reply.

waitForResponse(projectId: string, options?): Promise<ChatResponse>

Wait for the AI's response to a chat message. Connects to the project's message stream (SSE) and returns the full response once complete.

Use this after chat() or after createProject() with initialMessage.

Returns:

  • content (string): The AI's full response text
  • messageId (string): The AI message ID (use with _dev.getMessageTraces())
  • previewUrl (string): The project's preview URL

Options:

  • timeout (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)

Throws an error if the stream fails or timeout is reached.

getPreviewUrl(projectId: string): string

Get the preview URL for a project. This is a synchronous method that constructs the URL from the project ID.

publish(projectId: string, options?): Promise<DeploymentResponse>

Publish (deploy) a project to make it publicly accessible. The deployment runs asynchronously — use waitForProjectPublished() to wait for completion.

Options:

  • name (optional): Custom slug for the published URL

Returns:

  • status (string): Deployment status
  • deployment_id (string): The deployment ID
  • url (string): The published URL (may not be available until deployment completes)

getPublishedUrl(projectId: string): Promise<string | null>

Get the published URL for a project, or null if not published. Fetches the latest project details to check publication status.

waitForProjectReady(projectId: string, options?): Promise<ProjectResponse>

Wait for a project to reach "completed" status. Projects start in "in_progress" status while being created/built.

Options:

  • pollInterval (optional): Time between polls in ms (default: 2000)
  • timeout (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)
  • onProgress (optional): Callback for status updates

Throws an error if the project fails or timeout is reached.
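The waiting helpers above all share the same poll-until-done shape: fetch status on a fixed interval until a terminal state or the deadline. A minimal sketch of those semantics, using a stand-in `getStatus` function rather than the SDK (the names here are illustrative, not part of the API):

```typescript
// Illustrative sketch of the poll-until-done semantics described above.
// `getStatus` is a stand-in status fetch, not an SDK call.
async function pollUntilCompleted(
  getStatus: () => Promise<string>,
  { pollInterval = 2000, timeout = 300_000 } = {},
): Promise<void> {
  const deadline = Date.now() + timeout;
  for (;;) {
    const status = await getStatus();
    if (status === "completed") return;
    if (status === "failed") throw new Error("project failed");
    if (Date.now() + pollInterval > deadline) {
      throw new Error("timed out waiting for project");
    }
    await new Promise((resolve) => setTimeout(resolve, pollInterval));
  }
}

// Fake status source that completes on the third poll:
let polls = 0;
await pollUntilCompleted(
  async () => (++polls >= 3 ? "completed" : "in_progress"),
  { pollInterval: 10, timeout: 1_000 },
);
console.log(`completed after ${polls} polls`); // completed after 3 polls
```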

waitForProjectPublished(projectId: string, options?): Promise<ProjectResponse>

Wait for a project to be published (deployed) and have a live URL.

Options:

  • pollInterval (optional): Time between polls in ms (default: 3000)
  • timeout (optional): Maximum time to wait in ms (default: 600000 = 10 minutes)
  • onProgress (optional): Callback for status updates

Throws an error if timeout is reached.

remixProject(sourceProjectId: string, options): Promise<string>

Remix (fork) an existing project, optionally at a specific message point in time.

When messageId is provided, the remix captures the project state as it was just before that message was processed (default remixMode: "before"). Set remixMode: "including" to include the message and its AI response in the remix. Without messageId, the full current state is remixed.

Returns the remix job ID for polling with waitForRemix().

Options:

  • workspaceId (required): Target workspace for the new project
  • messageId (optional): Message ID to snapshot at — by default the remix reflects the project state just before this message
  • remixMode (optional): "before" (default) captures state before the message; "including" captures state after the message and its AI response
  • includeHistory (optional, default: false): Whether to preserve chat history
  • includeCustomKnowledge (optional, default: false): Whether to copy custom instructions/knowledge
  • initialMessage (optional): Initial chat message to send after remix completes
  • skipInitialRemixMessage (optional, default: false): When true, suppresses the default "I've successfully remixed this project" message

waitForRemix(sourceProjectId: string, jobId: string, options?): Promise<RemixResult>

Wait for a remix operation to complete. Polls until the job finishes.

Returns:

  • projectId (string): The ID of the newly created project

Options:

  • pollInterval (optional): Time between polls in ms (default: 2000)
  • timeout (optional): Maximum time to wait in ms (default: 300000 = 5 minutes)
  • onProgress (optional): Callback with (status, step?) for progress updates

Throws an error if the remix fails or timeout is reached.

_dev (developer/experimental APIs)

These methods are not part of the stable v1 surface and may change without notice.

_dev.getMessageTraces(projectId: string, messageId: string, options?): Promise<MessageTracesResponse>

Fetch Braintrust trace spans for a specific chat message. The messageId is available from the ChatResponse returned by waitForResponse().

When a purpose has multiple spans (e.g. main_agent across turns), only the last span is returned — it contains the full accumulated context.

Options:

  • purposes (optional): Filter spans by purpose (e.g. ["main_agent", "knowledge_rag"])

Returns:

  • message_id (string): The message ID
  • braintrust_span_id (string): The Braintrust span ID
  • root_span_id (string): The root span ID
  • spans (TraceSpan[]): The filtered trace spans

_dev.getMessageTracesBatch(queries, options?): Promise<BatchTracesResult>

Fetch traces for multiple messages across projects in parallel.

  • queries: Array of { projectId, messageId } to fetch
  • options.purposes (optional): Filter spans by purpose (applied to all queries)
  • options.concurrency (optional): Max parallel requests (default: 5)

Returns:

  • traces: Map of messageId → MessageTracesResponse
  • errors: Map of messageId → Error (for failed requests)

inviteCollaborator(workspaceId: string, options): Promise<WorkspaceMembershipResponse>

Invite a user to a workspace.

Options:

  • email (required): Email address of the user to invite
  • role (optional): Role to assign ("admin" | "collaborator" | "member" | "viewer")

listWorkspaceMembers(workspaceId: string): Promise<WorkspaceMembershipResponse[]>

List all members of a workspace.

removeWorkspaceMember(workspaceId: string, userId: string): Promise<void>

Remove a member from a workspace.

getProject(projectId: string): Promise<ProjectResponse>

Get project details by ID.

Types

The SDK exports TypeScript types for all API responses. See src/types.ts for the full list.

import type {
  WorkspaceWithMembership,
  ProjectResponse,
  CreateProjectOptions,
  ChatResponse,
  CustomModelConfig,
  FileInput,
  TracePurpose,
  TraceSpan,
  MessageTracesResponse,
  // ... etc
} from "@lovable.dev/sdk";

FileInput

For Node.js or non-browser environments, use FileInput instead of the browser File API:

interface FileInput {
  name: string;              // Original file name (e.g., "screenshot.png")
  data: Blob | ArrayBuffer | Uint8Array;  // File contents
  type: string;              // MIME type (e.g., "image/png")
}
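For example, an attachment can be built from in-memory bytes. The interface is restated locally here so the snippet stands alone (import it from the SDK in real code), and the bytes are placeholder data:

```typescript
// FileInput restated locally so this snippet stands alone;
// in real code, import the type from "@lovable.dev/sdk".
interface FileInput {
  name: string;
  data: Blob | ArrayBuffer | Uint8Array;
  type: string;
}

// Placeholder bytes; in practice use e.g. `await readFile("design.png")`.
const pngBytes = new Uint8Array([0x89, 0x50, 0x4e, 0x47]);

const attachment: FileInput = {
  name: "design.png",
  data: pngBytes,
  type: "image/png",
};

console.log(attachment.name, attachment.type); // design.png image/png
```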

CustomModelConfig

Configuration for routing the main agent to a custom OpenAI-compatible endpoint:

interface CustomModelConfig {
  endpoint: string;   // Base URL (e.g., "https://my-vllm.example.com/v1")
  apiKey: string;     // API key for the endpoint
  modelName: string;  // Model identifier (e.g., "meta-llama/Llama-3.3-70B-Instruct")
}

TracePurpose

Available trace span purposes:

type TracePurpose = "main_agent" | "codebase_rag" | "knowledge_rag" | "review";
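As a sketch, filtering spans by purpose client-side (mirroring what the `purposes` option does server-side) looks like this; the minimal span shape here is illustrative, as the SDK's real TraceSpan type carries more fields:

```typescript
type TracePurpose = "main_agent" | "codebase_rag" | "knowledge_rag" | "review";

// Minimal illustrative span shape; the SDK's TraceSpan carries more fields.
interface MiniSpan {
  purpose: TracePurpose;
  id: string;
}

const spans: MiniSpan[] = [
  { purpose: "main_agent", id: "span-1" },
  { purpose: "knowledge_rag", id: "span-2" },
  { purpose: "main_agent", id: "span-3" },
];

// Keep only main-agent spans, as `purposes: ["main_agent"]` would.
const wanted: TracePurpose[] = ["main_agent"];
const filtered = spans.filter((s) => wanted.includes(s.purpose));
console.log(filtered.map((s) => s.id)); // [ 'span-1', 'span-3' ]
```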