Canary CLI
Run local test runs, expose your local app via a tunnel, and stream results into Canary.
Install
npm install -g @canaryai/cli
# or
bun add -g @canaryai/cli
Login
canary login
Quickstart (local testing)
- Start your app locally.
- Start a run (auto-tunnel + run):
canary run --port 5173 --title "Login smoke"
- Open the watch URL printed in the terminal.
Tunnel only
canary tunnel --port 5173
MCP server
canary mcp
Tools:
- local_run_tests(port, instructions, title)
- local_wait_for_results(runId)
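These tools can be driven by any MCP-capable client. Below is a minimal sketch using the official @modelcontextprotocol/sdk TypeScript client over stdio; the argument values and the way the run id would be read back are illustrative assumptions, not part of this package's documented API.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn `canary mcp` as a stdio MCP server and connect a client to it.
const transport = new StdioClientTransport({ command: "canary", args: ["mcp"] });
const client = new Client({ name: "canary-example", version: "0.0.1" }, { capabilities: {} });
await client.connect(transport);

// Start a local run against the app on port 5173 (argument values are illustrative;
// check the tool schemas exposed by `canary mcp` for the exact shape).
const started = await client.callTool({
  name: "local_run_tests",
  arguments: { port: 5173, instructions: "Log in with the demo account", title: "Login smoke" },
});
console.log(started.content);

// With the run id taken from the response, block until results are ready:
// await client.callTool({ name: "local_wait_for_results", arguments: { runId } });

await client.close();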
Environment variables
- CANARY_API_URL (default https://api.trycanary.ai)
- CANARY_APP_URL (default https://app.trycanary.ai)
- CANARY_API_TOKEN (optional; canary login stores a token automatically)
- CANARY_LOCAL_PORT (optional default port for canary run / canary tunnel)
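For example, CANARY_LOCAL_PORT can stand in for the --port flag when the CLI is launched from a script. A minimal sketch using Node's child_process (the port and title values are illustrative):
import { spawn } from "node:child_process";

// Launch `canary run` with a default port supplied via the documented
// environment variable instead of the --port flag.
const child = spawn("canary", ["run", "--title", "Login smoke"], {
  stdio: "inherit",
  env: { ...process.env, CANARY_LOCAL_PORT: "5173" },
});

child.on("exit", (code) => process.exit(code ?? 1));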
Programmatic usage
You can trigger a suite programmatically without shelling out to the CLI:
import { canary } from "@canaryai/cli";
const result = await canary.run({
projectRoot: "/path/to/repo",
testDir: ["tests/smoke"],
cliArgs: ["--grep", "login"],
healing: {
apiKey: process.env.AI_API_KEY,
provider: "openai",
model: "gpt-4o-mini",
timeoutMs: 120_000,
maxActions: 50,
warnOnly: true,
},
stdio: "pipe",
});
if (!result.ok) {
console.error("suite failed", result.summary);
}
Notes:
- Defaults mirror the CLI: healing on, Playwright config respected.
- result.summary is derived from Playwright’s JSON reporter plus healed counts from the AI event log.
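In CI, a thin wrapper like the sketch below, using only the options shown above, runs the suite with defaults and fails the job when the suite fails:
import { canary } from "@canaryai/cli";

// Minimal CI wrapper: run with defaults and propagate failure to the job.
const result = await canary.run({ projectRoot: process.cwd() });

if (!result.ok) {
  console.error("suite failed", result.summary);
  process.exit(1);
}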
