
@presidio-dev/factifai-agent

v2.0.0


An AI-powered browser automation testing agent driven by LLMs.


Factifai Agent



Overview

Factifai Agent is a powerful CLI tool for AI-driven browser automation testing that integrates seamlessly into development and testing workflows and CI/CD pipelines. Leveraging Large Language Models (LLMs), it interprets natural language test instructions and executes them through a structured, reliable process.

Built on LangGraph and Playwright, it enables testers and developers to write test cases in plain English while maintaining precision and reproducibility. The tool provides rich CLI visualization of test progress with real-time feedback, making it ideal for both interactive use and automated testing environments.

Demo

[demo recording]

Key Features

  • Natural Language Test Instructions: Write test cases in plain English
  • LLM-Powered Test Interpretation: Automatically converts natural language to executable test steps
  • CLI-First Design: Purpose-built as a command-line tool for both interactive use and automation
  • CI/CD Pipeline Integration: Easily integrate into GitHub Actions, Jenkins, GitLab CI, and more
  • Rich Progress Visualization: Beautiful terminal interfaces showing real-time test execution progress
  • Playwright Integration: Leverages Playwright's robust browser automation capabilities
  • LangGraph Architecture: Uses a directed state graph for reliable test execution flow
  • Cross-Browser Support: Works across Chromium, Firefox, and WebKit
  • Detailed Test Reporting: Generates comprehensive test execution reports
  • Step-by-Step Verification: Validates each test step against expected outcomes
  • Automatic Retry Mechanism: Intelligently retries failed steps
  • Multiple LLM Providers: Supports OpenAI, Azure OpenAI and AWS Bedrock

Requirements

  • Node.js 18+
  • Playwright with browsers (must be installed with npx playwright install --with-deps)

Installation

# Install globally
npm install -g @presidio-dev/factifai-agent

# Install Playwright globally
npm install -g playwright

# Install Playwright and dependencies
npx playwright install --with-deps

Quick Start

With OpenAI

# Set your API key (only needed once, persists across sessions)
factifai-agent config --set OPENAI_API_KEY=your-api-key-here

# Run your test
factifai-agent --model openai run "Navigate to duckduckgo.com and search 'eagles'"

With Azure OpenAI

# Set your API key, instance name, deployment name and API version (only needed once, persists across sessions)
factifai-agent config --set AZURE_OPENAI_API_KEY=your-api-key-here
factifai-agent config --set AZURE_OPENAI_API_INSTANCE_NAME=your-instance-name
factifai-agent config --set AZURE_OPENAI_API_DEPLOYMENT_NAME=your-deployment-name
factifai-agent config --set AZURE_OPENAI_API_VERSION=your-api-version

# Run your test
factifai-agent --model azure-openai run "Navigate to duckduckgo.com and search 'eagles'"

With AWS Bedrock

# Set your AWS credentials (only needed once, persists across sessions)
factifai-agent config --set AWS_ACCESS_KEY_ID=your-access-key-id
factifai-agent config --set AWS_SECRET_ACCESS_KEY=your-secret-access-key
factifai-agent config --set AWS_DEFAULT_REGION=us-west-2

# Run your test
factifai-agent --model bedrock run "Navigate to duckduckgo.com and search 'eagles'"

Usage Guide

Commands

Test Automation

# Run with test instructions in the command
factifai-agent --model openai run "Your test instructions"

# Run from a file
factifai-agent --model openai run --file ./examples/test-case.txt

# With custom session ID
factifai-agent --model openai run --session my-test-123 "Your test instruction"

Configuration Management

# Show current configuration
factifai-agent config --show

# Set default model provider (persists across sessions)
factifai-agent config --model openai

# Set individual configuration values (persists across sessions)
factifai-agent config --set OPENAI_API_KEY=your-api-key
factifai-agent config --set OPENAI_MODEL=gpt-4.1

Model Management

# List all available models
factifai-agent models

Usage Examples

Cross-Browser Compatibility Testing (Coming Soon)

You can run the same test across different browsers to ensure consistent functionality:

# Test with Firefox
factifai-agent run --browser firefox "Verify that user registration works on our website"

# Test with WebKit (Safari)
factifai-agent run --browser webkit "Verify that user registration works on our website"

# Test with Chromium (default)
factifai-agent run "Verify that user registration works on our website"

Test File Format

Create structured test files for complex scenarios:

**Objective:** Search on DuckDuckGo

**Test Steps:**

1. **Navigate to duckduckgo.com**
   * **Expected:** DuckDuckGo homepage loads

2. **Search for "eagles"**
   * **Action:** Type "eagles" in search box and press Enter
   * **Expected:** Search results for "eagles" appear
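
To run a structured test case like the one above, save it to a file and point the documented --file option at it. A sketch (the file name is arbitrary; the final command assumes the CLI is installed and an API key is configured, so it is shown commented out):

```shell
# Save the structured test case from above to a file (name is arbitrary).
cat > duckduckgo-search.txt <<'EOF'
**Objective:** Search on DuckDuckGo

**Test Steps:**

1. **Navigate to duckduckgo.com**
   * **Expected:** DuckDuckGo homepage loads

2. **Search for "eagles"**
   * **Action:** Type "eagles" in search box and press Enter
   * **Expected:** Search results for "eagles" appear
EOF

# Then run it with the documented --file option (requires factifai-agent
# to be installed and an API key to be configured):
# factifai-agent --model openai run --file ./duckduckgo-search.txt
```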

Configuration

Factifai Agent uses a persistent configuration system that stores settings in ~/.factifai/config.json. This ensures your settings are remembered across terminal sessions.
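
The settings file is plain JSON. As an illustration only (the exact schema is an assumption, not documented), it might look like this after configuring an OpenAI key and model; run factifai-agent config --show to inspect the real layout:

```json
{
  "MODEL_PROVIDER": "openai",
  "OPENAI_API_KEY": "sk-...",
  "OPENAI_MODEL": "gpt-4.1"
}
```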

Setting Configuration Values

# Model selection
factifai-agent config --set MODEL_PROVIDER=openai  # "openai" | "azure-openai" | "bedrock"
factifai-agent config --set OPENAI_MODEL=gpt-4.1
factifai-agent config --set BEDROCK_MODEL=us.anthropic.claude-3-7-sonnet-20250219-v1:0

# API credentials
factifai-agent config --set OPENAI_API_KEY=your-api-key-here
factifai-agent config --set AZURE_OPENAI_API_KEY=your-api-key-here
factifai-agent config --set AZURE_OPENAI_API_INSTANCE_NAME=your-instance-name
factifai-agent config --set AZURE_OPENAI_API_DEPLOYMENT_NAME=your-deployment-name
factifai-agent config --set AZURE_OPENAI_API_VERSION=your-api-version
factifai-agent config --set AWS_DEFAULT_REGION=us-west-2
factifai-agent config --set AWS_ACCESS_KEY_ID=your-access-key-id
factifai-agent config --set AWS_SECRET_ACCESS_KEY=your-secret-access-key

Viewing Current Configuration

# Show all configuration values
factifai-agent config --show

Clearing Configuration Values

To clear configuration values, you'll need to manually edit or remove the config file:

# Location of the configuration file
~/.factifai/config.json

You can either:

  • Edit this file directly and remove specific keys
  • Delete the file to reset all configuration values
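
If you would rather remove a single key than hand-edit the file, one option is a small scripted edit. A sketch using python3's standard json module, demonstrated on a throwaway copy; point CONFIG at ~/.factifai/config.json to edit the real file (the key names here are just examples):

```shell
# Work on a throwaway copy; set CONFIG="$HOME/.factifai/config.json" for real use.
CONFIG="$(mktemp)"
printf '{"OPENAI_API_KEY": "sk-xxx", "OPENAI_MODEL": "gpt-4.1"}' > "$CONFIG"

# Delete one key (OPENAI_API_KEY here) and rewrite the file in place.
python3 - "$CONFIG" OPENAI_API_KEY <<'EOF'
import json, sys

path, key = sys.argv[1], sys.argv[2]
with open(path) as f:
    cfg = json.load(f)
cfg.pop(key, None)  # remove the key if present
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
```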

Configuration Priority

The system uses the following priority order when determining configuration values:

  1. Persistent Configuration - Values set with config --set (stored in ~/.factifai/config.json)
  2. Environment Variables - Values set with traditional environment variables
  3. Default Values - Hardcoded defaults in the application

Environment variables can still be used alongside the configuration system, which is useful for:

  • Temporary overrides for specific sessions
  • CI/CD pipelines
    • Development environments

# Example of using environment variables (temporary, session only)
export OPENAI_MODEL=gpt-4.1-turbo
factifai-agent run "Your test instructions"

Supported Models

| Provider | Configuration | Available Models |
|----------|---------------|------------------|
| OpenAI | OPENAI_API_KEY | gpt-4.1 (default), gpt-4o |
| Azure OpenAI | AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_INSTANCE_NAME, AZURE_OPENAI_API_DEPLOYMENT_NAME, AZURE_OPENAI_API_VERSION | gpt-4.1 (default), gpt-4o |
| AWS Bedrock | AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION | us.anthropic.claude-3-7-sonnet-20250219-v1:0 (default), anthropic.claude-3-5-sonnet-20240620-v1:0 |

💡 Best Practices for Test Creation

Writing Effective Tests

  1. Create Focused Tests

    • Keep tests small and focused on single user journeys
    • Test one feature or functionality at a time
    • Break complex workflows into separate test cases
    • Example: ✅ "Check login functionality" instead of ❌ "Verify entire website works"
  2. Use Descriptive Language

    • Be specific about actions and targets
    • Include element identifiers when possible
    • Use clear, unambiguous instructions
    • Example: ✅ "Type 'standard_user' into the username field" instead of ❌ "enter username"
  3. Include Expected Outcomes

    • Always specify what success looks like
    • Include explicit verification points
    • Mention what elements or text should appear
    • Example: ✅ "Verify that the account dashboard displays the username"
  4. Structure Your Test Instructions

    • Use numbered steps for complex scenarios
    • Group related actions together
    • Include setup and teardown steps when needed
    • Example: ✅ "1. Navigate to login page, 2. Enter credentials, 3. Click submit, 4. Verify dashboard appears"

Optimizing for LLMs

  1. Handle Dynamic Content

    • Use flexible language for elements that may change
    • Provide fallback strategies
    • Describe elements by their function rather than exact text
    • Example: ✅ "Click on the first product in the list" instead of ❌ "Click on 'Sauce Labs Backpack'"
  2. Avoid Ambiguity

    • Use standard web terminology consistently
    • Be explicit about which element to interact with when multiple similar elements exist
    • Break complex instructions into simpler steps
    • Example: ✅ "Click the 'Add to Cart' button for the 'Sauce Labs Backpack' product" instead of ❌ "Add the item"

Test Management

  1. Organize Tests in Files for E2E and Regression

    • Separate end-to-end flows from component-level tests
    • Structure E2E tests to follow user journeys
    • Group regression tests by functional area
    • Example file structure:
      tests/
      ├── e2e/
      │   ├── checkout-flow.txt       # Complete purchase flow
      │   └── account-creation.txt    # Full signup to profile creation
      ├── regression/
      │   ├── auth/                   # Authentication components
      │   │   ├── login.txt
      │   │   └── password-reset.txt
      │   └── product/                # Product functionality
      │       ├── filtering.txt
      │       └── sorting.txt
  2. CI/CD Integration

    • Store credentials securely using environment variables
    • Set appropriate timeouts for test execution
    • Configure retries for potentially flaky tests
    • Example GitHub Actions configuration:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v3
        - run: |
            npm install -g @presidio-dev/factifai-agent
            npm install -g playwright
            npx playwright install --with-deps
            factifai-agent config --set OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
            factifai-agent run --file tests/e2e-tests.txt --retries 2
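
The example file layout from the test-management tips above can be scaffolded in one step; a sketch (directory and file names are simply the ones shown in the example structure):

```shell
# Create the example test layout (names taken from the structure above).
mkdir -p tests/e2e tests/regression/auth tests/regression/product
touch tests/e2e/checkout-flow.txt \
      tests/e2e/account-creation.txt \
      tests/regression/auth/login.txt \
      tests/regression/auth/password-reset.txt \
      tests/regression/product/filtering.txt \
      tests/regression/product/sorting.txt
```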

Example of Good vs. Poor Test Instructions

❌ Poor Example:

Test login at saucedemo site

✅ Good Example:

Test login functionality on saucedemo.com

1. Navigate to https://www.saucedemo.com
2. Enter "standard_user" in the username field
3. Enter "secret_sauce" in the password field 
4. Click the Login button
5. Verify that:
   - The inventory page loads
   - The shopping cart icon is visible
   - The hamburger menu is available in the top-left corner

Architecture

Factifai Agent employs a robust LangGraph-based architecture:

  1. Preprocessing Node: Formats and prepares the test instruction
  2. Parsing Node: Converts natural language to structured test steps
  3. Execution Node: Performs browser actions via Playwright
  4. Tracking Node: Monitors test progress and status
  5. Tool Node: Provides necessary tools for interaction
  6. Report Generation Node: Creates detailed test results


License

MIT © PRESIDIO®