
fakeapi-ai

v1.0.1


A customizable fake API server for frontend development with Ollama integration.


FakeAPI-AI

A customizable Node.js package that allows you to create a fake API server for frontend development and testing. It supports both static JSON responses and dynamic content generation using an Ollama Large Language Model (LLM) based on defined schemas.

Features

  • Static JSON Responses: Define fixed JSON data for specific API routes.
  • Dynamic LLM-Generated Responses: Generate realistic and varied JSON data on the fly using an Ollama model.
  • Schema-Driven Generation: Provide a schema (variable names, types, and descriptions) and the LLM will generate data conforming to it.
  • Array of Objects Support: Easily request an array of generated objects, with a configurable size (nested arrays default to 15 elements when no size is given).
  • CORS Enabled: Automatically handles CORS headers for seamless frontend integration.
  • Simple API: Easy to start and stop the server.

Installation

Install the package in your project:

npm install fakeapi-ai

Ollama Setup

You need to have Ollama installed and running on your system.

Install Ollama: Follow the instructions on the official Ollama website: https://ollama.ai/

Pull a model: Download a model (e.g., llama2, mistral, or codellama:13b-instruct) that you intend to use.

ollama pull llama2

Start the Ollama server:

ollama serve

By default, Ollama runs on http://localhost:11434.
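Before wiring Ollama into your routes, it can help to confirm the server is actually reachable. The sketch below assumes Node 18+ (for the global `fetch`) and uses Ollama's standard `/api/tags` endpoint, which lists the models you have pulled; the function name `listOllamaModels` is just for this example:

```javascript
// Sanity check: is Ollama reachable, and which models are pulled?
// Assumes Node 18+ (global fetch) and Ollama's /api/tags endpoint.
const OLLAMA_URL = "http://localhost:11434";

async function listOllamaModels(baseUrl = OLLAMA_URL) {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    const { models } = await res.json();
    return models.map((m) => m.name); // e.g., ["llama2:latest"]
  } catch {
    return null; // Ollama is not running or unreachable
  }
}
```

If this returns null, start `ollama serve` before launching the fake API.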

Usage

You can integrate fakeapi-ai into your development workflow by creating a simple Node.js script to start it.

Create a file (e.g., run-fake-api.js) in your project's root:

// run-fake-api.js
const { startFakeApiServer, stopFakeApiServer } = require("fakeapi-ai");

const myFakeRoutes = {
  GET: {
    // --- Static Response Example ---
    "/api/static-users": [
      { id: 1, name: "Alice Smith", email: "alice@example.com" },
      { id: 2, name: "Bob Johnson", email: "bob@example.com" },
    ],

    // --- LLM-Generated Single Object Example ---
    "/api/generated-user-profile": {
      ollama: {
        model: "llama2", // Ensure this model is pulled in your Ollama instance
        schema: {
          id: { type: "number", description: "Unique user identifier" },
          username: { type: "string", description: "User's chosen username" },
          email: { type: "string", description: "User's email address" },
          isActive: {
            type: "boolean",
            description: "Whether the user account is active",
          },
          registrationDate: {
            type: "string",
            description: "Date of registration in YYYY-MM-DD format",
          },
          address: {
            type: "object",
            properties: {
              street: { type: "string" },
              city: { type: "string" },
              zipCode: { type: "string" },
            },
          },
          hobbies: {
            type: "array",
            items: { type: "string" },
            arraySize: 3, // Generate 3 hobbies
          },
        },
        options: { temperature: 0.7 }, // Optional: adjust LLM creativity (0.0-2.0)
      },
    },

    // --- LLM-Generated Array of Objects Example ---
    "/api/generated-products": {
      ollama: {
        model: "llama2",
        count: 5, // Generate an array of 5 product objects
        schema: {
          productId: { type: "string", description: "Unique product ID" },
          productName: { type: "string", description: "Name of the product" },
          price: { type: "number", description: "Price of the product" },
          category: { type: "string", description: "Product category" },
          inStock: { type: "boolean", description: "Availability status" },
        },
      },
    },

    // --- LLM-Generated Report with Nested Array Example ---
    "/api/generated-sales-report": {
      ollama: {
        model: "llama2",
        schema: {
          reportId: { type: "string", description: "Unique report identifier" },
          title: { type: "string", description: "Title of the sales report" },
          dateGenerated: {
            type: "string",
            description: "Current date in YYYY-MM-DD",
          },
          salesData: {
            type: "array",
            items: {
              type: "object",
              properties: {
                month: {
                  type: "string",
                  description: "Month name (e.g., January)",
                },
                revenue: {
                  type: "number",
                  description: "Total revenue for the month",
                },
                expenses: {
                  type: "number",
                  description: "Total expenses for the month",
                },
              },
            },
            arraySize: 3, // Nested array with 3 elements (e.g., for 3 months)
          },
          summary: {
            type: "string",
            description: "A brief summary of the sales report data",
          },
        },
      },
    },
  },
  POST: {
    "/api/submit-form": { message: "Form data received!", status: "success" },
  },
};

const PORT = 8080; // The port your fake API server will listen on
const OLLAMA_URL = "http://localhost:11434"; // Your Ollama server URL

async function runFakeApi() {
  try {
    await startFakeApiServer({
      routes: myFakeRoutes,
      port: PORT,
      ollamaUrl: OLLAMA_URL, // Pass the Ollama URL here to enable LLM generation
    });
    console.log(`Fake API is ready on http://localhost:${PORT}`);

    // Keep the server running until manually stopped (e.g., Ctrl+C)
    process.on("SIGINT", async () => {
      console.log("Stopping fake API...");
      await stopFakeApiServer();
      process.exit(0);
    });
  } catch (error) {
    console.error("Failed to start fake API:", error);
    process.exit(1);
  }
}

runFakeApi();

Run the server:

node run-fake-api.js

Your frontend application can now make requests to http://localhost:8080/api/... for the routes you defined.
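The route names map directly onto URLs, so any fetch-capable environment can consume them. The small helper below is purely illustrative (`API_BASE` and `endpoint` are not part of fakeapi-ai):

```javascript
// Illustrative only: API_BASE and endpoint() are not part of fakeapi-ai.
const API_BASE = "http://localhost:8080";

function endpoint(path) {
  return `${API_BASE}${path}`;
}

// In the browser or Node 18+:
// const users = await fetch(endpoint("/api/static-users")).then((r) => r.json());
// const profile = await fetch(endpoint("/api/generated-user-profile")).then((r) => r.json());
```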

API Reference

startFakeApiServer(options)

Initializes and starts the fake API.

  • options (Object):

    • routes (Object): Required. An object defining your API routes.
      • Keys are HTTP methods ('GET', 'POST', 'PUT', 'DELETE').
      • Values are objects mapping API paths (e.g., '/users') to their responses.
      • A response can be:
        • Any static JSON-serializable data (e.g., { message: 'Success' }, [{ id: 1 }]).
        • An ollama configuration object for LLM-generated responses.
    • port (number): Optional. The port to run the server on. Defaults to 3000.
    • ollamaUrl (string): Optional. The base URL of your Ollama server (e.g., 'http://localhost:11434'). Required if you use ollama configurations in your routes.
  • Returns: Promise<void> - Resolves when the server starts, rejects on error.
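A minimal options object needs only routes; ollamaUrl can be omitted when every response is static. The route path and values below are illustrative:

```javascript
// Illustrative minimal options: static routes only, so ollamaUrl is omitted.
const options = {
  routes: {
    GET: { "/api/health": { status: "ok" } },
  },
  port: 4000, // optional; defaults to 3000
};
// await startFakeApiServer(options);
```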

ollama Configuration Object

Used within a route definition to specify LLM-generated responses.

  • model (string): Required. The name of the Ollama model to use (e.g., 'llama2', 'mistral'). Ensure this model is pulled in your Ollama instance.
  • schema (Object): Required. An object defining the structure of the JSON data you want the LLM to generate.
    • Keys are property names.
    • Values are PropertySchema objects.
  • count (number): Optional. The number of top-level objects to generate.
    • If 1 (default) or omitted, a single JSON object is returned.
    • If > 1, an array of count JSON objects is returned.
  • options (Object): Optional. Additional generation options to pass to the Ollama model (e.g., temperature, top_k, num_ctx). Refer to the Ollama API documentation for available options.
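Putting those fields together, a complete ollama configuration might look like this (the model name, schema fields, and values are illustrative):

```javascript
// Illustrative ollama configuration: an array of 10 objects matching the schema.
const ollamaConfig = {
  model: "mistral", // must already be pulled in your Ollama instance
  count: 10, // > 1, so an array of 10 objects is returned
  schema: {
    orderId: { type: "string", description: "Unique order identifier" },
    total: { type: "number", description: "Order total in USD" },
  },
  options: { temperature: 0.2 }, // passed through to the Ollama model
};
```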

PropertySchema Object

Defines the type and optional details for a property within your schema.

  • type (string): Required. The data type for the property. Supported types:
    • 'string'
    • 'number'
    • 'boolean'
    • 'array' (requires items property)
    • 'object' (requires properties property)
  • properties (Object): Required if type is 'object'. An object defining the nested properties of the object. Each value is another PropertySchema.
  • items (Object): Required if type is 'array'. A PropertySchema object defining the schema for each element within the array.
  • arraySize (number): Optional. Specifies the number of elements for an array type. Defaults to 15 for nested arrays if not provided.
  • description (string): Optional. A brief description for the property. This helps guide the LLM in generating more relevant and accurate data.
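As an illustration, a PropertySchema for an array of string tags combines these fields (the names here are made up for the example):

```javascript
// Illustrative PropertySchema: an array of 4 short string tags.
const tagsSchema = {
  type: "array",
  items: { type: "string", description: "A short, lowercase tag" },
  arraySize: 4, // omit to fall back to the default of 15 for nested arrays
};
```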

stopFakeApiServer()

Stops the fake API server if it's running.

  • Returns: Promise<void> - Resolves when the server is stopped.

Contributing

Feel free to open issues or submit pull requests if you have suggestions or improvements.

License

This project is licensed under the ISC License.