# FakeAPI-AI

A customizable Node.js package that lets you run a fake API server for frontend development and testing. It supports both static JSON responses and dynamic content generated by an Ollama Large Language Model (LLM) from schemas you define.
## Features
- Static JSON Responses: Define fixed JSON data for specific API routes.
- Dynamic LLM-Generated Responses: Generate realistic and varied JSON data on the fly using an Ollama model.
- Schema-Driven Generation: Provide a schema (variable names, types, and descriptions) and the LLM will generate data conforming to it.
- Array of Objects Support: Easily request an array of generated objects with a configurable size (nested arrays default to 15 elements if no size is specified).
- CORS Enabled: Automatically handles CORS headers for seamless frontend integration.
- Simple API: Easy to start and stop the server.
## Installation
Install the package in your project:

```bash
npm install fakeapi-ai
```

## Ollama Setup
You need to have Ollama installed and running on your system.
1. Install Ollama: follow the instructions on the official Ollama website: https://ollama.ai/
2. Pull a model: download a model (e.g., `llama2`, `mistral`, `codellama:13b-instruct`) that you intend to use.

   ```bash
   ollama pull llama2
   ```

3. Start the Ollama server:

   ```bash
   ollama serve
   ```

By default, Ollama runs on `http://localhost:11434`.
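If you want to confirm Ollama is reachable before wiring it into your routes, a quick request to its model-listing endpoint works. This is a minimal sketch assuming Node 18+ (for built-in `fetch`) and the default Ollama port:

```js
// check-ollama.js -- sanity check that Ollama is up (assumes Node 18+ for built-in fetch)
const OLLAMA_URL = "http://localhost:11434";

fetch(`${OLLAMA_URL}/api/tags`) // Ollama endpoint that lists locally pulled models
  .then((res) => res.json())
  .then((data) => {
    const names = data.models.map((m) => m.name);
    console.log("Ollama is running. Pulled models:", names.join(", "));
  })
  .catch(() => {
    console.error(`Could not reach Ollama at ${OLLAMA_URL}. Is "ollama serve" running?`);
    process.exit(1);
  });
```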
## Usage

Integrate fakeapi-ai into your development workflow by creating a simple Node.js script that starts the server.

Create a file (e.g., `run-fake-api.js`) in your project's root:
```js
// run-fake-api.js
const { startFakeApiServer, stopFakeApiServer } = require("fakeapi-ai");

const myFakeRoutes = {
  GET: {
    // --- Static Response Example ---
    "/api/static-users": [
      { id: 1, name: "Alice Smith", email: "alice.smith@example.com" },
      { id: 2, name: "Bob Johnson", email: "bob.johnson@example.com" },
    ],
    // --- LLM-Generated Single Object Example ---
    "/api/generated-user-profile": {
      ollama: {
        model: "llama2", // Ensure this model is pulled in your Ollama instance
        schema: {
          id: { type: "number", description: "Unique user identifier" },
          username: { type: "string", description: "User's chosen username" },
          email: { type: "string", description: "User's email address" },
          isActive: {
            type: "boolean",
            description: "Whether the user account is active",
          },
          registrationDate: {
            type: "string",
            description: "Date of registration in YYYY-MM-DD format",
          },
          address: {
            type: "object",
            properties: {
              street: { type: "string" },
              city: { type: "string" },
              zipCode: { type: "string" },
            },
          },
          hobbies: {
            type: "array",
            items: { type: "string" },
            arraySize: 3, // Generate 3 hobbies
          },
        },
        options: { temperature: 0.7 }, // Optional: adjust LLM creativity (0.0-2.0)
      },
    },
    // --- LLM-Generated Array of Objects Example ---
    "/api/generated-products": {
      ollama: {
        model: "llama2",
        count: 5, // Generate an array of 5 product objects
        schema: {
          productId: { type: "string", description: "Unique product ID" },
          productName: { type: "string", description: "Name of the product" },
          price: { type: "number", description: "Price of the product" },
          category: { type: "string", description: "Product category" },
          inStock: { type: "boolean", description: "Availability status" },
        },
      },
    },
    // --- LLM-Generated Report with Nested Array Example ---
    "/api/generated-sales-report": {
      ollama: {
        model: "llama2",
        schema: {
          reportId: { type: "string", description: "Unique report identifier" },
          title: { type: "string", description: "Title of the sales report" },
          dateGenerated: {
            type: "string",
            description: "Current date in YYYY-MM-DD",
          },
          salesData: {
            type: "array",
            items: {
              type: "object",
              properties: {
                month: {
                  type: "string",
                  description: "Month name (e.g., January)",
                },
                revenue: {
                  type: "number",
                  description: "Total revenue for the month",
                },
                expenses: {
                  type: "number",
                  description: "Total expenses for the month",
                },
              },
            },
            arraySize: 3, // Nested array with 3 elements (e.g., for 3 months)
          },
          summary: {
            type: "string",
            description: "A brief summary of the sales report data",
          },
        },
      },
    },
  },
  POST: {
    "/api/submit-form": { message: "Form data received!", status: "success" },
  },
};

const PORT = 8080; // The port your fake API server will listen on
const OLLAMA_URL = "http://localhost:11434"; // Your Ollama server URL

async function runFakeApi() {
  try {
    await startFakeApiServer({
      routes: myFakeRoutes,
      port: PORT,
      ollamaUrl: OLLAMA_URL, // Pass the Ollama URL here to enable LLM generation
    });
    console.log(`Fake API is ready on http://localhost:${PORT}`);

    // Keep the server running until manually stopped (e.g., Ctrl+C)
    process.on("SIGINT", async () => {
      console.log("Stopping fake API...");
      await stopFakeApiServer();
      process.exit(0);
    });
  } catch (error) {
    console.error("Failed to start fake API:", error);
    process.exit(1);
  }
}

runFakeApi();
```

Run the server:

```bash
node run-fake-api.js
```

Your frontend application can now make requests to `http://localhost:8080/api/...` for the routes you defined.
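As a quick illustration (not part of the package itself), a frontend can consume these routes with a plain `fetch` call. This sketch assumes the server from `run-fake-api.js` above is listening on port 8080:

```js
// Anywhere in your frontend code; CORS is handled by the fake API server
async function loadProducts() {
  const res = await fetch("http://localhost:8080/api/generated-products");
  if (!res.ok) throw new Error(`Fake API returned ${res.status}`);
  const products = await res.json(); // An array of 5 LLM-generated product objects
  console.log(products.map((p) => `${p.productName}: $${p.price}`));
}

loadProducts();
```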
## API Reference
### `startFakeApiServer(options)`

Initializes and starts the fake API server.

- `options` (Object):
  - `routes` (Object): Required. An object defining your API routes.
    - Keys are HTTP methods (`'GET'`, `'POST'`, `'PUT'`, `'DELETE'`).
    - Values are objects mapping API paths (e.g., `'/users'`) to their responses.
    - A response can be either any static JSON-serializable data (e.g., `{ message: 'Success' }`, `[{ id: 1 }]`) or an `ollama` configuration object for LLM-generated responses.
  - `port` (number): Optional. The port to run the server on. Defaults to `3000`.
  - `ollamaUrl` (string): Optional. The base URL of your Ollama server (e.g., `'http://localhost:11434'`). Required if you use `ollama` configurations in your routes.
- Returns: `Promise<void>`. Resolves when the server starts; rejects on error.
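A minimal sketch of the call (a single static route, the default port, no Ollama):

```js
const { startFakeApiServer } = require("fakeapi-ai");

// One static route, default port 3000, no ollamaUrl needed for static responses
startFakeApiServer({
  routes: { GET: { "/ping": { ok: true } } },
})
  .then(() => console.log("Fake API listening on http://localhost:3000"))
  .catch((err) => console.error("Startup failed:", err));
```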
### `ollama` Configuration Object

Used within a route definition to specify LLM-generated responses.

- `model` (string): Required. The name of the Ollama model to use (e.g., `'llama2'`, `'mistral'`). Ensure this model is pulled in your Ollama instance.
- `schema` (Object): Required. An object defining the structure of the JSON data you want the LLM to generate. Keys are property names; values are `PropertySchema` objects.
- `count` (number): Optional. The number of top-level objects to generate. If `1` (the default) or omitted, a single JSON object is returned; if greater than `1`, an array of `count` JSON objects is returned.
- `options` (Object): Optional. Additional generation options to pass to the Ollama model (e.g., `temperature`, `top_k`, `num_ctx`). Refer to the Ollama API documentation for available options.
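Putting these fields together, a hypothetical route entry that returns an array of three task objects might look like this (`llama2` stands in for whichever model you have pulled):

```js
const routes = {
  GET: {
    // Hypothetical route combining all four ollama fields
    "/api/generated-tasks": {
      ollama: {
        model: "llama2",
        count: 3, // Return an array of 3 task objects instead of a single object
        schema: {
          taskId: { type: "number", description: "Unique task identifier" },
          title: { type: "string", description: "Short task title" },
          done: { type: "boolean", description: "Completion status" },
        },
        options: { temperature: 0.4 }, // Lower temperature for more predictable output
      },
    },
  },
};
```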
### `PropertySchema` Object

Defines the type and optional details for a property within your schema.

- `type` (string): Required. The data type for the property. Supported types: `'string'`, `'number'`, `'boolean'`, `'array'` (requires an `items` property), and `'object'` (requires a `properties` property).
- `properties` (Object): Required if `type` is `'object'`. An object defining the nested properties of the object; each value is another `PropertySchema`.
- `items` (Object): Required if `type` is `'array'`. A `PropertySchema` object defining the schema for each element within the array.
- `arraySize` (number): Optional. The number of elements for an `'array'` type. Defaults to `15` for nested arrays if not provided.
- `description` (string): Optional. A brief description of the property; this helps guide the LLM in generating more relevant and accurate data.
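For example, a single `PropertySchema` describing an array of comment objects combines `type`, `arraySize`, `items`, and nested `properties` (the field names inside are illustrative):

```js
// A PropertySchema for an array of comment objects
const commentsSchema = {
  type: "array",
  arraySize: 4, // 4 comments; omit to fall back to the default of 15 for nested arrays
  items: {
    type: "object",
    properties: {
      author: { type: "string", description: "Display name of the commenter" },
      body: { type: "string", description: "One or two sentences of comment text" },
      likes: { type: "number", description: "Number of likes, 0-100" },
    },
  },
};
```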
### `stopFakeApiServer()`

Stops the fake API server if it's running.

- Returns: `Promise<void>`. Resolves when the server is stopped.
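A common pattern is starting the server before a test suite and stopping it afterwards. This sketch assumes Jest-style `beforeAll`/`afterAll` hooks and Node 18+ for built-in `fetch`; any test runner with setup/teardown hooks works the same way:

```js
// fake-api.test.js -- hypothetical test file using Jest-style hooks
const { startFakeApiServer, stopFakeApiServer } = require("fakeapi-ai");

beforeAll(async () => {
  await startFakeApiServer({
    routes: { GET: { "/api/health": { status: "ok" } } },
    port: 8080,
  });
});

afterAll(async () => {
  await stopFakeApiServer(); // Free the port so other suites can reuse it
});

test("health endpoint responds", async () => {
  const res = await fetch("http://localhost:8080/api/health");
  expect(await res.json()).toEqual({ status: "ok" });
});
```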
## Contributing
Feel free to open issues or submit pull requests if you have suggestions or improvements.
## License
This project is licensed under the ISC License.
