@camidevv/mid-ai
v1.0.9
MID AI
MID AI is a project created from the need for a tool that provides an artificial intelligence API for university projects, prototypes, and small personal projects: quickly, with no registration, and no credit card required.
Description
What MID AI does is basically receive a request with an array of messages and then iterate over a list of artificial intelligence services (Cerebras, Deepseek, Gemini, Groq, OpenRouter, etc.) until one of them can answer the user's question or request.
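The fallback behavior described above can be sketched as a simple loop. This is an illustrative reconstruction, not the package's actual implementation; `askWithFallback` and the provider objects are hypothetical names:

```javascript
// Hypothetical sketch of MID AI's fallback pattern: try each
// provider in order until one returns a response.
async function askWithFallback(providers, messages) {
  for (const provider of providers) {
    try {
      // Each provider exposes a chat() method in this sketch.
      return await provider.chat(messages);
    } catch (err) {
      // On failure (rate limit, outage, auth error), try the next one.
      continue;
    }
  }
  throw new Error('All providers failed');
}
```

The key design point is that a failure of any single service is invisible to the caller as long as at least one provider in the list can answer.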
Main features
- Quick Start: you can use the package with minimal configuration.
- No-keys mode: if you only need up to 15 requests per minute, you can skip providing API keys.
- Support for multiple model providers (see list below).
- Configuration by environment variables to add your credentials and increase limits.
- Designed for prototypes and academic/personal use.
Installation
- npm install @camidevv/mid-ai
Quick use
Import client from package (generic example):
- import { MidAI } from '@camidevv/mid-ai'
Single call (keyless mode, up to 15 requests/minute):
- Create the client without credentials and make basic requests to models available in free mode.
import { MidAI } from '@camidevv/mid-ai';

const midAI = new MidAI({});

(async () => {
  const response = await midAI.chat("What's the date today?");
  for await (const chunk of response) {
    console.log(chunk);
  }
})();

// [OUTPUT]
// Today is November 29, 2023. How can I help you?
Limitations and API keys
- Default limit (no keys provided): 15 requests per minute.
- If your use case only requires this rate, you do not need to configure or provide API keys.
- If you need more capacity and/or access to additional models, you must provide API keys from the providers you want to use.
- It is highly recommended to provide your own personal keys/credentials for each provider because:
- They give you access to higher quotas and additional models.
- You avoid depending on shared or package-limited tokens.
- Concrete recommendation: the GitHub Personal Access Token is one of the most useful credentials you can provide, since some gateways use GitHub credentials to unlock a large number of models. If you are adding a single credential, this is usually the most valuable one.
- Google Gemini usually offers many requests and models if configured correctly (see the Google documentation for details on quotas and permissions).
Provider configuration
You can configure providers using environment variables. Suggested variable names (adjust to your needs or the package's internal configuration):
- `GITHUB_TOKEN` - GitHub personal access token (recommended if you want many models).
- `GEMINI_API_KEY` - Credential for Google AI Studio (Gemini), if applicable.
- `CEREBRAS_API_KEY` - Credential for Cerebras.
- `OPEN_ROUTER_KEY` - Credential for OpenRouter.
- `GROQ_API_KEY` - Credential for Groq.
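A minimal example of exporting these variables in a shell (or placing the same lines, without `export`, in a `.env` file). The variable names follow the suggested list above and the values are placeholders:

```shell
# Placeholder values; replace with your own provider credentials.
export GITHUB_TOKEN="your-github-pat"
export GEMINI_API_KEY="your-gemini-key"
export CEREBRAS_API_KEY="your-cerebras-key"
export OPEN_ROUTER_KEY="your-openrouter-key"
export GROQ_API_KEY="your-groq-key"
```

You only need to set the variables for the providers you actually plan to use; any missing ones simply stay unavailable.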
Managing multiple providers
- The package will try to select the best available provider based on the configuration and keys provided.
- If there are no keys, the limited mode (15 rpm) will be activated.
- If you have provided multiple keys, the package will balance/select according to availability and internal policies (see advanced configuration in the package documentation or source code).
Providers and links
Here are some of the mentioned providers and their official pages:
- Cerebras: https://www.cerebras.ai/
- OpenRouter: https://openrouter.ai/
- AI Studio (Google Gemini): https://aistudio.google.com/
- Groq: https://groq.com/
Self-hosted
- If you want to use a self-hosted server, visit the project's GitHub repository and follow the instructions to set up your own instance: https://github.com/programadorisgod/midAI
Good practices
- For local development and testing, use keyless mode (up to 15 rpm) to avoid exposing credentials.
- For serious integrations and deployments, add per-provider personal credentials and review each service's rate limits and billing.
- Keep your tokens in secure environment variables; never include them in code or public repositories.
- Review each provider's usage policies (limits, terms, and privacy).
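One way to follow the environment-variable practice above is to collect only the keys that are actually set before configuring anything. This helper is a hypothetical sketch (the package may read the environment itself); the variable names match the suggested list in this README:

```javascript
// Hypothetical helper: gather provider keys from an environment
// object, keeping only the ones that are actually set.
function collectKeys(env) {
  const names = [
    'GITHUB_TOKEN',
    'GEMINI_API_KEY',
    'CEREBRAS_API_KEY',
    'OPEN_ROUTER_KEY',
    'GROQ_API_KEY',
  ];
  const keys = {};
  for (const name of names) {
    if (env[name]) keys[name] = env[name];
  }
  return keys;
}

// Usage: const keys = collectKeys(process.env);
```

If the returned object is empty, you know you are running in the limited keyless mode (15 rpm) and can log a warning rather than hardcoding a fallback credential.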
Final notes
MID AI is intended as a practical tool to accelerate the development of small/academic projects. If you need production use with high availability and high limits, configure the appropriate provider keys and review their terms of service.
