@jsuyog2/sequelize-ai
v1.2.1
AI-powered natural language to Sequelize query generator
Why This Project Matters
Bridging the gap between non-technical users and direct database querying is challenging. Traditional text-to-SQL solutions suffer from severe security risks (SQL injection) and often output raw queries that bypass application-level ORM logic.
sequelize-ai solves this by:
- Never granting raw SQL access. Generated code must pass through your existing Sequelize models.
- Restricting execution safely. All dynamically generated code runs in a sandboxed V8 isolate (isolated-vm), limited strictly to safe data-retrieval methods, with configurable CPU and memory limits.
It is perfect for building LLM-powered admin dashboards, conversational data analytics tools, and internal chat bots—without compromising database integrity.
Features
- 🗣️ Natural Language Queries — ask complex questions in plain English.
- 🛡️ Secure Sandbox — strictly read-only execution in an isolated V8 context.
- ⚡ Tree-Shakeable & Dual Build — CJS and ESM support, minimal footprint.
- 🤖 Multi-Provider Support — OpenAI, Gemini, Claude, Groq, DeepSeek, Together, OpenRouter.
- 💡 AI Column Hints — add annotations to your models to aid the LLM context.
- 📊 Computed Columns — automatically derive values without raw SQL.
- 🔄 Multi-Query Handling — seamlessly handles compound questions.
- 📦 Structured JSON Output — predictable response formatting.
Installation
```bash
npm install @jsuyog2/sequelize-ai sequelize
```

Install the required driver for your specific database dialect:

```bash
npm install pg pg-hstore           # PostgreSQL
npm install mysql2                 # MySQL
npm install sqlite3                # SQLite
```

Install the SDK for your preferred AI provider:

```bash
npm install openai                 # OpenAI / DeepSeek
npm install @google/generative-ai  # Gemini
npm install @anthropic-ai/sdk      # Claude
npm install groq-sdk               # Groq
```

Quick Start Example
```js
import { Sequelize } from "sequelize";
import SequelizeAI from "@jsuyog2/sequelize-ai";

// 1. Initialize your Sequelize instance
const sequelize = new Sequelize(process.env.DATABASE_URL, {
  dialect: "postgres",
});

// 2. Initialize the AI generator
const ai = new SequelizeAI(sequelize, {
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY,
});

// 3. Ask a question
(async () => {
  const result = await ai.ask("get all products where stock is less than 5");
  console.log(result);
  /* Output:
  {
    model: "Product",
    method: "findAll",
    data: [ ...rows ]
  }
  */
})();
```

API Documentation
new SequelizeAI(sequelize, options)
| Option | Type | Required | Default | Description |
| ------------- | -------- | -------- | ------------------- | -------------------------------------------------------------------------------------------------------- |
| provider | string | No | "openai" | The LLM provider. Available: openai, gemini, claude, groq, deepseek, together, openrouter. |
| apiKey | string | Yes | - | API key for the selected provider. |
| model | string | No | Provider Specific | Overrides the default model (e.g., "gpt-4o"). |
| timeout | number | No | 2000 | Sandbox V8 CPU timeout limit in milliseconds. |
| memoryLimit | number | No | 128 | Sandbox V8 memory limit in megabytes. |
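As a sketch, a fully specified options object might look like the following. The values here are purely illustrative, not recommendations; pick the provider, model, and limits that suit your deployment.

```javascript
// Illustrative options object for the SequelizeAI constructor.
// Values are examples only; the provider and model names must match
// the table above and your account's access.
const options = {
  provider: "groq",                  // one of the supported providers
  apiKey: process.env.GROQ_API_KEY,  // required
  model: "llama-3.1-8b-instant",     // optional: override the provider default
  timeout: 5000,                     // sandbox CPU limit in ms (default 2000)
  memoryLimit: 256,                  // sandbox memory limit in MB (default 128)
};
```

Pass this as the second argument: `new SequelizeAI(sequelize, options)`.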
ai.ask(userInput)
Returns a Promise that resolves to the result of the database query. Format depends on the query type (e.g. findAll returns an array, count returns a number), but always includes the metadata mapping.
```js
const { model, method, data } = await ai.ask("Count all standard users");
// model: "User"
// method: "count"
// data: 154
```

Advanced Usage
AI Column Hints
You can add aiDescription directly into your Sequelize column definitions. This significantly improves the accuracy of the generated queries by giving business logic context to the LLM.
```js
import { DataTypes } from "sequelize";

const Product = sequelize.define("Product", {
  status: {
    type: DataTypes.INTEGER,
    // Provide hints to the LLM!
    aiDescription: "0=Draft, 1=Published, 2=Archived",
  },
});
```

Multi-Query Resolving
You can ask compound questions and the system will execute them in parallel and return the combined results in an array.
```js
const stats = await ai.ask(
  "how many users are there, and what is the maximum order amount?",
);

console.log(stats[0].data); // Total users count
console.log(stats[1].data); // Max order amount
```

Provider Reference & Performance Notes
| Provider | Engine Identifier | Default Model | Speed/Latency | Cost Efficiency |
| ------------ | ----------------- | ---------------------- | ------------------- | ---------------------- |
| Groq | groq | llama-3.1-8b-instant | ⚡⚡⚡ Blazing Fast | 💰 Free tier available |
| Claude | claude | claude-haiku-4-5... | ⚡⚡ Very Fast | 💵 Very Low |
| Gemini | gemini | gemini-2.0-flash | ⚡⚡ Fast | 💰 Generous free tier |
| OpenAI | openai | gpt-4o-mini | ⚡ Fast | 💵 Low |
| DeepSeek | deepseek | deepseek-chat | ⚡ Fast | 💵 Lowest |
Tip for Production: For interactive dashboards where loading state matters, the groq provider (running llama-3.1-8b-instant on Groq's custom LPU hardware) usually delivers sub-500ms total response times for code generation.
Examples
We provide extensive examples in the /examples directory.
Run the local playground:
```bash
npm run example
```

FAQ
Q: Can this accidentally DROP TABLE or DELETE rows?
A: No. The AI-generated code is validated against a strict read-only method whitelist (findAll, count, sum, etc.), and sandbox policies prevent any destructive commands.
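To illustrate the idea, here is a minimal sketch of what a read-only whitelist check could look like. This is a hypothetical illustration, not the library's actual implementation; the method names and error message format are assumptions based on the error shown in Troubleshooting below.

```javascript
// Hypothetical sketch of a read-only method whitelist check
// (NOT the library's actual implementation).
const READ_ONLY_METHODS = new Set([
  "findAll", "findOne", "count", "sum", "min", "max",
]);

function assertReadOnly(method) {
  if (!READ_ONLY_METHODS.has(method)) {
    throw new Error(`Method not allowed: ${method}`);
  }
  return method;
}

console.log(assertReadOnly("count")); // "count" — read method passes
try {
  assertReadOnly("destroy");          // write method is rejected
} catch (e) {
  console.log(e.message);             // "Method not allowed: destroy"
}
```

Any generated call that names a method outside the whitelist is rejected before it ever reaches the database.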
Q: Does my database schema get sent to the AI?
A: Yes. The textual representation of your models, columns, types, and associations is securely bundled into the system prompt to provide context. No actual database rows or data are sent to the AI.
Q: Is the generated code injected directly into Node context?
A: No. We use the robust isolated-vm package to spawn a separate V8 engine instance. The sandboxed code receives only isolated references.
Q: Can I use local models via Ollama?
A: Yes! You can use an OpenAI-compatible provider (e.g. deepseek or together) and override the baseURL inside your local fork's provider mapping. Full custom baseURL injection support is coming in a future version.
Troubleshooting
- Error: Method not allowed: destroy — The LLM tried to write data. Reword your prompt to request data retrieval.
- Error: Unknown model: Foo — The LLM hallucinated a table name. Make sure you use aiDescription hints on complex relationships.
- Error: Script execution timed out. — The dynamically generated query caused a CPU hang. This is the sandbox protecting your thread! Increase timeout in the constructor if you have heavy associations.
GitHub Publishing Guide
Want to release your own version or fork?
- Edit the package.json version.
- Push to branch main.
- Create a GitHub Release in the UI.
- Our .github/workflows/npm-publish.yml will automatically build the CJS and ESM bundles, run all Vitest mocks, and publish to both npm and GitHub Packages.
Contributing
We welcome contributions! Please see our Contributing Guide and Code of Conduct for detailed instructions on how to submit a Pull Request.
License
This project is licensed under the MIT License.
