ak-gemini v1.1.13
AK's Generative AI Helper for doing... transforms
# AK-Gemini

A generic, type-safe, and highly configurable wrapper around Google's Gemini AI for JSON-to-JSON transformation. Use it to power LLM-driven data pipelines, JSON mapping, or any automated AI transformation step, locally or in cloud functions.
## Features

- **Model-Agnostic**: Use any Gemini model (`gemini-2.5-flash` by default)
- **Declarative Few-shot Examples**: Seed transformations using example mappings, with support for custom keys (`PROMPT`, `ANSWER`, `CONTEXT`, or your own)
- **Automatic Validation & Repair**: Validate outputs with your own async function; auto-repair failed payloads with an LLM feedback loop (exponential backoff, fully configurable)
- **Token Counting & Safety**: Preview the exact Gemini token consumption for any operation (including all examples, instructions, and your input) before sending, so you can avoid window errors and manage costs
- **Conversation Management**: Clear conversation history while preserving examples, or send stateless one-off messages that don't affect history
- **Response Metadata**: Access the actual model version and token counts from API responses for billing verification and debugging
- **Strong TypeScript/JSDoc Typings**: All public APIs fully typed (see `/types`)
- **Minimal API Surface**: Dead simple, no ceremony: init, seed, transform, validate
- **Robust Logging**: Pluggable logger for all steps, easy debugging
## Install

```bash
npm install ak-gemini
```

Requires Node.js 18+ and `@google/genai`.
## Usage

### 1. Setup

Set your `GEMINI_API_KEY` environment variable:

```bash
export GEMINI_API_KEY=sk-your-gemini-api-key
```

or pass it directly in the constructor options.
### 2. Basic Example

```js
import AITransformer from 'ak-gemini';

const transformer = new AITransformer({
  modelName: 'gemini-2.5-flash', // or your preferred Gemini model
  sourceKey: 'INPUT',            // custom prompt key (default: 'PROMPT')
  targetKey: 'OUTPUT',           // custom answer key (default: 'ANSWER')
  contextKey: 'CONTEXT',         // optional, for per-example context
  maxRetries: 2,                 // optional, for validation-repair loops
  // responseSchema: { ... },    // optional, strict output typing
});

const examples = [
  {
    CONTEXT: "Generate professional profiles with emoji representations",
    INPUT: { "name": "Alice" },
    OUTPUT: { "name": "Alice", "profession": "data scientist", "life_as_told_by_emoji": ["🔬", "💡", "📊", "🧠", "🌟"] }
  }
];

await transformer.init();
await transformer.seed(examples);

const result = await transformer.message({ name: "Bob" });
console.log(result);
// → { name: "Bob", profession: "...", life_as_told_by_emoji: [ ... ] }
```

### 3. Token Window Safety/Preview
Before calling `.message()` or `.seed()`, you can preview the INPUT token usage that will be sent to Gemini, including your system instructions, examples, and user input. This is vital for avoiding window errors and managing context size:
```js
const { inputTokens } = await transformer.estimate({ name: "Bob" });
console.log(`Input tokens: ${inputTokens}`);

// Optional: abort or trim if over limit
if (inputTokens > 32000) throw new Error("Request too large for selected Gemini model");

// After the call, check actual usage (input + output)
await transformer.message({ name: "Bob" });
const usage = transformer.getLastUsage();
console.log(`Actual usage: ${usage.promptTokens} in, ${usage.responseTokens} out`);
```

### 4. Automatic Validation & Self-Healing
You can pass a custom async validator. If it fails, the transformer will attempt to self-correct using LLM feedback, retrying up to `maxRetries` times:
```js
const validator = async (payload) => {
  if (!payload.profession || !Array.isArray(payload.life_as_told_by_emoji)) {
    throw new Error('Invalid profile format');
  }
  return payload;
};

const validPayload = await transformer.transformWithValidation({ name: "Lynn" }, validator);
console.log(validPayload);
```

### 5. Conversation Management
Manage chat history to control costs and isolate requests:
```js
// Clear conversation history while preserving seeded examples
await transformer.clearConversation();

// Send a stateless message that doesn't affect chat history
const result = await transformer.message({ query: "one-off question" }, { stateless: true });

// Check actual model and token usage from last API call
console.log(transformer.lastResponseMetadata);
// → { modelVersion: 'gemini-2.5-flash-001', requestedModel: 'gemini-2.5-flash',
//     promptTokens: 150, responseTokens: 42, totalTokens: 192, timestamp: 1703... }
```

## API
### Constructor

```js
new AITransformer(options)
```

| Option | Type | Default | Description |
| ------------------ | ------ | ------------------ | ------------------------------------------------- |
| modelName | string | 'gemini-2.5-flash' | Gemini model to use |
| sourceKey | string | 'PROMPT' | Key for prompt/example input |
| targetKey | string | 'ANSWER' | Key for expected output in examples |
| contextKey | string | 'CONTEXT' | Key for per-example context (optional) |
| examplesFile | string | null | Path to JSON file containing examples |
| exampleData | array | null | Inline array of example objects |
| responseSchema | object | null | Optional JSON schema for strict output validation |
| maxRetries | number | 3 | Retries for validation+rebuild loop |
| retryDelay | number | 1000 | Initial retry delay in ms (exponential backoff) |
| logLevel | string | 'info' | Log level: 'trace', 'debug', 'info', 'warn', 'error', 'fatal', or 'none' |
| chatConfig | object | ... | Gemini chat config overrides |
| systemInstructions | string/null/false | (default prompt) | System prompt for Gemini. Pass null or false to disable. |
| maxOutputTokens | number | 50000 | Maximum tokens in generated response |
| thinkingConfig | object | null | Thinking features config (see below) |
| enableGrounding | boolean | false | Enable Google Search grounding (WARNING: $35/1k queries) |
| labels | object | null | Billing labels for cost attribution |
| apiKey | string | env var | Gemini API key (or use GEMINI_API_KEY env var) |
| vertexai | boolean | false | Use Vertex AI instead of Gemini API |
| project | string | env var | GCP project ID (for Vertex AI) |
| location | string | 'global' | GCP region (for Vertex AI) |
| googleAuthOptions | object | null | Auth options for Vertex AI (keyFilename, credentials) |
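The `retryDelay` option seeds an exponential backoff, so the wait roughly doubles on each failed attempt. A sketch of the presumed schedule (the library's exact formula may differ):

```javascript
// Presumed backoff schedule: delay = retryDelay * 2^(attempt - 1)
function backoffDelay(retryDelay, attempt) {
  return retryDelay * 2 ** (attempt - 1);
}

// With the default retryDelay of 1000 ms:
// attempt 1 → 1000 ms, attempt 2 → 2000 ms, attempt 3 → 4000 ms
```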
### Methods

#### `await transformer.init()`

Initializes the Gemini chat session (idempotent).
#### `await transformer.seed(examples?)`

Seeds the model with example transformations (uses keys from the constructor). You can omit `examples` to use the `examplesFile` (if provided).
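If you pass `examplesFile`, the referenced JSON is presumably an array of example objects keyed by your configured `sourceKey`/`targetKey`/`contextKey` (a sketch using the default keys; the exact file shape is an assumption, not documented above):

```json
[
  {
    "CONTEXT": "Generate professional profiles with emoji representations",
    "PROMPT": { "name": "Alice" },
    "ANSWER": { "name": "Alice", "profession": "data scientist" }
  }
]
```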
#### `await transformer.message(sourcePayload, options?)`

Transforms input JSON to output JSON using the seeded examples and system instructions. Throws if the estimated token window would be exceeded.

Options:
- `stateless: true`: send a one-off message without affecting chat history (uses `generateContent` instead of chat)
- `labels: {}`: per-message billing labels
#### `await transformer.estimate(sourcePayload)`

Returns `{ inputTokens }`: the estimated INPUT tokens for the request (system instructions + all examples + your `sourcePayload`). Use this to preview token window safety and manage costs before sending.

Note: this only estimates input tokens. Output tokens cannot be predicted before the API call. Use `getLastUsage()` after `message()` to see actual consumption.
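A small preflight helper can make this guard reusable across calls (a sketch; `assertWithinBudget` and the 32,000-token budget are illustrative, not part of the library):

```javascript
// Hypothetical helper: throws when an input-token estimate exceeds a chosen budget
function assertWithinBudget(inputTokens, budget) {
  if (inputTokens > budget) {
    throw new Error(`Input of ${inputTokens} tokens exceeds budget of ${budget}`);
  }
  return inputTokens;
}

// Usage (assumes an initialized transformer, as in the examples above):
// const { inputTokens } = await transformer.estimate(payload);
// assertWithinBudget(inputTokens, 32000);
```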
#### `await transformer.transformWithValidation(sourcePayload, validatorFn, options?)`

Runs the transformation, validates the result with your async validator, and (optionally) repairs the payload using the LLM until it is valid or retries are exhausted. Throws if all attempts fail.
#### `await transformer.rebuild(lastPayload, errorMessage)`

Given a failed payload and an error message, uses the LLM to generate a corrected payload.
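For illustration, a hand-rolled validate-then-repair step might look like the sketch below (`transformWithValidation` does this for you; the stub transformer is a stand-in so the snippet is self-contained and does not call Gemini):

```javascript
// Sketch: validate a payload, and on failure ask the transformer to rebuild it
async function validateOrRepair(transformer, payload, validator) {
  try {
    return await validator(payload);
  } catch (err) {
    // Hand the failed payload and the error message back to the LLM
    return transformer.rebuild(payload, err.message);
  }
}

// Stub standing in for a real AITransformer (which would call Gemini here)
const stubTransformer = {
  rebuild: async (payload, _errorMessage) => ({ ...payload, profession: 'unknown' }),
};

const validator = async (p) => {
  if (!p.profession) throw new Error('Missing "profession" field');
  return p;
};
```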
#### `await transformer.reset()`

Resets the Gemini chat session, clearing all history and examples.
#### `transformer.getHistory()`

Returns the current chat history (for debugging).
#### `await transformer.clearConversation()`

Clears conversation history while preserving seeded examples. Useful for starting fresh user sessions without re-seeding.
#### `transformer.getLastUsage()`

Returns structured usage data for billing verification. Token counts are cumulative across all retry attempts: if validation failed and a retry was needed, you see the total tokens consumed, not just the final successful call. Returns `null` if no API call has been made yet.
```js
const usage = transformer.getLastUsage();
// {
//   promptTokens: 300,    // CUMULATIVE input tokens across all attempts
//   responseTokens: 84,   // CUMULATIVE output tokens across all attempts
//   totalTokens: 384,     // CUMULATIVE total tokens
//   attempts: 2,          // number of attempts (1 = first try success, 2+ = retries needed)
//   modelVersion: 'gemini-2.5-flash-001', // actual model that responded
//   requestedModel: 'gemini-2.5-flash',   // model you requested
//   timestamp: 1703...    // when the response was received
// }
```

### Properties
#### `transformer.lastResponseMetadata`

After each API call, contains metadata from the response:

```js
{
  modelVersion: string | null, // actual model version that responded (e.g., 'gemini-2.5-flash-001')
  requestedModel: string,      // model you requested (e.g., 'gemini-2.5-flash')
  promptTokens: number,        // tokens in the prompt
  responseTokens: number,      // tokens in the response
  totalTokens: number,         // total tokens used
  timestamp: number            // when the response was received
}
```

Useful for verifying billing, debugging model behavior, and tracking token usage.
## Examples

### Seed with Custom Example Keys
```js
const transformer = new AITransformer({
  sourceKey: 'INPUT',
  targetKey: 'OUTPUT',
  contextKey: 'CTX'
});

await transformer.init();
await transformer.seed([
  {
    CTX: "You are a dog expert.",
    INPUT: { breed: "golden retriever" },
    OUTPUT: { breed: "golden retriever", size: "large", friendly: true }
  }
]);

const dog = await transformer.message({ breed: "chihuahua" });
```

### Use With Validation and Retry
```js
const result = await transformer.transformWithValidation(
  { name: "Bob" },
  async (output) => {
    if (!output.name || !output.profession) throw new Error("Missing fields");
    return output;
  }
);
```

### Vertex AI Authentication
Use Vertex AI instead of the Gemini API for enterprise features, VPC controls, and GCP billing integration.
#### With Service Account Key File
```js
const transformer = new AITransformer({
  vertexai: true,
  project: 'my-gcp-project',
  location: 'us-central1', // optional: defaults to the 'global' endpoint
  googleAuthOptions: {
    keyFilename: './service-account.json'
  }
});
```

#### With Application Default Credentials
```js
// Uses the GOOGLE_APPLICATION_CREDENTIALS env var or `gcloud auth application-default login`
const transformer = new AITransformer({
  vertexai: true,
  project: 'my-gcp-project' // or GOOGLE_CLOUD_PROJECT env var
});
```

## Advanced Configuration
### Disabling System Instructions
By default, the transformer uses built-in system instructions optimized for JSON transformation. You can provide custom instructions or disable them entirely:
```js
// Custom system instructions
new AITransformer({ systemInstructions: "You are a helpful assistant..." });

// Disable system instructions entirely (use Gemini's default behavior)
new AITransformer({ systemInstructions: null });
new AITransformer({ systemInstructions: false });
```

### Thinking Configuration
For models that support extended thinking (like `gemini-2.5-flash`):
```js
const transformer = new AITransformer({
  modelName: 'gemini-2.5-flash',
  thinkingConfig: {
    thinkingBudget: 1024, // token budget for thinking
  }
});
```

### Billing Labels
Labels flow through to GCP billing reports for cost attribution:
```js
const transformer = new AITransformer({
  labels: {
    client: 'acme_corp',
    app: 'data_pipeline',
    environment: 'production'
  }
});

// Override per-message
await transformer.message(payload, { labels: { request_type: 'batch' } });
```

## Token Window Management & Error Handling
- Throws on missing credentials (API key for the Gemini API, or project ID for Vertex AI)
- `.message()` and `.seed()` estimate token usage and prevent calls that would exceed the Gemini model's window
- All API and parsing errors are surfaced as `Error` with context
- Validator and retry failures include the number of attempts and the last error
## Testing
- Jest test suite included
- Real API integration tests as well as local unit tests
- 100% coverage for all error cases, configuration options, edge cases
Run tests with:

```bash
npm test
```