@genesislcap/foundation-ai
v14.409.0
Genesis Foundation AI
Provider-agnostic AI configuration and shared utilities. Configure once at app bootstrap; use across components via dependency injection.
Beta – not for production
This package is in beta. Do not use in production. API surface and behaviour may change. It is intended for evaluation and development only.
Supported providers:
- OpenAI (server proxy) – the client calls your server at /gwf/ai-service/chat-completions; the server calls OpenAI.
- Chrome built-in AI – Prompt API / Gemini Nano (no API key; Chrome 138+)
Each provider exposes capabilities such as interpretCriteria (natural language to structured criteria). Future capabilities (e.g. summarization) will be added as optional methods on the same interface.
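The capability pattern above can be sketched as follows. Note this is an illustrative sketch: the interface name and exact signatures are inferred from the usage examples later in this README, not the package's actual exported types.

```typescript
// Illustrative sketch of the provider surface described above.
// Capabilities are optional methods, so consumers feature-detect
// before calling them rather than assuming every provider has them.
interface AIProviderLike {
  // Natural language -> structured criteria (shape assumed for illustration).
  interpretCriteria?(
    input: string,
    options: { fields: unknown },
  ): Promise<unknown>;
}

function supportsCriteria(p: AIProviderLike): boolean {
  return typeof p.interpretCriteria === 'function';
}
```

Future capabilities would appear as further optional methods, checked the same way.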
Feature flag
AI features are disabled by default and require the feature flag to be enabled. Add ?feature.ai to the URL to activate:
https://your-app.com/?feature.ai
When the flag is off:
- createAIProvider() and getAIProvider() return a no-op provider
- resolveAIConfig() returns null
- AI components (e.g. *-ai-criteria-search, *-ai-indicator) do not render
You can check the flag programmatically:
import { isAIFeatureEnabled, AI_FEATURE_FLAG } from '@genesislcap/foundation-ai';
if (isAIFeatureEnabled()) {
  // AI features are enabled
}

Installation
Add @genesislcap/foundation-ai as a dependency. Run npm run bootstrap after changing dependencies.
{
  "dependencies": {
    "@genesislcap/foundation-ai": "latest"
  }
}

Configuration
1. Resolve config (Chrome-first or OpenAI)
Use resolveAIConfig to prefer Chrome when available, with server proxy as fallback:
import { createAIProvider, resolveAIConfig, AIProvider } from '@genesislcap/foundation-ai';
// Server URL is derived from API_HOST. Use provider and model:
const aiConfig = await resolveAIConfig({
  provider: 'openai',
  model: 'gpt-4o-mini',
  preferChrome: true,
});

2. Register provider in DI
Register the provider at app bootstrap (e.g. in main.ts):
import { Registration } from '@microsoft/fast-foundation';
this.container.register(
  Registration.instance(AIProvider, createAIProvider(aiConfig)),
);

Configure once; all components using getAIProvider() receive the same instance.
3. Use in components
import { getAIProvider } from '@genesislcap/foundation-ai';
const aiProvider = getAIProvider();
if (aiProvider?.interpretCriteria) {
  const result = await aiProvider.interpretCriteria(input, { fields: fieldMetadata });
}

Server-side web handler (OpenAI proxy)
The client posts to /gwf/ai-service/chat-completions. You must configure a Genesis web handler on your server to proxy these requests to OpenAI. The API key stays on the server and is never exposed to the client.
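For orientation, the request and response shapes the handler accepts (mirroring its Kotlin data classes below) look roughly like this. The provider builds and sends this payload for you, so you would not normally construct it by hand; the example values are placeholders.

```typescript
// Shape of the JSON posted to /gwf/ai-service/chat-completions,
// mirroring the handler's ChatCompletionsRequest data class.
interface ChatCompletionsRequest {
  provider: string;            // currently only 'openai' is accepted
  model: string;               // e.g. 'gpt-4o-mini'
  systemPrompt: string;        // also accepted as SYSTEM_PROMPT
  userPrompt: string;          // also accepted as USER_PROMPT
  responseSchema?: Record<string, unknown>; // optional JSON schema for structured output
}

// Shape of the handler's response.
interface ChatCompletionsResponse {
  content: string;             // assistant message content returned by OpenAI
}

const example: ChatCompletionsRequest = {
  provider: 'openai',
  model: 'gpt-4o-mini',
  systemPrompt: 'You convert natural language into filter criteria.',
  userPrompt: 'trades over 1m USD settled last week',
};
```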
1. Add a web handler script
Create a *-web-handler.kts file in your Genesis app's scripts folder (e.g. src/main/genesis/scripts/). Example for OpenAI:
/**
* AI web handler - proxies OpenAI Chat Completions requests.
* API key is read from OPENAI_API_KEY env var (never exposed to client).
*/
import com.fasterxml.jackson.annotation.JsonAlias
import com.fasterxml.jackson.annotation.JsonProperty
import com.fasterxml.jackson.databind.ObjectMapper
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.charset.StandardCharsets
data class ChatCompletionsRequest(
    val provider: String = "openai",
    val model: String = "gpt-4o-mini",
    @JsonProperty("SYSTEM_PROMPT") @JsonAlias("systemPrompt") val systemPrompt: String,
    @JsonProperty("USER_PROMPT") @JsonAlias("userPrompt") val userPrompt: String,
    @JsonProperty("RESPONSE_SCHEMA") @JsonAlias("responseSchema") val responseSchema: Map<String, Any>? = null,
)

data class ChatCompletionsResponse(
    val content: String,
)

webHandlers("ai-service") {
    val httpClient = HttpClient.newBuilder().build()
    val apiKey = System.getenv("OPENAI_API_KEY")

    endpoint<ChatCompletionsRequest, ChatCompletionsResponse>(POST, "chat-completions") {
        handleRequest {
            val req = body
            if (req.provider != "openai") {
                throw IllegalStateException("Only 'openai' provider is supported for now")
            }
            if (apiKey.isNullOrBlank()) {
                throw IllegalStateException("OPENAI_API_KEY is not configured on the server")
            }

            // Optional structured output via OpenAI's JSON schema response format
            val responseFormat = req.responseSchema?.let { schema ->
                mapOf(
                    "type" to "json_schema",
                    "json_schema" to mapOf(
                        "name" to "structured_response",
                        "strict" to true,
                        "schema" to schema,
                    ),
                )
            }

            val openAiBody = mapOf(
                "model" to req.model,
                "messages" to listOf(
                    mapOf("role" to "system", "content" to req.systemPrompt),
                    mapOf("role" to "user", "content" to req.userPrompt),
                ),
            ) + (responseFormat?.let { mapOf("response_format" to it) } ?: emptyMap())

            val objectMapper = ObjectMapper()
            val requestBody = objectMapper.writeValueAsString(openAiBody)
            val request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer $apiKey")
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(requestBody, StandardCharsets.UTF_8))
                .build()

            val response = httpClient.send(request, HttpResponse.BodyHandlers.ofString(StandardCharsets.UTF_8))
            if (response.statusCode() >= 400) {
                LOG.error("OpenAI API error: ${response.statusCode()} - ${response.body()}")
                throw RuntimeException("OpenAI API error: ${response.statusCode()}")
            }

            val json = objectMapper.readTree(response.body())
            val choices = json.path("choices")
            val content = if (choices.isArray && !choices.isEmpty) {
                choices.get(0).path("message").path("content").asText()
            } else ""

            ChatCompletionsResponse(content = content)
        }
    }
}

2. Set the API key
Configure OPENAI_API_KEY on the server (env var, system config, or secrets manager). Do not expose it to the client.
Manual provider selection
To force a specific provider instead of using resolveAIConfig:
// Server proxy (provider + model; URL derived from API_HOST)
Registration.instance(AIProvider, createAIProvider({
  provider: 'openai',
  model: 'gpt-4o-mini',
}));

// Chrome only (when available)
Registration.instance(AIProvider, createAIProvider({
  providerType: 'chrome',
}));

Chrome built-in AI (Prompt API / Gemini Nano)
The Chrome provider uses the Prompt API with Gemini Nano. It runs on-device; no API key is required and no data is sent to external services.
Production (deployed apps)
Chrome built-in AI is in an origin trial. For production domains, users do not need to enable any flags:
- Enroll your domain in the Prompt API origin trial.
- Add the origin trial token (meta tag or header) to your app.
- The model download is triggered when your app calls LanguageModel.create() from a user gesture (e.g. the AI indicator "Install model" button).
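One way a single-page app can supply the token from step 2 is to inject the origin-trial meta tag at startup. This is a minimal sketch: the helper name is hypothetical, and the token is a placeholder for the one issued when you enroll your domain.

```typescript
// Hypothetical helper: builds the attributes for Chrome's origin-trial
// meta tag (<meta http-equiv="origin-trial" content="...">).
// Replace the placeholder with the token issued for your domain.
function originTrialMeta(token: string): { httpEquiv: string; content: string } {
  return { httpEquiv: 'origin-trial', content: token };
}

// At app startup (browser only):
// const meta = Object.assign(document.createElement('meta'), originTrialMeta('<your-token>'));
// document.head.appendChild(meta);
```

Serving the token as an Origin-Trial HTTP response header works equally well if you control the web server.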
Local development (localhost)
For testing on localhost, enable these Chrome flags:
- chrome://flags/#optimization-guide-on-device-model → Enabled
- chrome://flags/#prompt-api-for-gemini-nano → Enabled
Restart Chrome after changing flags.
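Once the flags are set, you can feature-detect the Prompt API before relying on it. This is a sketch assuming Chrome's LanguageModel global and its availability() states as documented at the time of writing; the API may change while it is in origin trial.

```typescript
// Feature-detect Chrome's built-in Prompt API (Gemini Nano).
// The LanguageModel global and availability() states are an assumption
// based on Chrome's Prompt API documentation.
declare const LanguageModel:
  | {
      availability(): Promise<'unavailable' | 'downloadable' | 'downloading' | 'available'>;
    }
  | undefined;

async function chromeAIStatus(): Promise<string> {
  if (typeof LanguageModel === 'undefined') {
    return 'unavailable'; // not a Chrome build exposing the Prompt API
  }
  return LanguageModel.availability();
}
```

A 'downloadable' result means the model is supported but not yet installed; the download must then be triggered from a user gesture, as described above.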
Hardware and system requirements
- OS: Windows 10/11, macOS 13+, Linux, or ChromeOS (Chromebook Plus)
- Storage: At least 22 GB free (for model download)
- GPU: >4 GB VRAM, or CPU: 16 GB RAM and 4+ cores
- Network: Unmetered connection for initial model download only; inference runs offline
See Chrome's Prompt API docs for full requirements.
Speech-to-text
The package includes Web Speech API utilities for voice input:
import { isSpeechRecognitionAvailable, startSpeechRecognition } from '@genesislcap/foundation-ai';
if (isSpeechRecognitionAvailable()) {
const stop = startSpeechRecognition(
(transcript, isFinal) => { /* handle transcript */ },
() => { /* on error */ },
);
// Call stop() to end recording
}

License
This project provides front-end dependencies and uses licensed components. Licenses are required during development. Contact Genesis Global for details.
Licensed components
Genesis low-code platform
