# js-ai-core

Core HTTP/TLS client for multiple AI providers (OpenAI, Groq, Grok, etc.) with zero dependencies. Today this package ships:

- `HttpClient`: a minimal TLS + raw HTTP/1.1 client
- `OpenAIProvider`: a wrapper for the OpenAI Responses API
## Install

```bash
npm install @jeandev_2030/js-ai-core
```

## Quick start (OpenAI)
```js
const { OpenAIProvider } = require('@jeandev_2030/js-ai-core');

const client = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY
});

async function run() {
  const response = await client.createResponse({
    model: 'gpt-4.1-mini',
    input: 'Hello!'
  });

  console.log('status:', response.status);
  console.log('id:', response.id);
  console.log('text:', client.extractText(response));
}

run().catch(console.error);
```

## API
### HttpClient

```js
const { HttpClient } = require('@jeandev_2030/js-ai-core');

const http = new HttpClient({ timeout: 30_000 });
```

Methods (see the usage sketch below):

- `request({ method, url, headers, body })`
- `get(url, options)`
- `post(url, body, options)`
Return value:

```js
{
  status: Number,
  headers: Object, // lowercased header names
  data: Object | String | null
}
```

Notes:
- Uses TLS sockets and raw HTTP/1.1 requests
- No redirects, retries, or streaming
- Default headers: `Content-Type: application/json`, `User-Agent: js-ai-core/1.0`, `Connection: close`
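
A minimal usage sketch, assuming the return shape documented above; the example URLs and the `{ headers: ... }` options shape are illustrative, not documented by this README:

```js
const { HttpClient } = require('@jeandev_2030/js-ai-core');

const http = new HttpClient({ timeout: 30_000 });

async function demo() {
  // GET: `res.data` is Object | String | null, per the return value above.
  const res = await http.get('https://api.example.com/health');
  console.log(res.status, res.headers['content-type'], res.data);

  // POST: send a JSON body; the extra headers (assumed `{ headers: ... }`
  // options shape) are merged with the defaults listed in the notes.
  const created = await http.post(
    'https://api.example.com/items',
    { name: 'demo' },
    { headers: { Authorization: `Bearer ${process.env.API_TOKEN}` } }
  );
  console.log(created.status, created.data);
}

demo().catch(console.error);
```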
### OpenAIProvider

```js
const { OpenAIProvider } = require('@jeandev_2030/js-ai-core');

const openai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY,
  baseUrl: 'https://api.openai.com/v1' // optional
});
```

Methods:

- `createResponse({ model, input, headers, options })`: `headers` are merged into the request headers, and `options` is merged into the JSON payload sent to `/responses`
- `extractText(response)`: returns the concatenated text from `response.raw.output`
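
For example, a sketch of passing extra request headers and payload options; the `temperature` option and the `OpenAI-Organization` header are illustrative values, not something this package requires:

```js
const { OpenAIProvider } = require('@jeandev_2030/js-ai-core');

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const response = await openai.createResponse({
    model: 'gpt-4.1-mini',
    input: 'Summarize the Responses API in one sentence.',
    // Merged into the request headers (illustrative header).
    headers: { 'OpenAI-Organization': process.env.OPENAI_ORG_ID },
    // Merged into the JSON payload sent to /responses (illustrative option).
    options: { temperature: 0.2 }
  });

  console.log(openai.extractText(response));
}

main().catch(console.error);
```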
Errors:

If the API returns a status >= 400, the provider throws:

```js
{
  provider: 'openai',
  status: Number,
  message: String,
  code: String | null,
  type: String | null,
  raw: Object
}
```
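
A minimal sketch of catching that error, assuming the call fails with an API error (network-level failures may look different):

```js
const { OpenAIProvider } = require('@jeandev_2030/js-ai-core');

const openai = new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  try {
    await openai.createResponse({ model: 'gpt-4.1-mini', input: 'Hello!' });
  } catch (err) {
    // Fields follow the shape documented above; `raw` carries the
    // full error payload returned by the API.
    console.error(`openai error ${err.status}: ${err.message} (code: ${err.code})`);
    console.error(err.raw);
  }
}

main();
```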
## Node support

- Node.js >= 18
## License
MIT
