Unified LLM Interface
A unified interface for interacting with multiple Large Language Model providers (Anthropic, DeepSeek, Google, OpenAI, and Open Router) with a consistent API, streaming support, and advanced features.
Features
- 🤝 Unified API for Anthropic, DeepSeek, Google, OpenAI and Open Router models
- 🌊 Streaming support with controllable callbacks
- 🛠 Tool/function calling capabilities
- 🖼 Image or document input support (multi-modal)
- 🎮 Stream control (cancel individual or all streams)
- 📊 Detailed token usage tracking
- 🔄 Response format control (text/JSON)
- 🎛 Fine-grained parameter control
- 🧠 Access to Claude 3.7+'s "thinking" feature for deeper model reasoning
Installation
npm install unified-llm
# or
yarn add unified-llm

Quick Start
import { unified } from 'unified-llm'

// Basic completion
const response = await unified.create({
  model_provider: 'openai', // or 'anthropic' | 'deepseek' | 'google' | 'open_router'
  model_name: 'gpt-4o', // e.g. 'deepseek-chat' for DeepSeek
  api_key: 'your-api-key',
  system_messages: [
    {
      type: 'text',
      content: 'You are a helpful assistant.'
    }
  ],
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          content: 'Hello, how are you?'
        }
      ]
    }
  ],
  // default_headers, headers, and extra_body apply to Open Router only
  default_headers: {
    'X-Title': '<YOUR_SITE_NAME>',
  },
  headers: undefined,
  extra_body: undefined,
})
// Streaming completion
const stream_response = unified.stream({
  model_provider: 'anthropic',
  model_name: 'claude-3-5-sonnet-20240620',
  api_key: 'your-api-key',
  system_messages: [
    {
      type: 'text',
      content: 'You are a helpful assistant.'
    }
  ],
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          content: 'Tell me a science fiction story.'
        }
      ]
    }
  ],
  cb: ({ event, data }) => {
    switch (event) {
      case 'chunk':
        console.log('Chunk: ', data)
        break
      case 'thinking_chunk':
        console.log('Thinking chunk: ', data)
        break
      case 'tool_start':
        console.log('Tool start: ', data)
        break
      case 'tool':
        console.log('Tool: ', data)
        break
      case 'finish':
        console.log('Stream finished.')
        break
      case 'error':
        console.error('Error: ', data)
        break
    }
  }
})

// Await the final response (same shape as create())
const final_response = await stream_response.stream()
// Using Claude 3.7's thinking feature
const response_with_thinking = await unified.create({
  model_provider: 'anthropic',
  model_name: 'claude-3-7-sonnet-20250219',
  api_key: 'your-api-key',
  system_messages: [
    {
      type: 'text',
      content: 'You are a helpful assistant.'
    }
  ],
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          content: 'Solve this complex problem step by step...'
        }
      ]
    }
  ],
  thinking: {
    effort: 'high',
    max_tokens: 16000
  }
})

// Access the thinking content
console.log(response_with_thinking.thinking_completion)

API Documentation
Core Interface
1. unified.create(params)
Creates a non-streaming completion request.
Parameters:
- params (UCreateParams):
  - model_provider: 'anthropic' | 'deepseek' | 'google' | 'open_router' | 'openai'
  - model_name: Model name (e.g., 'gpt-4o', 'claude-3-5-sonnet-20240620', etc.)
  - api_key: Provider API key
  - messages: Array of message blocks (user/assistant/tool interactions)
  - system_messages: Array of system instruction messages
  - tools?: Array of tool definitions
  - tool_choice?: Specific tool to use
  - response_format?: Response format (OpenAI and DeepSeek only)
  - thinking?: Thinking effort for thinking/reasoning models
    - effort: 'low' | 'medium' | 'high'
    - max_tokens: Maximum tokens to allocate for thinking
  - parallel_tool_calls?: Whether to use parallel tool calls
  - store?: Whether to store the completion
  - metadata?: Metadata to store with the completion
  - max_retries?: Maximum number of retry attempts (0 means no retries)
  - retry_base_delay?: Base delay in milliseconds between retries (default: 1000ms, max: 4000ms)
  - default_headers?: Default headers to include in the request (Open Router only)
  - headers?: Headers to include in the request (Open Router only)
  - extra_body?: Extra body to include in the request (Open Router only)
  - parameters?: Optional completion parameters
    - max_tokens?: Maximum tokens to allocate for text completion
    - temperature?: (0-1) Controls randomness
    - top_p?: (0-1) Controls diversity
    - presence_penalty?: (-2.0 to 2.0) Penalizes token presence
    - frequency_penalty?: (-2.0 to 2.0) Penalizes token frequency
Returns:
- Promise:
  - model_provider: Provider name
  - model_name: Model name
  - completion: Generated text
  - thinking_completion?: Thinking process content
  - tool_calls?: Any tool invocations
  - finish_reason?: Completion termination reason
  - token_usage: Token consumption details
  - caching_token_usage?: Cache-related token usage
  - additional_token_usage?: Additional token metrics
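For example, a minimal sketch of a JSON-mode request. Note: the { type: 'json_object' } value shown for response_format is an assumption borrowed from OpenAI's API shape; verify it against the library's exported UCreateParams type.

// Hedged sketch: request a JSON response (OpenAI and DeepSeek only).
// The response_format value below is an assumption (OpenAI-style);
// check the library's types for the exact shape.
const json_response = await unified.create({
  model_provider: 'openai',
  model_name: 'gpt-4o',
  api_key: 'your-api-key',
  system_messages: [
    { type: 'text', content: 'Reply with a single JSON object only.' }
  ],
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', content: 'List three primary colors under the key "colors".' }
      ]
    }
  ],
  response_format: { type: 'json_object' } // assumed OpenAI-style value
})

// completion is a JSON string when JSON mode is honored
console.log(JSON.parse(json_response.completion))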
2. unified.create_with_retry(params)
Creates a non-streaming completion request with multiple model fallbacks.
Parameters:
- params (UCreateWithRetryParams):
  - models: Array of model configurations to try in sequence
    - model_provider: 'anthropic' | 'deepseek' | 'google' | 'open_router' | 'openai'
    - model_name: Model name for the provider
    - api_key: Provider API key
    - max_retries?: Maximum retries for this specific model
    - retry_base_delay?: Base delay for retries (in ms) for this model
  - messages: Array of message blocks (user/assistant/tool interactions)
  - system_messages: Array of system instruction messages
  - tools?: Array of tool definitions
  - tool_choice?: Specific tool to use
  - response_format?: Response format (OpenAI and DeepSeek only)
  - thinking?: Thinking effort for thinking/reasoning models
    - effort: 'low' | 'medium' | 'high'
    - max_tokens: Maximum tokens to allocate for thinking
  - parallel_tool_calls?: Whether to use parallel tool calls
  - store?: Whether to store the completion
  - metadata?: Metadata to store with the completion
  - max_retries?: Global maximum retry attempts (0 means no retries)
  - retry_base_delay?: Global base delay in milliseconds between retries (default: 1000ms)
  - default_headers?: Default headers to include in the request (Open Router only)
  - headers?: Headers to include in the request (Open Router only)
  - extra_body?: Extra body to include in the request (Open Router only)
  - throw_error_on_content_filter?: Controls fallback behavior when the content filter is activated (default: false)
    - If true: the system attempts the next model in the fallback chain
    - If false/undefined: the system returns the content-filter response without trying other models
Returns:
- Promise: Same as the create() response (see the full example under Usage Examples)
3. unified.stream(params)
Creates a streaming completion request.
Parameters:
- Same as create(), plus:
  - cb: Callback function for stream events
    - event: 'chunk' | 'thinking_chunk' | 'tool_start' | 'tool' | 'error' | 'finish'
    - data: Event-specific data
Event Types:
- chunk: Emitted when a text chunk is received (data is a string)
- thinking_chunk: Emitted when a thinking process chunk is received (Claude 3.7+ only, data is a string)
- tool_start: Emitted when a tool call starts (data is the tool name)
- tool: Emitted when tool calls are completed (data is an array of tool calls)
- error: Emitted when an error occurs (data is the error message)
- finish: Emitted when the stream completes (data is null)
Returns:
- UUnifiedStreamResponse:
  - stream_id: Unique stream identifier
  - stream: Async function returning the final response after stream completion
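As a quick sketch of how the two fields fit together (any supported model works here):

// Sketch: keep stream_id for later control, await stream() for the final response
const stream_response = unified.stream({
  model_provider: 'openai',
  model_name: 'gpt-4o',
  api_key: 'your-api-key',
  system_messages: [{ type: 'text', content: 'You are a helpful assistant.' }],
  messages: [
    { role: 'user', content: [{ type: 'text', content: 'Write a haiku about the sea.' }] }
  ],
  cb: ({ event, data }) => {
    if (event === 'chunk' && data) process.stdout.write(data)
  }
})

console.log('Started stream:', stream_response.stream_id)

// Resolves with the same response shape as create()
const final_response = await stream_response.stream()
console.log('\nToken usage:', final_response.token_usage)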
4. unified.is_stream_active(stream_id)
Checks if a specific stream is currently active.
Parameters:
- stream_id: Unique identifier of the stream to check
Returns:
- boolean: true if the stream is active, false otherwise
5. unified.cancel_stream
Methods for cancelling active streams.
5.1. unified.cancel_stream.one(stream_id, cb?)
Parameters:
- stream_id: Unique identifier of the stream to cancel
- cb?: Optional callback function to execute after cancellation
Returns:
- void
Throws:
- Error when the specified stream is not found
5.2. unified.cancel_stream.all(cb?)
Parameters:
- cb?: Optional callback function to execute after cancellation
Returns:
- void
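Putting is_stream_active and cancel_stream together, here is a sketch that cancels a stream if it is still running after a timeout (the 5-second cutoff is arbitrary):

// Sketch: cancel a long-running stream after 5 seconds
const stream_response = unified.stream({
  // ... params as in the streaming Quick Start, including cb
})

setTimeout(() => {
  if (unified.is_stream_active(stream_response.stream_id)) {
    unified.cancel_stream.one(stream_response.stream_id, () => {
      console.log('Stream cancelled after timeout.')
    })
  }
}, 5000)

// Or cancel every active stream at once, e.g. on shutdown
process.on('SIGINT', () => {
  unified.cancel_stream.all(() => process.exit(0))
})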
Message Types
Assistant Message
{
  role: 'assistant',
  content: [
    {
      type: 'text',
      content: string,
    }
  ],
  tool_calls?: [
    {
      tool_call_id: string,
      type: 'function',
      function: {
        name: string,
        arguments: Record<string, unknown>
      },
    }
  ]
}

User Text Message
{
  role: 'user',
  content: [
    {
      type: 'text',
      content: string,
      cache_control?: CacheControlEphemeral
    }
  ]
}

User Image Message
{
  role: 'user',
  content: [
    {
      type: 'image',
      content: {
        media_type: 'image/jpeg' | 'image/png' | 'image/gif',
        content_format: 'base64' | 'url',
        content: string
      }
    }
  ]
}

User Document Message
{
  role: 'user',
  content: [
    {
      type: 'document',
      content: {
        media_type: 'application/pdf',
        content_format: 'base64' | 'url',
        content: string
      }
    }
  ]
}

User Tool Result Message
{
  role: 'tool',
  content: [
    {
      tool_call_id: string,
      content: string,
      cache_control?: CacheControlEphemeral,
      is_error?: boolean
    }
  ]
}

Tool Integration
// Tool definition
const calculator = {
  name: 'calculator',
  description: 'Performs basic calculations',
  schema: {
    type: 'object',
    properties: {
      operation: {
        type: 'enum',
        description: 'Mathematical operation to perform',
        enum: ['add', 'subtract', 'multiply', 'divide']
      },
      numbers: {
        type: 'array',
        description: 'Numbers to operate on',
        items: {
          type: 'number'
        }
      }
    }
  },
  required: ['operation', 'numbers']
}

// Tool usage
const response = await unified.create({
  // ... other params
  tools: [calculator],
  tool_choice: { type: 'required', name: 'calculator' } // Optional: force tool usage
})

Error Handling
The library throws errors in these cases:
- Invalid model provider
- Invalid API key
- Network failures
- Rate limiting
- Invalid parameters
- Stream not found (when canceling)
- Error from the provider
Example:
try {
  const response = await unified.create({
    // ... params
  })
} catch (error) {
  console.error('Error: ', error.message)
}

Usage Examples
// data.ts
import { UCreateParams, UToolDefinition, UToolUseBlock } from 'unified-llm'

export const system_messages: UCreateParams['system_messages'] = [
  {
    type: 'text',
    content:
      'You are a helpful assistant who responds to user queries and performs tasks. Always check available tools and use them to perform tasks, then respond to the user.'
  }
]

export const base_messages: UCreateParams['messages'] = [
  {
    role: 'user',
    content: [
      {
        type: 'text',
        content: `Gather all data and respond in one message for the following tasks:

    1. Can you tell me the weather in Delhi? Also tell what should I wear?
    2. Search for files containing "config" and summarize the contents.
    3. Calculate 23 * 45 and then add 50% of the result to it and tell final result.`
      }
    ]
  }
]

export const tool_list: UToolDefinition[] = [
  {
    name: 'search_files',
    description: 'Search for files containing a specific query',
    schema: {
      type: 'object',
      properties: {
        query: {
          type: 'string',
          description: 'The search query'
        }
      }
    },
    required: ['query']
  },
  {
    name: 'calculate',
    description: 'Calculate a mathematical expression',
    schema: {
      type: 'object',
      properties: {
        expression: {
          type: 'string',
          description: 'The mathematical expression to evaluate'
        }
      }
    },
    required: ['expression']
  },
  {
    name: 'get_weather',
    description: 'Get current weather for a city',
    schema: {
      type: 'object',
      properties: {
        city: {
          type: 'string',
          description: 'The city name'
        }
      }
    },
    required: ['city']
  }
]

// Mock tool implementations for the demo
const search_files = async (query: string): Promise<string> => {
  return `Found results for "${query}": config.json, config.txt
  File contents:
  config.json: '{ "maxThreads": 8, "timeoutMs": 3000, "logLevel": "debug" }'
  config.txt: 'This contains important metrics and data. System configuration settings are stored here.'`
}

const calculate_math = async (expression: string): Promise<number> => {
  // NOTE: eval is fine for this demo, but never eval untrusted input in production
  return eval(expression)
}

const get_weather = async (city: string): Promise<string> => {
  return `Weather in ${city}: 45°C, Sunny`
}

export const process_tool_calls = async (tool_calls: UToolUseBlock[]) => {
  const results: Array<{
    tool_call_id: string
    content: string
    is_error?: boolean
  }> = []
  for (const tool of tool_calls) {
    try {
      let result: string | number
      switch (tool.function.name) {
        case 'search_files':
          result = await search_files(tool.function.arguments.query as string)
          break
        case 'calculate':
          result = await calculate_math(
            tool.function.arguments.expression as string
          )
          break
        case 'get_weather':
          result = await get_weather(tool.function.arguments.city as string)
          break
        default:
          throw new Error(`Unknown tool: ${tool.function.name}`)
      }
      results.push({
        tool_call_id: tool.tool_call_id,
        content: String(result)
      })
    } catch (error) {
      results.push({
        tool_call_id: tool.tool_call_id,
        content: `Error: ${(error as Error).message}`,
        is_error: true
      })
    }
  }
  return results
}

1. Create Completion
# Code (with tool calls)
// create.ts
import dotenv from 'dotenv'
import path from 'path'
import { UCreateParams, UMessages, unified } from 'unified-llm'
import {
  base_messages,
  process_tool_calls,
  system_messages,
  tool_list
} from './data'

dotenv.config({ path: path.resolve(__dirname, '../.env') })

const process_messages = async (
  params: UCreateParams,
  messages: UMessages,
  iteration = 0
): Promise<void> => {
  if (iteration >= 10) {
    console.log(
      '!! Maximum iteration (10) reached. Stopping further tool calls.\n'
    )
    return
  }
  const response = await unified.create({
    ...params,
    messages
  })
  messages.push({
    role: 'assistant',
    content: [
      {
        type: 'text',
        content: response.completion
      }
    ],
    tool_calls: response.tool_calls
  })
  if (response.tool_calls?.length) {
    const tool_call_names = response.tool_calls.map(
      (tool_call) => tool_call.function.name
    )
    console.log(`>> Processing tool calls: ${tool_call_names.join(', ')}\n`)
    const processed_tools = await process_tool_calls(response.tool_calls)
    messages.push({
      role: 'tool',
      content: processed_tools
    })
    await process_messages(params, messages, iteration + 1)
  } else {
    console.log('>> Full conversation\n')
    console.dir(messages, { depth: null })
    console.log('\n>> Last message\n')
    console.log(messages[messages.length - 1].content[0].content)
  }
}

const main = async () => {
  const messages = [...base_messages]
  const params: UCreateParams = {
    model_provider: 'deepseek',
    model_name: 'deepseek-chat',
    api_key: process.env.DEEPSEEK_API_KEY || 'NA',
    system_messages,
    messages,
    tools: tool_list,
    parameters: {
      temperature: 0.2,
      max_tokens: 1000
    }
  }
  console.log('>> Processing messages...\n')
  await process_messages(params, messages)
  console.log('\n>> Done.\n')
}

main().catch(console.error)

# Output
>> Processing messages...
>> Processing tool calls: get_weather, search_files, calculate
>> Processing tool calls: calculate
>> Full conversation
[
  {
    role: 'user',
    content: [
      {
        type: 'text',
        content: 'Gather all data and respond in one message for the following tasks:\n' +
          '\n' +
          ' 1. Can you tell me the weather in Delhi also tell what should I wear then?\n' +
          ' 2. Can you search for files containing "config" and summarize the contents.\n' +
          ' 3. Calculate 23 * 45 and then add 50% of the result to it and tell final result.'
      }
    ]
  },
  {
    role: 'assistant',
    content: [ { type: 'text', content: '' } ],
    tool_calls: [
      {
        tool_call_id: 'call_0_701c966d-cd8f-412a-aa2c-8fce41ddd2d9',
        type: 'function',
        function: { name: 'get_weather', arguments: { city: 'Delhi' } }
      },
      {
        tool_call_id: 'call_1_7f68149a-3965-4a12-800b-6adab236155d',
        type: 'function',
        function: { name: 'search_files', arguments: { query: 'config' } }
      },
      {
        tool_call_id: 'call_2_74036671-a06d-4eb4-9652-2d3f2d991818',
        type: 'function',
        function: { name: 'calculate', arguments: { expression: '23 * 45' } }
      }
    ]
  },
  {
    role: 'tool',
    content: [
      {
        tool_call_id: 'call_0_701c966d-cd8f-412a-aa2c-8fce41ddd2d9',
        content: 'Weather in Delhi: 45°C, Sunny'
      },
      {
        tool_call_id: 'call_1_7f68149a-3965-4a12-800b-6adab236155d',
        content: 'Found results for "config": config.json, config.txt\n' +
          ' File contents:\n' +
          ` config.json: '{ "maxThreads": 8, "timeoutMs": 3000, "logLevel": "debug" }'\n` +
          " config.txt: 'This contains important metrics and data. System configuration settings are stored here.'"
      },
      {
        tool_call_id: 'call_2_74036671-a06d-4eb4-9652-2d3f2d991818',
        content: '1035'
      }
    ]
  },
  {
    role: 'assistant',
    content: [ { type: 'text', content: '' } ],
    tool_calls: [
      {
        tool_call_id: 'call_0_d014e60e-8f1a-49c5-9d9c-73e6818075f0',
        type: 'function',
        function: {
          name: 'calculate',
          arguments: { expression: '1035 + 1035 * 0.5' }
        }
      }
    ]
  },
  {
    role: 'tool',
    content: [
      {
        tool_call_id: 'call_0_d014e60e-8f1a-49c5-9d9c-73e6818075f0',
        content: '1552.5'
      }
    ]
  },
  {
    role: 'assistant',
    content: [
      {
        type: 'text',
        content: 'Here are the results for your tasks:\n' +
          '\n' +
          "1. **Weather in Delhi**: The current weather in Delhi is 45°C and sunny. Given the high temperature, it is advisable to wear light, breathable clothing such as cotton shirts, shorts, or dresses. Don't forget to wear sunglasses, a hat, and apply sunscreen to protect yourself from the sun.\n" +
          '\n' +
          '2. **Files containing "config"**:\n' +
          ' - **config.json**: `{ "maxThreads": 8, "timeoutMs": 3000, "logLevel": "debug" }`\n' +
          ' - **config.txt**: This contains important metrics and data. System configuration settings are stored here.\n' +
          '\n' +
          '3. **Calculation**: \n' +
          ' - First, calculate 23 * 45: 1035\n' +
          ' - Then, add 50% of the result to it: 1035 + (1035 * 0.5) = 1552.5\n' +
          ' - The final result is **1552.5**.'
      }
    ],
    tool_calls: undefined
  }
]

>> Last message

Here are the results for your tasks:

1. **Weather in Delhi**: The current weather in Delhi is 45°C and sunny. Given the high temperature, it is advisable to wear light, breathable clothing such as cotton shirts, shorts, or dresses. Don't forget to wear sunglasses, a hat, and apply sunscreen to protect yourself from the sun.

2. **Files containing "config"**:
   - **config.json**: `{ "maxThreads": 8, "timeoutMs": 3000, "logLevel": "debug" }`
   - **config.txt**: This contains important metrics and data. System configuration settings are stored here.

3. **Calculation**:
   - First, calculate 23 * 45: 1035
   - Then, add 50% of the result to it: 1035 + (1035 * 0.5) = 1552.5
   - The final result is **1552.5**.

>> Done.

2. Create With Retry Completion
# Code
// create-with-retry.ts
import { UCreateWithRetryParams, unified } from 'unified-llm'

const params: UCreateWithRetryParams = {
  models: [
    {
      model_provider: 'openai',
      model_name: 'gpt-4',
      api_key: process.env.OPENAI_API_KEY || 'NA',
      max_retries: 3
    },
    {
      model_provider: 'anthropic',
      model_name: 'claude-3-sonnet-20240229',
      api_key: process.env.ANTHROPIC_API_KEY || 'NA'
    }
  ],
  system_messages: [
    {
      type: 'text',
      content:
        'You are a helpful assistant focused on providing clear and concise information.'
    }
  ],
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          content: 'Explain the concept of generative AI in simple terms.'
        }
      ]
    }
  ],
  max_retries: 2,
  throw_error_on_content_filter: true,
}

// The function will try OpenAI first with 3 retries, then fall back to Anthropic if all OpenAI attempts fail
const response = await unified.create_with_retry(params)

# Output
// Same as create() function response
3. Stream Completion
# Code (with tool calls)
// stream.ts
import dotenv from 'dotenv'
import path from 'path'
import { UMessages, UStreamEvent, UStreamParams, unified } from 'unified-llm'
import {
  base_messages,
  process_tool_calls,
  system_messages,
  tool_list
} from './data'

dotenv.config({ path: path.resolve(__dirname, '../.env') })

const process_messages = async (
  params: UStreamParams,
  messages: UMessages,
  iteration = 0
): Promise<void> => {
  if (iteration >= 10) {
    console.log(
      '!! Maximum iteration (10) reached. Stopping further tool calls.\n'
    )
    return
  }
  const stream_response = unified.stream({
    ...params,
    messages
  })
  const response = await stream_response.stream()
  messages.push({
    role: 'assistant',
    content: [
      {
        type: 'text',
        content: response.completion
      }
    ],
    tool_calls: response.tool_calls
  })
  if (response.tool_calls?.length) {
    const processed_tools = await process_tool_calls(response.tool_calls)
    messages.push({
      role: 'tool',
      content: processed_tools
    })
    await process_messages(params, messages, iteration + 1)
  } else {
    console.log('>> Full conversation\n')
    console.dir(messages, { depth: null })
    console.log('\n>> Last message\n')
    console.log(messages[messages.length - 1].content[0].content)
  }
}

const main = async () => {
  const messages = [...base_messages]
  let current_completion = '' // accumulates streamed text from the chunk callback
  const params: UStreamParams = {
    model_provider: 'openai',
    model_name: 'gpt-4o',
    api_key: process.env.OPENAI_API_KEY || 'NA',
    system_messages,
    messages,
    tools: tool_list,
    parameters: {
      temperature: 0.7,
      max_tokens: 1000
    },
    cb: ({ event, data }) => {
      switch (event) {
        case UStreamEvent.chunk:
          if (data) {
            process.stdout.write(data)
            current_completion += data
          }
          break
        case UStreamEvent.tool_start:
          if (data) {
            console.log(`>> Starting tool call: ${data}\n`)
          }
          break
        case UStreamEvent.tool:
          if (data) {
            console.log('>> Tool calls completed\n')
          }
          break
        case UStreamEvent.error:
          console.error(`!! Error: ${data}`)
          break
        case UStreamEvent.finish:
          console.log('>> Stream finished\n')
          break
      }
    }
  }
  console.log('>> Processing messages...\n')
  await process_messages(params, messages)
  console.log('\n>> Done.\n')
}

main().catch(console.error)

# Output
>> Processing messages...
>> Starting tool call: get_weather
>> Starting tool call: search_files
>> Starting tool call: calculate
>> Tool calls completed
>> Starting tool call: calculate
>> Tool calls completed
Here's the gathered information:

1. **Weather in Delhi:** The temperature is 45°C and it's sunny. You should wear light, breathable clothing, such as cotton shirts or dresses to stay cool in this hot weather. Don't forget sunglasses and sunscreen if you're heading out!

2. **Search for "config" Files:**
   - **Found Files:**
     - `config.json`: Contains settings like `maxThreads: 8`, `timeoutMs: 3000`, and `logLevel: "debug"`.
     - `config.txt`: Contains important metrics and data. It includes system configuration settings.

3. **Calculation:**
   - Initial calculation: \( 23 \times 45 = 1035 \)
   - Adding 50% of the result: \( 1035 + (1035 \times 0.5) = 1552.5 \)
   - **Final Result:** 1552.5

>> Stream finished
>> Full conversation
[
  {
    role: 'user',
    content: [
      {
        type: 'text',
        content: 'Gather all data and respond in one message for the following tasks:\n' +
          '\n' +
          ' 1. Can you tell me the weather in Delhi also tell what should I wear then?\n' +
          ' 2. Can you search for files containing "config" and summarize the contents.\n' +
          ' 3. Calculate 23 * 45 and then add 50% of the result to it and tell final result.'
      }
    ]
  },
  {
    role: 'assistant',
    content: [ { type: 'text', content: '' } ],
    tool_calls: [
      {
        tool_call_id: 'call_1',
        type: 'function',
        function: { name: 'get_weather', arguments: { city: 'Delhi' } }
      },
      {
        tool_call_id: 'call_2',
        type: 'function',
        function: { name: 'search_files', arguments: { query: 'config' } }
      },
      {
        tool_call_id: 'call_3',
        type: 'function',
        function: { name: 'calculate', arguments: { expression: '23 * 45' } }
      }
    ]
  },
  {
    role: 'tool',
    content: [
      {
        tool_call_id: 'call_1',
        content: 'Weather in Delhi: 45°C, Sunny'
      },
      {
        tool_call_id: 'call_2',
        content: 'Found results for "config": config.json, config.txt\n' +
          ' File contents:\n' +
          ` config.json: '{ "maxThreads": 8, "timeoutMs": 3000, "logLevel": "debug" }'\n` +
          " config.txt: 'This contains important metrics and data. System configuration settings are stored here.'"
      },
      {
        tool_call_id: 'call_3',
        content: '1035'
      }
    ]
  },
  {
    role: 'assistant',
    content: [ { type: 'text', content: '' } ],
    tool_calls: [
      {
        tool_call_id: 'call_4',
        type: 'function',
        function: {
          name: 'calculate',
          arguments: { expression: '1035 + (1035 * 0.5)' }
        }
      }
    ]
  },
  {
    role: 'tool',
    content: [
      {
        tool_call_id: 'call_4',
        content: '1552.5'
      }
    ]
  },
  {
    role: 'assistant',
    content: [
      {
        type: 'text',
        content: "Here's the gathered information:\n" +
          '\n' +
          "1. **Weather in Delhi:** The temperature is 45°C and it's sunny. You should wear light, breathable clothing, such as cotton shirts or dresses to stay cool in this hot weather. Don't forget sunglasses and sunscreen if you're heading out!\n" +
          '\n' +
          '2. **Search for "config" Files:**\n' +
          ' - **Found Files:** \n' +
          ' - `config.json`: Contains settings like `maxThreads: 8`, `timeoutMs: 3000`, and `logLevel: "debug"`.\n' +
          ' - `config.txt`: Contains important metrics and data. It includes system configuration settings.\n' +
          '\n' +
          '3. **Calculation:**\n' +
          ' - Initial calculation: \\( 23 \\times 45 = 1035 \\)\n' +
          ' - Adding 50% of the result: \\( 1035 + (1035 \\times 0.5) = 1552.5 \\)\n' +
          ' - **Final Result:** 1552.5'
      }
    ],
    tool_calls: []
  }
]

>> Last message

Here's the gathered information:

1. **Weather in Delhi:** The temperature is 45°C and it's sunny. You should wear light, breathable clothing, such as cotton shirts or dresses to stay cool in this hot weather. Don't forget sunglasses and sunscreen if you're heading out!

2. **Search for "config" Files:**
   - **Found Files:**
     - `config.json`: Contains settings like `maxThreads: 8`, `timeoutMs: 3000`, and `logLevel: "debug"`.
     - `config.txt`: Contains important metrics and data. It includes system configuration settings.

3. **Calculation:**
   - Initial calculation: \( 23 \times 45 = 1035 \)
   - Adding 50% of the result: \( 1035 + (1035 \times 0.5) = 1552.5 \)
   - **Final Result:** 1552.5
>> Done.