Tune basic toolset
Basic toolset for Tune.
Index
- Setup
- Tools
- rf read file
- wf write file
- patch patch file
- append append to file
- sh execute shell command locally and remotely
- cmd execute Windows cmd command
- powershell execute PowerShell command
- grep search for patterns in text or files
- osa manage reminders/notes/calendar (AppleScript/macOS)
- jina_r fetch webpage content
- websearch search the web with web-enabled LLMs
- list keep a list of tasks to do (loops for LLM)
- sqlite execute sqlite queries
- py run python code
- js_node run javascript code in node process
- js_ctx run javascript code that shares context with LLM
- turn handoff-based agent (shared context)
- message talk to another chat/agent (separate context)
- Processors
- proc converts tool to processor
- init set initial value
- json_format make LLM respond with JSON
- log save LLM payload
- mock set variables inline
- linenum prepend line numbers
- text convert any variable to text variable
- resolve resolve a variable
- prop set additional properties of LLM
- head take first N lines of a file
- tail take last N lines of a file or LLM payload
- slice take a range of lines from a file
- random random selection, sampling, shuffling, uniform ranges
- curry change a tool by setting a parameter
Setup
Text Editor
Install in your ~/.tune folder:
cd ~/.tune
npm install tune-basic-toolset
Add to ~/.tune/default.ctx.js:
const basics = require('tune-basic-toolset')
module.exports = [
...
basics()
...
]
JavaScript Project
npm install tune-basic-toolset tune-sdk
const tune = require('tune-sdk')
const basics = require('tune-basic-toolset')
const ctx = tune.makeContext(
basics({ expose: ["rf", "wf"], mount: "tools" })
)
Tools
A tool is a function that the LLM can run on your local machine or server.
rf
Read file
user: @rf
can you read README.md?
tool_call: rf {"filename":"README.md"}
tool_result:
@README.md
It accepts an optional linenum parameter that prepends line numbers to the file (useful for patching).
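For example, a hypothetical call with line numbering enabled (only the parameter name is documented above; the boolean value format is an assumption):
tool_call: rf {"filename":"README.md","linenum":true}
tool_result:
1 | Tune basic toolset
2 | Basic toolset for Tune.
....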
wf
Write to a file
user: @wf
make a hello world javascript
tool_call: wf {"filename":"helloWorld.js"}
console.log('Hello, World!');
tool_result:
written
patch
Patch a file
user: @patch
translate "hello world" in helloWorld.js to dutch
tool_call: patch {"filename":"helloWorld.js"}
<<<<<<< ORIGINAL
console.log('Hello, World!');
=======
console.log('Hallo, Wereld!');
>>>>>>> UPDATED
tool_result:
patched
append
Append to a file
user: @append
Add a todo that i want to buy a car and go to the moon
tool_call: append {"filename":"todo.md"}
- buy a car
- fly to the moon
tool_result:
appended
sh
Execute a shell command locally or over ssh
user: @sh
find with ripgrep where echo is used
tool_call: sh
rg 'echo' ./
tool_result:
./README.md: echo: "You are echo, you print everything back",
./README.md: const text = "s: \@echo\nu: hello world";
./tools/echo.txt:you are echo, you print everything back
./tools/README.md:* `echo.txt` - to debug variable expansions and context
user:
check the contents of a directory on root\@host.com
assistant:
tool_call: sh {"host": "[email protected]"}
ls
tool_result:
file1
file2
cmd
Execute Windows cmd command
user: @cmd
list all files in current directory
tool_call: cmd
dir
tool_result:
Volume in drive C has no label.
Volume Serial Number is 1234-5678
Directory of C:\project
12/01/2023 10:30 AM <DIR> .
12/01/2023 10:30 AM <DIR> ..
12/01/2023 09:15 AM 1,024 package.json
12/01/2023 09:20 AM <DIR> src
1 File(s) 1,024 bytes
3 Dir(s) 15,728,640 bytes free
powershell
Execute PowerShell command
user: @powershell
get system information
tool_call: powershell
Get-ComputerInfo | Select-Object WindowsProductName, TotalPhysicalMemory, CsProcessors
tool_result:
WindowsProductName : Windows 11 Pro
TotalPhysicalMemory : 17179869184
CsProcessors : {Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz}
grep
Search for patterns in text or files using regular expressions
user: @grep
find all lines containing "TODO" in myfile.js
tool_call: grep {"filename":"myfile.js","regex":"TODO"}
tool_result:
// TODO: refactor this function
// TODO: add error handling
grep can also be used as a processor (via proc) to insert matching lines into the prompt:
system:
TODOS:
@{ myfile.js | proc grep regex=TODO }
osa
AppleScript tool; manage reminders, notes, calendar, etc. on macOS
system: @osa @gpt-4.1
you're a personal assistant with access to macOS apps
like Reminders, Calendar, etc.
user:
can you remind me in 2 hours that i have meeting with jack
assistant:
tool_call: osa
tell application "Reminders" to make new reminder with properties {name:"Meeting with Jack", remind me date:(current date) + 2 * hours}
tool_result:
reminder id x-apple-reminder://849EA642-B7E0-4E16-94AA-B50FF1BC282D
jina_r
Fetch webpage content with the Jina Reader API. It works without an API key at a limited rate; set JINA_KEY in .env for higher rate limits.
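For example, a .env entry could look like this (the key value below is just a placeholder):
JINA_KEY=jina_xxxxxxxxxxxxxxxx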
user: @jina_r
take https://iovdin.github.io/tune/ page content
and put it into `home` directory
then make short summary
tool_call: jina_r {"url":"https://iovdin.github.io/tune/","filename":"home/tune_page_content.md"}
tool_result:
@home/tune_page_content.md
assistant:
The content from the "Tune" page has been successfully retrieved and saved into the `home/tune_page_content.md` file. Below is a short summary of the page:
---
**Summary of Tune - Your Everyday LLM Toolkit:**
Tune is a versatile toolkit designed for developers and users to effectively interact with Large Language Models (LLMs). It allows users to chat with AI, build applications, and create AI agents in a simple manner.
<cut for brevity>websearch
Search the web with web-enabled LLMs.
Supports the perplexity/sonar, perplexity/sonar-pro, gpt-4o-search-preview, and gpt-4o-mini-search-preview models via the model parameter (defaults to perplexity/sonar).
user: @websearch
latest ai news
assistant:
tool_call: websearch {"model":"perplexity/sonar"}
latest AI news
tool_result:
The latest AI news in October 2025 highlights significant investments, new projects, policy developments, and advances across various sectors:
- Major companies including Microsoft, Google, Nvidia, OpenAI, Salesforce, and CoreWeave have pledged over £31 billion in capital expenditure focused on AI data centers and infrastructure upgrades[1].
The websearch tool provides up-to-date information by querying the web through AI-powered search models. You can specify different Perplexity models like perplexity/sonar-pro for more advanced searches.
list
Keep a list of tasks to do
system: @list
You manage a todo list in the file todo.txt
user:
Today i need to refactor xyz
and then release abc
assistant:
tool_call: list {"filename":"todo.txt"}
todo - refactor xyz
todo - release abc
tool_result:
list updated
user:
ok, i've finished refactoring xyz
assistant:
tool_call: list {"filename":"todo.txt"}
done - refactor xyz
tool_result:
list updated
sqlite
Execute sqlite queries with the sqlite shell
user:
@sqlite
create a simple todo table in db.sqlite and fill it with fake data
assistant:
tool_call: sqlite {"filename":"db.sqlite"}
CREATE TABLE todo (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task TEXT NOT NULL,
status TEXT CHECK(status IN ('pending', 'completed')) NOT NULL DEFAULT 'pending',
due_date DATE
);
INSERT INTO todo (task, status, due_date) VALUES
('Buy groceries', 'pending', '2024-05-01'),
('Finish project report', 'pending', '2024-05-03'),
('Book flight tickets', 'completed', '2024-04-25'),
('Call plumber', 'pending', '2024-04-30'),
('Schedule meeting with team', 'completed', '2024-04-20');
tool_result:
user:
check pending tasks
assistant:
tool_call: sqlite {"filename":"db.sqlite","format":"json"}
SELECT * FROM todo WHERE status = 'pending';
tool_result:
[{"id":1,"task":"Buy groceries","status":"pending","due_date":"2024-05-01"},
{"id":2,"task":"Finish project report","status":"pending","due_date":"2024-05-03"},
{"id":4,"task":"Call plumber","status":"pending","due_date":"2024-04-30"}]py
execute python code
user: @py
1234 + 4311
tool_call: py
1234 + 4311
tool_result:
5545
js_node
Execute JavaScript code in Node.js
user: @js_node
1234 + 4311
tool_call: js {"inputType":"commonjs"}
1234 + 4311
tool_result:
5545
js_ctx
Execute JavaScript code that can also call tools available to the LLM or trigger an LLM response from code. It is useful when you need loops and conditionals, or want to chain several tool calls together.
user:
@js_ctx @sh
what is in my current directory?
tool_call: sh
ls
tool_result:
__pycache__
append.schema.json
append.tool.js
... file list
user:
can you do the same using js_ctx tool?
assistant:
tool_call: js_ctx
const result = await ctx.exec("sh", { text: "ls" });
return result;
tool_result:
__pycache__
append.schema.json
append.tool.js
....
Now we can loop through all the files and summarize their content.
user:
can you loop through the .js files and summarize their content using llm?
assistant:
tool_call: js_ctx
// Get all .js files (excluding .proc.js and .schema.json files)
const files = await ctx.exec("sh", { text: "ls *.js | grep -v '.proc.js'" });
const jsFiles = files.trim().split('\n').filter(f => f.endsWith('.js'));
const summaries = [];
// Loop through each .js file
for (const file of jsFiles) {
// Read the file content
const content = await ctx.exec("sh", { text: `cat ${file}` });
// Use LLM to summarize
const summary = await ctx.file2run({
system: "@mistral-tiny You are a code analyst. Provide a brief 1-2 sentence summary of what this JavaScript file does.",
user: "Summarize this code:\n\n@code"
}, {
code: content
});
summaries.push({
file: file,
summary: summary.trim()
});
}
// Format and return results
return summaries.map(s => `**${s.file}**\n${s.summary}\n`).join('\n');
tool_result:
**append.tool.js**
This JavaScript file exports an asynchronous function that appends the given text to the end of a specified file, and returns "done" once the operation is completed.
**cmd.tool.js**
This JavaScript file exports an asynchronous function `cmd` that executes a system command (Windows-specific, using 'cmd.exe' as the shell) passed as a string, and returns the command's output or an error message containing both stderr and stdout.
**grep.tool.js**
This JavaScript module exports an asynchronous function named `grep` that reads a file (if provided) or a provided text and searches for lines matching a given regular expression. It returns the lines that match, separated by newlines. If the file is not found or the content is empty, it returns appropriate error messages.
....
turn
A way to switch roles when building multi-step agents.
system: @gpt-4o
@{ turn | curry filename=agent.txt}
You're playing a game of 20 questions.
You switch turns between the 'thinker' and 'player' agents.
'player' always plays first.
@@agent|init
assistant:
Is it a living thing?
tool_call: turn {"name": "thinker"}
tool_result:
now it is turn of thinker to reply
assistant:
No.
tool_call: turn {"role":"player"}
tool_result: now it is turn of player to reply
assistant:
Is it something that can be used indoors?
tool_call: turn {"role":"thinker"}
tool_result: now it is turn of thinker to reply
assistant:
Yes.
...
message
Talk to another chat/agent via a tool call. Orchestrate or evaluate other agents/chats.
system:
@message
Your goal is to talk to Groot at `groot.prompt` system prompt
and try to make him say anything but 'I am Groot'
tool_call: message {"filename":"groot.chat","system":"@@groot.prompt"}
Hello Groot! How are you feeling today?
tool_result:
I am Groot!
tool_call: message {"filename":"groot.chat"}
What do you think about trees?
tool_result:
I am Groot!
tool_call: message {"filename":"groot.chat"}
Can you tell me a joke?
tool_result:
Sure! Why did the tree go to the dentist?
Because it had a root canal!
The content of groot.chat is then:
system: @@groot.prompt
user:
Hello Groot! How are you feeling today?
assistant:
I am Groot!
user:
What do you think about trees?
assistant:
I am Groot!
user:
Can you tell me a joke?
assistant:
Sure! Why did the tree go to the dentist?
Because it had a root canal!
Processors
A processor is a way to modify a variable or insert new content into the chat.
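The general pattern, used throughout the examples below, is a pipe inside a variable expansion:
@{ variable_or_file | processor arg=value } # transform an existing variable or file
@{| processor args } # no input; the processor supplies the content (or performs its action) itself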
proc
Converts any tool to a processor
system:
include the project file list in the system prompt
@{| proc sh git ls-files }
execute a script with sqlite on db `db.sqlite` and insert the result
@{ script.sql | proc sqlite filename=db.sqlite }
execute a python script with text="384 * 123" and insert the result back
@{| proc py 384 * 123 }
init
Set a default value for unset variables
system:
@memory|init
if memory does not exist, the chat will fail
json_format
Set the LLM response format to JSON.
Without arguments it sets
"response_format": {
  "type": "json_object"
}
system:
@{ gpt-4o | json_format }
please reply in json format:
{ "message": "Your reply"}
user:
hi how are you?
assistant:
{ "message": "I'm just a virtual assistant, so I don't have feelings, but I'm here and ready to help you! How can I assist you today?" }
With an argument it sets
"response_format": {
  "type": "json_schema",
  "json_schema": { "schema": "<contents of the referenced schema file>" }
}
system:
@{ gpt-4o | json_format path/to/schema.json }
log
Save the LLM payload to a JSON or chat file; useful for debugging
system:
@{ gpt-4o | log path/to/log.json }
@{ gpt-4o | log path/to/log.chat }
mock
Set variables inline in chat.
system: @{| mock hello=world }
@echo
user:
@hello
assistant:
world
linenum
Prepend line numbers to file content. Useful when patching a file.
system:
@echo
user:
@{ helloWorld | linenum }
assistant:
1 | console.log('Hello, World!');
text
Treat special files (.ctx.js, .llm.js, .tool.js) as plain text
system:
@echo
user:
content
@rf.tool.mjs
assistant:
content
user:
content
@{ rf.tool.mjs | text}
assistant:
content
import { promises as fs } from 'fs';
import { relative, dirname } from 'path'
....
resolve
Given a filename, resolve it and include its content
@{ filename | resolve }
See the examples/queryimage example.
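As a hedged sketch, it can be combined with the mock processor (the variable name and path below are hypothetical):
system: @{| mock notes_file=notes/today.md }
user:
summarize this file
@{ notes_file | resolve }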
prop
Set additional properties for the LLM
system:
@{ o3-mini | prop reasoning_effort=low temperature=2.0 }
head
Take first N lines of text from a file or variable. Default is 20 lines.
user:
@{ filename.txt | head 10 } # first 10 lines
tail
Take last N lines of text from a file or variable. Default is 20 lines.
user:
@{ filename.txt | tail 15 } # last 15 lines
You can limit the LLM request context with tail, for example:
system:
@{ gpt-4.1 | tail 2 } # take last 2 messages from the chat + system message
user:
1
assistant:
2
user:
3
assistant:
4
slice
Extract a range of lines from a file or variable.
user:
@{ filename.txt | slice 5 15 } # lines 5 to 15 inclusive
@{ filename.txt | slice 10 } # from line 10 to end
@{ filename.txt | slice -10 -1 } # last 10 lines
@{ filename.txt | slice -20 } # last 20 lines
@{ filename.txt | slice 1 20 } # first 20 lines (like head 20)
random
Random selection, sampling, shuffling, and uniform number generation.
Use cases:
user:
@{| random a b c d }
@{| random choice a b c d }
@{| random "choice 1" "choice 2" }
@{| random choice @path/to/file.txt } # choose 1 line from a file
@{| random choice 2..30 } # choose 1 from range
@{| random choice -2.5..7.5 } # floats
@{| random choices 3 a b c d } # pick 3 with replacement
@{| random choices 5 @file.txt } # pick 5 lines from file.txt
@{| random sample 3 a b c d } # pick 3 without replacement
@{| random sample 10 1..5 } # will return 5 unique numbers
@{| random shuffle a b c d }
@{| random shuffle 1..10 }
@{| random uniform 1..10 } # integers
@{| random uniform -2.5..7.5 } # floats
@{| random uniform 10 20 } # two-number form
Notes:
- Quotes are respected for tokens with spaces.
- Files referenced as @file are expanded to non-empty trimmed lines.
- Integer ranges like a..b can be mixed with discrete values and files; float ranges cannot be mixed in lists.
- sample and shuffle require a discrete set; float ranges are not supported there.
- choices and sample output multiple lines (one item per line).
curry
Modify a tool by setting a parameter, name, or description. Narrow the possible usage of a tool so that the LLM won't mess up.
user:
@{ sh | curry text=ls $name=ls_cwd}
what is in my current directory?
assistant:
tool_call: ls_cwd
tool_result:
node_modules
package.json
README.md
src
user:
@{ sqlite | curry filename=db.sqlite format=table}
what tables are out there?
user:
@{ list | curry filename=mylist.todo $name=todo }
create sample todo list
tool_call: todo
[] - Create sample todo list
tool_result:
list updated
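Since curry can also change a tool's description, presumably it works the same way as $name (the $description key below is an assumption by analogy and is not shown in the examples above):
user:
@{ sh | curry text=ls $name=ls_cwd $description="list files in the current working directory" }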