
tune-sdk

v0.3.3

Published

chat with llm in a text file. core package

Readme

Tune - chat with llm in a text file


Tune is a handy extension for Visual Studio Code, with plugins for Neovim and Sublime Text, that lets you chat with large language models (LLMs) in a text file. With the Tune JavaScript SDK you can build apps and agents.

Demo

[asciicast recording]

Setup

Install tune-sdk:

npm install -g tune-sdk

# create ~/.tune folder and install batteries
tune init

Then edit the ~/.tune/.env file and add OPENAI_KEY and other provider keys.

Template Language

user:
@myprompt     include file
@image        include image
@path/to/file include file at path
@gpt-4.1      connect model
@shell        connect tool
@@prompt      include file recursively

@{ name with spaces }      - include a file whose name contains spaces
@{ image | resize 512 }    - modify with processors
@{ largefile | tail 100 }  - modify with processors
@{| sh tree }              - insert generated content with processors

read more
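For example, a chat file combining these directives might look like this (the file name `notes.txt` is illustrative; `@gpt-4.1` and the `tail` processor are from the syntax above):

```
user:
@gpt-4.1
@{ notes.txt | tail 20 }

summarize the last 20 lines of the file above
```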

Extend with Middlewares

Extend Tune with middlewares:

  • tune-fs - connect tools & files from the local filesystem
  • tune-models - connect LLM models from Anthropic/OpenAI/Gemini/OpenRouter/Mistral/Groq
  • tune-basic-toolset - basic tools like read file, write file, shell, etc.
  • tune-s3 - read/write files from S3
  • tune-mcp - connect tools from MCP servers
  • maik - fetch all your emails and index them into an SQLite database

For example:

cd ~/.tune 
npm install tune-models

Edit default.ctx.js and add the middlewares:

const models = require('tune-models')

module.exports = [
    ...
    models({
        default: "gpt-5-mini"
    })
    ...
]

Edit the .env file and add the provider keys:

OPENAI_KEY="<openai_key>"
ANTHROPIC_KEY="<anthropic_key>"

Use it in chat

system: 
@gemini-2.5-pro @openai_imgen

user: 
draw a stickman with talking bubble "Hello world"

assistant: 
tool_call: openai_imgen {"filename":"stickman_hello_world.png"}
a simple stickman drawing with a talking bubble saying 'Hello world'

tool_result: 
image generated

Command Line

# install tune globally
npm install -g tune-sdk

tune "hi how are you?"

# append user message to newchat.chat run and save
tune --user "hi how are you?" --filename newchat.chat  --save

# start new chat with system prompt and initial user message 
# print result to console
tune --system "You are Groot" --user "Hi how are you?"

# set context variable
tune --set test="hello" --user "@test" --system "You are echo, you print everything back"
# prints hello

Static web server + context over WebSocket

Build simple web apps that use the same tools, files, and models available in Tune chat; the web app and the chat share the same context:

// Read files
await ctx.read("path/to/file")

// Write files
await ctx.write("path/to/file", content)

// Execute tools
let result = await ctx.exec("tool", { param: "value" })
// Note that `result` is always a string. If you expect JSON:
result = JSON.parse(result)

// Also render errors to the user, since you won’t be able to see and debug them otherwise.

// Call LLM
const reply = await ctx.file2run({ user: "hi" })

Create an app, e.g. index.html:

...
<!-- Load the context into `window.ctx` -->
<script src="/contextws.js"></script>
...

Run the static web server from the folder:

$ tune ws

listening on http://localhost:8080
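Inside the page, the loaded `window.ctx` can then be used with the same read/write/exec/file2run calls shown above. A minimal sketch (the `sh` tool name is an assumption borrowed from the processors example, not a guaranteed built-in):

```html
<script src="/contextws.js"></script>
<script>
  // Hypothetical page script: run a tool, then ask the LLM about its output.
  async function main() {
    const tree = await window.ctx.exec("sh", { command: "tree" }) // "sh" tool name is an assumption
    const answer = await window.ctx.file2run({ user: "Summarize this directory tree:\n" + tree })
    document.body.textContent = answer
  }
  // Render errors to the user, since they are otherwise invisible in the browser.
  main().catch(err => { document.body.textContent = String(err) })
</script>
```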

Javascript SDK

npm install tune-sdk

Tune core is middleware-based. A context resolves @name references into nodes like text, tool, llm, and processor.
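Conceptually, resolution works like a chain: the context asks each registered middleware in turn until one returns a node. The sketch below is a simplification for illustration only, not the actual tune-sdk internals:

```javascript
// Conceptual sketch of middleware-based name resolution (hypothetical,
// not the real tune-sdk implementation).
function makeResolver(middlewares) {
  return async function resolve(name) {
    for (const mw of middlewares) {
      const node = await mw(name) // each middleware returns a node or undefined
      if (node) return node       // first match wins
    }
    return null                   // no middleware could resolve the name
  }
}

// Two toy middlewares: one serves a text node, one serves a tool node.
const resolve = makeResolver([
  async (name) =>
    name === "greeting"
      ? { type: "text", name, read: async () => "hello" }
      : undefined,
  async (name) =>
    name === "shell"
      ? { type: "tool", name, exec: async () => "ok" }
      : undefined,
])
```

The real `makeContext()`/`ctx.use()` API that follows works on the same principle, with nodes of type `text`, `tool`, `llm`, and `processor`.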

const fs = require("fs")
const tune = require("tune-sdk")

async function main() {
  const ctx = tune.makeContext()

  ctx.use(async function middleware(name) {
    if (name === "file.txt") {
      return {
        type: "text",
        name: "file.txt",
        read: async () => fs.readFileSync("file.txt", "utf8")
      }
    }

    if (name === "readfile") {
      return {
        type: "tool",
        name: "readfile",
        schema: {
          type: "object",
          properties: {
            filename: { type: "string" }
          }
        },
        exec: async ({ filename }) => fs.readFileSync(filename, "utf8")
      }
    }

    if (name === "gpt-5") {
      return {
        type: "llm",
        name: "gpt-5",
        exec: async (payload) => ({
          url: "https://api.openai.com/v1/chat/completions",
          method: "POST",
          headers: {
            Authorization: `Bearer ${process.env.OPENAI_KEY}`,
            "Content-Type": "application/json"
          },
          body: JSON.stringify({
            model: "gpt-5",
            ...payload
          })
        })
      }
    }

    if (name === "tail") {
      return {
        type: "processor",
        name: "tail",
        exec: async (node, args) => {
          if (!node) return
          if (node.type !== "text") throw Error("tail can only modify text nodes")
          return {
            ...node,
            read: async () => {
              const content = await node.read()
              const n = parseInt(args.trim(), 10) || 20
              return content.split("\n").slice(-n).join("\n")
            }
          }
        }
      }
    }
  })

  const content = await ctx.file2run({
    system: "@gpt-5 @readfile",
    user: "can you read file.txt?",
    stream: false,
    response: "content"
  })

  console.log(content)
}

main()

read more about javascript sdk

Help / Manual

You can access the Tune manual, and the manuals of connected middlewares, from a chat:

system:
@man include all manuals for all connected packages
@man/ - list all the manuals, like listing a directory
@man/tune-sdk - get the manual for the Tune core package
@man/tune-basic-toolset - get the manual for the tune-basic-toolset package

Read any of them as a file:
tool_call: rf { "filename": "man/tune-basic-toolset"}
tool_result:
@man/tune-basic-toolset

To connect man middleware in default.ctx.js:

const man = require('tune-sdk/man')

module.exports = [
    ...
    man(),
    ...
]

To expose your npm package's README.md as man/<package-name>, add this to your package's src/index.js:

const man = require("tune-sdk/man");

// this method will read package.json and README.md
man.add(__dirname)