slantwise v0.0.11
Slantwise
A CLI and local app to iterate on LLM chains with declarative and reactive formulas. Test your prompt variants quickly and as-needed with a deduplicated cache and lazy evaluation.
Note: this is super alpha software and the database schema is pretty unstable. Early feedback is welcome, but please be aware that there is no guarantee data is transferable from one version to another.
Why?
Prototyping LLM workflows is too slow! I developed this project partially out of curiosity and partially because I got impatient prototyping LLM wrappers for simple ideas. A new LLM-friendly problem stares me in the face every other week; CLI agents are great, but sometimes I just want to lock in a flow that I like. I found myself wanting the live iteration experience of reactive notebooks with the light syntax ergonomics of https://llm.datasette.io/, while letting me figure out how the pieces fit together as I went. Essentially, I wanted Excel but with more space to read. It's still early, but if you want to prototype workflows with formulas, this is for you!
Usage
Slantwise's fundamental building block is the "formula": an expression that defines an output.
Every formula is composed of one or more operations. The core set is outlined here:
- `llm`
- `getUrlContent`
- `concat`
Run `slantwise operations` to see all currently available operations.
`llm` behaves like a single conversation turn:

```
llm("hot air balloon", prompt="write me a bedtime story about the topic", model="openai/gpt-5")
```

Formulas are nestable:
```
llm(
  llm("hot air balloon", prompt="write me a bedtime story about the topic", model="openai/gpt-5"),
  prompt="rate this bedtime story. 5 star scale",
  model="openai/gpt-5"
)
```

or chained using pipe operators, which pass the previous result along as input (equivalent to nesting):

```
llm("hot air balloon", prompt="write me a bedtime story about the topic", model="openai/gpt-5")
|> llm(prompt="write a review for this story", model="openai/gpt-5")
```

and chains can get arbitrarily long:
```
llm("hot air balloon", prompt="write me a bedtime story about the topic", model="openai/gpt-5")
|> llm(prompt="write a review for this story", model="openai/gpt-5")
|> llm(prompt="give an appropriate 5-point rating that matches this review", model="openai/o3")
```

`getUrlContent` uses Jina Reader to retrieve web content for the given URL in an LLM-friendly format. It's chainable with `llm` for some interesting results:
```
getUrlContent("https://news.ycombinator.com/")
|> llm(prompt="list the links to hardware-related threads", model="openai/gpt-5")
```

Formulas can reference each other using a $-prefixed ID:
```
$ slantwise create 'getUrlContent("https://news.ycombinator.com/")'
# => chatty-ghosts-leave
$ slantwise create '$chatty-ghosts-leave |> llm(prompt="list the links to hardware-related threads", model="openai/gpt-5")'
# => thirty-laws-clap
```

Formulas are lazily evaluated, meaning they are only computed when read. A formula is also computed when any downstream formula that depends on it is read!
Formula results are also cached; when a formula is read (`slantwise read <formula-id>`) for the first time, the result is remembered for future reads.
This means all operations are treated as if they were deterministic, which can be useful when iterating on LLM outputs.
```
# Reading the previous example's formula
$ slantwise read thirty-laws-clap
# => - https://news.ycombinator.com/item?id=123...
# Second read returns the same result
$ slantwise read thirty-laws-clap
# => - https://news.ycombinator.com/item?id=123...
```

The caching behaviour can be overridden using the `--reroll` flag:

```
$ slantwise read thirty-laws-clap
# => - https://news.ycombinator.com/item?id=123...
# ^ old ID
$ slantwise read thirty-laws-clap --reroll
# => - https://news.ycombinator.com/item?id=456...
# ^ new ID 👀
```

Slantwise detects when formula references form a cycle. To prevent (potentially expensive!) infinite loops, backreferences to in-progress formulas are substituted with an empty "seed" value. In other words, each node in a cycle is computed at most once.
```
$ slantwise create -l ping 'concat("ping ", "temp")'
# => smooth-parks-pump
$ slantwise create -l pong 'concat("pong ", $smooth-parks-pump)'
# => giant-windows-film
$ slantwise update ping --expression 'concat("ping ", $giant-windows-film)'
$ slantwise read ping
# => ping
# => pong
# =>
# Note that results are impacted by which formula is read
$ slantwise read pong
# => pong
# => ping
# =>
```

Use the `trace` command for dependency and seeding information:
```
# trace executes formulas like read and accepts the same flags
$ slantwise trace ping --reroll
# => ping (smooth-parks-pump)
# => concat [computed]
# =>   → "ping \npong \n"
# =>   ├─ constant
# =>   │  → "ping "
# =>   └─ concat [computed]
# =>      → "pong \n"
# =>      ├─ constant
# =>      │  → "pong "
# =>      └─ concat [seed]
# =>         → ""
```

Formulas can be managed using the `list`, `create`, `update`, and `delete` commands, and can be given a custom label for CLI usage with the `-l` flag.
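Putting the management commands together, a sketch of a label-driven session might look like the following. The `summarizer` label, the expressions, and the comments are invented for illustration; the commands themselves (`create -l`, `read`, `update --expression`, `list`, `delete`) are the ones shown above, and labels appear usable in place of IDs (as with `ping` above):

```
# Create a formula with a human-friendly label
$ slantwise create -l summarizer 'getUrlContent("https://news.ycombinator.com/") |> llm(prompt="summarize the front page", model="openai/gpt-5")'

# Read it by label instead of by generated ID
$ slantwise read summarizer

# Swap the expression behind the label without re-creating it
$ slantwise update summarizer --expression 'getUrlContent("https://news.ycombinator.com/") |> llm(prompt="summarize the front page in one sentence", model="openai/gpt-5")'

# List all formulas, then remove the one we no longer need
$ slantwise list
$ slantwise delete summarizer
```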
Installation and Setup
The CLI is available on npm. Install it globally using:

```
npm install -g slantwise
```

or try it out with:

```
npx slantwise
```

To get started:
- run `slantwise init` to generate config files
- open `config.json`:
  - on Linux, found in `~/.config/slantwise`
  - on macOS, found in `~/Library/Preferences/slantwise`
  - on Windows, found in `%APPDATA%\slantwise\Config`
- update at least one API key:
  - `openaiApiKey` - for OpenAI models
  - `openRouterApiKey` - for OpenRouter models
- (Optional) use `slantwise models` to see which LLM models are available, or `slantwise operations` to see valid operations.
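For reference, a minimal `config.json` might look like the sketch below. Only the two key names above are confirmed; that they are top-level string fields is an assumption, and the generated file may contain other settings:

```json
{
  "openaiApiKey": "<your-openai-key>",
  "openRouterApiKey": "<your-openrouter-key>"
}
```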
A standalone GUI is also available, but may lag behind the CLI in feature parity. The latest version can be found on the Releases page.
On the docket (in no particular order)
- file path referencing
- bulk processing
- more model support
- rate-limit aware queueing
- multi-workspace with live file watching
- persisting results as files (rather than purely in db)
- live observability
- parallelized execution
- garbage collection
- global undo/redo
- keybinding support
- loop stepping
Building from source
- Install Nix v2.31.0+ from the Nix Download Page
- Enable Nix flakes (NixOS Wiki)
- From the repo directory, run `nix develop`
- Install dependencies by running `just install`
- Run the associated build command for the interface:
  - Electron App: Run `just build {mac|win|linux}` to build for your specific OS, or `just build` to build for all platforms.
  - CLI: Run `just build-cli`
Development
- Install Nix v2.31.0+ from the Nix Download Page
- Enable Nix flakes (NixOS Wiki)
- From the repo directory, run `nix develop` to enter the Nix development environment
  (Optional: if you use direnv, run `direnv allow` once to automatically enter the environment when you navigate to the repo directory)
- Install dependencies by running `just install`
- Run the development interface with the associated command:
  - Electron App: Run `just dev` to start the Electron dev environment
  - CLI: Run `just cli` to build and run the CLI
To see other frequently useful development commands, run `just`.
License
Apache 2.0
