nlang-cli v0.1.3 — Executable Extensions: build system for double-extension files
# nlang — Executable Extensions
A build system where file extensions define the build pipeline. Files with double extensions (`.html.md`, `.json.js`, `.css.md`) are executed automatically: markdown files are sent to an LLM; JavaScript/TypeScript files are run in Node.js.
## Install

```sh
npm install -g nlang
```

## Quick Start
- Create a file with a double extension:

  ```markdown
  <!-- index.html.md -->
  ---
  model: gpt-5.4-2026-03-05
  ---
  Create a simple landing page for a developer portfolio.
  Include a hero section, about section, and contact form.
  Use modern CSS with a dark theme.
  ```

- Generate the GitHub Action:

  ```sh
  nlang init
  ```

- Set your API key in GitHub repo Settings → Secrets → `OPENAI_API_KEY`.

- Push — your files will be built automatically!
## How It Works
### Double Extensions
The last extension determines the executor; everything before it is the output filename:
| Source File | Executor | Output |
| ---------------- | -------------------- | ------------- |
| index.html.md | LLM prompt | index.html |
| styles.css.md | LLM prompt | styles.css |
| data.json.js | Node.js | data.json |
| readme.md.md | LLM prompt | readme.md |
| sitemap.xml.ts | Node.js (TypeScript) | sitemap.xml |
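The split rule above can be sketched as a small function (a hypothetical helper for illustration, not part of the nlang API):

```javascript
// Split a double-extension filename into its output name and executor.
// Hypothetical helper illustrating the rule; not part of the nlang API.
function resolveBuild(filename) {
  const executors = { md: "llm", js: "node", ts: "node" };
  const parts = filename.split(".");
  const last = parts.pop(); // last extension picks the executor
  if (!(last in executors) || parts.length < 2) return null; // not executable
  return { output: parts.join("."), executor: executors[last] };
}

console.log(resolveBuild("index.html.md")); // { output: "index.html", executor: "llm" }
console.log(resolveBuild("styles.css"));    // null — single extension, not executable
```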
### Markdown Executor (LLM)
Markdown files are sent as prompts to an LLM. Configure with frontmatter:
```markdown
---
model: gpt-4o
temperature: 0.7
max_tokens: 8192
system: "You are an expert web developer."
cacheTtl: 86400
---
Your prompt here...
```

### JavaScript/TypeScript Executor
JS/TS files export a function that returns the output:
```js
// data.json.js
export default async function (ctx) {
  const response = await fetch("https://api.example.com/data");
  const data = await response.json();
  return JSON.stringify(data, null, 2);
}
```

The `ctx` object contains:

- `deps` — resolved dependency contents
- `variables` — variable values for this variant
- `config` — merged nlang.json configuration
- `rootDir` — project root path
- `env` — environment variables
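As an illustration, here is a hypothetical executor that combines several `ctx` fields; the exact shapes of `deps` and `variables` used below are assumptions based on the field list above:

```javascript
// Hypothetical *.json.js executor; ctx shapes are assumptions, not the real API.
async function build(ctx) {
  const tokens = JSON.parse(ctx.deps["design-tokens.json"]); // dependency contents
  return JSON.stringify(
    { page: ctx.variables.name, theme: tokens.theme },
    null,
    2
  );
}

// In a real *.json.js file this would be `export default build;`.
// Quick check with a mocked ctx:
build({
  deps: { "design-tokens.json": '{"theme":"dark"}' },
  variables: { name: "hello-world" },
}).then((out) => console.log(out));
```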
### Dependencies with `@{path}`
Reference other files in your prompts:
```markdown
<!-- components.html.md -->
Create HTML components following this design system:
@{design-tokens.json}
And matching these TypeScript types:
@{src/types.ts}
```

Files are built in dependency order. If `design-tokens.json` is itself generated (e.g., from `design-tokens.json.md`), it will be built first.
You can also reference URLs:
```
@{https://raw.githubusercontent.com/user/repo/main/schema.json}
```

### Variables with `[name]`
Use bracket syntax in paths for templated builds:
```
blog/
  name.json        # ["hello-world", "getting-started", "advanced-tips"]
  [name].html.md   # Template that uses [name] in the prompt
```

The `[name].html.md` file will be executed once for each value in `name.json`, producing:

- blog/hello-world.html
- blog/getting-started.html
- blog/advanced-tips.html
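Conceptually, the expansion pairs each value with the template path; a hypothetical sketch of that step (not the nlang internals):

```javascript
// Expand a [name] template against its value list into concrete build variants.
// Hypothetical sketch of the expansion described above.
function expandVariants(templatePath, values) {
  return values.map((value) => ({
    output: templatePath.replace("[name]", value).replace(/\.md$/, ""),
    variables: { name: value },
  }));
}

const variants = expandVariants("blog/[name].html.md", ["hello-world", "getting-started"]);
// variants[0].output === "blog/hello-world.html"
// variants[1].output === "blog/getting-started.html"
```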
## Cron Schedules
Add a trigger to frontmatter for scheduled rebuilds:
```markdown
---
trigger: "0 */6 * * *"
---
Fetch the latest news and generate an HTML summary...
```

When you run `nlang init`, this is picked up and added to the GitHub Action schedule.
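In a GitHub Actions workflow, scheduled triggers live under the `on:` block using standard cron syntax. A hypothetical excerpt of what the generated workflow might contain (the actual file `nlang init` writes may differ):

```yaml
# Hypothetical excerpt; the real generated workflow may differ.
on:
  push:
  schedule:
    - cron: "0 */6 * * *"   # collected from the file's trigger frontmatter
```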
## Configuration: `nlang.json`
Place `nlang.json` in any directory. More specific configs override parent configs:
```json
{
  "model": "gpt-4o",
  "temperature": 0,
  "cacheTtl": 3600,
  "baseURL": "https://api.openai.com/v1",
  "system": "You are a helpful assistant."
}
```

Config resolution order (later wins):

1. `~/.nlang` (global)
2. `./nlang.json` (project root)
3. `./subdir/nlang.json` (closer to file)
4. File frontmatter (highest priority)
## Caching
LLM responses are cached by content hash with configurable TTL:
- Default TTL: 1 hour (3600s)
- When MCP is enabled: no cache by default
- Override with `cacheTtl` in frontmatter or `nlang.json`
- Set `cacheTtl: 0` to disable caching
- Set `cacheTtl: -1` for infinite cache (only invalidated by content changes)
## CLI
```sh
# Generate GitHub Action workflow
nlang init

# Build all executable files
nlang build

# Build a specific file and its dependency chain
nlang build --file blog/[name].html.md

# Dry run — show execution plan without running
nlang build --dry-run

# Specify project directory
nlang build -d /path/to/project
```

## Environment Variables
| Variable | Description |
| ---------------- | ------------------------------------------------ |
| OPENAI_API_KEY | OpenAI API key |
| LLM_API_KEY | Alternative API key (for OpenAI-compatible APIs) |
## Example Project
```
my-site/
├── nlang.json           # {"model": "gpt-5.4-2026-03-05"}
├── index.html.md        # Landing page prompt
├── styles.css.md        # CSS prompt (references index.html.md output)
├── blog/
│   ├── name.json        # ["intro", "tutorial"]
│   ├── [name].html.md   # Blog post template
│   └── index.html.js    # Blog index (reads generated posts)
├── data/
│   └── api-data.json.ts # Fetches and transforms API data
└── dist/                # ← Build output (auto-generated)
    ├── index.html
    ├── styles.css
    ├── blog/
    │   ├── intro.html
    │   ├── tutorial.html
    │   └── index.html
    └── data/
        └── api-data.json
```