chatcn-cli
v0.1.0
chatcn
Scaffold production-ready AI chatbot templates into your shadcn project.
- npm: https://www.npmjs.com/package/chatcn-cli
- GitHub: https://github.com/jeiwinfrey/chatcn
Quickstart
Initialize a chatbot in your existing shadcn project:
```bash
npx chatcn-cli init
```

Prefer a different package runner? Use the one that matches your setup:

```bash
npx chatcn-cli init
pnpm dlx chatcn-cli init
yarn dlx chatcn-cli init
bunx chatcn-cli init
```

This will:
- Detect your framework and package manager
- Let you choose a chatbot template
- Let you choose an AI provider
- Let you choose a model, or use the provider's recommended default
- Install required shadcn components
- Generate all necessary files
Prerequisites
chatcn requires an existing project with shadcn initialized. If you haven't set up shadcn yet:
```bash
npx shadcn@latest init
```

Templates
chatcn provides 5 chatbot templates:
chatbot-basic
Pick this if you are building a simple chatbot and want the smallest starting point.
shadcn components: button, input, scroll-area
chatbot-ui
Pick this if you want a polished chatbot UI with message bubbles, markdown, and loading states.
shadcn components: button, input, scroll-area, card, avatar, skeleton
chatbot-assistant
Pick this if you are building a reusable assistant and want cleaner separation between UI, hook, and LLM logic.
shadcn components: button, input, scroll-area, card, separator
chatbot-support
Pick this if you are building a support or helpdesk chatbot with quick replies and a guided tone.
shadcn components: button, input, scroll-area, card, badge
chatbot-custom
Pick this if you want the simple chatbot starter with optional avatars, names, and loading behavior during setup.
shadcn components: button, input, scroll-area
Providers
chatcn supports 12 AI providers:
- OpenAI - GPT models (gpt-5-mini default)
- Anthropic (Claude) - Claude models (claude-3-5-haiku-latest default)
- OpenRouter - Access to multiple models through one API
- Google Gemini - Gemini models (gemini-2.5-flash-lite default)
- AWS Bedrock - Claude and other models via AWS (anthropic.claude-haiku-4-5-20251001-v1:0 default)
- Groq - Fast inference (meta-llama/llama-4-scout-17b-16e-instruct default)
- Together AI - Open source models (deepseek-ai/DeepSeek-V3.1 default)
- Mistral - Mistral models (mistral-small-latest default)
- xAI (Grok) - Grok models (grok-4.20-beta-latest-non-reasoning default)
- DeepSeek - DeepSeek models (deepseek-chat default)
- Cerebras - Ultra-fast inference (gpt-oss-120b default)
- Fireworks AI - Fast inference (accounts/fireworks/models/kimi-k2-thinking default)
CLI Flags
--template
Specify a template without interactive prompt:
```bash
npx chatcn-cli init --template chatbot-ui
```

Valid values: chatbot-basic, chatbot-ui, chatbot-assistant, chatbot-support, chatbot-custom
--provider
Specify a provider without interactive prompt:
```bash
npx chatcn-cli init --provider openai
```

Valid values: openai, anthropic, openrouter, google, aws-bedrock, groq, together, mistral, xai, deepseek, cerebras, fireworks
--model
Specify the model to write into lib/llm.ts and the AI_MODEL environment variable:

```bash
npx chatcn-cli init --provider openai --model gpt-5.1
```

If you skip this flag, chatcn uses the provider's recommended default model.
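To illustrate how this flag relates to the AI_MODEL environment variable, here is a hypothetical sketch of how a generated lib/llm.ts might resolve the model at runtime (resolveModel and DEFAULT_MODEL are illustrative names, not chatcn's actual code):

```typescript
// Illustrative sketch only: the real generated lib/llm.ts may differ.
const DEFAULT_MODEL = "gpt-5-mini"; // the provider's recommended default

// Prefer an explicit AI_MODEL from the environment, else fall back.
export function resolveModel(env: Record<string, string | undefined>): string {
  const configured = env.AI_MODEL?.trim();
  return configured ? configured : DEFAULT_MODEL;
}
```

Under this shape, --model changes the default baked into the file, while AI_MODEL can still override it per environment.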
--yes
Skip all prompts and use defaults:
```bash
npx chatcn-cli init --yes --template chatbot-basic --provider openai --model gpt-5-mini
```

--overwrite
Overwrite existing files:
```bash
npx chatcn-cli init --overwrite
```

By default, chatcn will skip files that already exist to protect your custom code.
--cwd
Target a different directory:
```bash
npx chatcn-cli init --cwd ./my-project
```

Commands
init
Initialize a chatbot with interactive prompts:
```bash
npx chatcn-cli init
```

Or with other runners:

```bash
pnpm dlx chatcn-cli init
yarn dlx chatcn-cli init
bunx chatcn-cli init
```

With flags to skip prompts:

```bash
npx chatcn-cli init --template chatbot-ui --provider anthropic --yes --model claude-3-5-haiku-latest
```

add
Add a chatbot template (same as init, but more explicit):
```bash
npx chatcn-cli add --template chatbot-assistant --provider openai --model gpt-5-mini
```

Supported Frameworks
chatcn automatically detects your framework and generates appropriate code:
- Next.js (App Router and Pages Router)
- Vite + React
- Remix
- Astro
- TanStack Start
- React Router v7
- Laravel (with Inertia)
Environment Variables
After running chatcn, you'll need to set up environment variables for your chosen provider.
OpenAI

```
OPENAI_API_KEY=your_api_key_here
AI_MODEL=gpt-5-mini
```

Anthropic

```
ANTHROPIC_API_KEY=your_api_key_here
AI_MODEL=claude-3-5-haiku-latest
```

OpenRouter

```
OPENROUTER_API_KEY=your_api_key_here
AI_MODEL=openrouter/auto
```

Google Gemini

```
GOOGLE_API_KEY=your_api_key_here
AI_MODEL=gemini-2.5-flash-lite
```

AWS Bedrock

```
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key_here
AWS_SECRET_ACCESS_KEY=your_secret_key_here
AI_MODEL=anthropic.claude-haiku-4-5-20251001-v1:0
```

Note: AWS Bedrock requires installing @aws-sdk/client-bedrock-runtime:

```bash
npm install @aws-sdk/client-bedrock-runtime
```

Groq

```
GROQ_API_KEY=your_api_key_here
AI_MODEL=meta-llama/llama-4-scout-17b-16e-instruct
```

Together AI

```
TOGETHER_API_KEY=your_api_key_here
AI_MODEL=deepseek-ai/DeepSeek-V3.1
```

Mistral

```
MISTRAL_API_KEY=your_api_key_here
AI_MODEL=mistral-small-latest
```

xAI (Grok)

```
XAI_API_KEY=your_api_key_here
AI_MODEL=grok-4.20-beta-latest-non-reasoning
```

DeepSeek

```
DEEPSEEK_API_KEY=your_api_key_here
AI_MODEL=deepseek-chat
```

Cerebras

```
CEREBRAS_API_KEY=your_api_key_here
AI_MODEL=gpt-oss-120b
```

Fireworks AI

```
FIREWORKS_API_KEY=your_api_key_here
AI_MODEL=accounts/fireworks/models/kimi-k2-thinking
```

Create a .env.local file (Next.js) or a .env file (other frameworks) in your project root with the appropriate variables.
Using the Generated Components
After running chatcn, you'll have a chat component ready to use in your application.
Next.js (App Router)
```tsx
import { Chat } from "@/components/chat";

export default function Page() {
  return (
    <main className="container mx-auto p-4">
      <h1 className="text-2xl font-bold mb-4">AI Chatbot</h1>
      <Chat />
    </main>
  );
}
```

Next.js (Pages Router)
```tsx
import { Chat } from "@/components/chat";

export default function Home() {
  return (
    <div className="container mx-auto p-4">
      <h1 className="text-2xl font-bold mb-4">AI Chatbot</h1>
      <Chat />
    </div>
  );
}
```

Vite
```tsx
import { Chat } from "@/components/chat";

function App() {
  return (
    <div className="container mx-auto p-4">
      <h1 className="text-2xl font-bold mb-4">AI Chatbot</h1>
      <Chat />
    </div>
  );
}

export default App;
```

Remix
```tsx
import { Chat } from "~/components/chat";

export default function Index() {
  return (
    <div className="container mx-auto p-4">
      <h1 className="text-2xl font-bold mb-4">AI Chatbot</h1>
      <Chat />
    </div>
  );
}
```

What Gets Generated
chatcn generates the following files:
- Component files - React components for the chatbot UI
- Hook files - React hooks for managing chat state and streaming
- LLM file - Provider-specific API integration (lib/llm.ts)
- API route - Backend endpoint for handling chat requests (framework-specific)
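As a rough illustration of what the hook files manage, here is a minimal sketch of chat-state logic for streaming responses (Message, addMessage, and appendToken are hypothetical names, not chatcn's generated code):

```typescript
// Hypothetical sketch of streaming chat state; not chatcn's generated code.
type Role = "user" | "assistant";

interface Message {
  role: Role;
  content: string;
}

// Append a complete message to the conversation.
export function addMessage(messages: Message[], role: Role, content: string): Message[] {
  return [...messages, { role, content }];
}

// Append a streamed token to the trailing assistant message,
// creating one if the last message is not from the assistant.
export function appendToken(messages: Message[], token: string): Message[] {
  const last = messages[messages.length - 1];
  if (!last || last.role !== "assistant") {
    return [...messages, { role: "assistant", content: token }];
  }
  return [...messages.slice(0, -1), { ...last, content: last.content + token }];
}
```

A hook built on this shape re-renders the UI once per token while keeping earlier messages immutable.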
Example File Structure (Next.js)
```
your-project/
├── components/
│   ├── chat.tsx           # Main chat component
│   └── chat-message.tsx   # Message component (chatbot-ui only)
├── hooks/
│   └── use-chat.ts        # Chat state management hook
├── lib/
│   └── llm.ts             # Provider API integration
└── app/
    └── api/
        └── chat/
            └── route.ts   # API route handler
```

Examples
Basic chatbot with OpenAI
```bash
npx chatcn-cli init --template chatbot-basic --provider openai --yes
```

Polished UI with Anthropic

```bash
npx chatcn-cli init --template chatbot-ui --provider anthropic --yes
```

Support chatbot with Groq

```bash
npx chatcn-cli init --template chatbot-support --provider groq --yes
```

Add another template to an existing project

```bash
npx chatcn-cli add --template chatbot-assistant --provider google --overwrite
```

Troubleshooting
"shadcn is not initialized"
Run npx shadcn@latest init first to set up shadcn in your project.
"File already exists"
Use the --overwrite flag to replace existing files, or manually remove the files you want to regenerate.
API route not working
Make sure you've set the required environment variables for your provider. Check your .env.local or .env file.
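One way to catch a missing key earlier is a fail-fast guard before calling the provider; a hypothetical sketch (requireEnv is not part of the generated code):

```typescript
// Hypothetical helper: throw a clear error when a required key is missing,
// instead of letting the provider request fail opaquely.
export function requireEnv(
  env: Record<string, string | undefined>,
  name: string
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing ${name}: add it to .env.local (Next.js) or .env`);
  }
  return value;
}
```

Calling something like requireEnv(process.env, "OPENAI_API_KEY") at the top of the route handler turns a silent misconfiguration into an explicit error message.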
TypeScript errors
Run npm install to ensure all dependencies are installed. For AWS Bedrock, you'll need to manually install @aws-sdk/client-bedrock-runtime.
Development and Testing
Development
Run the built-in checks before publishing a release:

```bash
npm run typecheck
npm test
npm run build
```

Contributing
If you want to help improve chatcn, start here:
Running Tests
```bash
# Run all automated tests
npm test

# Run tests in watch mode
npm run test:watch

# Build the CLI
npm run build

# Test global installation
./test-global-install.sh
```

License
MIT. See LICENSE.
