create-asktext-app
v1.0.5

Scaffold a Next.js project pre-wired with AskText voice Q&A.

Bootstrap a complete Next.js blog with voice Q&A integration in seconds.
Quick Start
```bash
npx create-asktext-app my-blog
cd my-blog
npm install
npm run dev
```

Your blog is now running at http://localhost:3000 with:
- ✅ Rich text editor for creating posts
- ✅ Voice Q&A integration (pending VAPI setup)
- ✅ Redis-based rate limiting
- ✅ Responsive design with Tailwind CSS
- ✅ PostgreSQL database with Prisma ORM
What You Get
Core Features
- Admin Dashboard (/admin/new) - Create and edit posts with a rich TipTap editor
- Voice Assistant - Readers can talk to any article via VAPI.ai integration
- Rate Limiting - Redis-based quota system prevents abuse
- Database Ready - Prisma schema with article chunks for semantic search
- Responsive UI - Mobile-friendly design with Tailwind CSS
File Structure
```
my-blog/
├── app/
│   ├── admin/new/               # Post creation interface
│   ├── api/
│   │   ├── asktext/webhook/     # VAPI integration endpoint
│   │   ├── voice/start/         # Quota check endpoint
│   │   └── voice/end/           # Usage tracking endpoint
│   └── posts/[slug]/            # Dynamic post pages
├── src/components/
│   ├── RichTextEditor.tsx       # Rich text editing component
│   └── ClientAskTextButton.tsx  # Voice assistant button
├── prisma/
│   └── schema.prisma            # Database schema
└── .env.local.example           # Environment variables template
```

Setup Instructions
1. Database Setup
The generated project uses PostgreSQL with Prisma. Set up your database:
```bash
# Copy environment template
cp .env.local.example .env.local

# Add your database URL
# DATABASE_URL="postgresql://username:password@localhost:5432/mydb"

# Create and migrate database
npx prisma db push

# (Optional) Seed with sample data
npx prisma db seed
```

2. OpenAI Integration
Add your OpenAI API key to .env.local:
```
OPENAI_API_KEY=sk-...
```

This enables:
- Article embedding generation for semantic search
- AI-powered responses in the voice assistant
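Once article chunks are embedded, answering a question is a nearest-neighbour ranking over those vectors. A minimal sketch of that ranking step (the `Chunk` shape and the `cosine`/`topK` helpers are illustrative, not this package's actual API; real vectors would come from OpenAI's embeddings endpoint):

```typescript
// Illustrative chunk shape; the real project stores chunks via Prisma.
type Chunk = { id: string; text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank stored chunks against a query embedding and keep the best k.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

The voice assistant's `retrieve_passage` tool would feed the best-ranked chunk texts back to the model as context.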
3. VAPI Voice Assistant Setup
Tool Creation
- Create a Custom Tool from the VAPI Dashboard
- Give it a name (retrieve_passage) and a description; leave the async and strict toggles unchecked.
- Add Parameters (in JSON mode):
```json
{
  "type": "object",
  "properties": {
    "k": {
      "description": "How many passages to return",
      "type": "integer"
    },
    "query": {
      "description": "User question",
      "type": "string"
    },
    "articleId": {
      "description": "Slug of the article",
      "type": "string"
    }
  },
  "required": [
    "query"
  ]
}
```

For the Server URL, use ngrok (`ngrok http <port_number>`) in development, or your own domain in production, followed by the webhook path. Examples: https://fe2e544f8cc4.ngrok-free.app/api/vapi-webhook, https://www.csnobs.com/api/vapi-webhook
The remaining fields are optional; approve the tool creation.
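Server-side, the webhook receives the tool call and must return the retrieved passages as the tool result. A sketch of the core handler logic, kept as a pure function so the HTTP transport stays out of the way (the `ToolCall` shape is my assumption; check VAPI's server-message documentation for the exact payload):

```typescript
// Assumed shape of an incoming tool call; verify against VAPI's docs.
type ToolCall = {
  id: string;
  function: {
    name: string;
    arguments: { query: string; articleId?: string; k?: number };
  };
};

// Passages come from the semantic-search step, injected here as a function
// so the handler itself stays pure and easy to test.
function handleRetrievePassage(
  call: ToolCall,
  search: (query: string, k: number) => string[],
): { toolCallId: string; result: string } {
  if (call.function.name !== "retrieve_passage") {
    return { toolCallId: call.id, result: "Unknown tool" };
  }
  const { query, k = 3 } = call.function.arguments;
  const passages = search(query, k);
  return {
    toolCallId: call.id,
    result: passages.length ? passages.join("\n---\n") : "No passages found",
  };
}
```

In the generated route handler, the returned object would be wrapped in the response body VAPI expects for tool results.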
Assistant Creation
- Create Blank Assistant
- Choose your models (I chose Gemini 1.5 Flash, the 11Labs voice Knightley Javier (calm, gentle) via Eleven_turbo_v2_5, and the 11Labs Scribe transcriber)
- Add a First Message:

```
Hey, what would you like to know about the article? You can ask me a general question regarding the content, or ask me to summarise a specific portion and then ask clarifying questions on it!
```

- Add a System Prompt:
```
You are CSNoBS ArticleBot, a voice-first expert on the current article.

You have one tool:
{ "name": "retrieve_passage",
  "arguments": { "query": "<string>" } }

WHEN (and only when) the user asks about the article’s content,
ALWAYS respond with a tool call first.

After the tool returns a Passages list:
• Answer ONLY using those passages.
• Never exceed **300 spoken words** in a single turn – even if the user says “go into detail”.
• If passages are empty, say “I’m sorry, that isn’t covered in this article.”

Example 1
user: What’s DOM parsing?
assistant (tool call):
{ "name": "retrieve_passage",
  "arguments": { "query": "DOM parsing" } }

Example 2
user: Production vs development architecture?
assistant (tool call):
{ "name": "retrieve_passage",
  "arguments": { "query": "production vs development architecture" } }

Formatting after tool result:
“Sure! … <answer>. Let me know if you’d like to dive deeper!”
```

Connect the previously created tool and approve the assistant creation.
Add the Tool and Assistant API keys to .env.local:

```
NEXT_PUBLIC_VAPI_PUBLIC_KEY=pk_...
NEXT_PUBLIC_VAPI_ASSISTANT_ID=asst_...
```

4. Redis Rate Limiting (Optional)
For production deployments, add Redis for rate limiting:
```bash
# Local Redis
REDIS_URL=redis://localhost:6379

# Or Upstash Redis (recommended for Vercel)
UPSTASH_REDIS_REST_URL=https://...
UPSTASH_REDIS_REST_TOKEN=...
```

Without Redis, the voice assistant will still work, but without usage quotas.
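The quota check behind /api/voice/start and /api/voice/end boils down to counting sessions per caller. A minimal sketch of that logic, with an in-memory Map standing in for Redis (the limit value and function names are illustrative, not the package's actual defaults; in production the same idea maps onto Redis INCR + EXPIRE):

```typescript
// In-memory stand-in for Redis; resets on restart, so only for illustration.
const usage = new Map<string, number>();

const DAILY_LIMIT = 5; // illustrative quota, not the package's real default

// Called from the quota-check endpoint before starting a voice session.
function canStartSession(ip: string, bypassIps: string[] = []): boolean {
  if (bypassIps.includes(ip)) return true; // e.g. NEXT_PUBLIC_QUOTA_BYPASS_IPS
  return (usage.get(ip) ?? 0) < DAILY_LIMIT;
}

// Called from the usage-tracking endpoint after a session ends.
function recordSession(ip: string): void {
  usage.set(ip, (usage.get(ip) ?? 0) + 1);
}
```

With real Redis, `recordSession` would be an atomic INCR with a 24-hour EXPIRE on first increment, so counts reset daily without extra bookkeeping.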
5. Image Uploads (Optional)
The rich text editor supports image uploads. Configure your storage:
```typescript
// app/api/admin/images/upload/route.ts
export async function POST(request: Request) {
  const formData = await request.formData();
  const file = formData.get('image') as File;

  // Upload to your preferred storage (Vercel Blob, S3, etc.)
  const url = await uploadToStorage(file);

  return Response.json({ url });
}
```

Usage
Creating Your First Post
- Start the development server: npm run dev
- Navigate to http://localhost:3000/admin/new
- Write your post using the rich text editor
- Click "Publish" to save
Testing Voice Integration
- Visit any published post at /posts/[slug]
- Click the "Ask Article" button
- Grant microphone permissions when prompted
- Ask questions about the article content
Customization
Styling
The project uses Tailwind CSS. Customize colors in tailwind.config.ts:
```typescript
module.exports = {
  theme: {
    extend: {
      colors: {
        primary: '#your-color',
        'primary-light': '#your-light-color',
      }
    }
  }
}
```

Database Schema
Add fields to the Post model in prisma/schema.prisma:
```prisma
model Post {
  id        String   @id @default(cuid())
  title     String
  content   String   @db.Text
  slug      String   @unique

  // Add your custom fields here
  author    String?
  tags      String[]
  published Boolean  @default(false)

  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}
```

Then run npx prisma db push to apply changes.
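The article chunks mentioned in the schema have to be produced somewhere before embedding. A simple overlapping word-window chunker (the function name and the maxWords/overlap defaults are my own illustration, not the package's actual splitter):

```typescript
// Split article text into overlapping word windows for embedding.
// maxWords/overlap defaults are illustrative, not the package's real values.
function chunkArticle(text: string, maxWords = 200, overlap = 20): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const step = Math.max(1, maxWords - overlap); // guard against overlap >= maxWords
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += step) {
    chunks.push(words.slice(start, start + maxWords).join(" "));
    if (start + maxWords >= words.length) break; // last window reached the tail
  }
  return chunks;
}
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both neighbouring chunks.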
Deployment
Vercel (Recommended)
- Push your code to GitHub
- Connect repository to Vercel
- Add environment variables in Vercel dashboard
- Deploy automatically on push
Other Platforms
The generated Next.js app works on any platform supporting Node.js:
- Netlify
- Railway
- DigitalOcean App Platform
- AWS Amplify
Environment Variables
Required
```bash
DATABASE_URL=postgresql://...  # PostgreSQL connection
OPENAI_API_KEY=sk-...          # OpenAI API key
```

Voice Assistant

```bash
NEXT_PUBLIC_VAPI_PUBLIC_KEY=pk_...      # VAPI public key
NEXT_PUBLIC_VAPI_ASSISTANT_ID=asst_...  # VAPI assistant ID
```

Optional

```bash
REDIS_URL=redis://...                 # Rate limiting
NEXT_PUBLIC_QUOTA_BYPASS_IPS=1.2.3.4  # Dev IPs to bypass quotas
```

Troubleshooting
Common Issues
"Module not found" errors
- Run npm install to ensure all dependencies are installed
- Check that you're using Node.js 18+ (node --version)

Database connection errors
- Verify DATABASE_URL is correct in .env.local
- Ensure your PostgreSQL server is running
- Run npx prisma db push to create tables

Voice assistant not working
- Check that the NEXT_PUBLIC_VAPI_* variables are set
- Verify microphone permissions in the browser
- Check the browser console for JavaScript errors

Build failures
- Clear the Next.js cache: rm -rf .next
- Reinstall dependencies: rm -rf node_modules && npm install
- Check for TypeScript errors: npm run type-check
Getting Help
- Documentation: Check individual package READMEs for detailed APIs
- Issues: Report bugs on the GitHub repository
- Community: Join discussions in GitHub Discussions
Scripts
```bash
npm run dev         # Start development server
npm run build       # Build for production
npm run start       # Start production server
npm run lint        # Run ESLint
npm run type-check  # Run TypeScript checks
```

License
MIT - The generated project is yours to use however you like.
