is-question
v1.0.0
Detect if text contains a genuine question that expects an answer using AI
A Node.js module and Claude Code hook that uses Ollama AI models to intelligently determine whether text contains a real question that expects an answer, distinguishing from rhetorical questions, exclamations, and statements.
Features
- Smart Detection: Understands context to differentiate genuine questions from rhetorical ones
- Claude Code Hook: Integrates with Claude Code to enforce question-first interactions
- Safe Tool Whitelist: Allows information-gathering tools while blocking actions during questions
- Explicit Directives: Override detection with COMMAND: or QUESTION: prefixes
- Session Management: Tracks question state across interactions
- Detailed Logging: JSONL logs for debugging hook behavior
- Module API: Import and use in your Node.js applications
- Fast: ~150ms average response time with cached models
Prerequisites
- Node.js >= 18.0.0
- Ollama installed and running
- An Ollama model (default: qwen2.5-coder:7b)
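The prerequisites can be sanity-checked from a shell before installing (a suggested sequence, not part of the package; the commented pull line assumes you want the default model):

```shell
# Check the Node.js version (v18 or newer is required)
node --version
# Check whether the Ollama CLI is on the PATH
command -v ollama >/dev/null && echo "ollama installed" || echo "ollama missing"
# If Ollama is installed and its server is running, pull the default model:
# ollama pull qwen2.5-coder:7b
```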
Installation
As a Module
npm install is-question
For Claude Code Hook
npm install -g is-question
# Or use npx without installing globally
Usage
Claude Code Hook Integration
The primary use case is as a hook for Claude Code to ensure questions are answered before actions are taken.
Setup
- Add to your ~/.claude/settings.json:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": ".*",
"hooks": [
{
"type": "command",
"command": "npx -y is-question@latest"
}
]
}
],
"PreToolUse": [
{
"matcher": ".*",
"hooks": [
{
"type": "command",
"command": "npx -y is-question@latest"
}
]
}
],
"Stop": [
{
"matcher": ".*",
"hooks": [
{
"type": "command",
"command": "npx -y is-question@latest"
}
]
}
],
"SessionStart": [
{
"matcher": ".*",
"hooks": [
{
"type": "command",
"command": "npx -y is-question@latest"
}
]
}
]
}
}
- Restart Claude Code or reload settings
Configuration Options
Using a different model:
{
"hooks": {
"UserPromptSubmit": [
{
"matcher": ".*",
"hooks": [
{
"type": "command",
"command": "npx -y is-question@latest --model llama2"
}
]
}
]
// ... same for other hooks
}
}
Enable debug logging:
{
"type": "command",
"command": "npx -y is-question@latest -log"
}
How It Works
When you interact with Claude Code:
- Questions are detected: The hook identifies genuine questions vs commands
- Safe tools allowed: Information-gathering tools (Read, Grep, WebSearch, etc.) remain available
- Actions blocked: Modifying tools (Edit, Write, Bash, etc.) are blocked until the question is answered
- Override available: Use the COMMAND: prefix to force command interpretation
Safe Tools (Always Allowed)
- Read - Read files
- Grep - Search file contents
- Glob - Find files by pattern
- LS - List directory contents
- WebFetch - Fetch web content
- WebSearch - Search the web
- TodoRead - Read todo lists
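The allow/block decision above can be sketched in a few lines (a simplified illustration under the stated whitelist, not the package's actual implementation; `decide` and `questionPending` are hypothetical names):

```javascript
// Simplified sketch: read-only tools pass through even while a question is
// pending; modifying tools are blocked until the question is answered.
const SAFE_TOOLS = new Set([
  'Read', 'Grep', 'Glob', 'LS', 'WebFetch', 'WebSearch', 'TodoRead',
]);

function decide(toolName, questionPending) {
  if (!questionPending || SAFE_TOOLS.has(toolName)) {
    return { allow: true };
  }
  return { allow: false, reason: 'A question is awaiting an answer' };
}

console.log(decide('Grep', true).allow);  // true  (safe tool)
console.log(decide('Edit', true).allow);  // false (blocked during a question)
console.log(decide('Edit', false).allow); // true  (no question pending)
```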
Explicit Directives
Override automatic detection:
# Force as command (won't be detected as question)
COMMAND: What's in the config file? Update it to use port 3000.
# Force as question (will block actions)
QUESTION: The code works perfectly.
Session Management
- State saved in ~/.claude/MadameClaude/{session_id}.json
- Logs written to ~/.claude/MadameClaude/{session_id}.jsonl (with the -log flag)
- Automatically cleaned up on session end
Module API
import isQuestion from 'is-question';

// Basic usage
let result = await isQuestion("What's your name?");
console.log(result); // true

// With a custom model
result = await isQuestion("Really? Fix this now", "llama2");
console.log(result); // false
Key Distinctions
The tool intelligently distinguishes between:
✅ Real Questions (returns true)
- "WTF are you doing?" - Asks for explanation
- "What are you thinking?" - Seeks information
- "Can you explain this?" - Requests clarification
❌ Not Real Questions (returns false)
- "WTF are you doing? Undo that" - Rhetorical with command
- "Really? Fix this now" - Command disguised as question
- "WTF?" - Pure exclamation
- "The code works perfectly" - Statement
API
isQuestion(text, model)
Determines if text contains a genuine question that expects an answer.
Parameters
- text (string) - The text to analyze
- model (string, optional) - Ollama model to use (default: 'qwen2.5-coder:7b')
Returns
Promise<boolean> - Whether the text contains a real question
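Because detection goes through a local Ollama server, calls can fail (server not running, model not pulled). A thin wrapper that treats errors as "not a question" keeps callers simple. This is a usage sketch: `safeDetect` is a hypothetical helper, and `detector` stands in for the isQuestion function or anything with the same Promise&lt;boolean&gt; contract.

```javascript
// Hypothetical helper: run a detector, defaulting to false if Ollama is unreachable.
async function safeDetect(detector, text, model) {
  try {
    return await detector(text, model);
  } catch (err) {
    // e.g. Ollama not running, or the requested model is not pulled
    console.error('Question detection failed:', err.message);
    return false; // fail open: treat the text as a non-question
  }
}

// With the real module:
//   import isQuestion from 'is-question';
//   const pending = await safeDetect(isQuestion, "Should we use port 3000?");
```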
Debugging
Enable logging with the -log flag to see detailed hook behavior:
# View logs for current session
cat ~/.claude/MadameClaude/*.jsonl | jq .
Log entries include:
- Hook events received
- Question detection results
- Tool blocking/allowing decisions
- Session state changes
Performance
Default Model: qwen2.5-coder:7b
The qwen2.5-coder:7b model is used as the default due to its excellent balance of accuracy and speed:
- 100% accuracy on genuine questions - Never misses real questions
- 78% overall accuracy on nuanced cases - Good at detecting rhetorical questions and commands
- ~150ms average response time - Fast enough for real-time hook usage (with model cached)
These performance characteristics make it ideal for the hook use case where both accuracy and speed are critical. Other models can be specified via the --model flag if different trade-offs are needed.
License
MIT © William Kapke
Contributing
Issues and pull requests are welcome at github.com/williamkapke/is-question
