



@vicistack/ai-outbound-call-center-2026

v1.0.0

Published

The Real Cost of Building an AI Call Center in 2026 (With Actual Server Specs) — ViciStack call center engineering guide

Readme

# The Real Cost of Building an AI Call Center in 2026 (With Actual Server Specs)

Most "AI call center" guides are written by people who have never waited 45 minutes on hold with a SIP trunk provider's support desk or stared at a MariaDB slow query log at 11 PM wondering why the hopper is empty. They skip the unglamorous parts -- server sizing, InnoDB buffer pool tuning, STIR/SHAKEN attestation delays, iptables rules that actually work -- and fast-forward to the fantasy: "AI replaces all your agents!" It does not. Not this year, probably not next year either.

This is the guide I wish existed when I built my first outbound floor. Actual server specs. Actual pricing that includes the hidden costs every AI platform conveniently forgets. Actual configs you can paste into your Asterisk boxes. And an honest assessment of where AI earns its keep versus where it still costs you money by failing at the wrong moment.

We are building a 50-seat AI-augmented outbound call center. Not a chatbot demo. Not a "just call our API" pitch deck. A production operation that pushes thousands of calls per day, stays on the right side of TCPA, and uses AI only where it genuinely moves the needle.

## What AI Actually Does Well in Outbound Right Now

I will split this into two lists. The first is stuff you should deploy immediately because the ROI is obvious. The second is stuff that sounds great in a vendor demo but will burn you in production.

### Deploy immediately

**Answering machine detection (AMD).** This is the single highest-ROI AI upgrade for any outbound shop. Stock VICIdial AMD uses acoustic heuristics -- silence lengths, energy thresholds, beep detection. It hits 65-75% accuracy with a 15-25% false positive rate. That means up to a quarter of live humans who pick up the phone get dropped because the system classified them as voicemail. On a 50-agent floor dialing 40,000 calls per day, that is thousands of connections you paid for and will never talk to.
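To put a number on "thousands of connections," here is the arithmetic those figures imply. The 22% live-answer rate is my own illustrative assumption; the dial volume and false-positive ranges are the ones quoted above:

```python
# Illustrative stock-AMD loss math. DIALS_PER_DAY and the false-positive
# ranges come from the text above; LIVE_ANSWER_RATE is an assumption.
DIALS_PER_DAY = 40_000
LIVE_ANSWER_RATE = 0.22                    # assumed share of dials a human answers
FP_STOCK_LOW, FP_STOCK_HIGH = 0.15, 0.25   # stock VICIdial AMD false positives
FP_AI = 0.01                               # AI-based AMD, per the vendor claims

live_answers = DIALS_PER_DAY * LIVE_ANSWER_RATE
print(f"live answers/day:     {live_answers:.0f}")
print(f"dropped by stock AMD: {live_answers * FP_STOCK_LOW:.0f}"
      f"-{live_answers * FP_STOCK_HIGH:.0f} humans/day")
print(f"dropped by AI AMD:    {live_answers * FP_AI:.0f} humans/day")
```

Even at the low end, stock AMD hangs up on over a thousand live prospects per day under these assumptions, while the AI figure stays under a hundred.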
AI-powered AMD using ML-based audio fingerprint analysis (not silence detection) pushes accuracy to 98-99% with under 1% false positives. AMDY.IO claims 99% accuracy with a native one-line install for VICIdial. A DIY approach using a fine-tuned Whisper-small model with TF-IDF classification tested at ~98% end-to-end accuracy. The commercial solutions detect in milliseconds. The DIY path takes 1.5-3.5 seconds but costs nothing beyond server time.

Every false positive is a lead you already paid for that vanishes. At scale, fixing AMD accuracy pays for itself in weeks. AMDY.IO pitches it as recovering $24K/month in lost connections for a 10-agent center. Even if that number is generous, the directional math is right.

**Post-call transcription and summarization.** After every call, Whisper large-v3 transcribes the recording in about 3 seconds per minute of audio on an RTX 4090. Then a local LLM (Llama 3.2 8B running on Ollama) generates a 2-3 sentence summary and writes it back to the disposition notes. That saves agents 30-45 seconds of wrap-up per call. Across 50 agents making 80-100 calls per day each, that is 33-62 hours of agent time recovered daily.

Start collecting transcripts from day one even if you do not analyze them yet. The transcript archive becomes the most valuable dataset you build.

**100% automated QA scoring.** Traditional QA: a supervisor listens to 1-2% of calls and fills out a scorecard. AI QA: every call scored against your criteria automatically. Script adherence, compliance disclosures, sentiment shifts, talk-listen ratio, objection handling. Enthu.ai does this at $59/user/month. CloudTalk AI handles it for $28/user/month. Neither requires replacing your dialer -- feed them recordings and they score everything.
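As a concrete sketch of the transcription-plus-summary step described above (not of the QA vendors' pipelines): the Ollama host, model tag, and function names here are my assumptions, and the Whisper import is deferred so the module loads on boxes without a GPU stack installed.

```python
# Sketch of a post-call pipeline: Whisper transcribes the recording, a local
# model behind Ollama summarizes it. OLLAMA_URL, the model tag, and the
# recording path are illustrative assumptions, not stock VICIdial behavior.
import json
import urllib.request

OLLAMA_URL = "http://gpu1.callcenter.local:11434/api/generate"  # assumed host

def build_summary_prompt(transcript: str) -> str:
    """Prompt for a 2-3 sentence disposition-notes summary."""
    return ("Summarize this sales call in 2-3 sentences for the agent's "
            "disposition notes:\n\n" + transcript)

def summarize_recording(recording_path: str) -> tuple[str, str]:
    import whisper  # pip install openai-whisper; lazy import, needs the GPU box
    model = whisper.load_model("large-v3")
    transcript = model.transcribe(recording_path)["text"]

    payload = {"model": "llama3.1:8b",   # assumed Ollama model tag
               "prompt": build_summary_prompt(transcript),
               "stream": False}
    req = urllib.request.Request(
        OLLAMA_URL, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        summary = json.loads(resp.read())["response"]
    return transcript, summary
```

A cron job on the GPU box can then write the summary back into the lead's comments field so it shows up in the agent's disposition screen.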
Published numbers from vendors (take with appropriate salt): 50-60% reduction in compliance violations within 90 days, 16% sales lift (Balto, across 250M+ guided calls -- large enough sample to take somewhat seriously), and McKinsey benchmarks showing tripled conversion rates with AI scoring plus coaching.

**Lead scoring and list prioritization.** Feed lead data into a local LLM -- demographics, timezone, past contact history, source -- and get back a 0-100 propensity score. Write it to the VICIdial rank field. The hopper dials high-scored leads first. Llama 3.2 8B scores 500 leads in under two minutes on a single GPU. Run it as a cron job every 15 minutes. Per-lead ML scoring is something VICIdial does not do natively, and it measurably improves contact rates by calling the right people at the right times.

### What still falls on its face

**Complex sales conversations.** AI voice agents can hold 10-40 minute calls that sound close to human. They handle basic objections, book appointments, qualify leads. For repetitive outbound -- appointment confirmations, payment reminders, surveys -- they work. For anything requiring real judgment, nuance, or empathy, they fumble. A $50,000 deal does not close because your AI nailed all the script bullet points. It closes because a human read the prospect's hesitation and changed course.

**Real-time agent coaching.** The technology exists. Balto is the market leader, and their numbers are real. But integrating real-time coaching with VICIdial is manual and fragile. It needs browser extensions or SIP mirror setups. It works once you get it stable, but expect real engineering time. This is a Phase 2 addition, not a launch feature.

**ML-powered predictive pacing.** VICIdial's ADAPT_TAPERED mode adjusts dial ratios based on current answer rates and agent availability. ML-based pacing that learns from historical patterns and predicts per-lead behavior is better in theory, but there is no off-the-shelf VICIdial plugin.
You would need to fork the dialing engine. Not worth it until the basics are running smoothly.

## The Infrastructure: Servers, Storage, and Network

A production 50-agent outbound center needs a proper server cluster. Trying to run everything on one box works for a 10-agent demo and falls apart under real load.

### Server architecture

**Server 1: Database.** Always the bottleneck. A 50-agent shop doing 8-hour days generates 40,000-60,000 call records per day. After six months, vicidial_log and vicidial_list hold 5-8 million rows. If the InnoDB buffer pool cannot keep hot indexes in RAM, every dial attempt stutters.

- CPU: 8-16 cores (AMD EPYC 7443P or Xeon E-2388G)
- RAM: 64 GB (InnoDB buffer pool eats most of it)
- Storage: 2x 1TB NVMe in RAID1 for the database, 4x 4TB SATA SSD in RAID10 for recordings
- OS: AlmaLinux 9 or Rocky Linux 9

**Server 2: Dialer/Telephony.** Each concurrent call uses 30-50 MB of RAM when recording is enabled (raw WAV sits in a tmpfs ramdisk before MP3 conversion). At peak, 50 agents with a 2.5:1 dial ratio means 125 concurrent channels -- roughly 5-7 GB for call processing alone.

- CPU: 8 cores minimum
- RAM: 16-32 GB
- Storage: 500 GB NVMe
- Network: 1 Gbps with under 5ms jitter to your SIP trunk provider

**Server 3: Web/Admin.** Handles the agent web interface, real-time reports, admin panel. Apache plus PHP. Not CPU-intensive.

- CPU: 4-8 cores
- RAM: 16 GB
- Storage: 250 GB NVMe

**Server 4: AI/GPU.** This box did not exist in call center builds two years ago.

- CPU: 8-16 cores
- RAM: 64 GB
- GPU: NVIDIA RTX 4090 (24 GB VRAM) or RTX 3090 (24 GB VRAM)
- Storage: 1 TB NVMe

The RTX 4090 transcribes audio at 19x real-time with Whisper large-v3 -- a 60-second call takes about 3 seconds. The RTX 3090 is within 0.5% of the same speed for this workload. With faster-whisper using CTranslate2 and 8-bit quantization, you get 4x faster inference versus stock Whisper. One GPU handles post-call transcription for 100+ agents.
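That one-GPU sizing claim can be sanity-checked with back-of-envelope math. Calls per agent and average recording length below are assumptions for the example; the 19x and 4x figures are the ones quoted above:

```python
# Back-of-envelope GPU sizing for post-call transcription.
# Assumed: 50 agents, 90 calls/agent/day, 3-minute average recording.
AGENTS = 50
CALLS_PER_AGENT = 90
AVG_CALL_MIN = 3.0
WHISPER_SPEEDUP = 19       # RTX 4090, Whisper large-v3, vs. real time
FASTER_WHISPER_GAIN = 4    # CTranslate2 + 8-bit quantization

audio_min = AGENTS * CALLS_PER_AGENT * AVG_CALL_MIN
gpu_hours = audio_min / WHISPER_SPEEDUP / 60
gpu_hours_fw = gpu_hours / FASTER_WHISPER_GAIN

print(f"{audio_min:.0f} min of audio/day")
print(f"stock Whisper:       {gpu_hours:.1f} GPU-hours/day")
print(f"faster-whisper int8: {gpu_hours_fw:.1f} GPU-hours/day")
```

Under these assumptions, stock Whisper already eats roughly half a day of GPU time for 50 agents, so the card saturates near 100 agents; the faster-whisper path is what leaves headroom for summarization and lead scoring on the same box.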
For real-time transcription during live calls, one GPU handles 20-30 simultaneous streams with the medium model.

Buy the hardware. A used RTX 3090 runs about $700, an RTX 4090 about $1,400. Build the box for $2,500-3,500 total and your inference costs $0/month forever. Cloud GPU instances (AWS g5) run $760-1,210/month. The on-prem hardware pays for itself versus cloud in under two months.

### Recording storage math

At MP3 (32kbps), 50 agents generate about 3.5 GB/day or ~75 GB/month. Budget 1 TB for a year of recordings with headroom. VICIdial records to WAV in a tmpfs ramdisk first, then a cron job converts to MP3 and moves to permanent storage.

### Cloud vs. bare metal: the real numbers

| | Bare Metal (Hetzner) | AWS Reserved | AWS On-Demand | DigitalOcean |
|---|---|---|---|---|
| Servers | $295/mo | $1,168/mo | $1,862/mo | $342/mo |
| GPU/AI | $0 (on-prem) | $760/mo | $1,210/mo | $0 (on-prem) |
| SIP Trunks (est.) | $400-800/mo | $400-800/mo | $400-800/mo | $400-800/mo |
| Total | $695-1,095/mo | $2,328-2,728/mo | $3,472-3,872/mo | $742-1,142/mo |

AWS is 4-7x more expensive than bare metal for this workload. Hetzner is the price-to-performance winner for VoIP. Note: Hetzner raised prices 30-50% in early 2026 due to AI-driven memory demand and IPv4 costs. The numbers above reflect post-increase pricing.

Cloud makes sense for burst capacity -- spin up extra dialer nodes during a campaign push. Not as your primary infrastructure.

## SIP Trunks: Provider Choice Matters More Than You Think

Every outbound call in 2026 gets a STIR/SHAKEN attestation level. You need A-level (carrier verified your identity AND confirmed you own the caller ID number). Without it, your calls display as spam and answer rates collapse. This means your provider must complete KYC verification on your business and register your numbers. Some providers take 3-5 business days for KYC -- start on day one.
| Provider | Per-Minute Rate | STIR/SHAKEN | Notes |
|---|---|---|---|
| Skyetel | $0.005-0.006 | Full A-level | Cheapest per-minute, popular with VICIdial shops |
| Telnyx | $0.005-0.007 | Full A-level | Best API, elastic SIP, first 10 channels $12/mo |
| VoIP.ms | $0.009-0.01 | Full A-level | Community favorite, pay-per-minute only |
| TILTX | $0.005-0.008 | Full A-level | Markets VICIdial compatibility specifically |
| Flowroute | $0.008-0.012 | Full A-level | Solid for mid-volume |
| Twilio | $0.013-0.015 | Full A-level | Most expensive, most reliable |

Provider choice matters enormously at scale. At Skyetel's $0.005/min, 50 agents running 33,750 dialed minutes per day costs about $3,700/month in SIP. At Twilio's $0.014/min, the same volume costs $10,350/month. That is a $6,650/month difference for identical calls.

Asterisk trunk config for Telnyx:

```ini
; /etc/asterisk/sip.conf
[telnyx](!)
type=peer
host=sip.telnyx.com
fromdomain=sip.telnyx.com
qualify=yes
dtmfmode=rfc2833
insecure=invite,port
canreinvite=no
disallow=all
allow=ulaw
allow=g729
nat=force_rport,comedia

[telnyx-out](telnyx)
username=your_username
secret=your_password
context=trunkinbound
```

Outbound route:

```ini
; /etc/asterisk/extensions.conf
[vicidial-auto]
exten => _91NXXNXXXXXX,1,AGI(agi://127.0.0.1:4577/call_log)
exten => _91NXXNXXXXXX,n,Dial(SIP/telnyx-out/${EXTEN:1},,tTr)
exten => _91NXXNXXXXXX,n,Hangup()
```

## Step-by-Step Build: Zero to Live Calls

### Phase 1: Planning and procurement (Days 1-3)

Order servers, sign SIP trunk contracts, start KYC verification, order DIDs. Start KYC on day one -- it is the number one delay in every call center build. STIR/SHAKEN attestation cannot happen until KYC clears. Without attestation, your calls get flagged as spam and you are dead before you start.

### Phase 2: Server setup (Days 3-5)

AlmaLinux 9 minimal install on all servers. Configure networking, hostnames, private VLAN between servers.
```bash
# On all servers
timedatectl set-timezone America/New_York
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
dnf update -y
dnf install -y epel-release git wget curl vim htop net-tools

cat >> /etc/hosts <<EOF
10.0.1.50  db1.callcenter.local  db1
10.0.1.10  dialer1.callcenter.local  dialer1
10.0.1.20  web1.callcenter.local  web1
10.0.2.10  gpu1.callcenter.local  gpu1
EOF
```

Firewall rules for VoIP -- SIP from trunk providers only:

```bash
iptables -A INPUT -p udp -s <telnyx_ip_range> --dport 5060 -j ACCEPT
iptables -A INPUT -p udp -s <telnyx_ip_range> --dport 10000:20000 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 8089 -j ACCEPT
iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 5060 -j DROP
```

### Phase 3: VICIdial install (Days 5-8)

Three options ranked by pain level:

**ViciBox ISO (least pain).** Download the ISO from vicibox.com, boot, follow the installer. Working VICIdial in about an hour.

**Auto-installer scripts (moderate).** Clone the community installer from GitHub, run it. Handles Asterisk, DAHDI, VICIdial, Apache, and MariaDB on one box. After install, reconfigure the database connection to point at your dedicated DB server.

**Scratch install**



About

Built by ViciStack — enterprise VoIP and call center infrastructure.

License

MIT