
@vicistack/vicidial-database-partitioning

v1.0.0


Your VICIdial Database Has 50 Million Rows. Here's How to Fix It. — ViciStack call center engineering guide

Readme

Your VICIdial Database Has 50 Million Rows. Here's How to Fix It.

Every VICIdial deployment eventually hits the same wall: the database becomes the bottleneck. When your vicidial_log table has 50 million rows, your recording_log has grown to 30 GB, and your real-time reports take 15 seconds to load, you are past the point where adding more CPU or RAM to the dialer solves the problem. The database is doing full table scans on tables that were never designed for this volume. This guide covers the practical steps to partition VICIdial's MySQL database, archive old data, tune InnoDB for high-volume dialing, and optimize the queries that matter most for daily operations.

## When the Database Becomes the Bottleneck

VICIdial uses MySQL (typically MariaDB on modern installs) as its sole data store. Every call event, agent status change, lead update, and recording reference writes to MySQL in real time. For a 25-agent center making 500 calls per agent per day, that is 12,500 new rows per day in vicidial_log alone. Add in vicidial_agent_log, call_log, vicidial_closer_log, recording_log, and vicidial_did_log, and you are easily generating 50,000+ rows per day. At that rate, you hit 1 million rows in three weeks. After a year, you have over 18 million rows across these log tables; after two years, you are approaching 40 million. The tables do not partition themselves.
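The growth arithmetic above is easy to sanity-check for your own center. A quick sketch; the agent count and call volumes are the example figures from this guide, so substitute your own:

```shell
#!/bin/bash
# Back-of-the-envelope growth projection for VICIdial log tables.
# The volumes below are the example figures from the text, not measurements.
AGENTS=25
CALLS_PER_AGENT_PER_DAY=500
VICIDIAL_LOG_PER_DAY=$((AGENTS * CALLS_PER_AGENT_PER_DAY))   # outbound attempts only
ALL_LOG_TABLES_PER_DAY=50000   # plus agent, closer, CDR, recording, DID logs

echo "vicidial_log rows/day:   ${VICIDIAL_LOG_PER_DAY}"
echo "all log tables, 1 year:  $((ALL_LOG_TABLES_PER_DAY * 365))"
echo "all log tables, 2 years: $((ALL_LOG_TABLES_PER_DAY * 730))"
```

At roughly 18 million rows per year, even a modest center crosses the 10-million-row danger zone well within its first year of operation.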
### Symptoms of a Database Bottleneck

Watch for these warning signs:

- Real-time reports lag: the agent performance screen takes 5+ seconds to load
- Outbound dialing hesitates: the hopper refill query takes longer, causing gaps between calls
- Recordings page is slow: searching recordings by date range takes 30+ seconds
- Server load spikes during reports: running a campaign summary report pins a CPU core
- SHOW PROCESSLIST shows long-running queries: queries against vicidial_log or call_log running for 10+ seconds

Verify with:

```sql
-- Check table sizes
SELECT
  table_name,
  ROUND(data_length / 1024 / 1024, 2)  AS data_mb,
  ROUND(index_length / 1024 / 1024, 2) AS index_mb,
  table_rows
FROM information_schema.tables
WHERE table_schema = 'asterisk'
ORDER BY data_length DESC
LIMIT 20;
```

If vicidial_log, call_log, or recording_log are over 1 GB or over 10 million rows, you need partitioning and archiving.

## VICIdial Tables That Grow Unbounded

Not every VICIdial table needs attention. These are the ones that grow continuously and cause performance issues:

| Table | Grows By | Contains |
|-------|----------|----------|
| vicidial_log | Every outbound call attempt | Dial results, talk time, status |
| vicidial_closer_log | Every inbound/transfer call | Closer call details |
| call_log | Every call (CDR) | Asterisk CDR records |
| vicidial_agent_log | Every agent state change | Login, pause, wait, talk times |
| recording_log | Every recorded call | Recording file paths and metadata |
| vicidial_did_log | Every inbound DID call | DID routing log |
| vicidial_carrier_log | Every carrier event | SIP trunk activity |
| server_performance | Every update interval | Server metrics over time |
| vicidial_dial_log | Every dial attempt | Raw dial events |

## Partitioning vicidial_log by Date

MySQL partition pruning allows queries that filter by date to scan only the relevant partition instead of the entire table.
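Before starting, the 1 GB / 10-million-row rule of thumb can be wrapped in a small shell helper so the check is repeatable. A sketch; `needs_partitioning` is a name invented here for illustration, and in practice you would feed it the figures returned by the information_schema query:

```shell
#!/bin/bash
# Sketch: apply the guide's thresholds (10M rows or ~1 GB) to table stats.
# The function name is invented; wire in numbers from information_schema.
needs_partitioning() {
  local rows=$1 size_mb=$2
  [ "$rows" -gt 10000000 ] || [ "$size_mb" -gt 1024 ]
}

# Example: an 18M-row, 2.4 GB vicidial_log clearly qualifies
if needs_partitioning 18000000 2400; then
  echo "vicidial_log: partition and archive"
fi
```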
For vicidial_log, which is almost always queried with a date filter, this is a massive performance win.

### Step 1: Assess the Current Table

```sql
-- Check current table structure
SHOW CREATE TABLE vicidial_log\G

-- Check row count and size
SELECT COUNT(*) FROM vicidial_log;
SELECT ROUND(data_length / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
WHERE table_name = 'vicidial_log' AND table_schema = 'asterisk';
```

### Step 2: Backup Before Any Schema Change

This is non-negotiable. A failed ALTER TABLE on a multi-million-row table can leave you with no data and no dialer.

```bash
# Full backup of the vicidial_log table
mysqldump --single-transaction --routines --triggers \
  asterisk vicidial_log > /backup/vicidial_log_$(date +%Y%m%d).sql

# Verify the backup
mysql -e "SELECT COUNT(*) FROM vicidial_log;" asterisk
wc -l /backup/vicidial_log_$(date +%Y%m%d).sql
```

### Step 3: Create the Partitioned Table

VICIdial's vicidial_log uses call_date as its primary time column. We partition by RANGE on this column, using monthly partitions:

```sql
-- First, check if the primary key includes call_date.
-- MySQL requires the partition column to be part of every unique index.
SHOW INDEX FROM vicidial_log;
```

If call_date is not part of the primary key, you need to modify the key. This is the trickiest part. VICIdial typically uses uniqueid as the primary key for vicidial_log:

```sql
-- Modify primary key to include call_date (required for partitioning)
-- WARNING: This locks the table. Do this during off-hours.
ALTER TABLE vicidial_log
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (uniqueid, call_date);
```

Now add partitions:

```sql
ALTER TABLE vicidial_log
PARTITION BY RANGE (TO_DAYS(call_date)) (
  PARTITION p2025_01 VALUES LESS THAN (TO_DAYS('2025-02-01')),
  PARTITION p2025_02 VALUES LESS THAN (TO_DAYS('2025-03-01')),
  PARTITION p2025_03 VALUES LESS THAN (TO_DAYS('2025-04-01')),
  PARTITION p2025_04 VALUES LESS THAN (TO_DAYS('2025-05-01')),
  PARTITION p2025_05 VALUES LESS THAN (TO_DAYS('2025-06-01')),
  PARTITION p2025_06 VALUES LESS THAN (TO_DAYS('2025-07-01')),
  PARTITION p2025_07 VALUES LESS THAN (TO_DAYS('2025-08-01')),
  PARTITION p2025_08 VALUES LESS THAN (TO_DAYS('2025-09-01')),
  PARTITION p2025_09 VALUES LESS THAN (TO_DAYS('2025-10-01')),
  PARTITION p2025_10 VALUES LESS THAN (TO_DAYS('2025-11-01')),
  PARTITION p2025_11 VALUES LESS THAN (TO_DAYS('2025-12-01')),
  PARTITION p2025_12 VALUES LESS THAN (TO_DAYS('2026-01-01')),
  PARTITION p2026_01 VALUES LESS THAN (TO_DAYS('2026-02-01')),
  PARTITION p2026_02 VALUES LESS THAN (TO_DAYS('2026-03-01')),
  PARTITION p2026_03 VALUES LESS THAN (TO_DAYS('2026-04-01')),
  PARTITION p2026_04 VALUES LESS THAN (TO_DAYS('2026-05-01')),
  PARTITION p_future VALUES LESS THAN MAXVALUE
);
```

### Step 4: Verify Partition Pruning

```sql
-- This should only scan one partition
EXPLAIN PARTITIONS
SELECT * FROM vicidial_log
WHERE call_date BETWEEN '2026-03-01' AND '2026-03-19';

-- Look for "partitions: p2026_03" in the output,
-- NOT "partitions: p2025_01,p2025_02,...,p_future"
```

### Step 5: Automate Future Partitions

Create a cron job that adds new partitions before the p_future catchall is needed:

```sql
-- Run monthly: reorganize p_future to add next month
ALTER TABLE vicidial_log
REORGANIZE PARTITION p_future INTO (
  PARTITION p2026_05 VALUES LESS THAN (TO_DAYS('2026-06-01')),
  PARTITION p_future VALUES LESS THAN MAXVALUE
);
```

Script this:

```bash
#!/bin/bash
# /opt/vicistack/add_partition.sh
# Run via cron on the 15th of each month

NEXT_MONTH=$(date -d "next month" +%Y_%m)
NEXT_MONTH_START=$(date -d "next month" +%Y-%m-01)
FOLLOWING_MONTH_START=$(date -d "2 months" +%Y-%m-01)

mysql asterisk -e "
ALTER TABLE vicidial_log
REORGANIZE PARTITION p_future INTO (
  PARTITION p${NEXT_MONTH} VALUES LESS THAN (TO_DAYS('${FOLLOWING_MONTH_START}')),
  PARTITION p_future VALUES LESS THAN MAXVALUE
);
"

echo "$(date): Added partition p${NEXT_MONTH} for vicidial_log" >> /var/log/vicistack-partitions.log
```

## Archiving Old CDR and Log Data

Partitioning makes archiving trivial. Instead of running a slow DELETE FROM vicidial_log WHERE call_date < '2025-01-01' that locks the table and generates massive binary log entries, you drop entire partitions instantly.

### Archive Process

```bash
#!/bin/bash
# /opt/vicistack/archive_partition.sh
# Archive and drop partitions older than 6 months

ARCHIVE_DIR="/backup/vicidial_archive"
CUTOFF=$(date -d "6 months ago" +%Y_%m)

# Export the partition data first
mysqldump --single-transaction asterisk vicidial_log \
  --where="call_date < '$(date -d '6 months ago' +%Y-%m-01)'" \
  > ${ARCHIVE_DIR}/vicidial_log_archive_${CUTOFF}.sql

# Compress the archive
gzip ${ARCHIVE_DIR}/vicidial_log_archive_${CUTOFF}.sql

# Drop the old partition (instant, no table lock)
mysql asterisk -e "ALTER TABLE vicidial_log DROP PARTITION p${CUTOFF};"

echo "$(date): Archived and dropped partition p${CUTOFF}" >> /var/log/vicistack-archive.log
```

Dropping a partition is an O(1) operation -- it removes the data file instantly regardless of how many rows are in it. Compare this to DELETE, which would need to scan every row, update indexes, and write to the binary log.

### Apply the Same Pattern to Other Tables

Repeat the partitioning process for call_log, vicidial_closer_log, vicidial_agent_log, and recording_log. The process is identical -- just change the table name and the date column (most use call_date or event_time).

## MySQL Performance Tuning for VICIdial

Beyond partitioning, MySQL configuration has an outsized impact on VICIdial performance.
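Since a wrong partition boundary is painful to undo, the GNU date arithmetic used in the cron scripts above is worth checking in isolation against a fixed base date. A sketch, assuming GNU coreutils date as found on standard VICIdial Linux installs:

```shell
#!/bin/bash
# Sketch: verify the date arithmetic behind the partition names,
# pinned to a fixed base date instead of "now"
BASE="2026-03-15"
NEXT_MONTH=$(date -d "${BASE} +1 month" +%Y_%m)                 # partition suffix
FOLLOWING_MONTH_START=$(date -d "${BASE} +2 months" +%Y-%m-01)  # upper bound

echo "new partition: p${NEXT_MONTH}"
echo "upper bound:   TO_DAYS('${FOLLOWING_MONTH_START}')"
```

Running the cron job on the 15th is not incidental: GNU date normalizes month overflow, so `date -d "next month"` executed on January 31 lands in early March and would mis-name a partition. Mid-month runs sidestep that edge case.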
Most installs run with default or near-default settings, which are tuned for general-purpose workloads, not for the write-heavy, real-time pattern VICIdial demands.

### InnoDB Buffer Pool Optimization

The InnoDB buffer pool is MySQL's in-memory cache for table data and indexes. This single setting has the largest impact on database performance.

Rule of thumb: set the buffer pool to 60-70% of total system RAM on a dedicated database server, or 40-50% if MySQL shares the server with Asterisk.

```ini
# /etc/my.cnf.d/vicidial-tuning.cnf
[mysqld]

# Buffer pool - set to 60-70% of RAM on a dedicated DB server
# 8 GB example (on a 12 GB server)
innodb_buffer_pool_size = 8G
innodb_buffer_pool_instances = 8   # 1 instance per GB

# Buffer pool dump/load for fast restarts
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup = ON
```

Verify your buffer pool hit rate (should be > 99%):

```sql
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Calculate hit rate:
-- hit_rate = (1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests) * 100
-- If below 99%, increase the buffer pool size
```

### InnoDB Log File Tuning

InnoDB write performance depends on the redo log configuration:

```ini
[mysqld]

# Larger log files = less checkpoint flushing = faster writes
# Set to about 25% of the buffer pool size
innodb_log_file_size = 2G
innodb_log_buffer_size = 64M

# Flush behavior (trade-off: durability vs. performance)
#   1 = flush every commit (safest, slowest)
#   2 = flush to OS cache every commit, flush to disk every second (good balance)
#   0 = flush to OS cache every second (fastest, risk losing 1 sec of data on crash)
innodb_flush_log_at_trx_commit = 2

# For VICIdial, setting 2 is the right balance. The data being written
# (call logs, agent events) can be reconstructed from Asterisk logs if
# the worst happens.
```

### Write Optimization

VICIdial writes constantly.
Optimize for that workload:

```ini
[mysqld]

# Reduce fsync overhead
innodb_flush_method = O_DIRECT

# Increase write throughput
innodb_io_capacity = 2000        # for SSD
innodb_io_capacity_max = 4000
innodb_write_io_threads = 8
innodb_read_io_threads = 8

# Adaptive flushing
innodb_adaptive_flushing = ON
innodb_adaptive_flushing_lwm = 10

# Change buffer (for secondary index updates)
innodb_change_buffering = all
innodb_change_buffer_max_size = 25
```

### Query Cache (MariaDB)

For MariaDB (which VICIdial commonly runs on), the query cache can help with repetitive reads like real-time report queries:

```ini
[mysqld]
query_cache_type = 1
query_cache_size = 256M
query_cache_limit = 4M
query_cache_min_res_unit = 2048
```

Note: MySQL 8.0 removed the query cache entirely. If you are on MySQL 8.0+, skip this and rely on buffer pool caching instead.

### Connection and Thread Tuning

```ini
[mysqld]
max_connections = 500        # VICIdial opens many connections
thread_cache_size = 50
table_open_cache = 4000
table_definition_cache = 2000

# Temporary tables (for complex reports)
tmp_table_size = 256M
max_heap_table_size = 256M
```

## Query Optimization for Large Datasets

Even with partitioning and tuning, poorly optimized queries can still cause problems. Here are the VICIdial queries that most commonly cause issues and how to fix them.

### The Hopper Refill Query

VICIdial's hopper (the queue of numbers to dial) is refilled by a query that joins vicidial_list with vicidial_hopper and several exclusion tables.
On a large list, this query can take 10+ seconds:

```sql
-- Check if the hopper refill is slow
SHOW PROCESSLIST;
-- Look for queries hitting vicidial_list with long Time values
```

Optimization: ensure proper indexes exist:

```sql
-- Critical indexes for hopper performance
ALTER TABLE vicidial_list
  ADD INDEX idx_list_hopper (list_id, status, called_since_last_reset, phone_number);

-- If you use GMT offset filtering
ALTER TABLE vicidial_list
  ADD INDEX idx_gmt (gmt_offset_now, list_id, status);
```

### Real-Time Report Queries

The real-time agent screen queries vicidial_live_agents and vicidial_auto_calls. These tables are small but queried constantly:

```sql
-- These should be in memory. Verify:
SELECT table_name, engine
FROM information_schema.tables
WHERE table_name IN ('vicidial_live_agents', 'vicidial_auto_calls')
  AND table_schema = 'asterisk';

-- If the engine is MEMORY, good. If InnoDB, they'll still be cached in the buffer pool.
```

### Campaign Summary Reports

Reports that aggregate across vicidial_log benefit enormously from partitioning. But also add covering indexes:

```sql
-- Index for campaign reports by date
ALTER TABLE vicidial_log
  ADD INDEX idx_campaign_date (campaign_id, call_date, status, length_in_sec);

-- This lets the report query use an index scan instead of a table scan
-- for queries like:
SELECT status, COUNT(*), AVG(length_in_sec)
FROM vicidial_log
WHERE campaign_id = 'SALES'
  AND call_date BETWEEN '2026-03-01' AND '2026-03-19'
GROUP BY status;
```

### Slow Query Log

Enable the slow query log to catch problematic queries:

```ini
[mysqld]
slow_query_log = ON
slow_query_log_file = /var/log/mysql/slow-query.log
long_query_time = 2        # log queries taking > 2 seconds
log_queries_not_using_indexes = ON
```

Review regularly:

```bash
# Find the most common slow queries
mysqldumpslow -s c -t 10 /var/log/mysql/slow-query.log
```
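When mysqldumpslow is not at hand, a crude awk pass over the slow log can still bucket entries by query time. A sketch; the bucket thresholds are arbitrary, and in practice you would point it at the slow-query.log path configured above rather than the inline sample:

```shell
#!/bin/bash
# Sketch: bucket slow-log entries by Query_time (thresholds are arbitrary)
slow_log_summary() {
  awk '/^# Query_time:/ {
    t = $3 + 0
    if (t >= 10)     bucket[">=10s"]++
    else if (t >= 5) bucket["5-10s"]++
    else             bucket["2-5s"]++
  }
  END { for (b in bucket) printf "%-6s %d\n", b, bucket[b] }'
}

# Inline sample standing in for /var/log/mysql/slow-query.log
slow_log_summary <<'EOF'
# Query_time: 12.4  Lock_time: 0.001  Rows_sent: 10  Rows_examined: 48000000
SELECT * FROM vicidial_log WHERE status = 'SALE';
# Query_time: 3.2  Lock_time: 0.000  Rows_sent: 5  Rows_examined: 900000
SELECT * FROM call_log WHERE uniqueid = 'x';
EOF
```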



About

Built by ViciStack — enterprise VoIP and call center infrastructure.

License

MIT