
selectors-scan v1.0.13

A CLI utility to scan and analyze CSS selectors (Python-powered, Node wrapper)

🎯 CSS Usage Mapper

A powerful Python tool that analyzes CSS selector usage across HTML files, helping you identify unused styles and optimize your CSS for better performance.

✨ Features

  • Comprehensive CSS Analysis: Extracts and analyzes all CSS selectors from multiple files
  • Smart Selector Matching: Uses multiple strategies to handle complex modern CSS selectors
  • Detailed Reporting: Generates 3 different CSV reports for various analysis needs
  • Complex Selector Support: Handles :is(), :where(), :has(), nested :not(), and more
  • Error Handling: Robust parsing with detailed error reporting
  • Performance Tracking: Detailed statistics including timing and complexity analysis
  • Fallback Strategies: Intelligent handling of selectors that BeautifulSoup can't parse directly
  • Web Scraping: Fetches HTML content from a URL specified in the .env file

🚀 Installation

Prerequisites

  • Python 3.7+
  • Virtual environment (recommended)

Setup

  1. Clone or download the project to your local machine

  2. Create and activate virtual environment:

    python -m venv venv
    # Windows
    venv\Scripts\activate
    # macOS/Linux
    source venv/bin/activate
  3. Install dependencies:

    pip install beautifulsoup4 cssutils python-dotenv requests
  4. Set up the .env file:

    • Create a .env file in the root directory.
    • Add the website URL:
      WEBSITE_URL=https://websiteurl.com
  5. Set up the config.json file:

    • Create a config.json file in the root directory
    • Add website pages path:
      {
        "pages": [
          "about-us",
          "contact",
          "services"
        ]
      }

🏃‍♂️ Quick Start

  1. Place your files:

    • The tool will automatically search for all CSS and HTML files in the project directory or fetch HTML from the specified URL.
  2. Run the analysis:

    python css_usage_mapper.py
  3. View results:

    • css_selector_usage.csv - Summary of all selectors
    • css_selector_details.csv - Detailed selector-to-page mapping
    • css_analysis_stats.csv - Comprehensive statistics

🔍 Matching Methods

The tool uses a sophisticated two-tier matching system to handle both simple and complex CSS selectors:

1. Direct Matching (Primary Method)

Uses BeautifulSoup's native soup.select(selector) method:

matches = soup.select(".header nav ul li")  # Direct CSS selector matching

Best for:

  • Simple class selectors (.class)
  • ID selectors (#id)
  • Element selectors (div, p)
  • Basic combinations (div.class, #id .class)
  • Standard pseudo-classes (:hover, :first-child)
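
As a self-contained illustration of the direct method (not taken from the tool's source), the snippet below counts matches for a selector in a small HTML fragment:

from bs4 import BeautifulSoup

html = "<div class='header'><nav><ul><li>Home</li><li>Docs</li></ul></nav></div>"
soup = BeautifulSoup(html, "html.parser")

# Direct matching: BeautifulSoup resolves the selector natively
matches = soup.select(".header nav ul li")
print(len(matches))  # 2 matching elements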

2. Simplified Matching (Fallback Method)

When direct matching fails, the tool automatically tries simplified versions:

# Original: ".button:is(.primary,.secondary)"
# Simplified attempts:
# 1. ".button.primary"
# 2. ".button.secondary" 
# 3. ".button"
# 4. ".primary .secondary"

Handles:

  • :is() pseudo-class expansion
  • :not() removal
  • Pseudo-element stripping (::before, ::after are removed)
  • Class/ID extraction from complex selectors
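
The exact simplification rules live inside css_usage_mapper.py; as a rough illustration only, a fallback of this shape could look like the sketch below (the helper names simplify_selector and match_with_fallback are invented for this example):

import re

def simplify_selector(selector):
    # Build progressively simpler candidates for a complex selector (illustrative only)
    candidates = []
    is_clause = re.search(r':is\(([^)]*)\)', selector)
    if is_clause:
        # Expand :is(a, b) into one candidate per argument
        for arg in is_clause.group(1).split(','):
            candidates.append(re.sub(r':is\([^)]*\)', arg.strip(), selector))
    # Drop :not(...) clauses and pseudo-elements entirely
    stripped = re.sub(r':not\([^)]*\)', '', selector)
    stripped = re.sub(r'::[a-zA-Z-]+', '', stripped)
    if stripped and stripped != selector:
        candidates.append(stripped)
    return candidates

def match_with_fallback(soup, selector):
    # Try the selector directly first, then fall back to simplified candidates
    try:
        matches = soup.select(selector)
        if matches:
            return matches, "direct"
    except Exception:
        pass  # the selector could not be parsed directly
    for candidate in simplify_selector(selector):
        try:
            matches = soup.select(candidate)
            if matches:
                return matches, "simplified"
        except Exception:
            continue
    return [], "none"

Whichever path produces the match is what feeds the direct/simplified labels described under Match Method Tracking below.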

3. Complex Selector Detection

Automatically identifies selectors that need special handling:

COMPLEX_PATTERNS = [
    r':is\(',           # :is(.a,.b)
    r':where\(',        # :where(.a,.b)  
    r':has\(',          # :has(.child)
    r':not\([^)]*\([^)]*\)', # nested :not()
    r'::',              # ::before, ::after
    r'@',               # @media, @keyframes
    r'\[.*\*=.*\]',     # complex attributes
]
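
Flagging a selector is then a matter of testing it against each pattern; the check below is an illustrative sketch that reuses the COMPLEX_PATTERNS list above:

import re

# COMPLEX_PATTERNS is the list defined above
def is_complex_selector(selector):
    # A selector is treated as complex if any pattern matches it
    return any(re.search(pattern, selector) for pattern in COMPLEX_PATTERNS)

print(is_complex_selector(".card:has(.content)"))  # True
print(is_complex_selector(".header nav ul li"))    # False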

4. Match Method Tracking

Each result shows which method succeeded:

  • direct - BeautifulSoup parsed the selector directly
  • simplified - Used fallback simplification strategy

📊 Output Files

1. css_selector_usage.csv - Summary Report

| Column | Description |
|--------|-------------|
| Selector | The CSS selector |
| Used In HTML Files | Comma-separated list of files using this selector |
| Match Count | Total number of matching elements across all files |
| Match Method | direct or simplified |
| Is Complex | Yes if the selector uses modern CSS features |

2. css_selector_details.csv - Detailed Mapping

| Column | Description |
|--------|-------------|
| Selector | The CSS selector |
| HTML File | Individual file using this selector (one row per file) |
| Match Count | Number of matches in this specific file |
| Match Method | How the selector was matched |
| Is Complex | Complexity indicator |
| Status | Used, Unused, or Invalid |

Use cases:

  • Find which pages use specific selectors
  • Identify selectors used only on certain pages
  • Filter by file for page-specific analysis
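
As a quick illustration, the detailed report can be filtered with Python's csv module, for example to pull out unused selectors or the selectors used on a single page. The column names below follow the table above and are assumed to match the generated header row exactly:

import csv

# Column names assumed from the report tables above
with open("css_selector_details.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

unused = [r["Selector"] for r in rows if r["Status"] == "Unused"]
on_about_us = [r["Selector"] for r in rows if "about-us" in r["HTML File"]]

print(f"Unused selectors: {len(unused)}")
print(f"Selectors used on about-us: {len(on_about_us)}")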

3. css_analysis_stats.csv - Statistics Report

Comprehensive metrics including:

  • Selector Counts: Total, used, unused, invalid
  • Complexity Analysis: Simple vs complex selectors
  • Performance Metrics: Processing time, match counts
  • Error Details: Failed selectors with error messages
  • Usage Rate: Percentage of selectors actually used

⚙️ Configuration

Folder Structure

project/
├── css_usage_mapper.py
├── venv/               # Virtual environment
└── output files...     # Generated CSV reports

The application uses a config.json file to specify the paths of the web pages to analyze. This file should be located in the root directory of the project.

config.json Structure

The config.json file contains a list of page paths that the application will use to construct full URLs for analysis. Each path corresponds to a specific page on the website.

Example config.json

{
  "pages": [
    "about-us",
    "contact",
    "services"
  ]
}

Explanation

  • pages: An array of strings, where each string is a path to a page on the website. The application will combine these paths with the base URL specified in the .env file to form complete URLs.
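
A minimal sketch of how that combination might look, assuming python-dotenv and the config.json shown above (the exact joining logic in css_usage_mapper.py may differ):

import json
import os

from dotenv import load_dotenv

load_dotenv()  # reads WEBSITE_URL from .env
base_url = os.getenv("WEBSITE_URL", "").rstrip("/")

with open("config.json", encoding="utf-8") as f:
    pages = json.load(f)["pages"]

# Combine the base URL with each configured page path
urls = [f"{base_url}/{page}" for page in pages]
print(urls)  # e.g. ['https://websiteurl.com/about-us', ...]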

Generating Page Titles

The application automatically generates page titles from the paths by replacing hyphens with spaces and capitalizing each word. For example, the path about-us will be converted to the title "About Us".
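
That conversion is essentially a one-line transformation, sketched here for clarity:

def page_title(path):
    # "about-us" -> "About Us"
    return path.replace("-", " ").title()

print(page_title("about-us"))  # About Us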

Usage

Ensure that the config.json file is correctly formatted and placed in the root directory before running the application. The application will read this file to determine which pages to analyze.

🔧 Advanced Usage

Analyzing Specific File Types

The tool processes:

  • CSS files: .css extension
  • HTML files: .html extension
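
File discovery of this kind is typically a recursive glob over the project directory; the snippet below is an illustrative sketch, not necessarily how the tool enumerates its inputs:

from pathlib import Path

project_dir = Path(".")
css_files = sorted(project_dir.rglob("*.css"))    # all stylesheets under the project
html_files = sorted(project_dir.rglob("*.html"))  # all local HTML pages

print(f"Found {len(css_files)} CSS and {len(html_files)} HTML files")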

Performance Tips

  1. Large Projects: Process files in batches for very large codebases
  2. Memory Usage: The tool loads all files into memory - consider available RAM
  3. Processing Time: Complex selectors take longer to analyze

Integration Examples

# Import as module
from css_usage_mapper import CSSUsageAnalyzer

analyzer = CSSUsageAnalyzer()
analyzer.run_analysis()

# Access statistics
print(f"Usage rate: {analyzer.stats['used_selectors']} / {analyzer.stats['total_selectors']}")

🛠️ Troubleshooting

Common Issues

Permission Error (Windows)

PermissionError: [Errno 13] Permission denied: 'css_selector_usage.csv'

Solution: Close Excel or any program that has the CSV files open.

No Selectors Found

  • Confirm the CSS files are actually present in the project (e.g., check with Git that they were committed)
  • Verify CSS files have .css extension
  • Check CSS syntax is valid

No Matches Found

  • Ensure the website serves HTML pages for the configured paths
  • Verify HTML files have .html extension
  • Check that HTML contains the expected class/ID names

High Invalid Selector Count

This is normal for modern CSS! Many advanced selectors aren't supported by BeautifulSoup but may still be functional in browsers.

Debugging Tips

  1. Check logs: The tool provides detailed logging during execution
  2. Review stats CSV: Contains error details for failed selectors
  3. Verify file structure: Ensure correct folder organization

🔬 Technical Details

Dependencies

  • cssutils: CSS parsing and rule extraction
  • BeautifulSoup4: HTML parsing and CSS selector matching
  • requests: Fetching HTML content from URLs
  • python-dotenv: Loading environment variables from .env
  • re (standard library): Regular expressions for complex selector handling
  • csv (standard library): Output file generation
  • logging (standard library): Progress tracking and error reporting

Performance Characteristics

  • Memory Usage: O(n + m) where n = CSS selectors, m = HTML elements
  • Time Complexity: O(n × m × f) where f = number of HTML files
  • Typical Processing: 6,000+ selectors across 3 files in ~2-3 minutes

Selector Complexity Handling

The tool categorizes selectors by complexity:

Simple Selectors (Direct matching):

.header { }
#navigation { }
div.content { }
ul li a { }

Complex Selectors (Simplified matching):

.button:is(.primary, .secondary) { }
.card:not(.hidden):has(.content) { }
.element::before { }

Accuracy Notes

  • Direct matches: 100% accurate (BeautifulSoup native support)
  • Simplified matches: ~90-95% accurate (best-effort approximation)
  • Invalid selectors: Logged for manual review

📈 Output Analysis Tips

Finding Unused CSS

# Filter for unused selectors
grep ",Unused" css_selector_details.csv

# Count selectors used on a specific page (substitute your page/file name)
grep "about-us" css_selector_details.csv | wc -l

Identifying Complex Selectors

# Find complex selectors that failed
grep ",Yes," css_selector_details.csv | grep "Invalid"

Performance Insights

  • High "simplified" method usage indicates modern CSS features
  • Low usage rates suggest opportunities for CSS cleanup
  • Invalid selectors may need manual browser testing

🚀 Ready to Optimize Your CSS?

This tool provides comprehensive insights into your CSS usage, helping you:

  • Remove unused styles for faster loading
  • Identify complex selectors that need testing
  • Understand style dependencies across pages
  • Optimize CSS architecture for better maintainability

🌐 Web Scraping for Analysis

The tool fetches HTML content from the URL specified in the .env file. Ensure the .env file is correctly set up with the WEBSITE_URL:

WEBSITE_URL=https://websiteurl.com
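
Under the hood, fetching a page presumably amounts to an HTTP GET followed by HTML parsing; a hedged sketch using requests and BeautifulSoup (the /about-us path is just the example from config.json above):

import os

import requests
from bs4 import BeautifulSoup
from dotenv import load_dotenv

load_dotenv()
base_url = os.getenv("WEBSITE_URL", "").rstrip("/")

# Fetch one configured page and parse it for selector matching (illustrative only)
response = requests.get(f"{base_url}/about-us", timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
print(soup.title.string if soup.title else "No <title> found")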


Node.js Wrapper

This package can now be installed and used directly with npm or npx.

# Install globally
npm install -g selectors-scan

# Run anywhere
selectors-scan --help

# Or run without installing
npx selectors-scan path/to/your/project

Prerequisite: Ensure Python 3 is installed and available in your PATH.