
css-wrangler v0.12.0

Computed styles scraping and reporting


Frozen DOM/Computed Style testing

To help you refactor CSS across a large site, this tool crawls an original and a revised version of the site and shows you the differences in the computed styles (the final result of all CSS applied to an element) of all the elements on all of the pages.
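For context, "computed styles" here are what the browser's standard getComputedStyle API exposes. A minimal sketch of reading them for a single element in a browser console (illustrative only, not this package's internal code):

// Illustrative only: the standard browser API behind computed styles.
const element = document.querySelector('#header');
const computed = window.getComputedStyle(element);

console.log(computed.getPropertyValue('font-size')); // e.g. "16px"
console.log(computed.length); // number of computed properties on the element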

The simplest way to use this is to provide:

  • Two different URLs to the site you want to check - the original and the revised version (normally the site hosted on your local dev machine and the live version of the site)
  • A number of paths to pages you want it to crawl (e.g. /home/ or /article/)
  • A number of selectors for elements on those pages (e.g. #header or .article-heading). This can just be the html element if you don't want to divide up the results

When run, the application will generate three files (a hypothetical example of the third follows the list):

  • A JSON file containing all the original computed styles scraped
  • A JSON file containing all the revised computed styles scraped
  • A JSON file containing the difference between the two
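Purely as a hypothetical illustration of the diff file (the real schema is whatever the crawler emits; none of these keys are confirmed), a single entry could look something like:

{
    "page": "home",
    "selector": "#header",
    "property": "font-size",
    "before": "16px",
    "after": "18px"
}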

The difference can then be viewed through Results/Results.html.

Running the Crawler

To run the crawler, use the following command:

node crawler\bin\css-crawler --config your-config-file.js

Make sure you have the Selenium driver installed and its location on your PATH.

The config file is a CommonJS module exporting a variable called crawlerConfig

const crawlerConfig = {
    beforeUrl: "127.0.0.1:8080/a.html",
    afterUrl: "127.0.0.1:8080/b.html",
    pages: [
        {
            id: 'home',
            name: 'Home page',
            path: '/',
            elementsToTest: ['h1', '.element-with-class', '#elementWithId']
        },
        {
            id: 'about',
            name: 'About page',
            path: '/about/',
            elementsToTest: ['h1', '.element-with-class', '#elementWithId']
        }
    ],
    outputPath: 'c:/crawlerOutput.txt'
};

// CommonJS export so the crawler can require() this file.
module.exports = { crawlerConfig };
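Since the config is a plain CommonJS module, it can be loaded with Node's standard require. A loading sketch for reference (illustrative only, not the package's actual source):

// Illustrative only: how a CommonJS config module can be consumed.
const path = require('path');

const configPath = path.resolve(process.cwd(), 'your-config-file.js');
const { crawlerConfig } = require(configPath);

console.log(crawlerConfig.pages.map(page => page.id)); // ['home', 'about']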

Each page object's id must not contain whitespace or hyphens.

Viewing the results

Open Results/Results.html and load any JSON file the crawler has produced. If there were any differences, you should see a list of each page where differences occurred.

The numbers next to each page show the number of style changes and the number of element/content changes. There may be further style changes within changed elements, but without a matching element to compare against, the crawler can't tell.

Only gathering styles

To gather styles that will be compared at a later date, add this switch:

--getOriginal

It will gather styles using the beforeUrl from the provided config file.

This can be used locally, before and after you make changes. It can also be run as part of continuous integration to gather styles for each build.
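For example, combining the switch with the --config flag shown above (the config filename is a placeholder), a gather-only run looks like:

node crawler\bin\css-crawler --config your-config-file.js --getOriginal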

Comparing previously gathered styles

You can compare previously gathered styles:

--original my-original-styles.json

This will generate a diff from the original styles in the file and the afterUrl of the config. You can load this diff file into Results/Results.html.
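For example, again assuming the --config flag shown above is passed alongside it (filenames are placeholders):

node crawler\bin\css-crawler --config your-config-file.js --original my-original-styles.json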

Roadmap

Short term

  • Fix the XPath's missing slash
  • Show the number of ignored elements on results page
  • Break down ComputedStyleTest.ts into separate concerns
  • Open the results page from a command
  • Improve logging (Standard log, verbose log, error log and switches)
  • Find a cross-browser replacement for document.querySelector() that doesn't affect the page, so results can be gathered from IE8
  • On the results page add a checkbox next to each difference allowing property/value combinations to be ignored for the Original, Comparand or both.
  • Replace vanilla webdriver with BrowserTime https://www.npmjs.com/package/browsertime

Medium term

  • Allow the gathering of a/b computed styles and the comparison to happen at separate times
  • Improve the flexibility of Selenium:
      • Allow JavaScript features to be tested by describing page 'states' within the config. These would be Selenium commands run before scraping the styles (e.g. loginButtonElement.click() or action.moveToElement(dropDownMenu)).
      • Allow configuration of Selenium from the config and command line: IE, Chrome, Firefox etc. and browser width.

Long term

  • Save data to a database instead of a JSON file
  • Make the consumer site faster by lazy loading data from the database