
@sozialhelden/fetch-cache v2.0.1

A cached WhatWG fetch with URLs as cache keys, featuring eviction based on LRU, maximal item count and/or TTL.


fetch-cache 🐕

A cache for WhatWG fetch calls.

  • Supports TypeScript
  • Uses normalized URLs as cache keys
  • Can normalize URLs for better performance (you can configure how)
  • Does not request the same resource twice if the first request is still loading
  • Customizable TTLs per request, dependent on HTTP status code or in case of network errors
  • Supports all Hamster Cache features, e.g. eviction based on LRU, maximal cached item count and/or per-item TTL
  • Runs in Node.js, and should be isomorphic and browser-compatible (not tested yet! try at your own risk 🙃)
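
The "does not request the same resource twice" behavior can be sketched minimally like this (an illustration of the technique, not the library's actual internals):

```javascript
// Minimal sketch of in-flight request deduplication (illustrative only):
function dedupingFetch(fetch) {
  const inflight = new Map(); // URL → pending response promise
  return (url, options) => {
    if (inflight.has(url)) return inflight.get(url); // reuse the running request
    const promise = fetch(url, options).finally(() => inflight.delete(url));
    inflight.set(url, promise);
    return promise;
  };
}
```

As long as the first request is still pending, every caller receives the same promise; once it settles, the entry is dropped so later calls fetch again.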

Installation

npm install --save @sozialhelden/fetch-cache
# or
yarn add @sozialhelden/fetch-cache

Usage examples

Initialization

Bring your own fetch implementation, then configure the cache and use fetchCache.fetch() as if you were calling fetch() directly:

import FetchCache from '@sozialhelden/fetch-cache';

const fetch = require('node-fetch'); // in NodeJS
// or
const fetch = window.fetch; // in newer browsers

const fetchCache = new FetchCache({
  fetch,
  cacheOptions: {
    // Keep no more than 100 responses in the cache (unlimited by default)
    maximalItemCount: 100,
    // How should the cache evict responses when it's full?
    evictExceedingItemsBy: 'lru', // Valid values: 'lru' or 'age'
    // ...see https://github.com/sozialhelden/hamster-cache for all possible options
  },
});

// Either fetches a response over the network,
// or returns a cached promise for the same URL (if available)
const url = 'https://jsonplaceholder.typicode.com/todos/1';
const fetchOptions = {}; // the same options object you would pass to fetch() directly
fetchCache
  .fetch(url, fetchOptions)
  .then(response => response.json())
  .then(console.log)
  .catch(console.error);

Basic caching operations

// Fetch a response outside the cache
const response = fetch('https://api.example.com');

// Insert a response promise you got from somewhere else
fetchCache.cache.set('http://example.com', response);

// The same, but with a custom TTL of 10 seconds for this specific response
fetchCache.cache.set('http://example.com', response, { ttl: 10000 });

// gets the cached response without side effects
fetchCache.cache.peek(url);

// `true` if a response exists in the cache, `false` otherwise
fetchCache.cache.has(url);

// same as `peek`, but returns response with meta information
fetchCache.cache.peekItem(url);

// same as `get`, but returns response with meta information
fetchCache.cache.getItem(url);

// Let the cache collect garbage to save memory, for example in fixed time intervals
fetchCache.cache.evictExpiredItems();

// removes a response from the cache
fetchCache.cache.delete(url);

// forgets all cached responses
fetchCache.cache.clear();
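
The "fixed time intervals" hint above can be wired up with a small helper; the helper name and the 30-second default here are illustrative choices, not part of the library's API:

```javascript
// Hypothetical helper: run an eviction callback periodically and return a stop
// function. You would pass () => fetchCache.cache.evictExpiredItems() as `evict`.
function scheduleEviction(evict, intervalMs = 30_000) {
  const timer = setInterval(evict, intervalMs);
  // Call the returned function on shutdown so the Node.js process can exit
  return () => clearInterval(timer);
}
```
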

Vary TTLs depending on HTTP response code, headers, and more

While the cache tries to guess working TTLs for most use cases, you might want to customize how long a response (or rejected promise) stays in the cache before fetching the same URL again triggers a new network request.

For example, you could set the TTL to one second, no matter if a request succeeds or fails (please don't actually do this unless you have a good reason):

const fetchCache = new FetchCache({ fetch, ttl: () => 1000 });

…or configure varying TTLs for specific HTTP response status codes (better):

const fetchCache = new FetchCache({
  fetch,
  ttl: ({ response, state, error }) => {
    // state is 'running', 'resolved' or 'rejected' here.
    if (response) {
      // If a response is successful, keep it in the cache for 2 minutes
      if (response.status === 200) return 2 * 60 * 1000;
      // If the resource is missing, keep the 404 response for 10 seconds so it
      // shows up quickly if the resource begins to exist in the meantime
      if (response.status === 404) return 10 * 1000;
    }
    // If you return `undefined` here, the cache will use default TTL values for all other cases.
  },
});

For an overview of more cases, consult the default implementation.
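
To make the branching more concrete, here is another hedged sketch; the durations and status ranges are arbitrary choices for illustration, not the library's defaults:

```javascript
// Hypothetical TTL callback that also handles rejected (network-error) requests:
const ttl = ({ response, state, error }) => {
  if (state === 'rejected') {
    // Cache network failures briefly so a flaky endpoint isn't hammered
    return 5 * 1000;
  }
  if (response && response.status >= 500) {
    // Server errors: allow a retry sooner than for successful responses
    return 15 * 1000;
  }
  return undefined; // fall back to the cache's default TTLs for everything else
};
```
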

Normalize URLs

You can improve caching performance by letting the cache know if more than one URL points to the same server-side resource. For this, provide a normalizeURL function that builds a canonical URL from a given one.

The cache will only hold one response per canonical URL then. This saves memory and network bandwidth.
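
A hand-rolled normalization function can be a small wrapper over the standard URL class; which parts of a URL are safe to canonicalize (here: scheme, fragment, query order, and one example tracking parameter) is an application-level assumption:

```javascript
// Illustrative canonicalization using the built-in WHATWG URL class:
function canonicalizeUrl(url) {
  const u = new URL(url);
  u.protocol = 'https:';               // treat http and https as the same resource
  u.hash = '';                         // fragments never reach the server
  u.searchParams.delete('utm_source'); // drop a tracking parameter (example)
  u.searchParams.sort();               // stable query-string order
  return u.toString();
}
```
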

normalize-url is a helpful npm package implementing real-world normalization rules like SSL enforcement and www. vs. non-www. domain names. You can use it as a normalization function:

# Install the package with
npm install normalize-url
# or
yarn add normalize-url

import normalizeUrl from 'normalize-url';
import FetchCache from '@sozialhelden/fetch-cache';
import fetch from 'node-fetch';

// See https://github.com/sindresorhus/normalize-url#readme for all available normalization options
const cache = new FetchCache({
  fetch,
  normalizeURL(url) {
    return normalizeUrl(url, { forceHttps: true });
  },
});
