
Normy


Automatic normalization and data updates for data fetching libraries

Table of contents

  • Introduction
  • Motivation
  • Installation
  • Required conditions
  • Normalization of arrays
  • Debugging
  • Performance
  • Integrations
  • Examples
  • Licence

Introduction :arrow_up:

normy is a library which allows your application data to be normalized automatically. Then, once the data is normalized, in many cases it can also be updated automatically.

The core of normy - namely the @normy/core library, which is not meant to be used directly in applications - contains the logic that allows easy integration with your favourite data fetching libraries. There are already official integrations with react-query, swr and RTK Query. If you use another fetching library, you can raise a GitHub issue so it might be added as well.

Motivation :arrow_up:

In order to understand what normy actually does, it is best to see an example. Let's assume you use react-query. Then you could refactor your code in the following way:

  import React from 'react';
  import {
    QueryClientProvider,
    QueryClient,
    useQuery,
    useMutation,
    useQueryClient,
  } from '@tanstack/react-query';
+ import { QueryNormalizerProvider } from '@normy/react-query';

  const queryClient = new QueryClient();

  const Books = () => {
    const queryClient = useQueryClient();

    const { data: booksData = [] } = useQuery(['books'], () =>
      Promise.resolve([
        { id: '1', name: 'Name 1', author: { id: '1001', name: 'User1' } },
        { id: '2', name: 'Name 2', author: { id: '1002', name: 'User2' } },
      ]),
    );

    const { data: bookData } = useQuery(['book'], () =>
      Promise.resolve({
        id: '1',
        name: 'Name 1',
        author: { id: '1001', name: 'User1' },
      }),
    );

    const updateBookNameMutation = useMutation({
      mutationFn: () => ({
        id: '1',
        name: 'Name 1 Updated',
      }),
-     onSuccess: mutationData => {
-       queryClient.setQueryData(['books'], data =>
-         data.map(book =>
-           book.id === mutationData.id ? { ...book, ...mutationData } : book,
-         ),
-       );
-       queryClient.setQueryData(['book'], data =>
-         data.id === mutationData.id ? { ...data, ...mutationData } : data,
-       );
-     },
    });

    const updateBookAuthorMutation = useMutation({
      mutationFn: () => ({
        id: '1',
        author: { id: '1004', name: 'User4' },
      }),
-     onSuccess: mutationData => {
-       queryClient.setQueryData(['books'], data =>
-         data.map(book =>
-           book.id === mutationData.id ? { ...book, ...mutationData } : book,
-         ),
-       );
-       queryClient.setQueryData(['book'], data =>
-         data.id === mutationData.id ? { ...data, ...mutationData } : data,
-       );
-     },
    });

    const addBookMutation = useMutation({
      mutationFn: () => ({
        id: '3',
        name: 'Name 3',
        author: { id: '1003', name: 'User3' },
      }),
      // with top-level arrays, you still need to update data manually
      onSuccess: mutationData => {
        queryClient.setQueryData(['books'], data => data.concat(mutationData));
      },
    });

    // return some JSX
  };

  const App = () => (
+   <QueryNormalizerProvider queryClient={queryClient}>
      <QueryClientProvider client={queryClient}>
        <Books />
      </QueryClientProvider>
+   </QueryNormalizerProvider>
  );

So, as you can see, apart from top-level arrays, no manual data updates are necessary anymore. This is especially handy if a given mutation should update data for multiple queries. Not only is it verbose to do updates manually, you also need to know exactly which queries to update. The more queries you have, the bigger the advantage normy brings.

How does it work? By default, all objects with an id key are organized by their ids. Any object with an id key will be normalized, which simply means stored by id. If there is already a matching object with the same id, the new one will be deeply merged with the one already in the state. So, if the server response data from a mutation is { id: '1', title: 'new title' }, the library will automatically update the title of the object with id: '1' in all dependent queries.

It also works with nested objects with ids, no matter how deeply nested. If an object with an id contains other objects with ids, those will be normalized separately, and the parent object will hold only references to the nested objects.
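
To illustrate, here is a conceptual sketch of what the normalized state for the Books example above could look like - this is only a mental model, not normy's exact internal representation, and { ref: ... } below just stands for "reference to the object with that id":

// conceptual sketch only - each object is stored once, keyed by its id;
// parent objects keep references to nested objects instead of copies
const normalizedState = {
  '1': { id: '1', name: 'Name 1', author: { ref: '1001' } },
  '2': { id: '2', name: 'Name 2', author: { ref: '1002' } },
  '1001': { id: '1001', name: 'User1' },
  '1002': { id: '1002', name: 'User2' },
};

// a mutation response like { id: '1', name: 'Name 1 Updated' } is deeply
// merged into the entry for id '1', and every dependent query is updated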

Installation :arrow_up:

react-query

To install the package, just run:

$ npm install @normy/react-query

or you can just use CDN: https://unpkg.com/@normy/react-query.

swr

To install the package, just run:

$ npm install @normy/swr

or you can just use CDN: https://unpkg.com/@normy/swr.

rtk-query

To install the package, just run:

$ npm install @normy/rtk-query

or you can just use CDN: https://unpkg.com/@normy/rtk-query.

another library

If you want to write a plugin for a library other than react-query, swr or rtk-query:

$ npm install @normy/core

or you can just use CDN: https://unpkg.com/@normy/core.

To see how to write a plugin, for now just check the source code of @normy/react-query - it is very easy to do. A guide will be created in the future.

Required conditions :arrow_up:

In order to make automatic normalization work, the following conditions must be met:

  1. you must have a standardized way to identify your objects; usually this is done by an id key
  2. ids must be unique across the whole app, not only within object types; if they are not, you will need to append something to them - the same has to be done in the GraphQL world, usually by adding __typename
  3. objects with the same ids should have a consistent structure; if an object like book has a title key in one query, it should be title in other queries too, not suddenly name

There is a function which can be passed to createQueryNormalizer to help meet those requirements, namely getNormalizationObjectKey.

getNormalizationObjectKey can help you with the 1st point. If, for instance, you identify objects differently, such as by an _id key, you can pass getNormalizationObjectKey: obj => obj._id.

getNormalizationObjectKey also allows you to meet the 2nd requirement. For example, if your ids are unique within object types but not across the whole app, you could use getNormalizationObjectKey: obj => obj.id && obj.type ? obj.id + obj.type : undefined or something similar. If that is not possible, you could compute a suffix yourself, for example:

// derive a type from the object shape so ids can be made unique across types
const getType = obj => {
  if (obj.bookTitle) {
    return 'book';
  }

  if (obj.surname) {
    return 'user';
  }

  return undefined;
};

createQueryNormalizer(queryClient, {
  // normalize only objects with both an id and a recognized type,
  // using id + type as the globally unique key
  getNormalizationObjectKey: obj =>
    obj.id && getType(obj) && obj.id + getType(obj),
});

Point 3 should always be met; if it is not, you really should ask your backend developers to keep things standardized and consistent. As a last resort, you can amend responses on your side.
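
For instance, here is a minimal sketch of such an amendment (fetchBook is a hypothetical fetcher that returns name where other endpoints return title) - you can rename the field inside the query function before the data reaches the cache:

const { data: bookData } = useQuery(['book'], async () => {
  // hypothetical endpoint returning { id, name, ... } instead of { id, title, ... }
  const { name, ...book } = await fetchBook();
  // rename the inconsistent key before caching, so objects sharing this id
  // keep the same structure across all queries
  return { ...book, title: name };
});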

Normalization of arrays :arrow_up:

Unfortunately this does not mean you will never need to update data manually again. Some updates still need to be done manually as usual, namely adding items to and removing items from arrays. Why? Imagine a REMOVE_BOOK mutation. The book could be present in many queries, and the library cannot know from which queries you would like to remove it. The same applies to ADD_BOOK - the library cannot know to which query a book should be added, or at which array index. The same goes for an action like SORT_BOOKS. This problem affects only top-level arrays though. For instance, if you have a book with some id and another key like likedByUsers, then returning a new book with an updated likedByUsers list will work automatically again, as shown below.
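
For example, a minimal sketch reusing the Books example above (the response shape of this mutation is an assumption made for illustration) - because likedByUsers is nested under a book object with an id, the whole updated list is applied automatically:

const likeBookMutation = useMutation({
  mutationFn: () => ({
    id: '1',
    likedByUsers: [
      { id: '1001', name: 'User1' },
      { id: '1005', name: 'User5' },
    ],
  }),
  // no onSuccess needed - every query containing book '1' gets the new list
});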

In a future version of the library though, with some additional pointers, it will be possible to do the above updates automatically as well!

Debugging :arrow_up:

If you are interested in what data manipulations normy actually does, you can use the devLogging option:

<QueryNormalizerProvider
  queryClient={queryClient}
  normalizerConfig={{ devLogging: true }}
>
  {children}
</QueryNormalizerProvider>

It is false by default; if set to true, you will see information in the console when queries are set or removed.

Note that this works only in development; even if you pass true, no logging will be done in production (precisely, when process.env.NODE_ENV === 'production'). NODE_ENV is usually set for you by module bundlers like webpack, so you probably do not need to worry about setting it yourself.

Performance :arrow_up:

As always, any automation comes with a cost. In the future some benchmarks could be added, but for now manual tests have shown that unless your data contains tens of thousands of normalized objects, the overhead should not be noticeable. However, you have several flexible ways to improve performance:

  1. You can normalize only queries which need data updates, and only mutations which should update data - that is, you can have only part of your data normalized. Check the documentation of your integration for how to do it (see also the sketch after this list).
  2. Like 1., but for queries and mutations with extremely big data.
  3. There is a built-in optimization which checks whether data from a mutation response is actually different from the data in the normalized store. If it is the same, dependent queries will not be updated. So it is good for mutation data to include only things which could actually have changed, which prevents unnecessary normalization and query updates.
  4. Do not disable the structuralSharing option in libraries which support it - if query data after an update is referentially the same as before, the query will not be normalized. This is a big performance optimization, especially after refetch on refocus, which can update multiple queries at the same time, usually with the very same data.
  5. You can use the getNormalizationObjectKey function to set globally which objects should actually be normalized. For example:
<QueryNormalizerProvider
  queryClient={queryClient}
  normalizerConfig={{
    getNormalizationObjectKey: obj => (obj.normalizable ? obj.id : undefined),
  }}
>
  {children}
</QueryNormalizerProvider>
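
As an illustration of point 1, a hedged sketch for the react-query integration - the exact opt-out option and its placement depend on the integration, so treat the normalize flag below as an assumption and verify it in the @normy/react-query documentation:

const { data } = useQuery(['books'], fetchBooks, {
  // assumed per-query opt-out flag; check the integration docs for the
  // actual option name before relying on it
  meta: { normalize: false },
});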

Moreover, some additional performance-specific options will be added in the future.

Integrations :arrow_up:

Currently there are three official integrations with data fetching libraries, namely react-query, swr and rtk-query. There are more to come though. See the dedicated documentation for a specific integration:

  • @normy/react-query
  • @normy/swr
  • @normy/rtk-query

Examples :arrow_up:

I highly recommend trying the examples to see how this package can be used in real applications.

There are currently the following examples:

Licence :arrow_up:

MIT