
@planning-inspectorate/data-model

v2.29.0

JSON Schemas for the Planning Inspectorate's Data Model

Data Model

JSON Schemas for the Planning Inspectorate's Data Model.

All messages over the enterprise service bus are in Data Model compliant formats; the schemas in this repository define those formats.

Rules of engagement

  • Schema names should use the singular form (for example, service-user rather than service-users).
  • Messages broadcast over the service bus must be pre-validated against the schema.
  • Each service bus topic corresponds to one schema.
  • Each message encompasses the complete state of an entity (as defined by ECST) and does not delineate specific changes (the delta).
  • Each message will include an event type, sent as applicationProperties.type (see applicationProperties), which is one of the values defined by the eventType enum in enums-static.schema.json.
  • Consumers must support processing related messages received through different topics regardless of the order in which they are sent.
  • Values required by the Data Model cannot be null, while optional values must be null if they have no value.
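As a sketch, a broadcast following these rules might be assembled like this. The shape below is illustrative only (it is not the service-bus SDK's API), and the 'Create' event type and entity fields are hypothetical:

```javascript
// Illustrative sketch only: a message envelope carrying the complete entity
// state (ECST) with the event type in applicationProperties.type.
// The 'Create' event type and the entity's field names are hypothetical.
function buildMessage(eventType, entity) {
  return {
    applicationProperties: { type: eventType }, // an eventType enum value
    contentType: 'application/json',
    body: entity // the complete current state of the entity, never a delta
  };
}

const message = buildMessage('Create', {
  id: 'abc-123',
  firstName: 'Sam',
  emailAddress: null // optional values with no value are explicitly null
});
```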

Schema Changes

Always consult a Data Architect about a proposed change; they will consider its effect on other services and its continuity with the overall data model.

Since we use these schemas across a distributed system, we must ensure backwards and forwards compatibility. This lets systems adopt new schema versions independently of one another. It also means consumers don't need to know the schema version of a message they receive, because a valid message should validate against any version.

Backwards compatibility ensures that messages using a newer schema version can be understood and processed by consumers that have not been updated to support the new version.

Forwards compatibility ensures that messages using an older schema version can be understood and processed by consumers that have been updated to support a newer schema version.

Rules

To ensure compatibility, the following rules must be followed:

  1. No fields can be removed from the schema
  2. No fields can be removed from the required list
  3. Optional fields must support the null type
  4. Fields cannot be renamed
  5. Any new fields must be optional
  6. Any enum changes must only add options
  7. additionalProperties must always be true
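Most of these rules can be checked mechanically. A minimal sketch, covering only rules 1, 2, 5 and 7 at the top level (a real check would also walk nested schemas and compare enum definitions):

```javascript
// Naive compatibility check between two schema versions.
// Covers rules 1, 2, 5 and 7 for top-level properties only.
function compatibilityErrors(oldSchema, newSchema) {
  const oldProps = Object.keys(oldSchema.properties ?? {});
  const newProps = Object.keys(newSchema.properties ?? {});
  const oldRequired = oldSchema.required ?? [];
  const newRequired = newSchema.required ?? [];
  const errors = [];

  // Rule 1 (and 4): no fields removed — a rename shows up as a removal
  for (const p of oldProps) {
    if (!newProps.includes(p)) errors.push(`field removed: ${p}`);
  }
  // Rule 2: nothing removed from the required list
  for (const r of oldRequired) {
    if (!newRequired.includes(r)) errors.push(`no longer required: ${r}`);
  }
  // Rule 5: any new field must be optional
  for (const p of newProps) {
    if (!oldProps.includes(p) && newRequired.includes(p)) {
      errors.push(`new field is required: ${p}`);
    }
  }
  // Rule 7: additionalProperties must always be true
  if (newSchema.additionalProperties !== true) {
    errors.push('additionalProperties must be true');
  }
  return errors;
}
```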

Examples

Given a simple schema at v1:

{
    "type": "object",
    "title": "Book",
    "required": ["id", "title"],
    "additionalProperties": true,
    "properties": {
        "id": {
            "type": "integer",
            "description": "The unique identifier"
        },
        "title": {
            "type": "string",
            "description": "The title or name of this book"
        },
        "blurb": {
            "type": ["string", "null"],
            "description": "The blurb"
        },
        "category": {
            "type": ["string", "null"],
            "description": "The top-level category",
            "enum": [
                "fiction",
                "non-fiction",
                null
            ]
        }
    }
}

An example valid message at v1 could be:

{
    "id": 85347,
    "title": "Mastering the Rubik's Cube",
    "blurb": null,
    "category": "non-fiction"
}

Let's make a v2 schema:

{
    "type": "object",
    "title": "Book",
    "required": ["id", "title"],
    "additionalProperties": true,
    "properties": {
        "id": {
            "type": "integer",
            "description": "The unique identifier"
        },
        "title": {
            "type": "string",
            "description": "The title or name of this book"
        },
        "blurb": {
            "type": ["string", "null"],
            "description": "The blurb"
        },
        "category": {
            "type": ["string", "null"],
            "description": "The top-level category",
            "enum": [
                "fiction",
                "non-fiction",
                "ai-generated",
                null
            ]
        },
        "subCategory": {
            "type": ["string", "null"],
            "description": "The sub category",
            "enum": [
                "manuals",
                "stories",
                null
            ]
        }
    }
}

Two changes have been made:

  1. a new enum value for category
  2. a new field, subCategory

Now the original v1 message still validates against this schema. And a new v2 message, for example:

{
    "id": 85347,
    "title": "Mastering the Rubik's Cube",
    "blurb": null,
    "category": "non-fiction",
    "subCategory": "manuals"
}

validates against the old schema and the new schema!
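To see why both directions hold, here is a minimal validation sketch, checking only required fields and enum membership (real consumers should use a full JSON Schema validator such as Ajv):

```javascript
// Minimal sketch: checks required fields and enum membership only.
// Fields the schema doesn't know about are ignored (additionalProperties is
// true), which is what lets old consumers tolerate newer messages.
function validates(schema, message) {
  for (const field of schema.required ?? []) {
    if (message[field] === undefined || message[field] === null) return false;
  }
  for (const [name, prop] of Object.entries(schema.properties ?? {})) {
    const value = message[name];
    if (value === undefined) continue; // optional field not present
    if (prop.enum && !prop.enum.includes(value)) return false;
  }
  return true;
}
```

With the book schemas above, the v1 message passes against v2 (the missing subCategory is optional) and the v2 message passes against v1 (subCategory is simply an unknown, permitted property).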

Structure

  • docs: generated documentation from the JSON schemas
  • pins_data_model: Python library code (inc. generated Pydantic models)
  • schemas: JSON schemas
  • src: Node.js library code

Commands

There are some messages, primarily sent from Front Office to Back Office systems, that aren't strictly compliant with the model, since the Front Office doesn't author the data. These are known as commands (or submissions), and are usually a combination or subset of Data Model entities in a single message.

Schemas

The schemas in this repository are written in jsonc (JSON with comments), and can be parsed with node-jsonc-parser, or jsonc-parser for Python.

While we'd prefer to use something with a formal standard, such as JSON5, it has less support in our current workflows. The jsonc format is the same format that VS Code uses for its settings files.
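As an illustration of what parsing jsonc involves, a naive comment-stripper might look like the sketch below. It only handles block comments and whole-line // comments, and would mangle // sequences inside string values (such as URLs), so real code should use one of the parsers above:

```javascript
// Naive sketch: strip /* */ blocks and whole-line // comments, then JSON.parse.
// Not robust (comment markers inside strings will confuse it); prefer
// node-jsonc-parser in real code.
function parseJsonc(text) {
  const withoutBlocks = text.replace(/\/\*[\s\S]*?\*\//g, '');
  const withoutLines = withoutBlocks.replace(/^\s*\/\/.*$/gm, '');
  return JSON.parse(withoutLines);
}
```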

Node.js Usage

To use these schemas in a Node project, add this repo as a dependency:

npm install @planning-inspectorate/data-model

As usual with npm you can optionally specify a version:

npm install @planning-inspectorate/data-model@2.29.0

Then import as required:

import { loadAllSchemas } from '@planning-inspectorate/data-model';

or to import enum constants, for example:

import { APPEAL_CASE_STATUS } from '@planning-inspectorate/data-model';

if (someAppeal.status === APPEAL_CASE_STATUS.COMPLETE) {
    console.log(`we're done!`);
}

Tests

We are starting to add JavaScript tests to the models using the Node.js test runner.

These are run automatically by the pipeline and can be run locally with:

node --test

Please continue to expand this test suite going forward, to help avoid breaking changes impacting consumers of the model.

Python usage

To use these schemas in a Python project, add this repo as a dependency (with a particular tag or commit as the version):

pip install git+https://github.com/Planning-Inspectorate/data-model@task/python-setup#egg=pins_data_model

Install the Python dependencies from requirements.txt:

pip install -r requirements.txt

You may want to run Python in a virtual environment, to manage the Python version and dependencies separately from the system install:

  • create a venv in a folder (.venv is git-ignored): python -m venv .venv
  • optionally specify a Python version, for example: python3.13 -m venv .venv
  • activate it before running any Python script: source .venv/bin/activate
  • deactivate once finished: deactivate

Then import the schemas as required:

from pins_data_model.load_schemas import load_all_schemas

schemas = load_all_schemas()
print(schemas)

Contributing

Once you have updated the schemas, before committing, ensure the generated code/documentation is up to date by running:

npm run gen

and

python pins_data_model/gen_models.py

Commits must follow conventional commits, and the commit types will be used by semantic-release to determine the next version number. For example, feat commits will result in a minor version bump, while fix commits will result in a patch version bump.
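For instance, hypothetical commit messages and the releases they would trigger:

```
feat: add subCategory to the book schema     -> minor release (x.Y.z)
fix: correct the blurb field description     -> patch release (x.y.Z)
```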

The package will be released automatically using semantic-release, on merge to main. This will include a git tag for the release, and publishing to NPM.