
chatito — v2.3.5

Generate training datasets for NLU chatbots using a simple DSL

Downloads: 697

Chatito


Try the online IDE!

Overview

Chatito helps you generate datasets for training and validating chatbot models using a simple DSL.

If you are building chatbots using commercial models, open source frameworks or writing your own natural language processing model, you need training and testing examples. Chatito is here to help you.

This project contains the Chatito language and its dataset generator.

For the full language specification and documentation, please refer to the DSL spec document.

Tips

Prevent overfit

Overfitting is a problem that can be prevented by using Chatito correctly. The idea behind this tool is to strike a balance between data augmentation and a description of possible sentence combinations. It is not intended to generate deterministic datasets that may overfit a single-sentence model; in those cases, you can control the generation paths and only pull samples as required.

Tools and resources

Adapters

The language is independent of the generated output format, and because each model can receive different parameters and settings, these are the currently implemented data formats. If your provider is not listed, the Tools and resources section has more information on how to support additional formats.

NOTE: Samples are not shuffled between intents, both for easier review and because some adapters stream samples directly to the file. It is recommended to split intents into different files for easier review and maintenance.
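Because samples are written grouped by intent, you may want to shuffle them yourself before training. A minimal sketch, assuming the dataset has already been loaded as a list of (intent, sentence) pairs (the pair structure is an assumption for illustration, not Chatito's actual output schema):

```python
import random

def shuffle_samples(samples, seed=42):
    """Return a shuffled copy of (intent, sentence) pairs.

    Chatito writes samples grouped by intent; shuffling avoids
    feeding the model long runs of a single intent. A fixed seed
    keeps the shuffle reproducible across runs.
    """
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    return shuffled

samples = [("greet", "hi"), ("greet", "hello"), ("bye", "see you")]
print(shuffle_samples(samples))
```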

Rasa

Rasa is an open source machine learning framework for automated text and voice-based conversations. Understand messages, hold conversations, and connect to messaging channels and APIs. Chatito can help you build a dataset for the Rasa NLU component.

One particular behavior of the Rasa adapter: when a slot definition sentence contains only one alias, and that alias defines the 'synonym' argument as 'true', the generated Rasa dataset will map the alias as a synonym. e.g.:

%[some intent]('training': '1')
    @[some slot]

@[some slot]
    ~[some slot synonyms]

~[some slot synonyms]('synonym': 'true')
    synonym 1
    synonym 2

In this example, the generated Rasa dataset will contain the entity_synonyms of synonym 1 and synonym 2 mapping to some slot synonyms.
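In Rasa's legacy JSON training-data format, such a mapping lives under rasa_nlu_data.entity_synonyms. The fragment below is a hand-written illustration of that shape using the values from the DSL example above, not actual generator output:

```python
# Hand-written illustration of the Rasa entity_synonyms mapping
# described above; the exact surrounding structure produced by the
# Chatito adapter may differ.
rasa_fragment = {
    "rasa_nlu_data": {
        "entity_synonyms": [
            {
                "value": "some slot synonyms",          # the alias name becomes the canonical value
                "synonyms": ["synonym 1", "synonym 2"],  # samples map back to it
            }
        ]
    }
}
print(rasa_fragment["rasa_nlu_data"]["entity_synonyms"][0]["value"])
```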

Flair

Flair is a very simple framework for state-of-the-art NLP, developed by Zalando Research. It provides state-of-the-art pre-trained embeddings (GPT, BERT, RoBERTa, XLNet, ELMo, etc.) for many languages that work out of the box. This adapter supports the text classification dataset in FastText format and the named entity recognition dataset as two-column BIO-annotated words, as documented in the Flair corpus documentation. These two data formats are very common and compatible with many other providers and models.

The NER dataset requires a word tokenization processing that is currently done using a simple tokenizer.
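To make the two-column BIO format concrete, here is a minimal sketch of serializing one tagged sample. The tokenization here is a naive whitespace split and the helper name is illustrative; the real adapter's tokenizer is more involved:

```python
def to_bio(tokens, entity_spans):
    """Serialize tokens into Flair's two-column BIO format.

    entity_spans maps a (start, end) token-index range (end exclusive)
    to an entity label. Tokens outside every span get the 'O' tag;
    the first token of a span gets 'B-<label>', the rest 'I-<label>'.
    """
    lines = []
    for i, tok in enumerate(tokens):
        tag = "O"
        for (start, end), label in entity_spans.items():
            if start <= i < end:
                tag = ("B-" if i == start else "I-") + label
        lines.append(f"{tok} {tag}")
    return "\n".join(lines)

tokens = "remind me tomorrow morning".split()
print(to_bio(tokens, {(2, 4): "date"}))
# → remind O / me O / tomorrow B-date / morning I-date (one token per line)
```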

NOTE: Flair adapter is only available for the NodeJS NPM CLI package, not for the IDE.

LUIS

LUIS is part of Microsoft's Cognitive services. Chatito supports training a LUIS NLU model through its batch add labeled utterances endpoint, and its batch testing api.

To train a LUIS model, you will need to post the utterances in batches to the relevant API for training or testing.

Reference issue: #61

Snips NLU

Snips NLU is another great open source framework for NLU. One particular behavior of the Snips adapter is that you can define entity types for the slots. e.g.:

%[date search]('training':'1')
   for @[date]

@[date]('entity': 'snips/datetime')
    ~[today]
    ~[tomorrow]

In the previous example, all @[date] values will be tagged with the snips/datetime entity tag.

Default format

Use the default format if you plan to train a custom model or if you are writing a custom adapter. This is the most flexible format because you can annotate slots and intents with custom entity arguments, and they will all be present in the generated output; for example, you could also include dialog/response generation logic with the DSL. E.g.:

%[some intent]('context': 'some annotation')
    @[some slot] ~[please?]

@[some slot]('required': 'true', 'type': 'some type')
    ~[some alias here]

Custom entities like 'context', 'required' and 'type' will be available in the output, so you can handle these custom arguments however you want.
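Downstream code can then interpret those arguments however it likes. A sketch, assuming the custom arguments end up as a plain key-value object on each definition (the field names "name" and "args" are hypothetical, not the default format's actual schema):

```python
# Hypothetical shape: the exact field names in the default-format
# output may differ; this only illustrates consuming custom arguments.
slot_definition = {
    "name": "some slot",
    "args": {"required": "true", "type": "some type"},
}

def is_required(slot):
    """Interpret the custom 'required' argument added in the DSL."""
    return slot.get("args", {}).get("required") == "true"

print(is_required(slot_definition))
```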

NPM package

Chatito supports Node.js >= v8.11.

Install it with yarn or npm:

npm i chatito --save

Then create a definition file (e.g.: trainClimateBot.chatito) with your code.

Run the npm generator:

npx chatito trainClimateBot.chatito

The generated dataset should be available next to your definition file.

Here are the full npm generator options:

npx chatito <pathToFileOrDirectory> --format=<format> --formatOptions=<formatOptions> --outputPath=<outputPath> --trainingFileName=<trainingFileName> --testingFileName=<testingFileName> --defaultDistribution=<defaultDistribution> --autoAliases=<autoAliases>
  • <pathToFileOrDirectory> path to a .chatito file or a directory that contains chatito files. If it is a directory, it will recursively search for all *.chatito files inside and use them to generate the dataset. e.g.: lightsChange.chatito or ./chatitoFilesFolder

  • <format> Optional. default, rasa, luis, flair or snips.

  • <formatOptions> Optional. Path to a .json file that each adapter optionally can use

  • <outputPath> Optional. The directory where to save the generated datasets. Uses the current directory as default.

  • <trainingFileName> Optional. The name of the generated training dataset file. Do not forget to add a .json extension at the end. Uses <format>_dataset_training.json as default file name.

  • <testingFileName> Optional. The name of the generated testing dataset file. Do not forget to add a .json extension at the end. Uses <format>_dataset_testing.json as default file name.

  • <defaultDistribution> Optional. The default frequency distribution if not defined at the entity level. Defaults to regular and can be set to even.

  • <autoAliases> Optional. The generator behavior when finding an undefined alias. Valid options are allow, warn, restrict. Defaults to allow.

Author and maintainer

Rodrigo Pimentel

sr.rodrigopv[at]gmail