
@emengweb/nodejieba

v3.5.8

Published

Chinese word segmentation for Node.js

Downloads

49

Readme



NodeJieba: the Node.js version of "Jieba" (结巴) Chinese word segmentation

Introduction

NodeJieba is the Node.js implementation of "Jieba" (结巴) Chinese word segmentation. The underlying segmentation algorithms are provided by CppJieba, making NodeJieba a Node.js Chinese word segmentation component that combines high performance with ease of use.

Features

  • Flexible dictionary loading: usable without configuring any dictionary path, while custom dictionary paths can be supplied when needed.
  • The underlying algorithms are implemented in C++ for high performance.
  • Multiple segmentation algorithms are supported; see CppJieba's README.md for an overview.
  • The dictionary can be extended at runtime.
  • TypeScript is supported, with complete type definitions.
  • Keywords containing spaces are supported (e.g. "Open Claw").
  • No-space matching is supported (e.g. "OpenClaw" matches "Open Claw").
  • English matching is case-insensitive (e.g. "open claw" and "OPEN CLAW" both match "Open Claw").

If you are interested in the implementation details, see the following blog posts:

Installation

npm install nodejieba

Quick Start

var nodejieba = require("nodejieba");
var result = nodejieba.cut("南京市长江大桥");
console.log(result);
//["南京市","长江大桥"]

For more examples, see the demo.

Flexible dictionary loading

If no dictionary-loading function is called explicitly, the default dictionaries are loaded automatically on the first call to cut or any other segmentation function.

To trigger dictionary loading explicitly, call:

nodejieba.load();

The call above loads all default dictionaries. To load your own dictionary instead of a default one, for example a custom user dictionary, call:

nodejieba.load({
  userDict: './test/testdata/userdict.utf8',
});

All parameters of the dictionary-loading function load are optional; any omitted parameter is filled with its default value. The snippet above is therefore equivalent to the following:

nodejieba.load({
  dict: nodejieba.DEFAULT_DICT,
  hmmDict: nodejieba.DEFAULT_HMM_DICT,
  userDict: './test/testdata/userdict.utf8',
  idfDict: nodejieba.DEFAULT_IDF_DICT,
  stopWordDict: nodejieba.DEFAULT_STOP_WORD_DICT,
});

Dictionary options

  • dict: main dictionary, with weights and part-of-speech tags; using the default is recommended.
  • hmmDict: hidden Markov model dictionary; using the default is recommended.
  • userDict: user dictionary; customize it to your own needs.
  • idfDict: IDF data required for keyword extraction.
  • stopWordDict: stop-word list required for keyword extraction.
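Since every option is optional, the fallback behavior described above amounts to a plain object merge. A minimal sketch, using illustrative string values in place of the real nodejieba.DEFAULT_* constants:

```javascript
// Illustrative sketch of how load() fills in omitted options.
// The string values stand in for the real nodejieba.DEFAULT_* constants.
const DEFAULTS = {
  dict: 'DEFAULT_DICT',
  hmmDict: 'DEFAULT_HMM_DICT',
  userDict: null,
  idfDict: 'DEFAULT_IDF_DICT',
  stopWordDict: 'DEFAULT_STOP_WORD_DICT',
};

function resolveLoadOptions(options) {
  // Any key the caller omits falls back to its default.
  return Object.assign({}, DEFAULTS, options || {});
}

console.log(resolveLoadOptions({ userDict: './test/testdata/userdict.utf8' }));
```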

API Documentation

Segmentation

1. Default segmentation

var nodejieba = require("nodejieba");
var result = nodejieba.cut("南京市长江大桥");
console.log(result);
// ["南京市", "长江大桥"]

2. HMM segmentation

var result = nodejieba.cutHMM("南京市长江大桥");
console.log(result);
// ["南京市", "长江大桥"]

3. Full-mode segmentation

var result = nodejieba.cutAll("南京市长江大桥");
console.log(result);
// ["南京", "南京市", "市长", "长江", "长江大桥", "大桥"]

4. Search-engine-mode segmentation

var result = nodejieba.cutForSearch("南京市长江大桥");
console.log(result);
// ["南京", "市", "长江", "大桥", "南京市", "长江大桥"]

5. Fine-grained segmentation

var result = nodejieba.cutSmall("南京市长江大桥", 3);
console.log(result);
// ["南京市", "长江", "大桥"]

Part-of-speech tagging

var nodejieba = require("nodejieba");
var result = nodejieba.tag("红掌拨清波");
console.log(result);
// [ { word: '红掌', tag: 'n' },
//   { word: '拨', tag: 'v' },
//   { word: '清波', tag: 'n' } ]
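The tag() result is an ordinary array of { word, tag } pairs, so standard array methods apply to it. A small sketch, run on sample data shaped like the output above, that keeps only words with a given tag:

```javascript
// Sample data shaped like the tag() output shown above.
const tagged = [
  { word: '红掌', tag: 'n' },
  { word: '拨', tag: 'v' },
  { word: '清波', tag: 'n' },
];

// Extract the words carrying a given part-of-speech tag.
function wordsWithTag(pairs, tag) {
  return pairs.filter((p) => p.tag === tag).map((p) => p.word);
}

console.log(wordsWithTag(tagged, 'n'));
// → [ '红掌', '清波' ]
```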

Keyword extraction

var nodejieba = require("nodejieba");
var sentence = "我是拖拉机学院手扶拖拉机专业的。不用多久,我就会升职加薪,当上CEO,走上人生巅峰。";
var result = nodejieba.extract(sentence, 5);
console.log(result);
// [ { word: '升职', weight: 11.739204307083542 },
//   { word: '加薪', weight: 10.8561552143 },
//   { word: 'CEO', weight: 10.642581114 },
//   { word: '手扶拖拉机', weight: 10.0088573539 },
//   { word: '巅峰', weight: 9.49395840471 } ]
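The extract() result comes back sorted by weight, so post-filtering it is straightforward. A sketch that keeps only keywords above a weight threshold, using sample data shaped like the output above:

```javascript
// Sample data shaped like the extract() output shown above.
const keywords = [
  { word: '升职', weight: 11.739204307083542 },
  { word: '加薪', weight: 10.8561552143 },
  { word: 'CEO', weight: 10.642581114 },
  { word: '手扶拖拉机', weight: 10.0088573539 },
  { word: '巅峰', weight: 9.49395840471 },
];

// Keep only keywords whose weight meets the threshold.
function aboveWeight(results, minWeight) {
  return results.filter((r) => r.weight >= minWeight).map((r) => r.word);
}

console.log(aboveWeight(keywords, 10.5));
// → [ '升职', '加薪', 'CEO' ]
```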

TextRank keyword extraction

var nodejieba = require("nodejieba");
var sentence = "我是拖拉机学院手扶拖拉机专业的。不用多久,我就会升职加薪,当上CEO,走上人生巅峰。";
var result = nodejieba.textRankExtract(sentence, 5);
console.log(result);
// [ { word: '当上', weight: 1 },
//   { word: '不用', weight: 0.9897190043 },
//   { word: '多久', weight: 0.9897190043 },
//   { word: '加薪', weight: 0.9897190043 },
//   { word: '升职', weight: 0.9897190043 } ]

Adding custom words

var nodejieba = require("nodejieba");
console.log(nodejieba.cut("男默女泪"));
// ["男默", "女泪"]
nodejieba.insertWord("男默女泪");
console.log(nodejieba.cut("男默女泪"));
// ["男默女泪"]

Keywords containing spaces (new feature)

The user dictionary supports keywords that contain spaces, along with no-space matching and case-insensitive matching.

User dictionary format

The user dictionary supports the following formats:

# keyword only
Open Claw

# keyword + part-of-speech tag
Open Claw n

# keyword + frequency + part-of-speech tag
Open Claw 100 n

# keywords containing multiple spaces
Machine Learning 200 n
Artificial Intelligence 300 n
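The three line formats above can be told apart by their trailing fields. A hypothetical parser sketch (not part of the nodejieba API) that splits a dictionary line into keyword, frequency, and tag; it assumes tags are lowercase ASCII, so a capitalized final word is treated as part of the keyword:

```javascript
// Hypothetical parser for one user-dictionary line in the formats above.
// A trailing lowercase word is the part-of-speech tag, a trailing number is
// the frequency, and whatever remains is the (possibly space-containing)
// keyword. Illustrative only; nodejieba does this parsing internally.
function parseDictLine(line) {
  const parts = line.trim().split(/\s+/);
  let tag = null;
  let freq = null;
  if (parts.length > 1 && /^[a-z]+$/.test(parts[parts.length - 1])) {
    tag = parts.pop();
  }
  if (parts.length > 1 && /^\d+$/.test(parts[parts.length - 1])) {
    freq = Number(parts.pop());
  }
  return { word: parts.join(' '), freq, tag };
}

console.log(parseDictLine('Open Claw 100 n'));
// → { word: 'Open Claw', freq: 100, tag: 'n' }
```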

Usage example

var nodejieba = require("nodejieba");
var fs = require('fs');
var path = require('path');

// Create a user dictionary containing keywords with spaces
var dictContent = `Open Claw 100 n
Machine Learning 200 n
Artificial Intelligence 300 n
`;

var testDictPath = path.join(__dirname, 'user_dict.utf8');
fs.writeFileSync(testDictPath, dictContent);

// Load the dictionary
nodejieba.load({
  userDict: testDictPath,
});

// Test 1: matching a keyword that contains spaces
console.log(nodejieba.cut("I want to use Open Claw tool"));
// output includes: ['Open Claw']

// Test 2: case-insensitive matching
console.log(nodejieba.cut("open claw"));        // matches Open Claw
console.log(nodejieba.cut("OPEN CLAW"));        // matches Open Claw
console.log(nodejieba.cut("Open Claw"));        // matches Open Claw

// Test 3: no-space matching
console.log(nodejieba.cut("OpenClaw"));         // matches Open Claw
console.log(nodejieba.cut("openclaw"));         // matches Open Claw
console.log(nodejieba.cut("OPENCLAW"));         // matches Open Claw

// Test 4: other keywords containing spaces
console.log(nodejieba.cut("Machine Learning is great"));
// output includes: ['Machine Learning']

console.log(nodejieba.cut("Artificial Intelligence will change the world"));
// output includes: ['Artificial Intelligence']

// Clean up the test file
fs.unlinkSync(testDictPath);

Feature summary

  1. Keywords with spaces: "Open Claw" in the dictionary matches "Open Claw" in text.
  2. No-space matching: "Open Claw" in the dictionary also matches "OpenClaw" in text.
  3. Case-insensitive matching: "Open Claw" in the dictionary matches "open claw", "OPEN CLAW", "Open Claw", and any other case combination.
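The matching rules above can be pictured as normalizing both the dictionary keyword and the input: lowercase both, and also compare a space-stripped form. An illustrative sketch (not the actual nodejieba implementation):

```javascript
// Illustrative sketch of the matching rules above: a dictionary keyword is
// matched case-insensitively, with or without its internal spaces.
// Not the actual nodejieba implementation.
function makeMatcher(keyword) {
  const canonical = keyword.toLowerCase();
  const nospace = canonical.replace(/\s+/g, '');
  return (text) => {
    const t = text.toLowerCase();
    return t === canonical || t === nospace;
  };
}

const matchesOpenClaw = makeMatcher('Open Claw');
console.log(matchesOpenClaw('OPEN CLAW')); // → true
console.log(matchesOpenClaw('openclaw')); // → true
console.log(matchesOpenClaw('claw'));     // → false
```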

More details in the demo.

Keyword extraction

var nodejieba = require("nodejieba");
var topN = 4;
console.log(nodejieba.extract("升职加薪,当上CEO,走上人生巅峰。", topN));
//[ { word: 'CEO', weight: 11.739204307083542 },
//  { word: '升职', weight: 10.8561552143 },
//  { word: '加薪', weight: 10.642581114 },
//  { word: '巅峰', weight: 9.49395840471 } ]

console.log(nodejieba.textRankExtract("升职加薪,当上CEO,走上人生巅峰。", topN));
//[ { word: '当上', weight: 1 },
//  { word: '不用', weight: 0.9898479330698993 },
//  { word: '多久', weight: 0.9851260595435759 },
//  { word: '加薪', weight: 0.9830464899847804 },
//  { word: '升职', weight: 0.9802777682279076 } ]

More details in the demo.

Develop NodeJieba

git clone --recurse-submodules https://github.com/yanyiwu/nodejieba.git
cd nodejieba
npm install
npm test

Applications

Performance

NodeJieba is probably the best-performing Chinese word segmentation library for Node.js to date. For details, see: Jieba中文分词系列性能评测 (Jieba Chinese word segmentation performance benchmarks).

Contributors

Code Contributors

This project exists thanks to all the people who contribute. [Contribute].

Financial Contributors

Become a financial contributor and help us sustain our community. [Contribute]

Individuals

Organizations

Support this project with your organization. Your logo will show up here with a link to your website. [Contribute]