@cyberlangke/tokkit-inclusionai
v1.11.0
The tokkit sub-package for inclusionAI's official text models.
Official mainline models currently covered:
- New families:
  - inclusionAI/LLaDA2.0-mini
  - inclusionAI/LLaDA2.0-flash
  - inclusionAI/LLaDA2.1-mini
  - inclusionAI/LLaDA2.1-flash
  - inclusionAI/LLaDA-MoE-7B-A1B-Base
  - inclusionAI/Ring-2.5-1T
  - inclusionAI/Ling-2.5-1T
  - inclusionAI/Ling-mini-2.0
  - inclusionAI/Ling-flash-2.0
  - inclusionAI/Ling-1T
  - inclusionAI/Ring-mini-2.0
  - inclusionAI/Ling-flash-base-2.0
  - inclusionAI/Ring-flash-2.0
  - inclusionAI/Ring-1T
- Reusing the existing qwen families:
  - inclusionAI/GroveMoE-Base
  - inclusionAI/Qwen3-32B-AWorld
  - inclusionAI/AReaL-SEA-235B-A22B
  - inclusionAI/GroveMoE-Inst
  - inclusionAI/AReaL-boba-2-14B-Open
  - inclusionAI/AReaL-boba-2-8B-Open
  - inclusionAI/AReaL-boba-2-32B
  - inclusionAI/AReaL-boba-2-8B
  - inclusionAI/AReaL-boba-2-14B
Currently excluded:

- Variants with preview/distill/linear/CAP/exp suffixes
- FP8/GPTQ/AWQ/GGUF and other quantized derivative releases
- The early Ling-lite*, Ling-plus*, Ling-Coder-lite*, and Ring-lite* historical lines
Notes:

- Qwen3-32B-AWorld, AReaL-*, and GroveMoE-Inst are currently identical to the already-supported qwen3 family.
- GroveMoE-Base is currently identical to the already-supported qwen2.5 family.
- LLaDA2.0/2.1 and LLaDA-MoE-7B-A1B-Base currently share the same tokenizer set, so they are all consolidated under llada2.
- The current Ling/Ring mainline still contains several distinct tokenizer sets, so it is split by official hash into ring-2.5-1t, ling-2, ring-mini-2.0, ring-flash-2.0, and ring-1t.
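The grouping above boils down to a mapping from model IDs to tokenizer family keys. As a minimal sketch (not the package's actual internals; the mapping entries here are assumptions drawn from the notes above), the routing might look like this:

```javascript
// Hypothetical routing table: model ID -> tokenizer family key.
// Entries reflect the grouping described in the notes above; this is an
// illustration, not the real implementation of the package.
const FAMILY_BY_MODEL = {
  "inclusionAI/LLaDA2.0-mini": "llada2",
  "inclusionAI/LLaDA2.1-flash": "llada2",
  "inclusionAI/Ring-2.5-1T": "ring-2.5-1t",
  "inclusionAI/Ling-mini-2.0": "ling-2",
  "inclusionAI/Qwen3-32B-AWorld": "qwen3",  // reuses the existing qwen3 family
  "inclusionAI/GroveMoE-Base": "qwen2.5",   // reuses the existing qwen2.5 family
};

function resolveFamily(modelId) {
  const family = FAMILY_BY_MODEL[modelId];
  if (!family) throw new Error(`unsupported model: ${modelId}`);
  return family;
}

console.log(resolveFamily("inclusionAI/LLaDA2.1-flash")); // → "llada2"
```

Grouping by the official tokenizer hash (rather than by model name) is what lets several model IDs collapse onto one family key.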
Usage
```shell
npm install @cyberlangke/tokkit-inclusionai
```

```js
import { getEncoding } from "@cyberlangke/tokkit-inclusionai"

const llada = await getEncoding("inclusionAI/LLaDA2.1-mini")
const qwenAlias = await getEncoding("inclusionAI/Qwen3-32B-AWorld")

console.log(llada.encode("Hello, inclusionAI"))
console.log(qwenAlias.encode("Hello, AWorld"))
```