
n8n-nodes-zai

v0.1.9

n8n community node for Z.ai API - Connect to GLM-4.5-Flash and other Z.ai language models for chat completions, streaming responses, and AI-powered automation workflows

This is an n8n community node that integrates Z.ai (GLM) language models into your n8n workflows. It provides access to Zhipu AI's powerful GLM series models including GLM-4.5, GLM-4.6, GLM-4.7, and CodeGeeX for chat completions, AI agents, coding tasks, and more.

n8n is a fair-code licensed workflow automation platform.

Supported Models

This node provides access to Zhipu AI's latest GLM models, including:

  • GLM-4.5 - Mixture-of-Experts model for AI agents
  • GLM-4.5-Flash - Free tier for agentic tasks
  • GLM-4.5-Air - Lightweight model
  • GLM-4.6 - 200K context window for long conversations
  • GLM-4.7 - Flagship model with strong coding capabilities (default)
  • GLM-4.7-Flash - Free tier with latest features
  • GLM-4.7-FlashX - Ultra-fast model for low-latency applications
  • GLM-4-32B-0414-128K - 128K context for long documents
  • GLM-5 - Latest generation flagship for complex reasoning
  • GLM-5-Code - Specialized for advanced software development
  • GLM-5-Turbo - High-performance with speed and reasoning balance
  • CodeGeeX - Specialized model for coding tasks

Installation

Using npm

npm install n8n-nodes-zai

In n8n

  1. Go to Settings > Community Nodes
  2. Click Add and search for n8n-nodes-zai
  3. Click Install

Or follow the installation guide in the n8n documentation.

Credentials

To use this node, you need a Z.ai API key:

  1. Visit Z.ai Open Platform
  2. Sign up for an account
  3. Navigate to API Keys section
  4. Create a new API key
  5. In n8n, create a new Zai API credential and paste your API key

Note: The Z.ai API provides free tiers for Flash models, making it easy to get started at no cost.
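Once you have a key, the node authenticates by sending it as a Bearer token. A minimal Python sketch of what an authenticated request looks like (the endpoint URL shown is an assumption; check the Z.ai Open Platform docs for the exact base URL):

```python
import json
import urllib.request

# Hypothetical endpoint -- confirm the exact URL in the Z.ai docs.
ZAI_CHAT_URL = "https://api.z.ai/api/paas/v4/chat/completions"

def build_zai_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build an authenticated chat-completion request.

    Sending the key as a Bearer token mirrors how the n8n credential
    is used (assumption based on common OpenAI-compatible APIs).
    """
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        ZAI_CHAT_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_zai_request(
    "YOUR_API_KEY",
    "glm-4.5-flash",
    [{"role": "user", "content": "Hello!"}],
)
```

Inside n8n you never build this request yourself; the credential handles the header for you.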

Operations

Chat

Generate chat completions using Z.ai's GLM models.

Inputs:

  • Model: Select which GLM model to use (see Supported Models section for options, default: GLM-4.7)
  • Messages: Array of chat messages with roles (system, user, assistant)
  • Temperature: Control randomness (0.0 - 2.0, default: 0.7)
  • Max Tokens: Maximum tokens in the response (default: 4096)
  • Top P: Nucleus sampling parameter (0.0 - 1.0, default: 1.0)
  • Top K: Limit sampling to the K most probable tokens, trimming the "long tail" of low-probability options (0 - 32, default: 32)
  • Timeout: Request timeout in milliseconds (default: 0, no timeout)
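The inputs above map onto a request payload with the node's documented defaults. A sketch of that assembly (the parameter names follow the common OpenAI-style schema; whether Z.ai accepts `top_k` under this exact name is an assumption):

```python
def build_chat_payload(model="glm-4.7", messages=None, temperature=0.7,
                       max_tokens=4096, top_p=1.0, top_k=32):
    """Assemble a chat-completion payload using the node's documented
    defaults for temperature, max tokens, top-p, and top-k."""
    return {
        "model": model,
        "messages": messages or [],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "top_k": top_k,
    }

# Only the messages need to be supplied; everything else falls back
# to the defaults listed above.
payload = build_chat_payload(messages=[{"role": "user", "content": "Hi"}])
```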

Built-in Tools:

  • Web Search: Enable the model to search the web for current information
    • Search Context Size: Amount of context for search (low/medium/high)
    • Allowed Domains: Restrict search to specific domains (comma-separated)

Safety Settings:

  • Harassment: Filter harassment content
  • Hate Speech: Filter hate speech content
  • Sexually Explicit: Filter sexually explicit content
  • Dangerous Content: Filter dangerous content
  • Block Threshold: Set sensitivity level (LOW/MEDIUM/HIGH/NONE)

Outputs:

  • Response: The model's text response
  • Usage: Token usage statistics (prompt tokens, completion tokens, total)
  • Model: Which model was used
  • Finish Reason: Why the generation stopped (length, stop, etc.)
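The four output fields correspond to pieces of the raw API response. A sketch of that mapping, assuming the OpenAI-compatible response shape that GLM chat endpoints generally return (the exact field names are an assumption):

```python
# Sample response in the assumed OpenAI-compatible shape.
sample_response = {
    "model": "glm-4.7",
    "choices": [
        {
            "message": {"role": "assistant", "content": "Hello there!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 4, "total_tokens": 16},
}

def extract_outputs(response: dict) -> dict:
    """Map a raw API response onto the node's four output fields."""
    choice = response["choices"][0]
    return {
        "response": choice["message"]["content"],
        "usage": response["usage"],
        "model": response["model"],
        "finish_reason": choice["finish_reason"],
    }

outputs = extract_outputs(sample_response)
```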

Usage Examples

Basic Chat Completion

{
  "model": "glm-4.5-flash",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms."
    }
  ]
}

Using the Free Tier

Use GLM-4.5-Flash or GLM-4.7-Flash for free AI-powered text generation.

Long Context Conversations

Use GLM-4.6 with its 200K token context window for maintaining long conversation history.

Coding Tasks

Use CodeGeeX, GLM-4.7 (strong coding model), or GLM-5-Code for advanced software development tasks including code generation, debugging, and explanation.

Web Search Integration

Enable the Web Search built-in tool to give the model access to current information from the web. Configure search context size and restrict to specific domains if needed.
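A sketch of how those two options might extend the request payload. The `tools` entry shape shown here is an assumption modeled on Zhipu's `web_search` tool; consult the Z.ai docs for the exact schema:

```python
def with_web_search(payload: dict, context_size: str = "medium",
                    allowed_domains: str = "") -> dict:
    """Attach a web-search tool entry (assumed schema) to a chat payload."""
    tool = {
        "type": "web_search",
        "web_search": {
            "enable": True,
            "search_context_size": context_size,  # low / medium / high
        },
    }
    if allowed_domains:
        # The node accepts domains as a comma-separated string.
        tool["web_search"]["allowed_domains"] = [
            d.strip() for d in allowed_domains.split(",")
        ]
    return {**payload, "tools": [tool]}

searchable = with_web_search(
    {"model": "glm-4.7", "messages": []},
    context_size="high",
    allowed_domains="example.com, docs.example.com",
)
```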

Latest Generation Models

Use GLM-5 for complex reasoning tasks or GLM-5-Turbo for a balance of speed and capability.

Compatibility

  • Minimum n8n version: 1.0.0
  • Tested with: n8n 1.60.0+

Resources

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Support

If you encounter any issues or have questions, please open an issue on the project's repository.

Author

Ali Shan ([email protected])