# n8n-nodes-watsonxai
**IBM Watsonx AI Custom Node for n8n**
This node allows you to interact with IBM Watsonx.ai for text generation using foundation models. You can generate text, summaries, or completions directly from n8n workflows.
---
## Installation
### 1. Local Installation
If you are installing locally:
```bash
npm install n8n-nodes-watsonxai
```

Then restart n8n. The Watsonx AI node will appear under **AI** in the node panel.

## Usage

The Watsonx AI node allows you to generate text using IBM Watsonx.ai foundation models.
## Parameters
| Parameter | Type | Description |
| --------------- | ------ | ------------------------------------------------------------------------ |
| Prompt | String | Input text for the model to continue or complete. |
| Model ID | String | Watsonx.ai foundation model ID (e.g., ibm/granite-13b-instruct-v2). |
| Temperature | Number | Sampling temperature for text generation (0.0 = greedy, 2.0 = creative). |
| Max Tokens | Number | Maximum number of tokens to generate. |
| Project ID | String | IBM Cloud project ID where the model is hosted. |
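The node's internals are not shown in this README, but the parameters above correspond to the fields of the public watsonx.ai text-generation REST payload. A minimal sketch, assuming standard API field names (`buildGenerationPayload` is a hypothetical helper, not part of the node):

```javascript
// Hypothetical sketch: how the node's parameters map onto a watsonx.ai
// text-generation request body. Field names follow the public REST API;
// the node's actual implementation may differ.
function buildGenerationPayload({ prompt, modelId, projectId, temperature, maxTokens }) {
  return {
    input: prompt,
    model_id: modelId,
    project_id: projectId,
    parameters: {
      // Temperature 0 corresponds to greedy decoding; otherwise sampling.
      decoding_method: temperature === 0 ? "greedy" : "sample",
      temperature,
      max_new_tokens: maxTokens,
    },
  };
}

const payload = buildGenerationPayload({
  prompt: "Summarize: n8n is a workflow automation tool.",
  modelId: "ibm/granite-13b-instruct-v2",
  projectId: "YOUR_PROJECT_ID", // placeholder
  temperature: 0.2,
  maxTokens: 100,
});
console.log(JSON.stringify(payload, null, 2));
```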
## Credentials
The node requires IBM Watsonx AI API credentials:
| Credential | Description |
| -------------------- | ----------------------------------------------------------------------------------- |
| API Key | Your IBM Cloud IAM API key. |
| Watsonx Base URL | The base URL of your Watsonx.ai service, e.g., https://us-south.ml.cloud.ibm.com. |
Note: Make sure your API key has the proper permissions to access Watsonx.ai models in the selected project.
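Behind the scenes, an IBM Cloud IAM API key is typically exchanged for a short-lived bearer token before calling Watsonx.ai. The node handles this through its credentials; the sketch below only illustrates the standard IAM exchange (`buildIamTokenRequest` is a hypothetical helper):

```javascript
// Sketch of the standard IBM Cloud IAM token exchange: the API key
// credential is traded for a bearer token at the IAM endpoint.
// This builds the request only; no network call is made here.
function buildIamTokenRequest(apiKey) {
  return {
    url: "https://iam.cloud.ibm.com/identity/token",
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ibm:params:oauth:grant-type:apikey",
      apikey: apiKey,
    }).toString(),
  };
}

const req = buildIamTokenRequest("YOUR_API_KEY"); // placeholder key
console.log(req.url, req.method);
```

The token returned by this endpoint is then sent as an `Authorization: Bearer …` header on requests to the Watsonx base URL.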
## Example Workflow
1. Drag a Watsonx AI node into your workflow.
2. Select or create credentials with your API key and base URL.
3. Enter a prompt, model ID, temperature, max tokens, and project ID.
4. Connect the node to any trigger (manual or scheduled).
5. Execute the workflow to receive generated text and metadata.
Output Example:

```json
{
  "generated_text": "Hello, this is a generated response from Watsonx AI.",
  "model_id": "ibm/granite-13b-instruct-v2",
  "stop_reason": "length",
  "token_count": 15
}
```

## Tips
- Use short, clear prompts to get more accurate results.
- Adjust temperature for creativity: lower (0.0–0.5) = deterministic, higher (1.0–2.0) = creative.
- Keep max tokens reasonable to avoid exceeding limits and reduce latency.
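The fields shown in the output example can be picked up in a downstream n8n Code node. A small sketch, assuming each item's `json` matches the output shape above (the sample data is illustrative):

```javascript
// Collect the generated text from a list of n8n items, skipping any
// items that produced no text. Item shape assumed from the output example.
function extractText(items) {
  return items
    .map((item) => item.json.generated_text ?? "")
    .filter((text) => text.length > 0);
}

const sampleItems = [
  { json: { generated_text: "Hello, this is a generated response from Watsonx AI.", stop_reason: "length" } },
];
console.log(extractText(sampleItems));
```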
## License

MIT © Muhammad Muazam Arshad ([email protected])
---
