@iflow-mcp/kintopp-rijksmuseum-mcp-plus — v0.20.0
rijksmuseum-mcp+ is developed by the Research and Infrastructure Support (RISE) group at the University of Basel and builds on our ongoing work on benchmarking and optimizing humanities research tasks carried out by large language models (LLMs). We are particularly interested in exploring the research opportunities and technical challenges posed by using structured humanities data with LLMs. If you are interested in collaborating with us in this area, please get in touch.
Features
You can explore artworks using the same search filters (with minor exceptions) that the Rijksmuseum offers on its collection search page. Beyond this, rijksmuseum-mcp+ provides the following additional features:
Full-text corpora — full-text search across `description`, `inscription`, `provenance`, `creditLine`, and `curatorialNarrative`. This permits, for example, comparative analyses of the collection's catalogue entries and the curated wall texts.

Semantic search — multilingual, concept/meaning-based explorations across multiple metadata categories; for example, queries like "vanitas symbolism" or "sense of loneliness in domestic interiors" which can't be expressed as structured metadata.

Spatial dimensions — proximity radius searches on the museum's places (`nearPlace`) and size filters (`minWidth`/`maxHeight`) enable spatial queries like "artworks related to places within 25 km of Leiden" or "prints smaller than 10 cm wide", as well as two new parameters (`nearLat` and `nearLon`) that enable spatial queries from arbitrary locations ("find artworks depicting places near me").

Smart searching and ranking — English subject-based queries use morphological stemming (plurals, gerunds, past tenses) to make search terms more forgiving. Large result sets that need to be truncated include faceted counts to allow the AI assistant to suggest additional filters. Textual queries are ranked by relevance instead of catalogue order, while filter-only queries are ordered by their expected importance to most users (drawing on image availability, metadata richness, and `curatorialNarrative`).

More metadata — several metadata fields not searchable from the museum's search portal: `birthPlace`/`deathPlace`, `profession`, creator demographics (gender, birth/death years, biographical notes), title search across all six title variants (brief, full, and former titles × EN/NL, versus the website's brief titles only), and bibliography citations for individual artworks.

Iconclass — access to its own Iconclass database, cross-linked with the Rijksmuseum's metadata, which can be searched and explored not just by notation but also by title, description, parent/child classes, and semantically by concept.

Interactive Image Viewer — view high-resolution images of artworks inline in your chat discussion (N.B. this feature requires Claude Desktop or claude.ai). Zoom, pan, rotate, flip horizontally, or view the image full-screen.

AI image analysis (experimental) — the AI assistant can analyse images visually in combination with its own background knowledge and the artwork's structured data (e.g. "which iconographic elements of the Annunciation in this image have corresponding entries in Iconclass?").

AI image annotation (experimental) — the AI assistant can annotate images in the interactive image viewer with elements it has recognised (e.g. "highlight the biblical scenes depicted in the painting's panels").

User image annotation (experimental) — click inside the image viewer to give it focus, then press `i` or click the rightmost button in the image viewer toolbar. This puts the viewer in interactive mode. Now click and draw a rectangle around an area of interest to you. You may be asked for permission to allow a prompt (with the coordinates of the area you selected) to be written into the chat. Then add your own prompt after it (e.g. "what's inside the highlighted area?" or simply "what is that?").

Structured outputs — as most of the data provided by rijksmuseum-mcp+ is in structured form, it's often straightforward for the AI assistant to also represent or export it in a structured manner (e.g. tabular formats) or draw on it for follow-up tasks, such as visualizations or other AI-enabled analyses.
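The proximity searches described above boil down to a great-circle distance test against geocoded places. A minimal Python sketch of the idea (illustrative only; the place names, coordinates, and function names here are assumptions, not the server's actual code):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Hypothetical geocoded place records: (name, lat, lon)
places = [
    ("Leiden", 52.1601, 4.4970),
    ("The Hague", 52.0705, 4.3007),
    ("Maastricht", 50.8514, 5.6910),
]

center = (52.1601, 4.4970)  # Leiden, as in the example query
within_25km = [name for name, lat, lon in places
               if haversine_km(center[0], center[1], lat, lon) <= 25]
```

A query like "artworks related to places within 25 km of Leiden" then reduces to filtering the museum's geocoded toponyms with a test of this kind.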
Quick Start
The best way to get started is with Claude Desktop or claude.ai by adding a custom 'Connector' to Claude using the URL below. This currently requires a paid ('Pro') or higher subscription from Anthropic.
https://rijksmuseum-mcp-plus-production.up.railway.app/mcp

Go to Settings → Connectors → Add custom connector → name it whatever you like and paste the URL shown above into the Remote MCP Server URL field. You can ignore the Authentication section. Once the connector is configured, set the permissions for its tools (e.g. 'Always allow'). See Anthropic's instructions for more details.
Choosing an AI system
Technically speaking, rijksmuseum-mcp+ is a Model Context Protocol (MCP) server. As such, it also works with many other browser-based chatbots, including those whose large language models (LLMs) can be used without a paid subscription. Mistral's Le Chat is a good example (follow these instructions; note that no authentication is required). It's also compatible with many open-source desktop 'LLM client' applications, such as Jan.ai, that can make use of local or cloud-based LLMs, and with agentic coding tools such as Claude Code or OpenAI Codex. In comparison, OpenAI's ChatGPT still offers only limited, 'developer mode' support for MCP servers, and while Google has announced MCP support for Gemini, it has not indicated when this will be ready.
However, none of these alternatives can view and interact with images from the Rijksmuseum's collections in the chat timeline. For this reason, the best way to use this MCP server remains Claude Desktop or claude.ai. For complex object-recognition tasks, switching to Claude Opus with extended thinking will often produce better results.
Note to developers: rijksmuseum-mcp+ can also be run as a local MCP server in STDIO mode with local copies of its metadata and embedding databases. Please see the technical notes for details.
Sample Queries
After you've added the rijksmuseum-mcp+ 'connector' (aka custom MCP server) to your AI system, test that everything is working correctly by asking your AI assistant to confirm its access: "Which MCP tools can you use to explore the Rijksmuseum's collections?".
After that, ask your own questions:
- What artworks evoke vanitas and mortality?
- A list of works of the interior of the Nieuwe Kerk in Amsterdam
- Is there an iconclass code for mythical creatures?
- Which artworks have a provenance linked to Napoleon Bonaparte?
- I'm looking for works with inscriptions mentioning 'luctor et emergo'
- Show me sculptures in the collection by artists born in Leiden
- Which paintings are wider than 3 meters?
- Does the Rijksmuseum hold any works made in the manner of Hieronymus Bosch?
- What photographs does the collection have by artists born in Indonesia?
- Show me the Roermond Passion and highlight the Betrayal of Judas
For samples of more complex questions, please see the research scenarios.
How it works
To be added. For now, here are references to the available search parameters and metadata categories. These diagrams illustrate the structure and flow of information when using rijksmuseum-mcp+.
Technical notes
To be added. For now, please see this file or consult the DeepWiki entry for this repo.
Tips
Say what you are actually looking for, not how to find it. The assistant generally does better when given a research question than a list of parameters. "What prints were made after paintings by Rembrandt?" works better than "search for prints with technique etching by Rembrandt", because the first framing lets the assistant choose the right combination of tools and strategies.
Try a concept search when structured filters return nothing useful. If searching by subject, Iconclass, or description doesn't find what you're looking for, asking the assistant to try a concept search (semantic search) can find artworks by meaning rather than exact vocabulary terms. This is especially useful for atmospheric or thematic queries. The assistant can also search Iconclass by concept — finding the right notation code by meaning rather than exact keyword — and then use that notation for a precise structured search.
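Concept (semantic) search of this kind is typically implemented by comparing embedding vectors with cosine similarity rather than matching keywords. A toy Python sketch, assuming made-up three-dimensional vectors (real embedding models use hundreds of dimensions, and the labels and values here are invented for illustration):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: the query and two artwork descriptions.
query = [0.9, 0.1, 0.2]  # e.g. "vanitas symbolism"
docs = {
    "skull still life": [0.8, 0.2, 0.1],
    "harbour view":     [0.1, 0.9, 0.3],
}

# Rank artworks by similarity to the query, most similar first.
ranked = sorted(docs, key=lambda k: cosine(query, docs[k]), reverse=True)
```

The point is that "skull still life" ranks first for a vanitas-like query even though no keyword matches; the server's actual embedding pipeline is not shown here.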
The MCP server (rijksmuseum-mcp+) seems stuck. If the server is not responding, it could be that it has been updated and the connection needs to be refreshed. To fix this, in your AI system's settings (e.g. in Settings in Claude Desktop or claude.ai) disconnect and reconnect the server, and then click on Configure to verify that all permissions are still correct. In other MCP clients, you may not be able to disconnect/reconnect. In that case, remove and add the server again.
Known Limitations
Text coverage and language vary by field. About 61% of records include a cataloguer's description (in Dutch). Curatorial wall texts (in English) cover only about 14,000 artworks — mostly highlights and recent acquisitions. Because description is in Dutch, English search terms won't match — use curatorialNarrative for English full-text search, or semantic_search which works across languages. Structured vocabulary labels for subjects, types, materials, and techniques are bilingual for about 70% of terms (using Getty AAT equivalents). Places, events, professions, and production roles are mostly Dutch-only — though major cities, countries, and common roles (e.g. "painter", "photographer") have English labels. The AI assistant knows to try the Dutch term when an English search returns no results (and vice versa).
Iconclass subject classification can be counterintuitive. The Iconclass system assigns subjects to specific branches of a strict hierarchy that does not always match everyday expectations. However, the assistant can search Iconclass by concept as well as by keyword — describing what you're looking for in plain language (e.g. "domestic animals" or "religious suffering") will often find the right notation even when the exact vocabulary term is unknown.
Not all maker relation types are available. The Rijksmuseum's collection search offers 16 maker sub-types (e.g. "Attributed to", "Made after", "Signed by", "Rejected maker"). rijksmuseum-mcp+ currently captures four of these as structured `attributionQualifier` values — "attributed to", "workshop of", "circle of", and "follower of". Three additional qualifiers ("after", "possibly", and a second "circle of" type) are present in the Linked Art data and will be added in a future update. The remaining sub-types ("Signed by", "Manner of", "Rejected maker", "Falsification after") are still being investigated; these may not be available via the public Linked Art API.
Image analysis works better than image annotation. LLMs are generally more accurate at describing the contents of an image than annotating it. For example, the AI assistant will often correctly describe what it can 'see' (even drawing on the detailed description field for guidance) but struggle to place accurate bounding-boxes around this content.
Roadmap
Recent (v0.20):
- Search by creator gender, birth year range, and attribution qualifier (e.g. "works from Rembrandt's workshop")
- Place hierarchy expansion (searching for "Netherlands" now includes Amsterdam, Delft, Haarlem, etc. automatically)
- Artwork details now show creator biographical info: life dates, gender, biographical notes, Wikidata links
- 31,000 places now geocoded with lat/longs (up from 21,000)
- Updated search parameter reference with all 37 filters documented
- Draw a region on the image viewer and ask the AI assistant about it
Soon:
- review capabilities of MCP clients besides Anthropic's Claude
- update documentation
- fine-tune query strategies
- v1.0 release
- paper/presentation
Later:
- investigate support for MCP elicitations
- create a SKILL file for exploring the collection
- investigate adding `attributionQualifier` values "after", "possibly", and "circle of" (second type)
- investigate exporting jpg/png from image viewer together with overlays
- investigate adding RGB pixel analyses of images
Maybe:
- investigate adding `attributionQualifier` values "Signed by", "Manner of", "Rejected maker", "Falsification after"
- investigate incorporating historical exhibition data
- investigate integration with other Linked Open Data resources (e.g. Colonial Collections)
- investigate support for image similarity search (whole image, image segments)
- investigate browsing all related images in the image viewer
- review remaining toponyms without geolocation data
Authors
Arno Bosse — RISE, University of Basel, with Claude Code (Anthropic).
Citation
If you use rijksmuseum-mcp+ in your research, please cite it as follows. A CITATION.cff file is included for use with Zotero, GitHub's "Cite this repository" button, and other reference managers.
APA (7th ed.)
Bosse, A. (2026). rijksmuseum-mcp+ (Version 0.20.0) [Software]. Research and Infrastructure Support (RISE), University of Basel. https://github.com/kintopp/rijksmuseum-mcp-plus
BibTeX
@software{bosse_2026_rijksmuseum_mcp_plus,
author = {Bosse, Arno},
title = {{rijksmuseum-mcp+}},
year = {2026},
version = {0.20.0},
publisher = {Research and Infrastructure Support (RISE), University of Basel},
url = {https://github.com/kintopp/rijksmuseum-mcp-plus},
orcid = {0000-0003-3681-1289},
note = {Developed with Claude Code (Anthropic, \url{https://www.anthropic.com})}
}

Image and Data Credits
Collection data and images are provided by the Rijksmuseum, Amsterdam via their Linked Open Data APIs.
Licensing: Information and data that are no longer (or never were) protected by copyright carry the Public Domain Mark and/or CC0 1.0. Where the Rijksmuseum holds copyright, it generally waives its rights under CC0 1.0; in cases where it does exercise copyright, materials are made available under CC BY 4.0. Materials under third-party copyright without express permission are not made available as open data. Individual licence designations appear on the collection website.
Attribution: The Rijksmuseum considers it good practice to provide attribution and/or source citation via a credit line and data citation, regardless of the licence applied.
Please see the Rijksmuseum's information and data policy for the full terms.
License
This project is licensed under the MIT License.
