Imagine being able to talk to an openEHR expert whenever you need to, asking for clinical models, tweaking archetypes, or prototyping new compositions in plain language, and having the underlying formalisms handled for you.
That is the promise of our brand new openEHR Assistant MCP Server: a bridge between openEHR’s rich ecosystem of archetypes, templates, and modelling workflows, and the new generation of LLM‑based assistants.
What does it do? The TL;DR
The openEHR Assistant MCP Server exposes openEHR knowledge artefacts and modelling workflows to LLM-based assistants, using the Model Context Protocol. By combining deterministic tools, canonical resources, and guided prompts, it supports exploration, comparison, explanation, and early-stage archetype or template design, while remaining aligned with openEHR semantics and governance.
With this assistant, we aim to make openEHR clinical modelling, and working with the openEHR Specifications, faster and more accessible to newcomers. In this article, we’ll first dive into some of the basics (what is MCP and why does it matter?). Next, we’ll discuss the promising combination of MCP and openEHR and how this has led to the development of the openEHR Assistant MCP Server.
MCP in a nutshell: servers, clients, and LLM engines
The Model Context Protocol (MCP) is an open standard for connecting AI applications to external tools, data, and resources through a consistent interface. In MCP terms, the host is the AI application that embeds an LLM, a client is the connector inside that host, and an MCP server is the service that exposes capabilities such as tool calls (‘functions’), retrievable resources (‘URIs’), and structured prompts. The intent is similar to what the Language Server Protocol (LSP) did for IDE language features: a shared contract that allows ecosystems to grow around interoperable integrations.
In practice, this means an MCP-compatible client (for example Claude Desktop, Cursor, IDE integrations, or LibreChat) can connect to a domain-specific MCP server and immediately gain new skills: search, retrieval, validation, transformation, and action – all as first-class LLM tools. Instead of manually embedding large amounts of context into prompts, the assistant can fetch authoritative artefacts on demand and re-check details deterministically.
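To make this concrete, here is a minimal sketch of the JSON-RPC 2.0 messages a client exchanges with an MCP server: one request to discover the available tools, and one to invoke a tool. The method names (tools/list, tools/call) come from the MCP specification; the tool name and arguments shown here are placeholders rather than any particular server’s API.

```php
<?php
// Minimal, illustrative sketch of MCP's JSON-RPC message shapes.
// 'tools/list' and 'tools/call' are defined by the MCP specification;
// the tool name and arguments below are hypothetical placeholders.

$discoverTools = [
    'jsonrpc' => '2.0',
    'id'      => 1,
    'method'  => 'tools/list',
];

$callTool = [
    'jsonrpc' => '2.0',
    'id'      => 2,
    'method'  => 'tools/call',
    'params'  => [
        'name'      => 'search_documents',            // placeholder tool name
        'arguments' => ['query' => 'blood pressure'], // placeholder arguments
    ],
];

// A host sends these over the agreed transport (e.g. stdio or HTTP) and
// receives structured results the LLM can inspect and re-check.
echo json_encode($discoverTools, JSON_PRETTY_PRINT), PHP_EOL;
echo json_encode($callTool, JSON_PRETTY_PRINT), PHP_EOL;
```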
Why MCP matters, and where things were heading by the beginning of 2026
Most contemporary ‘Agentic AI’ systems can be described as LLM-driven loops that observe → plan → act → reflect, typically by invoking tools, inspecting results, and iterating. The hard part isn’t generating fluent text – it’s making those interactions reliable, composable, and auditable.
During 2025, the agentic hype cycle was in full swing. Analysts warned that many ‘agentic AI’ initiatives would be scrapped when costs and unclear ROI met operational reality – and that ‘agent washing’ (rebranding ordinary chatbots as agents) was widespread. The implication isn’t that agents are a dead end; it’s that the next phase is about standards, governance, and engineering discipline. Open protocols like MCP – and broader standardisation efforts – make it easier to build agents that can be tested, monitored, secured, and swapped between ecosystems.
As language models improved, the main bottleneck shifted toward integration: how reliably an agent can discover tools, call them correctly, verify results, and remain observable and governable in production environments.
Toward the very end of 2025, multiple major AI organisations publicly emphasised agent interoperability and the importance of open standards under neutral governance, signalling a longer-term direction: agents that can ‘plug into’ environments (APIs, knowledge bases, internal tools) using shared protocols rather than proprietary hooks.
MCP addresses this gap by providing a shared, open protocol for tool and resource integration, reducing fragmentation and making it feasible to build agents that behave consistently across different hosts and environments.
Why openEHR is complex: two-level modelling and accumulated clinical knowledge
openEHR is built around a deliberate separation between a stable technical foundation and evolving clinical knowledge – a powerful, but complex combination. The Reference Model (RM) defines the generic structures of health record data, while clinical meaning is expressed through archetypes and templates that constrain and specialise those structures. This approach, commonly referred to as two-level modelling, allows systems to remain technically stable while clinical models evolve over time.
Over the years, this has resulted in a substantial body of published artefacts that capture years of clinical, informatics, and governance experience. Archetypes are not merely schemas; they encode consensus on what should be recorded, how it should be constrained, and how it should interoperate semantically. The Clinical Knowledge Manager (CKM) acts as the focal point for this ecosystem, supporting review, versioning, translation, and lifecycle governance of these artefacts.
The complexity is therefore intentional rather than accidental. It is the natural consequence of combining formal models, clinical semantics, and long-term interoperability goals. From a modelling perspective, this richness enables reuse and safety – but it also raises the entry barrier for newcomers.
Why this is hard for new developers and modellers
Successful openEHR implementation typically requires a blend of skills that are rarely developed together early on:
- Clinical domain understanding, beyond simple data capture.
- Modelling literacy, including constraints, specialisation, and reuse patterns.
- Governance awareness, understanding why shared artefacts matter and how change is managed.
- Tooling fluency, spanning CKM navigation, template composition, terminology lookup, and validation workflows.
For developers coming from traditional software backgrounds, progress often depends less on writing code and more on understanding and navigating knowledge artefacts. This is precisely the space where AI-assisted workflows can add value: not by replacing governance or expertise, but by accelerating exploration, retrieval, explanation, and early-stage drafting.
Introducing openehr-assistant-mcp
The openEHR Assistant MCP Server aims to make openEHR’s modelling surface area more approachable inside modern AI clients. It exposes openEHR-specific capabilities as MCP tools, resources, and guided prompts, allowing LLM-based assistants to interact with canonical artefacts in a structured and repeatable way.

In practice, the MCP server can be used from a growing ecosystem of MCP-compatible clients. It works equally well with conversational assistants and IDE-integrated tools such as Claude, Claude Code, Cursor, LibreChat, and IntelliJ-based IDE environments. This allows the same openEHR-aware capabilities to be available during modelling sessions, documentation work, and hands-on development. Having immediate access to archetypes, specifications, explanations, comparisons, and draft artefacts directly from the working environment enables rapid iteration and tight feedback loops between modelling and implementation. This is particularly valuable during early project phases, proof-of-concept development, and exploratory design work, where both requirements and models evolve quickly.

On the technical side, the server is implemented in PHP 8.4, follows PSR conventions, and is distributed with Docker support for both local experimentation and deployment.

What it exposes to an LLM client
Tools (deterministic actions)
CKM (Clinical Knowledge Manager)
- ckm_archetype_search – list archetypes matching search criteria
- ckm_archetype_get – retrieve a CKM archetype by identifier
- ckm_template_search – list templates matching criteria
- ckm_template_get – retrieve a CKM template by identifier

Terminology
- terminology_resolve – resolve an openEHR concept identifier to human-readable rubrics and groups

Type specifications
- type_specification_search – search bundled openEHR type specifications
- type_specification_get – retrieve a type specification as BMM JSON
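As an illustration, a client might chain two of these tools: first search CKM for candidate archetypes, then retrieve one by identifier. The sketch below builds plausible tools/call payloads; the argument names (query, id) are assumptions about the tool schemas, not documented parameters.

```php
<?php
declare(strict_types=1);

// Illustrative only: chaining the CKM tools via MCP 'tools/call' requests.
// The argument names ('query', 'id') are assumptions; consult the server's
// published tool schemas for the authoritative parameters.

function toolCall(int $id, string $tool, array $arguments): string
{
    return json_encode([
        'jsonrpc' => '2.0',
        'id'      => $id,
        'method'  => 'tools/call',
        'params'  => ['name' => $tool, 'arguments' => $arguments],
    ], JSON_PRETTY_PRINT | JSON_THROW_ON_ERROR);
}

// 1. Search CKM for candidate archetypes.
echo toolCall(10, 'ckm_archetype_search', ['query' => 'delirium screening']), PHP_EOL;

// 2. Retrieve the chosen archetype by identifier.
echo toolCall(11, 'ckm_archetype_get', ['id' => 'openEHR-EHR-OBSERVATION.four_a_test.v1']), PHP_EOL;
```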
Prompts, guides, and nudging LLM behaviour
Beyond raw tools, the server provides guided prompts that deliberately orchestrate tool usage in combination with embedded guides. A prompt does not simply ask the LLM to ‘explain’ or ‘design’ something; it nudges the model toward a consistent procedure, for example (a sketch of fetching such a prompt follows the list):
- Search for relevant artefacts.
- Retrieve authoritative definitions.
- Interpret structure and constraints.
- Summarise or critique using a predefined checklist.
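A minimal sketch of how a client would fetch such a guided prompt via MCP’s prompts/get method is shown below; the prompt name (explain_archetype) and its argument are hypothetical, not the server’s actual prompt catalogue.

```php
<?php
// Illustrative only: fetching a guided prompt via MCP's 'prompts/get' method.
// The prompt name and argument below are hypothetical examples.

$getPrompt = [
    'jsonrpc' => '2.0',
    'id'      => 20,
    'method'  => 'prompts/get',
    'params'  => [
        'name'      => 'explain_archetype',  // hypothetical prompt name
        'arguments' => [
            'archetype_id' => 'openEHR-EHR-OBSERVATION.progress_note.v1',  // hypothetical argument
        ],
    ],
];

// The server responds with a sequence of messages that walk the model
// through search -> retrieve -> interpret -> summarise, as described above.
echo json_encode($getPrompt, JSON_PRETTY_PRINT), PHP_EOL;
```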
Resources (fetchable canonical artefacts)
The server exposes openehr://... resource URIs so an MCP client can fetch artefacts on demand (a sketch of reading one follows the list):
- Guides (Markdown): openehr://guides/{category}/{name}
- Type specifications (BMM JSON): openehr://spec/type/{component}/{name}
- Terminology projections: openehr://terminology/{type}/{id}
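The sketch below shows how a client would read such artefacts via MCP’s resources/read method. The URI patterns follow the list above, but the concrete category, component, and identifier values are made-up examples rather than the server’s actual catalogue.

```php
<?php
// Illustrative only: reading canonical artefacts via MCP's 'resources/read'
// method. URI templates follow the list above; the concrete values plugged
// into them here are hypothetical.

$uris = [
    'openehr://guides/modelling/archetype-design',       // hypothetical guide
    'openehr://spec/type/rm/OBSERVATION',                // hypothetical type specification
    'openehr://terminology/group/composition_category',  // hypothetical terminology projection
];

foreach ($uris as $i => $uri) {
    echo json_encode([
        'jsonrpc' => '2.0',
        'id'      => 30 + $i,
        'method'  => 'resources/read',
        'params'  => ['uri' => $uri],
    ], JSON_PRETTY_PRINT), PHP_EOL;
}
```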
What problems it aims to solve
From a workflow perspective, these capabilities target common sources of friction in openEHR projects:
- Discovering and understanding existing archetypes and templates.
- Navigating specifications and type hierarchies efficiently.
- Resolving terminology identifiers and bindings.
- Iterating on draft artefacts with faster feedback loops.
- Providing structured review scaffolding for modelling discussions.
Concrete use cases enabled by the MCP server
The following use cases are derived from real prompt–response interactions using the openEHR Assistant MCP Server. They illustrate how MCP tools, resources, and guided prompts support common openEHR modelling and analysis tasks.

Use case 1: Comparing similar archetypes and understanding semantic intent
Two published archetypes on CKM may appear similar at first glance: openEHR-EHR-OBSERVATION.progress_note.v1 and openEHR-EHR-EVALUATION.clinical_synopsis.v1.
Using the MCP server, the assistant retrieves both archetypes, analyses their Reference Model entry classes, and explains the semantic distinction between observation and evaluation. It compares their structural differences (event-based vs. non-event-based data), explains the implications for temporal modelling, and summarises when each should be used in practice.
The result is a clear mapping between modelling choice and clinical intent, helping modellers avoid semantic misuse while selecting appropriate archetypes for progress notes, summaries, and discharge documentation.
Claude transcript: Progress note vs clinical synopsis archetypes
Use case 2: Explaining a specialised clinical assessment archetype
When encountering a specialised archetype such as openEHR-EHR-OBSERVATION.four_a_test.v1, the assistant can explain both the clinical concept and the formal model.
The MCP workflow allows the assistant to describe the 4AT delirium screening tool, break down its components and scoring logic, and position it relative to related CKM archetypes such as ACVPU, NEWS2, and the Glasgow Coma Scale. It also demonstrates how the archetype can be combined with others in admission, emergency, geriatric, or postoperative templates.
This supports developers and modellers in understanding not just what the archetype contains, but how and why it is used in clinical workflows.
Claude transcript: Four A test archetype overview and integration
Use case 3: Designing a new archetype when none exists
In some cases, no suitable archetype exists on the CKM; for example, when modelling the 6-minute walk test (6MWT). Here, the assistant confirms the absence of an existing archetype, reviews relevant clinical guidelines and literature, and drafts a new openEHR archetype aligned with established modelling patterns. Where possible, existing CKM cluster archetypes are reused via slots, and scientific references are included directly in the model. The assistant also recommends an appropriate COMPOSITION archetype for capturing the data during a clinical encounter. While such output is intended as a draft or proof-of-concept, it significantly accelerates early modelling and design discussions.

Claude transcript: 6-minute walk test data capture archetype

Use case 4: Bridging openEHR models and application code
The assistant can also translate openEHR artefacts into code- and application-friendly representations. In this use case, it analyses the openEHR-EHR-EVALUATION.precaution.v1 archetype and produces a PHP DTO class that mirrors the archetype structure while adapting Reference Model semantics into idiomatic application code.
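For illustration, a simplified version of such a DTO might look like the sketch below. The property names are assumptions about the precaution concept rather than the exact elements of the published archetype or of the class generated in the transcript.

```php
<?php
declare(strict_types=1);

/**
 * Simplified, illustrative DTO mirroring openEHR-EHR-EVALUATION.precaution.v1.
 * Property names here are assumptions made for this example; the published
 * archetype (and the generated class in the transcript) remain authoritative.
 */
final readonly class PrecautionDto
{
    public function __construct(
        public string $precaution,                       // the precaution being flagged (coded or free text)
        public ?string $status = null,                   // e.g. active / inactive (assumed element)
        public ?string $description = null,              // narrative description (assumed element)
        public ?\DateTimeImmutable $lastUpdated = null,  // assumed metadata field
    ) {
    }
}

// Example usage with hypothetical values:
$dto = new PrecautionDto(
    precaution: 'Risk of falls',
    status: 'Active',
    description: 'Patient requires mobility assistance.',
);
```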
Claude transcript: openEHR Precaution DTO class

These examples show that, by reducing friction throughout the design process, MCP-enabled workflows help bridge the gap between initial questions and informed drafts. While human expertise remains essential for validation, and AI-generated outputs may occasionally require semantic or technical correction, they provide a structured and semantically grounded foundation that is far more efficient than starting from scratch. This approach allows teams to produce POC drafts rapidly, catching design issues early and significantly accelerating the overall feedback loop.
Closing: reflections and an invitation
Working on this MCP server reinforced the idea that the real leverage of AI in complex domains such as healthcare lies in integration and discipline, not in replacing expertise. By grounding agentic behaviour in deterministic tools and curated guidance, it becomes possible to experiment safely and productively within established governance frameworks.
If you are working with openEHR, exploring MCP, or simply interested in applying agentic patterns to complex knowledge domains such as clinical modelling, you are invited to experiment, provide feedback, and contribute to the openEHR Assistant project on GitHub.

