Where AI Belongs in Your Architecture Diagram (and Where It Absolutely Doesn’t)
Over the last two years, many architecture diagrams have grown a new box.
Somewhere near the edge of the drawing, there is a rectangle labelled “AI” or “LLM”. Arrows go in, arrows come out, and the rest of the system looks unchanged.
It is an understandable shorthand. But as AI moves from experiments into core systems, that shorthand starts to hide more than it reveals.
Where you place AI in your architecture diagram is not just cosmetic. It encodes decisions about:
- how data flows,
- where control lives,
- which components can fail without bringing everything down,
- and how people are supposed to reason about behaviour.
In this article, we look at where AI belongs in your diagrams, and where it absolutely does not.
The Problem with the “AI Box on the Side”
The simplest pattern we see is:
- Existing services and data stores drawn as they always were.
- A new “AI” box connected to one or two services.
- A belief that this is enough to “show where AI lives”.
This hides several important realities:
- AI is rarely a single component. It shows up in data flows, decision points, and user interactions.
- The most critical risks (privacy, safety, correctness) often sit at boundaries – not inside a generic AI blob.
- People reading the diagram cannot tell what the AI is allowed to do, or what happens when it fails.
When the diagram is this vague, different teams quietly build different mental models of how AI behaves. That makes it harder to reason about failures, responsibilities, and changes.
Thinking in Capabilities, Not a Single AI Component
A more accurate way to think about AI in architecture is as a set of capabilities embedded in specific places, not a single standalone box.
Some examples:
- Understanding and transforming inputs – classification, extraction, normalisation, ranking.
- Assisting humans – summarising context, suggesting next steps, drafting content.
- Automating decisions – approving, routing, prioritising, or escalating.
Each of these belongs in different parts of the diagram:
- Input processing capabilities sit near ingress and integration boundaries.
- Assistance capabilities sit near user-facing services and workflows.
- Automation capabilities sit near decision and control points.
In other words, the right question is rarely “where do we draw the AI box?” but “which boxes and arrows now have AI-influenced behaviour?”
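
To make that distinction concrete, here is a minimal sketch in Python (all names are illustrative, not a reference implementation) of the same three capability types modelled as separate, explicitly named components rather than one shared "AI" dependency:

```python
# Sketch only: three AI capabilities as distinct components, each owned by a
# different part of the system, instead of a single "AI" box on the side.
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Ticket:
    text: str
    queue: Optional[str] = None


class TicketClassifier(Protocol):
    """Input-processing capability: lives at the ingress boundary."""

    def classify(self, ticket: Ticket) -> str: ...


class ReplyDrafter(Protocol):
    """Assistance capability: lives next to the user-facing support tool."""

    def draft_reply(self, ticket: Ticket) -> str: ...


class EscalationRecommender(Protocol):
    """Automation capability: sits at a decision point, and only recommends."""

    def recommend_escalation(self, ticket: Ticket) -> bool: ...
```

Each of these would appear in a different place on the diagram, attached to the component that owns it.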
Where AI Clearly Belongs
From an architectural point of view, AI tends to belong in places where it can enrich information, reduce cognitive load, or improve decisions – without silently taking over control.
Some practical examples of “good” placements:
- At the edges, shaping raw input
  - Extracting entities from unstructured text.
  - Normalising user-submitted data into structured fields.
  - Classifying events or tickets into queues.
- Next to humans, improving interfaces
  - Summarising long histories or logs for support and operations teams.
  - Suggesting next actions in a workflow, while humans still confirm.
  - Drafting communication that humans edit before sending.
- Inside analysis and review loops
  - Highlighting anomalies in metrics or traces.
  - Proposing architectural review points based on system signals.
  - Ranking risks or options for humans to examine.
In diagrams, these show up as AI capabilities attached to existing services or flows, with clear labels about what they do and do not do.
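
As a rough illustration of the first pattern, here is a hedged sketch (the function name and classifier interface are assumptions, not a specific product) of an AI capability attached to an existing ingress flow: the model enriches incoming tickets with a suggested queue, and the pipeline keeps working on a default when no confident suggestion is available.

```python
# Sketch only: AI enriches the flow at the edge, but never blocks it.
DEFAULT_QUEUE = "triage"


def route_ticket(ticket: dict, classifier) -> dict:
    """Attach a suggested queue to an incoming ticket; never fail ingestion."""
    try:
        suggestion = classifier.classify(ticket["text"])  # e.g. "billing"
    except Exception:
        suggestion = None  # model unavailable: fall back, don't fail the flow

    ticket["queue"] = suggestion or DEFAULT_QUEUE
    ticket["queue_source"] = "model" if suggestion else "default"
    return ticket
```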
Where AI Absolutely Doesn’t Belong
There are also places where tucking AI into the diagram is actively dangerous.
A few strong “no” areas:
- As an undefined decision oracle
  - A box that simply says “AI decides” between two critical paths.
  - No explanation of inputs, thresholds, or fallbacks.
  - No separation between “recommends” and “executes”.
- As a hidden controller of critical side effects
  - AI directly initiating irreversible financial actions.
  - AI auto-approving access or permissions without guardrails.
  - AI triggering production changes without human or policy checks.
- As a silent extension of every component
  - Every box annotated with “+ AI” with no explanation.
  - No way to tell which behaviours are deterministic and which are model-driven.
In these cases, the diagram stops serving its purpose. It no longer helps people understand who is responsible, how to reason about failures, or where to put guardrails.
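
The “recommends” versus “executes” line is worth making explicit in code as well as on the diagram. The sketch below uses hypothetical names (a refund recommendation, a policy check, an approver, a payments client) to show the separation; it illustrates the boundary rather than prescribing a design.

```python
# Sketch only: the model produces a recommendation object; side effects live
# behind a separate step with policy checks and human approval.
from dataclasses import dataclass


@dataclass
class RefundRecommendation:
    order_id: str
    amount: float
    rationale: str


def recommend_refund(order: dict, model) -> RefundRecommendation:
    """AI output is only ever a recommendation, never a side effect."""
    return RefundRecommendation(
        order_id=order["id"],
        amount=model.suggest_refund_amount(order),
        rationale=model.explain(order),
    )


def execute_refund(rec: RefundRecommendation, policy, approver, payments) -> bool:
    """Irreversible actions happen here, and only here."""
    if not policy.allows(rec.amount):
        return False
    if not approver.approve(rec):  # human-in-the-loop for irreversible actions
        return False
    payments.issue_refund(rec.order_id, rec.amount)
    return True
```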
Showing Guardrails, Not Just Capabilities
Placing AI correctly is only half the job. The other half is showing guardrails.
For each AI-influenced part of your diagram, try to show:
- Inputs and outputs – what information flows through this capability?
- Controls and limits – rate limits, access controls, quotas.
- Fallbacks – what happens if the AI is unavailable or produces low-confidence results?
- Auditability – where logs and traces are captured.
That might mean drawing:
- separate boxes for policy engines or approval workflows around AI-driven actions,
- explicit edges for human-in-the-loop review,
- or additional components for safety checks (e.g., content filters, anomaly detectors).
If the diagram makes it impossible to answer “what stops this from going wrong?”, AI is probably drawn in the wrong place or at the wrong level of detail.
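
As one way of picturing those guardrails in code, here is a small sketch that assumes a model client returning a label and a confidence score; the threshold, fallback label, and log fields are all illustrative.

```python
# Sketch only: confidence threshold, deterministic fallback, and an audit
# record for every model call.
import logging

logger = logging.getLogger("ai_guardrails")

CONFIDENCE_THRESHOLD = 0.8


def classify_with_guardrails(text: str, model, fallback_label: str = "unknown") -> str:
    try:
        label, confidence = model.classify(text)
    except Exception:
        logger.warning("model unavailable, using fallback %s", fallback_label)
        return fallback_label  # the fallback path is explicit, not an afterthought

    logger.info("model decision: label=%s confidence=%.2f", label, confidence)

    if confidence < CONFIDENCE_THRESHOLD:
        return fallback_label  # low-confidence results degrade to the safe default
    return label
```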
Aligning Diagrams with Real System Boundaries
Architecture diagrams are at their best when they reflect real boundaries:
- deployment units,
- data stores,
- network zones,
- and organisational responsibilities.
When AI is shown as a floating box that does not match any of these, it becomes hard to align:
- who owns which capability,
- where to monitor,
- where to escalate incidents,
- and how to evolve behaviour safely.
By contrast, when AI shows up inside or alongside real components – a service, an edge processor, a review tool – people can connect it to code, pipelines, and operational practices.
How We Approach This at Fentrex
When we review or design systems that include AI, we try to make diagrams answer a few specific questions:
- Where is AI enriching or transforming data?
- Where is AI assisting humans versus automating decisions?
- Which components change behaviour because of AI, and what are their failure modes?
- Where are guardrails, approvals, and audit trails encoded?
We treat AI as a set of capabilities that belong in specific parts of the system, not a mysterious new tier.
That mindset tends to produce diagrams – and systems – that remain understandable even as AI takes on more work.
Questions to Ask About Your Own Diagrams
If you sketch your current architecture diagram and circle every place where AI is involved, what do you see?
A few useful questions:
- Do we have a single vague “AI” box, or are we explicit about capabilities?
- Can someone new to the system tell where AI suggestions end and automated actions begin?
- Are guardrails and fallbacks as visible as the AI components themselves?
- If a particular AI capability failed or misbehaved, would the diagram make it obvious what happens next?
Answering those questions honestly is one of the fastest ways to see where AI truly belongs in your architecture diagram – and where it absolutely should not be hiding.