AGENT SECURITY ARCHITECTURE — CTO REFERENCE MODEL

[Diagram: your perimeter contains the ERP / SoR (your data), the Agent Runtime (execution layer), Model Inference (AI reasoning), the Audit Store (immutable log), the Permission Store (access control), the Governance API (change control), and the VoltusWave Orchestrator. All model inference runs inside the perimeter; no raw data crosses the boundary.]

Agent Security for CTOs: The Architecture Decisions That Determine Risk

Charles Sasi Paul
Founder & CEO, VoltusWave Technologies
April 2026 · 12 min read

The security posture of an AI agent platform is determined almost entirely by four architectural decisions made before the first line of agent logic is written: where model inference happens, how data flows from system of record to agent, how agent credentials are scoped and managed, and how the audit trail is generated and stored. Get these four decisions right and the system is fundamentally defensible. Get any one of them wrong and no amount of access controls, encryption, or compliance certification will fully compensate.

This article is written for the CTO and solution architect evaluating AI agent platforms. It covers the specific architectural patterns that create security risk, what the secure alternatives look like, and the questions to ask vendors when you can't see their source code.

Decision 1: Where Model Inference Happens

This is the most consequential security decision in AI agent architecture, and it's one that most platform vendors have already made on your behalf — in their favour.

In a cloud-hosted AI agent platform, when your agent processes an invoice, reads a purchase order, or analyses a patient record, that data is sent to a remote inference endpoint — typically a large language model running on the platform vendor's infrastructure or a third-party model provider's infrastructure. The data crosses your network boundary, enters their environment, and is processed by a model you do not control, on servers you do not own, under a terms of service agreement their legal team wrote.

🔐 The architectural test: Ask your vendor to trace the path of a single data element — say, a supplier invoice amount — from the moment an agent reads it from your ERP to the moment a decision is made and acted upon. If any step in that trace involves the data leaving your network, you have a data residency and inference boundary problem.
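The test above can be sketched as a simple check over a data-flow trace: model each hop the data element takes, then verify that none of them leaves your network. The hop names and network labels below are illustrative, not from any specific platform.

```python
# Model the path of one data element (e.g. a supplier invoice amount)
# as a sequence of hops, each tagged with the network it runs on.
PERIMETER = "your-network"

def first_boundary_crossing(trace):
    """Return the first hop outside your perimeter, or None if the
    data never leaves your network."""
    for hop in trace:
        if hop["network"] != PERIMETER:
            return hop
    return None

# An on-prem deployment: every hop stays inside the perimeter.
on_prem_trace = [
    {"step": "ERP read (OData)",    "network": "your-network"},
    {"step": "agent runtime",       "network": "your-network"},
    {"step": "model inference",     "network": "your-network"},
    {"step": "decision write-back", "network": "your-network"},
]

# A cloud-hosted platform: the trace fails at the first vendor hop.
cloud_trace = [
    {"step": "ERP read (OData)", "network": "your-network"},
    {"step": "agent runtime",    "network": "vendor-cloud"},
    {"step": "model inference",  "network": "third-party-model-api"},
]

assert first_boundary_crossing(on_prem_trace) is None
assert first_boundary_crossing(cloud_trace)["step"] == "agent runtime"
```

The point of writing the trace down explicitly is that the boundary question becomes binary: either every hop carries your network label, or the architecture has an inference boundary problem.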

The secure architecture for enterprises in regulated industries is on-prem inference: the model runs inside your perimeter, on your infrastructure, with your security controls applied. The agent reads from your ERP, the inference happens locally, and the decision is executed against your ERP. The data never leaves. VoltusWave is designed with this architecture as the default for on-prem deployments — not as a premium add-on or a future roadmap item.

Decision 2: Data Flow from System of Record to Agent

The integration between an AI agent and an enterprise system of record — your SAP, Oracle, or Dynamics instance — is an attack surface. The specific attack surface depends entirely on how that integration is implemented.

The dangerous pattern: direct database access

Some AI agent platforms integrate with ERP systems via direct database connections — read-only replica access, JDBC connections, or custom database views. This pattern is architecturally dangerous for two reasons. First, it exposes the full data model of your ERP to the agent platform, not just the data the agent needs. Second, direct database access typically bypasses the ERP's own access control layer — the permissions model your IT team has spent years configuring is circumvented in a single integration decision.

The dangerous pattern: custom ERP code

Other platforms install custom code inside your ERP — ABAP function modules in SAP, Oracle custom objects, Dynamics extensions. This creates a persistent footprint inside your system of record that exists independently of the AI platform. If you terminate the vendor relationship, the custom code remains. If the vendor's code has a vulnerability, it's in your ERP. If the code is modified in a platform update, it changes your ERP without your explicit approval.

The secure pattern: published API surface only

The secure integration architecture uses only the published, standard API surface of the ERP: OData for SAP, REST APIs for Oracle Cloud and Dynamics 365. This approach has three security properties that the alternatives lack: the ERP's own access control layer applies (the agent can only access what the service account is authorised to access), there is no custom code footprint in the ERP, and the integration is upgrade-safe because it uses the vendor-supported API surface.
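A minimal sketch of what "published API surface only" looks like in practice: building an OData read request against the ERP's standard endpoint, authenticated with a per-agent service account. The base URL, entity set, and account names are placeholders, and the helper function is an assumption for illustration, not a specific platform's API.

```python
import base64
import urllib.request

def build_odata_request(base_url, entity_set, key, user, password):
    """Build a read request against the ERP's published OData surface.

    Because the call goes through OData, the ERP's own authorisation
    layer applies: the service account can only reach entity sets it
    has been explicitly granted. No database connection, no custom
    code inside the ERP.
    """
    url = f"{base_url}/{entity_set}('{key}')?$format=json"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url, headers={"Authorization": f"Basic {token}"})

req = build_odata_request(
    "https://erp.example.internal/odata",  # placeholder host
    "A_PurchaseOrder", "4500000001",
    "AGENT_AP_01", "example-secret")
```

Contrast this with a JDBC connection to a read replica: the request above can only touch `A_PurchaseOrder`, and only if the service account is authorised for it, while the replica exposes every table in the schema.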

VoltusWave integration architecture: Standard published APIs only. OData for SAP (no ABAP, no BAPIs, no custom function modules). REST APIs for Oracle Cloud and Dynamics 365. Per-agent service accounts scoped to the minimum required API endpoints. No direct database access. No custom ERP code. The integration surface is fully auditable, bounded, and upgrade-safe.

Decision 3: Agent Credential Management

Every AI agent that connects to an enterprise system needs credentials — an API key, an OAuth token, a service account password. How those credentials are created, stored, rotated, and scoped is a security decision with significant implications.

The anti-pattern: A single shared service account with broad ERP access, credentials stored in the platform's own database, rotated infrequently or never, shared across all agents regardless of function. This is unfortunately the default for many platforms — it's the easiest implementation path.

The secure pattern: Per-agent service accounts, scoped to the minimum API endpoints the agent needs (least privilege), credentials stored in your secrets manager (not the platform's database), automatic rotation integrated with your identity management infrastructure, and credentials that expire if not rotated within a defined interval.
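The secure pattern can be sketched as a small authorisation check: one credential per agent, a scoped list of permitted endpoints, and a hard expiry so a credential that misses its rotation window stops working instead of lingering. The `AgentCredential` shape, field names, and 30-day policy below are assumptions for illustration, not a specific product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_CREDENTIAL_AGE = timedelta(days=30)  # example rotation policy

@dataclass
class AgentCredential:
    agent_id: str          # one service account per agent
    service_account: str
    secret: str            # fetched from your secrets manager at runtime
    scopes: tuple          # minimum API endpoints this agent may call
    rotated_at: datetime

    def is_expired(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now - self.rotated_at > MAX_CREDENTIAL_AGE

def authorise(cred, endpoint):
    """Least privilege: deny anything outside the agent's scoped
    endpoints, and refuse credentials past their rotation window."""
    return not cred.is_expired() and endpoint in cred.scopes
```

Note the failure mode this buys you: a compromised credential is bounded both in scope (only the endpoints one agent needs) and in time (it dies at the rotation deadline), whereas the shared-account anti-pattern is bounded by neither.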

The practical implication: when you ask a vendor "how does Agent A authenticate to our SAP system?", the answer should include the specific service account name, the specific OData services it has access to, where the credential is stored, how rotation is handled, and what happens to access if the credential expires. Any vague answer is a red flag.

Decision 4: Audit Trail Architecture

The audit trail for an AI agent workforce is not a logging feature — it is the compliance backbone of the entire system. For regulated industries, it may also be a legal requirement. The architecture of that audit trail determines whether it can actually be used for compliance, incident investigation, and governance.

What the audit trail must contain

For each agent action, the audit record should capture:

- the trigger event (what data change or scheduled event caused the agent to activate)
- the data read (specifically what fields from what records were read)
- the reasoning chain (what the agent evaluated and why it made the decision it made)
- the action taken (what was written, updated, or triggered)
- the outcome (what the ERP confirms happened)
- the timestamp (with timezone and sequence number)
- the agent identity (which agent version executed, with configuration hash)
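As a concrete sketch, an audit record carrying every field listed above might be assembled like this. The field names and the SHA-256 configuration hash are illustrative choices, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_audit_record(trigger, data_read, reasoning, action, outcome,
                      agent_id, agent_version, agent_config, sequence):
    """One complete, self-describing record per agent action."""
    config_hash = hashlib.sha256(
        json.dumps(agent_config, sort_keys=True).encode()).hexdigest()
    return {
        "trigger": trigger,        # what activated the agent
        "data_read": data_read,    # fields and records read
        "reasoning": reasoning,    # what was evaluated and why
        "action": action,          # what was written/updated/triggered
        "outcome": outcome,        # what the ERP confirms happened
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sequence": sequence,      # ordering within the audit stream
        "agent": {"id": agent_id, "version": agent_version,
                  "config_hash": config_hash},  # exact agent identity
    }

record = make_audit_record(
    trigger="invoice.created",
    data_read=["A_SupplierInvoice/4711: amount, supplier_id"],
    reasoning="matched PO 4500000001 within 2% tolerance",
    action="post_journal_entry",
    outcome="posted: document 100004711",
    agent_id="ap-invoice-agent", agent_version="1.4.2",
    agent_config={"tolerance": 0.02}, sequence=1)
```

The configuration hash matters more than it looks: it pins the record to the exact agent configuration that produced the decision, so a later configuration change cannot silently rewrite the meaning of historical records.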

Where it must be stored

The audit trail must be stored in your storage, not the platform's storage. The two are not the same. If the audit trail lives in the platform vendor's database, you are dependent on the vendor for compliance queries, audit requests, and incident investigation. If the vendor experiences an outage, your audit trail is unavailable. If you terminate the relationship, audit history becomes a negotiation point. The correct architecture writes audit records to your storage — your S3, your data lake, your SIEM — in real time, in an immutable format.
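One way to make "immutable format" concrete is hash chaining: each record carries the hash of the previous one, so any retroactive edit breaks the chain and is detectable. This is a minimal sketch of the idea; a real deployment would layer it on WORM or object-lock storage rather than rely on the chain alone.

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record to the chain, linking it to its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = dict(record, prev_hash=prev_hash)
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify_chain(chain):
    """Recompute every hash; any tampered record breaks verification."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, {"action": "post_invoice", "agent": "ap-agent"})
append_record(chain, {"action": "release_payment", "agent": "ap-agent"})
```

Because the records land in your S3 bucket, data lake, or SIEM as they are produced, a vendor outage or contract termination leaves the compliance record intact and queryable on your side.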

⚠️ A specific question to ask: "If we terminate our agreement with you today, what happens to our audit trail from the past 18 months? Who owns it, where is it stored, in what format, and can we export it independently of your platform?" A vendor who cannot answer this cleanly does not have an architecture you want to bet your compliance on.

The Architecture Review Checklist

Before signing any AI agent platform agreement, your security architect should be able to answer these questions from direct technical review — not from vendor documentation:

1. Can you trace a single data element from ERP read through model inference to action execution, with the network boundary explicitly shown at each step?
2. Is there any custom code installed in the ERP? If yes, what is it, who maintains it, and what happens to it if we terminate?
3. How many service accounts does the platform use to connect to our ERP? What is the access scope of each?
4. Where are agent credentials stored? How are they rotated? What is the process if a credential is compromised?
5. Where is the audit trail stored? Can you show us a sample audit record with all fields populated? Can we write a compliance query against it independently?
6. What is the rollback procedure if an agent posts a journal entry or executes a payment that needs to be reversed? How long does it take? Who can initiate it?
7. Who in your organisation can access our production data? Under what circumstances? With what approval process?

For CTOs & Solution Architects

VoltusWave's architecture team will walk through each of the seven checklist questions with your architect directly — with our actual technical documentation, not a sales deck. Bring the hard questions.