Agent Security for CTOs: The Architecture Decisions That Determine Risk
The security posture of an AI agent platform is determined almost entirely by four architectural decisions made before the first line of agent logic is written: where model inference happens, how data flows from system of record to agent, how agent credentials are scoped and managed, and how the audit trail is generated and stored. Get these four decisions right and the system is fundamentally defensible. Get any one of them wrong and no amount of access controls, encryption, or compliance certification will fully compensate.
This article is written for the CTO and solution architect evaluating AI agent platforms. It covers the specific architectural patterns that create security risk, what the secure alternatives look like, and the questions to ask vendors when you can't see their source code.
Decision 1: Where Model Inference Happens
This is the most consequential security decision in AI agent architecture, and it's one that most platform vendors have already made on your behalf — in their favour.
In a cloud-hosted AI agent platform, when your agent processes an invoice, reads a purchase order, or analyses a patient record, that data is sent to a remote inference endpoint — typically a large language model running on the platform vendor's infrastructure or a third-party model provider's infrastructure. The data crosses your network boundary, enters their environment, and is processed by a model you do not control, on servers you do not own, under a terms of service agreement their legal team wrote.
The secure architecture for enterprises in regulated industries is on-prem inference: the model runs inside your perimeter, on your infrastructure, with your security controls applied. The agent reads from your ERP, the inference happens locally, and the decision is executed against your ERP. The data never leaves. VoltusWave is designed with this architecture as the default for on-prem deployments — not as a premium add-on or a future roadmap item.
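One way to make "the data never leaves" enforceable rather than aspirational is a runtime guardrail that refuses to send an inference request to any endpoint outside the perimeter. The sketch below is illustrative, not any specific platform's implementation — the CIDR ranges and endpoint URLs are assumptions you would replace with your own network policy:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Internal networks where on-prem inference is permitted to run (illustrative;
# substitute the address ranges your network team actually allocates).
INTERNAL_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal_endpoint(url: str) -> bool:
    """Return True only if the inference endpoint resolves inside the perimeter."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        # Resolve the hostname; literal IP addresses resolve to themselves.
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False  # an unresolvable host is treated as external
    return any(addr in net for net in INTERNAL_NETWORKS)

def check_inference_endpoint(url: str) -> None:
    """Raise before any record crosses the network boundary."""
    if not is_internal_endpoint(url):
        raise PermissionError(f"Inference endpoint {url} is outside the perimeter")
```

A check like this belongs in the agent runtime itself, not in documentation: it turns "inference is on-prem" from a contractual promise into a property the system enforces on every request.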
Decision 2: Data Flow from System of Record to Agent
The integration between an AI agent and an enterprise system of record — your SAP, Oracle, or Dynamics instance — is an attack surface. The specific attack surface depends entirely on how that integration is implemented.
The dangerous pattern: direct database access
Some AI agent platforms integrate with ERP systems via direct database connections — read-only replica access, JDBC connections, or custom database views. This pattern is architecturally dangerous for two reasons. First, it exposes the full data model of your ERP to the agent platform, not just the data the agent needs. Second, direct database access typically bypasses the ERP's own access control layer — the permissions model your IT team has spent years configuring is circumvented in a single integration decision.
The dangerous pattern: custom ERP code
Other platforms install custom code inside your ERP — ABAP function modules in SAP, Oracle custom objects, Dynamics extensions. This creates a persistent footprint inside your system of record that exists independently of the AI platform. If you terminate the vendor relationship, the custom code remains. If the vendor's code has a vulnerability, it's in your ERP. If the code is modified in a platform update, it changes your ERP without your explicit approval.
The secure pattern: published API surface only
The secure integration architecture uses only the published, standard API surface of the ERP: OData for SAP, REST APIs for Oracle Cloud and Dynamics 365. This approach has three security properties that the alternatives lack: the ERP's own access control layer applies (the agent can only access what the service account is authorised to access), there is no custom code footprint in the ERP, and the integration is upgrade-safe because it uses the vendor-supported API surface.
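Least privilege can be enforced at the integration layer by restricting each agent to an explicit allowlist of published OData services. The sketch below shows the shape of such a check — the gateway URL is a hypothetical internal host, and the allowlist contents are examples, not a recommendation of which services any agent should have:

```python
from urllib.parse import quote
from urllib.request import Request

# Published OData services each agent is permitted to call (example entries).
AGENT_SERVICE_ALLOWLIST = {
    "invoice-agent": {
        "API_SUPPLIERINVOICE_PROCESS_SRV",
        "API_PURCHASEORDER_PROCESS_SRV",
    },
}

# Hypothetical internal SAP Gateway host.
SAP_GATEWAY_BASE = "https://sap-gateway.internal.example/sap/opu/odata/sap"

def build_odata_request(agent: str, service: str, entity_set: str) -> Request:
    """Build a read request against the ERP's published API surface only.

    The ERP's own authorisation still applies on top of this allowlist:
    the service account behind the agent can only read what it is granted.
    """
    allowed = AGENT_SERVICE_ALLOWLIST.get(agent, set())
    if service not in allowed:
        raise PermissionError(f"{agent} is not authorised for OData service {service}")
    url = f"{SAP_GATEWAY_BASE}/{quote(service)}/{quote(entity_set)}?$format=json"
    return Request(url, headers={"Accept": "application/json"})
```

Note what this deliberately cannot express: there is no way to reach a table, a database view, or a custom function module, because the only path to the ERP is a named, published service.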
Decision 3: Agent Credential Management
Every AI agent that connects to an enterprise system needs credentials — an API key, an OAuth token, a service account password. How those credentials are created, stored, rotated, and scoped is a security decision with significant implications.
The anti-pattern: A single shared service account with broad ERP access, credentials stored in the platform's own database, rotated infrequently or never, shared across all agents regardless of function. This is unfortunately the default for many platforms — it's the easiest implementation path.
The secure pattern: Per-agent service accounts, scoped to the minimum API endpoints the agent needs (least privilege), credentials stored in your secrets manager (not the platform's database), automatic rotation integrated with your identity management infrastructure, and credentials that expire if not rotated within a defined interval.
The practical implication: when you ask a vendor "how does Agent A authenticate to our SAP system?", the answer should include the specific service account name, the specific OData services it has access to, where the credential is stored, how rotation is handled, and what happens to access if the credential expires. Any vague answer is a red flag.
Decision 4: Audit Trail Architecture
The audit trail for an AI agent workforce is not a logging feature — it is the compliance backbone of the entire system. For regulated industries, it may also be a legal requirement. The architecture of that audit trail determines whether it can actually be used for compliance, incident investigation, and governance.
What the audit trail must contain
For each agent action, the audit record should capture:
- the trigger event: what data change or scheduled event caused the agent to activate
- the data read: specifically what fields from what records were read
- the reasoning chain: what the agent evaluated and why it made the decision it made
- the action taken: what was written, updated, or triggered
- the outcome: what the ERP confirms happened
- the timestamp: with timezone and sequence number
- the agent identity: which agent version executed, with configuration hash
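A record like this is best expressed as a fixed schema, so that no field can be silently dropped by a logging change. A sketch, with field names taken from the list above — the serialisation format and example values are assumptions, not a prescribed wire format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AuditRecord:
    trigger_event: str    # what caused the agent to activate
    data_read: list       # fields/records read, e.g. ["Invoice/4711:GrossAmount"]
    reasoning_chain: str  # what the agent evaluated and why
    action_taken: str     # what was written, updated, or triggered
    outcome: str          # what the ERP confirms happened
    timestamp: str        # ISO 8601 with timezone
    sequence: int         # ordering within the trail
    agent_id: str         # which agent version executed
    config_hash: str      # hash of the agent configuration that ran

def serialise(record: AuditRecord) -> bytes:
    """Canonical JSON (sorted keys) so identical records always serialise identically."""
    return json.dumps(asdict(record), sort_keys=True).encode()
```

Because the dataclass is frozen and every field is mandatory, a record missing its reasoning chain or configuration hash fails at construction time rather than surfacing as a gap during an audit.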
Where it must be stored
The audit trail must be stored in your storage, not the platform's storage. The two are not the same. If the audit trail lives in the platform vendor's database, you are dependent on the vendor for compliance queries, audit requests, and incident investigation. If the vendor experiences an outage, your audit trail is unavailable. If you terminate the relationship, audit history becomes a negotiation point. The correct architecture writes audit records to your storage — your S3, your data lake, your SIEM — in real time, in an immutable format.
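Immutability can be approximated in any storage backend by hash-chaining the records: each entry commits to the hash of the entry before it, so any retroactive edit invalidates every later hash. The sketch below shows the mechanism in memory; a real deployment would write the entries to WORM-style storage (for example S3 with Object Lock) or a SIEM, which is an assumption about your environment rather than a requirement of the technique:

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder hash for the first entry in the chain

def append_audit(trail: list, record: dict) -> list:
    """Append a record whose entry hash commits to the previous entry."""
    prev_hash = trail[-1]["entry_hash"] if trail else GENESIS_HASH
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    trail.append({"record": record, "prev_hash": prev_hash, "entry_hash": entry_hash})
    return trail

def verify_trail(trail: list) -> bool:
    """Recompute the chain; a tampered record breaks every hash after it."""
    prev_hash = GENESIS_HASH
    for entry in trail:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = expected
    return True
```

The verification side matters as much as the writing side: during an incident investigation, `verify_trail` gives you a cheap, vendor-independent answer to "has this history been altered?"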
The Architecture Review Checklist
Before signing any AI agent platform agreement, your security architect should be able to answer these seven questions from direct technical review — not from vendor documentation:
1. Where does model inference happen, and does any of our data cross our network boundary to reach it?
2. Does the ERP integration use only the published, vendor-supported API surface, or does it rely on direct database access or custom code installed in the ERP?
3. Does the ERP's own access control layer apply to every request the agents make?
4. Is there a dedicated service account per agent, scoped to the minimum API endpoints that agent needs?
5. Where are agent credentials stored, how are they rotated, and what happens when a credential expires?
6. Does every audit record capture the trigger event, data read, reasoning chain, action taken, outcome, timestamp, and agent identity?
7. Is the audit trail written to our storage, in real time, in an immutable format?
VoltusWave's architecture team will walk through each of the seven checklist questions with your architect directly — with our actual technical documentation, not a sales deck. Bring the hard questions.