MLOps / AgentOps Lifecycle — Production AI Operations

[Lifecycle diagram: 1 Model Selection (select) → 2 Prompt Engineering (engineer) → 3 Integration Build (connect) → 4 Governance Setup (govern) → 5 Deployment (deploy) → 6 Monitor (observe) → 7 Retrain (improve). Continuous loop — monitoring feeds retraining, which feeds new deployment.]

MLOps Best Practices for Agentic AI Operations — The Complete Operational Playbook for Enterprises Running AI Agents at Scale

From MLOps to AgentOps: What Changes

Traditional MLOps addresses the lifecycle of a single model: train, evaluate, deploy, monitor, retrain. AgentOps addresses the lifecycle of an entire system of agents — each with their own model, prompt, tool integrations, governance rules, and performance characteristics — operating as a coordinated workforce on real enterprise processes.

AgentOps inherits all of MLOps' practices and adds several new ones: prompt operations (PromptOps), agent orchestration operations, system-of-record integration management, trust and governance tier management, and — critically — human-in-the-loop operations, which is the management of the interface between autonomous agent decisions and human oversight.

💡The most important mental shift from MLOps to AgentOps: you are not operating models, you are operating a workforce. The relevant analogy is not "deploying a model" — it is "running an operations team." The team needs: defined roles (agent specialisation), clear escalation paths (governance thresholds), performance management (monitoring and metrics), professional development (retraining and improvement), and a manager (the AgentOps engineer or team).

Phase 1: Model and Agent Selection

Selection framework for enterprise agents

Agent selection in enterprise contexts is not primarily about benchmark scores. It is about task fit, governance compatibility, deployment flexibility, and total cost of ownership. The selection criteria for a freight document extraction agent and a financial close reasoning agent are different — even if both use LLMs as their core capability.

| Selection Criterion | What to Evaluate | Red Flags |
| --- | --- | --- |
| Task fit | Quality on production-representative samples — not benchmarks | Benchmark performance far exceeds production-sample performance (selection bias) |
| Output reliability | Format compliance rate, structured output consistency, error rate on edge cases | High variance on identical inputs; format non-compliance requiring frequent retry |
| Inference cost at scale | Cost per 1M tokens at expected production volume | Pricing that looks competitive at low volume but scales poorly (per-request minimums) |
| Deployment flexibility | Self-hosted option, on-prem support, air-gap compatibility | Cloud-only with no data residency controls |
| Governance compatibility | Explainable outputs, audit trail support, human override hooks | Black-box outputs with no reasoning trace |
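One way to make these criteria operational is a weighted scorecard. The sketch below is illustrative only — the criterion names mirror the table above, but the weights and candidate scores are hypothetical, not a recommended standard.

```python
# Hypothetical weighted scorecard for comparing candidate agents/models.
# Weights are illustrative; a real evaluation would set them per use case.
CRITERIA_WEIGHTS = {
    "task_fit": 0.30,
    "output_reliability": 0.25,
    "inference_cost": 0.20,
    "deployment_flexibility": 0.15,
    "governance_compatibility": 0.10,
}

def score_candidate(scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores, each in the range 0.0-1.0."""
    missing = set(CRITERIA_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing criteria: {sorted(missing)}")
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Two hypothetical candidates: B benchmarks well but is cloud-only with
# weak governance hooks, so it scores lower overall.
candidate_a = {"task_fit": 0.90, "output_reliability": 0.80, "inference_cost": 0.60,
               "deployment_flexibility": 1.00, "governance_compatibility": 0.90}
candidate_b = {"task_fit": 0.95, "output_reliability": 0.70, "inference_cost": 0.90,
               "deployment_flexibility": 0.30, "governance_compatibility": 0.40}
```

The point of scoring this way is that deployment and governance criteria can overrule raw task-fit numbers — which is exactly the argument the table makes.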

Phase 2: Prompt Engineering as Operations

Prompt engineering for production agents is an ongoing operational discipline, not a one-time setup task. Prompts drift from their optimal state as: the input distribution shifts (new document types, new suppliers, new regulatory requirements), the model is updated by the provider, business rules change, and edge cases accumulate that the original prompt did not anticipate.

The prompt operations cadence

Monthly prompt audit: review every production prompt for relevance, token efficiency, and edge case coverage. Quarterly prompt optimisation: systematically test improvements using automated evaluation against a held-out sample. Event-driven prompt updates: when a significant distribution shift or model update is detected, trigger an immediate prompt review cycle.

Prompt testing framework

For every prompt change, before production promotion: regression test against a minimum of 200 representative samples from the production distribution. Automated quality gate: new prompt must match or exceed baseline on all primary metrics. A/B test in shadow mode (parallel run, outputs not used) for 48 hours before canary promotion. Human review of a random sample of shadow mode outputs to catch quality regressions not captured by automated metrics.
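The automated quality gate above can be sketched as a simple comparison of candidate metrics against the baseline. The metric names and values here are hypothetical; a real gate would compute them from the 200+ sample regression run.

```python
# Sketch of the "match or exceed baseline on all primary metrics" gate.
# Metric names and values are illustrative.
def passes_quality_gate(baseline: dict[str, float],
                        candidate: dict[str, float],
                        higher_is_better: set[str]) -> bool:
    """Candidate prompt must match or beat baseline on every metric."""
    for metric, base_value in baseline.items():
        cand_value = candidate[metric]
        if metric in higher_is_better:
            if cand_value < base_value:
                return False
        else:  # lower is better, e.g. error rate
            if cand_value > base_value:
                return False
    return True

baseline = {"format_compliance": 0.97, "field_accuracy": 0.94, "error_rate": 0.020}
candidate = {"format_compliance": 0.98, "field_accuracy": 0.94, "error_rate": 0.015}
```

A candidate that regresses on even one metric is rejected outright — shadow-mode A/B testing and human review then guard against regressions the metrics cannot see.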

Phase 3: Integration Build and Maintenance

Enterprise agent integrations — SAP via OData/BAPI, carrier APIs, customs portals, banking APIs — require operational maintenance that is distinct from application code maintenance. External APIs change schemas, deprecate endpoints, change authentication schemes, and modify rate limits without notice that reaches your development team.

Integration contract testing: For every external integration, maintain a contract test suite that validates the integration's expected request/response schema. Run contract tests daily against production integrations. When a contract test fails, it is an early warning of an API change — detectable before it causes a production incident.
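A minimal contract test can be as simple as checking field presence and types against the schema the integration expects. The shipment fields below are hypothetical stand-ins for whatever a real carrier API returns.

```python
# Minimal contract check: does the response still match the schema the
# integration was built against? Field names are illustrative.
EXPECTED_SHIPMENT_SCHEMA = {
    "shipment_id": str,
    "status": str,
    "eta": str,       # ISO 8601 timestamp as a string
    "charges": list,
}

def check_contract(response: dict, schema: dict) -> list[str]:
    """Return a list of violations; an empty list means the contract holds."""
    violations = []
    for field, expected_type in schema.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}")
    return violations
```

Run against a live sandbox call daily: a non-empty violation list is the early warning described above, surfaced before the schema change breaks production traffic.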

Integration health monitoring: Track per-integration success rate, P99 latency, and error-type distribution in real time. An integration whose error rate rises from 0.1% to 2% is a leading indicator of a change in the external system that may cascade to agent failures within hours.
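A sliding-window error-rate tracker is one simple way to implement this. The window size and 1% alert threshold below are illustrative defaults, not platform values.

```python
# Per-integration health tracker over a sliding window of recent calls.
# Window size and alert threshold are illustrative assumptions.
from collections import deque

class IntegrationHealth:
    def __init__(self, window: int = 1000, alert_error_rate: float = 0.01):
        self.outcomes: deque[bool] = deque(maxlen=window)  # True = success
        self.alert_error_rate = alert_error_rate

    def record(self, success: bool) -> None:
        self.outcomes.append(success)

    @property
    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return 1 - sum(self.outcomes) / len(self.outcomes)

    def needs_attention(self) -> bool:
        return self.error_rate > self.alert_error_rate
```

In production this would feed the dashboard described in Phase 4, with one tracker instance per external integration.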

Phase 4: Production Monitoring — The AgentOps Dashboard

The AgentOps monitoring dashboard aggregates metrics across all agents and presents them in a unified view. The dashboard should answer five questions at a glance:

  • Is the workforce healthy? — Aggregate automated processing rate, error rate, and queue depth across all agents
  • Which agents need attention? — Per-agent metrics sorted by deviation from baseline; anomaly-highlighted
  • What is the cost trend? — Daily/weekly cost per agent and per task type; trend vs. plan
  • Are there security events? — Injection attempts, trust tier violations, anomalous output patterns
  • What is the retraining queue status? — Agents with drifting input distributions, models approaching retraining schedules
📋VoltusWave's production AgentOps dashboard monitors 9 specialised agents across freight operations deployments. The dashboard runs on a 5-minute refresh cycle and sends alerts for: automated rate drop of more than 5% over 24 hours; error rate spike above 1.5%; cost increase of more than 20% week-over-week; any trust tier violation or security event. In 18 months of operation, the alert system caught 11 issues that would have caused visible production incidents if undetected — 8 related to external API changes, 3 related to input distribution shifts.
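The alert thresholds in the callout above translate directly into rule checks. The sketch below encodes those rules against hypothetical metric snapshots; the dictionary shape is an assumption, not the actual dashboard's data model.

```python
# Alert rules mirroring the thresholds described above: >5% automated-rate
# drop over 24h, error rate above 1.5%, >20% week-over-week cost increase,
# and any trust tier violation. Metric snapshot shape is illustrative.
def evaluate_alerts(current: dict, previous_day: dict, previous_week: dict) -> list[str]:
    alerts = []
    if previous_day["automated_rate"] - current["automated_rate"] > 0.05:
        alerts.append("automated rate dropped >5% over 24h")
    if current["error_rate"] > 0.015:
        alerts.append("error rate above 1.5%")
    if current["cost"] > previous_week["cost"] * 1.20:
        alerts.append("cost up >20% week-over-week")
    if current.get("trust_tier_violations", 0) > 0:
        alerts.append("trust tier violation detected")
    return alerts

current = {"automated_rate": 0.80, "error_rate": 0.02,
           "cost": 130.0, "trust_tier_violations": 0}
previous_day = {"automated_rate": 0.88}
previous_week = {"cost": 100.0}
```

Evaluated on each refresh cycle, any non-empty result triggers the notification path for the affected agent.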

Phase 5: Retraining and Continuous Improvement

Retraining in an agentic context is broader than model retraining. It includes: model fine-tuning (when domain-specific performance requires it), prompt optimisation (monthly discipline as described above), agent configuration updates (confidence threshold adjustments based on production evidence), escalation criteria refinement (based on human reviewer override patterns), and system-of-record schema updates (when upstream systems change their data structures).
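One common way to detect the input-distribution drift that triggers this cycle is the Population Stability Index (PSI) between a reference distribution and current production inputs. This is a widely used technique, not necessarily the specific detector any given platform uses; the 0.2 threshold is a conventional rule of thumb for significant drift.

```python
# Population Stability Index over pre-binned input proportions.
# PSI > 0.2 is a common rule-of-thumb threshold for significant drift.
import math

def psi(reference: list[float], current: list[float]) -> float:
    """PSI between two binned distributions (each list sums to ~1.0)."""
    total = 0.0
    for ref, cur in zip(reference, current):
        ref = max(ref, 1e-6)  # guard against log(0) on empty bins
        cur = max(cur, 1e-6)
        total += (cur - ref) * math.log(cur / ref)
    return total

# Illustrative bins, e.g. shares of incoming document types.
ref_bins = [0.25, 0.25, 0.25, 0.25]
cur_bins = [0.40, 0.30, 0.20, 0.10]
drift_score = psi(ref_bins, cur_bins)
should_retrain = drift_score > 0.2
```

Run on a schedule per agent, a breach of the threshold would enqueue that agent for the retraining queue shown on the dashboard.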

The human feedback loop

Every case where a human overrides an agent decision is a training signal. Systematically capturing these override events — the agent's decision, the human's decision, and the human's reasoning — creates a continuously growing dataset of cases where the agent's judgment diverged from expert human judgment. This dataset is the highest-quality training signal available for improving agent performance over time, because it is drawn from real production cases where the agent's behaviour was inadequate.
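Capturing override events systematically requires little more than a fixed record shape and an append-only log. The field names below are illustrative, and the storage backend (here, JSONL serialisation) is an assumption.

```python
# Sketch of override-event capture as a feedback dataset. Field names and
# the JSONL storage choice are illustrative assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class OverrideEvent:
    agent_id: str
    case_id: str
    agent_decision: str
    human_decision: str
    human_reasoning: str
    timestamp: str  # ISO 8601

def to_training_record(event: OverrideEvent) -> str:
    """Serialise an override event as one JSONL line for the feedback dataset."""
    return json.dumps(asdict(event), sort_keys=True)
```

Each line pairs the agent's decision with the expert's decision and reasoning — exactly the divergence cases the paragraph above identifies as the highest-quality training signal.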

Phase 6: Agent Sunset and Succession

Agents do not run forever. Business processes change, integrations are deprecated, better models become available, regulatory requirements shift. An agent that has served its purpose needs a disciplined sunset process: traffic reduction over a deprecation window, data migration for any agent-managed state, integration decommissioning, documentation archive, and a formal sign-off that verifies no downstream dependencies remain active.
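The traffic-reduction step of a sunset can be as simple as a scheduled ramp-down. The linear schedule and 30-day window below are illustrative; real deprecation windows depend on the process and its dependencies.

```python
# Illustrative linear ramp-down: the fraction of traffic still routed to
# the sunsetting agent on each day of a deprecation window.
def deprecation_traffic_share(day: int, window_days: int = 30) -> float:
    """Traffic share for the outgoing agent on `day` (0-based)."""
    if window_days <= 0:
        raise ValueError("window_days must be positive")
    remaining = max(window_days - day, 0)
    return remaining / window_days
```

The remainder of the traffic goes to the successor agent, so the ramp doubles as a gradual cutover test before formal sign-off.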

Skipping the sunset process is how "zombie agents" accumulate — agents that are technically still running but not actively monitored or maintained, consuming cost and creating security exposure without delivering value.

The AgentOps Maturity Ladder

| Maturity Level | Characteristics | Next Step |
| --- | --- | --- |
| L1 — Ad Hoc | Agents deployed without formal operations; monitoring by user complaints | Implement structured monitoring dashboard and alert thresholds |
| L2 — Reactive | Monitoring in place; respond to issues as they arise; no proactive operations | Add proactive drift detection and monthly prompt audit cadence |
| L3 — Managed | Regular operations cadence; champion-challenger testing; version control for prompts and models | Automate deployment tracks; implement integration contract testing |
| L4 — Optimising | Automated quality gates; systematic A/B testing; human feedback loop captured | Build feedback-driven retraining pipeline; agent self-improvement from override events |
| L5 — Autonomous Ops | Self-healing agents; automated retraining triggered by drift detection; minimal human operational overhead | Focus on expanding agent scope and capability rather than operations maintenance |
AgentOps at L4–L5

VoltusWave's platform is designed to operate at AgentOps L4–L5: automated quality gates, champion-challenger deployment tracks, human feedback loop capture, and automated retraining triggers. Our engineering team provides L4/L5 AgentOps support as part of every enterprise deployment.