Every integration you've ever built was a bet that you knew, upfront, exactly how producer and consumer would talk. For fifty years, that bet paid off. It's about to stop paying off — because the consumer now has its own opinions about how to use your system.

Thesis: When LLM-powered agents can discover, understand, and compose capabilities at runtime, integrations shift from pre-programmed contracts to dynamically negotiated workflows. This changes the economics, maintenance burden, and architecture of enterprise integration.


This is Post 4 of 6 in The Interface Inflection series. Previously: Post 1: Interfaces Are Changing explored the paradigm shift. Post 2: MCP: The Semantic Data Layer covered how MCP makes capabilities legible to AI. Post 3: Headless AI: Every App Is a Head showed how multiple interfaces consume the same capability layer.


The Old Integration Model

Traditional integrations follow a predictable, expensive pattern: two teams agree on a message format, build adapters on both sides, test the handshake, deploy, and maintain it indefinitely. Each new consumer means another project. Each schema change means coordinated updates across every integration that touches it.

The math is punishing. An enterprise with 50 internal systems and 10 external partners doesn't need 60 integrations; it needs some subset of the 60×59 = 3,540 possible point-to-point connections, each built and maintained independently. Integration teams spend more time updating existing connections than building new ones.
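The growth is quadratic: every ordered producer-consumer pair among n systems is a potential connection that somebody has to build and keep working. A two-line sketch of the count:

```python
def point_to_point(n: int) -> int:
    # Every ordered producer -> consumer pair among n systems is a
    # potential integration that must be maintained independently.
    return n * (n - 1)

print(point_to_point(60))  # 50 internal systems + 10 partners -> 3540
```

Doubling the number of systems roughly quadruples the number of possible connections, which is why integration teams drown in maintenance long before they run out of systems to connect.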

This model worked because there was no alternative. If the consumer was a known application with a known purpose, you could afford to hardcode the contract. But when the consumer is an agent whose purpose changes with every user prompt, the static contract model breaks down.

The New Model: Discover, Understand, Compose, Execute

With MCP and LLMs, the integration flow inverts. Instead of pre-programming every connection, the agent negotiates the integration at runtime.

sequenceDiagram
    participant User
    participant Agent as LLM Agent
    participant MCP as MCP Server
    participant API as Backend Systems

    User->>Agent: "Set up IT support with ticket creation"
    Agent->>MCP: List available tools
    MCP-->>Agent: Tool schemas + descriptions
    Agent->>Agent: Select relevant tools for intent
    Agent->>Agent: Compose workflow from tools
    Agent->>MCP: Execute tool calls
    MCP->>API: Perform operations
    API-->>MCP: Results
    MCP-->>Agent: Structured responses
    Agent->>Agent: Adapt based on results
    Agent-->>User: Completed workflow

The six-step cycle:

  1. Discover — agent enumerates available tools from the MCP server
  2. Understand — agent reads tool schemas and semantic descriptions
  3. Select — agent picks relevant tools based on the user's intent
  4. Compose — agent sequences tools into a workflow
  5. Execute — agent calls tools and handles results
  6. Adapt — agent notices new capabilities and adjusts approach

None of this is pre-programmed. The integration is negotiated at runtime.
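The cycle above can be sketched in a few lines of Python. The tool registry and the keyword matching are stand-ins for a real MCP server and an LLM's reasoning; the names and shapes here are illustrative, not the MCP SDK's actual API.

```python
# Minimal sketch of discover -> understand -> select -> compose -> execute.
# A real agent would reason over tool descriptions with an LLM; a keyword
# map stands in for that here. All names are hypothetical.

TOOLS = {  # 1. Discover: what the "server" advertises, with schemas (2. Understand)
    "create_ticket": {
        "description": "Create an IT support ticket",
        "schema": {"title": "str"},
        "fn": lambda args: {"ticket_id": 101, **args},
    },
    "notify_user": {
        "description": "Notify a user about an event",
        "schema": {"message": "str"},
        "fn": lambda args: {"sent": True, **args},
    },
    "query_pipeline": {
        "description": "Query the sales pipeline",
        "schema": {"quarter": "str"},
        "fn": lambda args: {"deals": 7},
    },
}

def select_tools(intent: str) -> list[str]:
    # 3. Select: pick tools relevant to the user's intent.
    keywords = {"ticket": "create_ticket", "notify": "notify_user",
                "pipeline": "query_pipeline"}
    return [tool for word, tool in keywords.items() if word in intent.lower()]

def run(intent: str) -> list[dict]:
    # 4-5. Compose + Execute: sequence the selected tools, collect results.
    # (6. Adapt would re-enter this loop when new tools appear in TOOLS.)
    args_for = {"create_ticket": {"title": intent},
                "notify_user": {"message": intent},
                "query_pipeline": {"quarter": "current"}}
    return [TOOLS[name]["fn"](args_for[name]) for name in select_tools(intent)]

results = run("Open a ticket for the VPN outage and notify on-call")
print(results)
```

A different intent string selects a different subset of the same registry, which is the whole point: the workflow is a function of the request, not of the code.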

The Micronization of Integrations

The old model: build a comprehensive Salesforce integration. Months of work. Covers 80% of use cases. The other 20% gathers dust in a backlog nobody will prioritize.

The new model: expose Salesforce capabilities as individual MCP tools — create opportunity, update contact, query pipeline, log activity. Agents compose only what they need, when they need it. The integration is assembled just in time from atomic capabilities.
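What that capability surface might look like as tool declarations, sketched below. The tool names and schema fields are hypothetical, not Salesforce's object model or MCP's exact wire format (a real MCP server exposes JSON Schema):

```python
# Illustrative tool declarations for a Salesforce-style capability surface.
# Each tool is one atomic capability an agent can discover and compose.
SALESFORCE_TOOLS = [
    {
        "name": "create_opportunity",
        "description": "Create a sales opportunity for an account",
        "input_schema": {"account_id": "string", "amount": "number",
                         "stage": "string"},
    },
    {
        "name": "update_contact",
        "description": "Update fields on an existing contact",
        "input_schema": {"contact_id": "string", "fields": "object"},
    },
    {
        "name": "query_pipeline",
        "description": "Return open opportunities, optionally filtered by quarter",
        "input_schema": {"quarter": "string"},
    },
    {
        "name": "log_activity",
        "description": "Log a call, email, or meeting against a record",
        "input_schema": {"record_id": "string", "activity_type": "string",
                         "notes": "string"},
    },
]

print([t["name"] for t in SALESFORCE_TOOLS])
```

Notice what's absent: there is no "integration" object. There are only capabilities, and the integration is whatever subset an agent assembles for the request in front of it.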

This is the integration equivalent of microservices — except the "services" are individual capabilities, and the "orchestrator" is an LLM that understands what the user actually wants.

Integrations used to be built once and maintained forever. Now they're composed on demand and adapted per request.

The Agent That Found a New API

Here's a real story from the Ema platform. An MCP consumer had a function for cloning data between environments. A developer later noticed that the agent was using a new API endpoint for copy operations that nobody had explicitly integrated. The agent discovered the endpoint through MCP, understood its schema from the tool description, and started using it because it was a better fit for the operation.

From the user's perspective, nothing changed — same request, same result. From an integration perspective, everything changed — no new code, no configuration update, no deployment. The agent discovered and negotiated a better approach on its own.

This is the core shift: integrations become living, adaptive connections rather than static wiring.

Dynamic Composition in Practice

Consider a concrete workflow using the Ema MCP toolkit:

  1. Agent receives: "Set up an IT support bot, test it, then promote to production"
  2. Agent calls catalog(type="templates") — finds IT support templates
  3. Agent calls persona(method="create") with the selected template
  4. Agent calls persona(data={method:"upload"}) to add knowledge base docs
  5. Agent calls workflow(mode="get") then workflow(mode="validate") to verify configuration
  6. Agent calls sync(method="preview", from="demo", to="prod") to preview promotion
  7. Agent requests human confirmation, then sync(method="execute")

Each step was composed on the fly. The agent chose which tools to call, in what order, based on the user's intent and the results of each prior step. A different request — "clone the HR bot and customize it for APAC" — would produce an entirely different tool sequence from the same capability surface.
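The sequence above can be written down as the plan an agent might compose, with the human-confirmation gate made explicit. The tool names come from the steps above; `dispatch` and its gating logic are hypothetical stand-ins for real MCP invocation:

```python
# The workflow from the steps above, expressed as a composed plan.
# dispatch() is an illustrative stand-in for real MCP tool invocation.
PLAN = [
    ("catalog",  {"type": "templates"}),
    ("persona",  {"method": "create"}),
    ("persona",  {"data": {"method": "upload"}}),
    ("workflow", {"mode": "get"}),
    ("workflow", {"mode": "validate"}),
    ("sync",     {"method": "preview", "from": "demo", "to": "prod"}),
    ("sync",     {"method": "execute"}),  # gated on human confirmation
]

def dispatch(tool: str, args: dict, confirmed: bool = False) -> dict:
    # Refuse the destructive step unless a human has approved the preview.
    if args.get("method") == "execute" and not confirmed:
        return {"status": "blocked", "reason": "awaiting human confirmation"}
    return {"status": "ok", "tool": tool}

results = [dispatch(tool, args) for tool, args in PLAN]
print(results[-1]["status"])  # -> blocked: the final sync waits for approval
```

The plan is data, not code: a different request would yield a different `PLAN` from the same capability surface, while the confirmation gate stays constant.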

Hyper-Personalized Experiences

When integrations are negotiated rather than pre-built, the consumer experience becomes adaptive:

  • Different users get different capability compositions based on their role and intent
  • The same platform serves sales, engineering, and operations without predefined workflows for each
  • Responses can be documents, configurations, charts, or code — assembled and tailored per request

One-size-fits-all integrations become one-size-fits-one. The platform describes what it can do; the agent decides what's relevant for this user, right now.

Decision Framework: Pre-Built vs. Negotiated

Not every integration should be negotiated. High-volume, well-defined workflows still benefit from hardcoded optimization. The decision framework:

flowchart TD
    A["Is the workflow well-defined<br/>and high-volume?"] -->|Yes| B["Pre-built integration<br/>(hardcoded, optimized)"]
    A -->|No| C["Is the consumer<br/>an agent or LLM?"]
    C -->|Yes| D["Negotiated via MCP<br/>(dynamic, composable)"]
    C -->|No| E["SDK or Library<br/>(traditional)"]

    style B fill:#e8f5e9
    style D fill:#e3f2fd
    style E fill:#fff3e0

The detailed comparison:

Factor        Pre-Built                        Negotiated
Setup cost    High upfront, low per-use        Low upfront, per-use reasoning cost
Flexibility   Fixed workflows                  Any composition of available tools
Maintenance   Per-consumer updates required    Describe once, agents adapt
Performance   Optimized, predictable latency   Reasoning overhead per invocation
Governance    Static, auditable by design      Requires explicit guardrails
Best for      High-volume, known workflows     Exploratory, diverse consumers

The two models aren't competing — they're complementary. Negotiated integrations are where you discover what works. Pre-built integrations are where you harden what's proven.

Trust and Governance

The immediate objection: "You can't trust agents to negotiate critical integrations." Valid. The answer isn't to avoid negotiated integrations — it's to layer controls appropriately.

MCP includes structural validation. Fingerprint-based locking prevents stale deployments. Schema validation rejects malformed requests. Deprecation blocking prevents agents from using retired capabilities.

Human-in-the-loop patterns exist. Preview modes let agents compose workflows that humans review before execution. Approval gates separate "plan" from "execute." The agent negotiates; the human ratifies.

Negotiated does not mean uncontrolled. The MCP layer defines the bounds — what tools exist, what parameters they accept, what validation they enforce. The agent negotiates within those bounds. It can't invent capabilities that aren't exposed.
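Those bounds reduce to a mechanical check before anything executes. A minimal sketch, assuming a hypothetical allowlist of exposed tools and their permitted parameters (a real MCP server would validate against full JSON Schemas):

```python
# Sketch of the "bounds" the capability layer enforces: an unknown tool
# or a parameter outside the schema is rejected before execution.
ALLOWED = {
    "query_pipeline": {"quarter"},           # tool name -> permitted parameters
    "create_ticket":  {"title", "priority"},
}

def validate_call(tool: str, params: dict) -> tuple[bool, str]:
    if tool not in ALLOWED:
        return False, f"unknown tool: {tool}"
    extra = set(params) - ALLOWED[tool]
    if extra:
        return False, f"unexpected parameters: {sorted(extra)}"
    return True, "ok"

print(validate_call("query_pipeline", {"quarter": "Q3"}))   # -> (True, 'ok')
print(validate_call("deploy_to_prod", {}))  # not exposed, so not negotiable
```

The agent can be arbitrarily creative on one side of this check; on the other side, only declared capabilities with declared parameters get through.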

Start low-risk, harden incrementally. Use negotiated integrations for exploratory tasks — prototyping, sandbox environments, internal tooling. As patterns stabilize and volume increases, harden them into pre-built integrations with optimized paths.

The Enterprise Angle

Compliance and governance teams need concrete assurances, not philosophical arguments.

Audit trails. Every MCP call is loggable — tool name, parameters, timestamp, response. Negotiated integrations produce richer audit data than pre-built ones because every step is an explicit, recorded tool invocation.
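Because each step is a discrete invocation, the audit trail can fall out of a thin wrapper around tool dispatch. A minimal sketch with illustrative field names:

```python
# Every negotiated step is an explicit tool call, so auditing is a thin
# wrapper around dispatch. Field names here are illustrative.
import time

AUDIT_LOG = []

def audited_call(agent_id: str, tool: str, params: dict, fn) -> dict:
    result = fn(params)
    AUDIT_LOG.append({
        "ts": time.time(),      # timestamp
        "agent": agent_id,      # who negotiated the call
        "tool": tool,           # which capability was invoked
        "params": params,       # with what arguments
        "result_status": "ok",
    })
    return result

audited_call("it-support-bot", "create_ticket",
             {"title": "VPN outage"}, lambda p: {"ticket_id": 101})
print(AUDIT_LOG[0]["tool"])  # -> create_ticket
```

Contrast this with a pre-built integration, where the interesting decisions are baked into code paths that never show up in a request log.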

Access control. Scope what agents can discover and use. An agent with access to read-only MCP tools can't accidentally trigger a production deployment. Capability-level permissions are more granular than endpoint-level ACLs.

Validation gates. The pattern: negotiate freely in sandbox, gate before production. Agents can compose and test workflows in demo environments without restriction. Promotion to production requires preview, validation, and human approval.

This mirrors how enterprises already handle infrastructure-as-code — author freely, review before merge, gate before deploy.

Next Steps

Start with an honest assessment of your current integration portfolio:

  1. Identify "build once, rarely change" integrations — high-volume data pipelines, compliance reporting, core transaction flows. Keep these pre-built and optimized.
  2. Identify "different every time" integrations — ad-hoc reporting, cross-system queries, workflow setup for new use cases. These are candidates for negotiation.
  3. Expose the "different every time" capabilities as MCP tools. Give each capability a clear name, typed schema, and semantic description. See MCP: From Hardcoded to Live Data for patterns.
  4. Add validation gates for agent-negotiated workflows before they touch production data. Preview modes and human-in-the-loop approvals are table stakes.
  5. Monitor what agents compose. The workflows agents negotiate will reveal integration patterns you didn't anticipate — and highlight which ones deserve hardening into pre-built paths.
  6. Treat MCP tool descriptions as a product. Your descriptions determine whether agents can use your capabilities correctly. Invest in them like you invest in API documentation. See Post 1 for why this matters.

Next in the series: Post 5 explores how negotiated integrations enable a new class of builder — non-developers who compose enterprise workflows through conversation rather than code.