Internet-Draft Agentic AI Use Cases March 2026
Schott, et al. Expires 3 September 2026
Workgroup:
WG Working Group
Internet-Draft:
draft-scrm-aiproto-usecases-02
Published:
Intended Status:
Informational
Expires:
3 September 2026
Authors:
R. Schott
Deutsche Telekom
J. Maisonneuve
Nokia
L. M. Contreras
Telefonica
J. Ros-Giralt
Qualcomm Europe, Inc.

Agentic AI Use Cases

Abstract

Agentic AI systems rely on large language models to plan and execute multi-step tasks by interacting with tools and collaborating with other agents, creating new demands on Internet protocols for interoperability, scalability, and safe operation across administrative domains. This document inventories representative Agentic AI use cases and captures the protocol-relevant requirements they imply, with the goal of helping the IETF determine appropriate standardization scope and perform gap analysis against emerging proposals. The use cases are written to expose concrete needs such as long-lived and multi-modal interactions, delegation and coordination patterns, and security/privacy hooks that have protocol implications. Through use case analysis, the document also aims to help readers understand how agent-to-agent and agent-to-tool protocols (e.g., [A2A] and [MCP]), and potential IETF-standardized evolutions thereof, could be layered over existing IETF protocol substrates and how the resulting work could be mapped to appropriate IETF working groups.

About This Document

This note is to be removed before publishing as an RFC.

Status information for this document may be found at https://datatracker.ietf.org/doc/draft-scrm-aiproto-usecases/.

Source for this draft and an issue tracker can be found at https://github.com/giralt/draft-scrm-aiproto-usecases.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 3 September 2026.

1. Introduction

Agentic AI systems—software agents that use large language models to reason, plan, and take actions by interacting with tools and with other agents—are seeing rapid adoption across multiple domains. The ecosystem is also evolving quickly through open-source implementations and emerging protocol proposals; however, open source alone does not guarantee interoperability, since rapid iteration and fragmentation can make stable interoperation difficult when long-term compatibility is required. Several protocols have been proposed to support agentic systems (e.g., [A2A], [MCP], ANP, Agntcy), each making different design choices and targeting different functions, properties, and operating assumptions.

This document inventories a set of representative Agentic AI use cases to help the IETF derive protocol requirements and perform gap analysis across existing proposals, with a focus on Internet-scale interoperability. The use cases are intended to highlight protocol properties that matter in practice—such as long-lived interactions, multi-modal context exchange, progress reporting and cancellation, and safety-relevant security and privacy hooks—and to help the IETF determine appropriate scope as well as how related work should be organized across existing working groups or, if needed, a new effort.

2. Conventions and Definitions

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

3. Use Case Requirements

The use cases in this document are intended to inform IETF standardization work on Agentic AI protocols by clarifying scope, enabling gap analysis, and guiding working group ownership. The requirements below define the minimum level of detail and structure expected from each use case so that the IETF can derive actionable protocol requirements and identify where coordination with other SDOs is necessary. Use cases that do not meet these requirements risk being insufficiently precise for protocol design and evaluation.

4. Use Cases

This section inventories representative Agentic AI use cases to make their protocol-relevant requirements explicit and comparable. The use cases are written to expose concrete needs such as multi-step delegation, agent-to-agent coordination, agent-to-tool interactions, and long-lived and multi-modal exchanges that must operate safely and reliably across administrative domains. By grounding the discussion in specific scenarios, the document supports gap analysis against emerging agent protocols (e.g., agent-to-agent and agent-to-tool approaches such as A2A and MCP) and clarifies how candidate solutions could be layered over existing IETF protocol substrates and mapped to appropriate IETF working groups, including the necessary security and privacy hooks.

4.2. Hybrid AI

Hybrid AI generally refers to an edge–cloud cooperative inference workflow in which two or more models collaborate to solve a task: (1) a smaller on‑device model (typically a few billion parameters) that prioritizes low latency, lower cost, and privacy; and (2) a larger cloud model (hundreds of billions to trillion‑scale parameters) that offers higher capability and broader knowledge. The two models coordinate over an agent‑to‑agent channel and may invoke tools locally or remotely as needed. Unlike single‑endpoint inference, Hybrid AI is adaptive and budget‑aware: the on‑device model handles as much work as possible locally (classification, summarization, intent detection, light reasoning), and escalates to the cloud model when additional capability is required (multi‑hop reasoning, long‑context synthesis, domain expertise). The models can exchange plans, partial results, and constraints over [A2A], and both sides can discover and invoke tools via [MCP].

4.2.1. Building Blocks

A Hybrid AI workflow generally comprises the following components, illustrated in the figure below:

  • On‑device Model (Edge). A compact LLM or task‑specific model (a few billion parameters) running on user hardware (e.g., phone, laptop). Advantages include: low latency for interactive turns, reduced cost, offline operation, and improved privacy by default (data locality). Typical functions: intent parsing, entity extraction, local retrieval, preliminary analysis, redaction/summarization prior to escalation.

  • Cloud Model (Hosted). A large, higher‑capability LLM (hundreds of billions to ~trillion parameters) with stronger reasoning, knowledge coverage, tool‑use proficiency, and longer context windows. Typical functions: complex synthesis, multi‑step reasoning, broad web/KG retrieval, code execution, and advanced evaluation.

  • A2A Inter‑Model Coordination. The edge and cloud models communicate via an Agent‑to‑Agent channel to exchange capabilities, cost/latency budgets, privacy constraints, task state, and partial artifacts. Common patterns include negotiate‑and‑delegate, ask‑for‑help with evidence, propose/accept plan updates, and critique‑revise cycles [A2A].

  • MCP Tooling (Edge and Cloud). Both models use the Model Context Protocol to discover and invoke tools with consistent schemas (e.g., local sensors/files, calculators, vector indexes on edge; search/crawling, KB/RAG, Python/services in cloud). MCP enables capability discovery, streaming/progress, cancellation, and explicit consent prompts across transports [MCP].

  • Policy, Budget, and Privacy Controls. Guardrails and policies that encode user/enterprise constraints (e.g., do not send raw PII to cloud; enforce token/time budgets; require consent for specific tools). The edge model may redact or summarize data before escalation; both sides log provenance and decisions for auditability.

  • Shared Task State and Provenance. A compact state (goals, sub‑tasks, citations, hashes, timestamps) that both models can read/update to enable reproducibility, debugging, and verifiable traces.

+--------------------------------------------------------------+
|                        User / Client                         |
|              (Goal, Query, Constraints)                      |
+--------------------------------------------------------------+
                             |
                             v
+--------------------------------------------------------------+
|                 On-Device Model (Edge)                       |
|  - few-B params; low latency, privacy by default             |
|  - local reasoning, redaction/summarization                  |
|  - local tools via MCP (sensors, files, crypto)              |
+--------------------------------------------------------------+
         |                           \
         | local MCP tools            \ when escalation is needed
         v                             \
+----------------------+                \
| Edge MCP Tools       |                 \
+----------------------+                  v
                                   +----------------------------------+
                                   |   A2A Session (Edge <-> Cloud)   |
                                   |   - capability/budget exchange   |
                                   |   - task handoff & updates       |
                                   +----------------------------------+
                                                |
                                                v
+--------------------------------------------------------------+
|                    Cloud Model (Hosted)                      |
|  - 100B–1T+ params; higher capability & breadth              |
|  - complex synthesis, long-context reasoning                 |
|  - cloud tools via MCP (search, KB/RAG, Python)              |
+--------------------------------------------------------------+
                             |
                     cloud MCP tool calls
                             v
+----------------------+   +------------------+   +------------------+
| Web Search & Crawl   |-->| KB / RAG Index   |-->| Python / Services|
+----------------------+   +------------------+   +------------------+
                             ^
                             |
                 results/evidence via A2A to edge/cloud
                             |
                             v
+--------------------------------------------------------------+
|                 Final Answer / Output                        |
|   (synthesis + citations + privacy/consent notes)            |
+--------------------------------------------------------------+

Each building block in the Hybrid AI architecture represents a logical function rather than a specific implementation, and components may be co‑located or distributed in practice.
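To make the MCP tooling interactions above concrete, the following non-normative Python sketch builds (but does not transmit) a JSON-RPC 2.0 tools/call request and extracts the text content from a matching response. The method name follows [MCP]; the tool name, arguments, and response contents are invented for illustration.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP-style JSON-RPC 2.0 tools/call request ([MCP]).

    The tool name and arguments are illustrative, not part of any
    published tool catalog.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

def extract_text(resp):
    """Concatenate the text items from an MCP-style tool result."""
    return " ".join(
        item["text"]
        for item in resp["result"]["content"]
        if item["type"] == "text"
    )

# Hypothetical edge-side invocation: a local vector-index lookup.
request = make_tool_call(1, "local_search",
                         {"query": "roaming config", "top_k": 3})
wire = json.dumps(request)  # what would go on the transport

# A minimal (assumed) success response carrying tool output as text.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 snippets found"}]},
}
```

Capability discovery would precede this step via a tools/list request; the streaming, progress, and cancellation features mentioned above use additional MCP messages not shown here.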

4.2.2. Interaction Model

A typical Hybrid AI session proceeds as follows:

  1. Local First. The on‑device model interprets the user goal, applies local tools (e.g., retrieve snippets, parse files), and attempts a low‑cost solution within configured budgets.

  2. Escalate with Minimization. If the local model estimates insufficient capability (e.g., confidence below threshold, missing evidence), it redacts/summarizes sensitive data and escalates the task—along with compact evidence and constraints—over [A2A].

  3. Cloud Reasoning + Tools. The cloud model performs deeper reasoning and may invoke [MCP] tools (web search/crawl, KB/RAG, Python) to gather evidence and compute results.

  4. Refine & Return. Intermediate artifacts and rationales flow back over [A2A]. The edge model may integrate results, perform final checks, and produce the user‑facing output.

  5. Iterate as Needed. The models repeat plan‑act‑observe‑refine until success criteria (quality, coverage, cost/time budget) are met.
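The five steps above can be collapsed into a single edge-first control loop. The sketch below is a non-normative illustration: the model stubs, confidence threshold, and naive e-mail redaction stand in for real [A2A] messaging, policy-driven data minimization, and model inference.

```python
def redact(text):
    """Crude data-minimization step: drop tokens that look like e-mail
    addresses. A real deployment would apply policy-driven PII redaction
    before any escalation."""
    return " ".join(t for t in text.split() if "@" not in t)

def run_hybrid(query, edge_model, cloud_model, confidence_threshold=0.7):
    """Edge-first workflow: answer locally when confident, otherwise
    redact and escalate over an A2A-style channel (steps 1-4 above)."""
    answer, confidence = edge_model(query)    # step 1: local first
    if confidence >= confidence_threshold:
        return answer, "edge"
    escalated = redact(query)                 # step 2: minimize, escalate
    cloud_answer = cloud_model(escalated)     # step 3: cloud reasoning/tools
    return cloud_answer, "cloud"              # step 4: refine & return

# Stub models for illustration; real ones would sit behind [A2A]/[MCP].
edge = lambda q: ("local summary", 0.9 if len(q.split()) < 6 else 0.3)
cloud = lambda q: f"cloud synthesis of: {q}"
```

Step 5 (iterate until success criteria are met) would wrap this function in an outer plan-act-observe-refine loop with budget accounting.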

4.2.3. Why This Use Case Matters

Hybrid AI is inherently trade‑off aware: it balances privacy, latency, and cost at the edge with capability and breadth in the cloud. Without standard protocols, inter‑model negotiations and tool interactions become bespoke and hard to audit. Two complementary interoperability layers are especially relevant:

  • Inter‑Model Coordination (A2A). A2A provides a structured channel for capability advertisement, budget negotiation, task handoffs, progress updates, and critique/revision between edge and cloud models. This enables portable escalation policies (e.g., “do not send raw PII”, “cap tokens/time per turn”, “require human consent for external web calls”) and consistent recovery behaviors across vendors [A2A].

  • Tool Invocation (MCP). MCP standardizes tool discovery and invocation across both environments (edge and cloud), supporting consistent schemas, streaming/progress, cancellation, and explicit consent prompts. This allows implementers to swap local or remote tools—search, crawling, KB/RAG, Python/services—without rewriting agent logic or weakening privacy controls [MCP].
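To illustrate the kind of portable constraints such an inter-model channel could carry, the sketch below encodes a hypothetical edge-to-cloud task handoff with token/time budgets and policy flags, plus a check of reported usage against them. The field names are invented for this example and are not taken from the [A2A] schema.

```python
def build_handoff(task_id, goal, budget_tokens, budget_seconds, policies):
    """Hypothetical A2A-style task handoff from edge to cloud, carrying
    the budget and privacy constraints a portable escalation policy
    would encode. Field names are illustrative only."""
    return {
        "taskId": task_id,
        "goal": goal,
        "constraints": {
            "maxTokens": budget_tokens,
            "maxSeconds": budget_seconds,
            "policies": list(policies),
        },
    }

def violates_budget(usage, handoff):
    """Check reported usage against the constraints in a handoff."""
    c = handoff["constraints"]
    return usage["tokens"] > c["maxTokens"] or usage["seconds"] > c["maxSeconds"]

handoff = build_handoff(
    "task-42", "summarize incident report",
    budget_tokens=4000, budget_seconds=30,
    policies=["no-raw-pii", "consent-for-web"],
)
```

A conforming cloud agent would echo such constraints in progress updates so that either side can abort or renegotiate when a budget is exceeded.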

Implications for Hybrid AI. Using standardized protocols lets implementers compose portable edge–cloud stacks:

  • Edge‑first operation with escalation only when needed, guided by budgets and confidence.

  • Data minimization (local redaction/summarization) and consent workflows at protocol boundaries.

  • Consistent provenance (URIs, hashes, timestamps) and observability across edge and cloud for verifiable traces.

  • Seamless tool portability (local/remote) and policy enforcement that travel with the task rather than the deployment.

4.3. AI-based Troubleshooting and Automation

Telecom networks have significantly increased in scale, complexity, and heterogeneity. The interplay of technologies such as Software-Defined Networking (SDN), virtualization, cloud-native architectures, network slicing, and 5G/6G systems has made infrastructures highly dynamic. While these innovations provide flexibility and service agility, they also introduce substantial operational challenges, particularly in fault detection, diagnosis, and resolution.

Traditional troubleshooting methods, based on rule engines, static thresholds, correlation mechanisms, and manual expertise, struggle to process high-dimensional telemetry, multi-layer dependencies, and rapidly evolving conditions. Consequently, mean time to detect (MTTD) and mean time to repair (MTTR) may increase, impacting service reliability and user experience.

Artificial Intelligence (AI) and Machine Learning (ML) offer new capabilities to enhance troubleshooting. AI-driven approaches apply data-driven models and automated reasoning to detect anomalies, determine root causes, predict failures, and recommend or execute corrective actions, leveraging telemetry, logs, configuration, topology, and historical data.

Beyond troubleshooting, it is essential to further exploit network and service automation to enable coordinated, policy-based actions across multi-technology (e.g., RAN, IP, optical, virtualized), multi-layer, and dynamic environments. As degradations and faults often span multiple devices, domains, and layers, effective handling requires intelligent and increasingly autonomous mechanisms, ranging from proactive service assurance to automated fault-triggered workflows.

This use case envisions a multi-agent AI framework that enhances network and service automation. Agents perform diagnosis and root cause analysis (RCA), while also supporting anomaly prediction, intent-based protection, and policy-driven remediation. The proposed multi-agent interworking autonomously maintains the network in an optimal operational state by correlating heterogeneous data sources, performing collaborative reasoning, and interacting with network elements and operators through standardized protocols, APIs, and natural language interfaces.

AI agents form a distributed and scalable ecosystem leveraging advanced AI/ML, including generative AI (Gen-AI), combined with domain expertise to accelerate RCA, assess impact, and propose corrective actions. Each agent encapsulates capabilities such as data retrieval, hypothesis generation, validation, causal analysis, and action recommendation. Designed as composable and interoperable building blocks, agents operate across diverse domains (e.g., RAN, Core, IP, Optical, and virtualized infrastructures), while supporting lifecycle management, knowledge bases, and standardized interfaces.

4.3.1. Building Blocks

The use case relies on decentralized multi-agent coordination, where agents exchange structured, context-enriched information to enable dynamic activation and collaborative troubleshooting workflows. A resource-aware orchestration layer manages agent deployment, scaling, and optimization across the network–cloud continuum. Policy frameworks ensure security, compliance, trustworthiness, and explainability, supporting resilient AI-driven network operations.
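As a non-normative sketch of the structured, context-enriched exchange described above, the helper below groups alarms that share a resource within a time window into candidate incidents that peer agents could reason over jointly. The alarm shape (domain, resource, ts, severity) is invented for illustration; real deployments would use standardized information models rather than this ad hoc structure.

```python
from collections import defaultdict

def correlate(alarms, window_seconds=60):
    """Group alarms on the same resource that fall within a time window
    into candidate incidents. A simplification: a resource whose alarms
    span more than the window yields no incident here."""
    buckets = defaultdict(list)
    for a in sorted(alarms, key=lambda a: a["ts"]):
        buckets[a["resource"]].append(a)
    incidents = []
    for resource, group in buckets.items():
        if group[-1]["ts"] - group[0]["ts"] <= window_seconds:
            incidents.append({
                "resource": resource,
                "domains": sorted({a["domain"] for a in group}),
                "count": len(group),
            })
    return incidents

# Two alarms from different domains pointing at the same resource.
alarms = [
    {"domain": "ip", "resource": "link-7", "ts": 100, "severity": "major"},
    {"domain": "optical", "resource": "link-7", "ts": 110, "severity": "critical"},
]
```

An incident spanning both the IP and optical domains, as produced here, is exactly the kind of cross-domain context that would trigger dynamic activation of further specialist agents.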

4.3.2. Why This Use Case Matters

This use case highlights the need for interoperable, protocol-based integration of AI-driven troubleshooting and automation components within heterogeneous, multi-vendor environments. Telecom networks are inherently composed of equipment and control systems from different providers, spanning multiple administrative and technological domains. A multi-agent AI framework operating across such environments requires standardized mechanisms for data modeling, telemetry export, capability advertisement, and control interfaces. In particular, consistent information models (e.g., YANG-based models), secure transport protocols, and well-defined APIs are needed to ensure that AI agents can reliably discover, interpret, and act upon network state information across vendor boundaries.

Service discovery and capability negotiation are also critical. AI agents must be able to dynamically identify available data sources, peer agents, orchestration functions, and control points, as well as understand their supported features and policy constraints. This implies the need for standardized discovery procedures, metadata descriptions, and context exchange formats that enable composability and coordinated workflows in decentralized environments. Without such interoperability mechanisms, multi-agent troubleshooting systems risk becoming vertically integrated and operationally siloed.
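A minimal sketch, assuming a simple in-memory registry, of the discovery and capability-matching step described above. The record shape (name, capabilities, domains) is hypothetical; standardized discovery procedures and metadata descriptions would replace it in practice.

```python
def find_agents(registry, needed_capability, domain=None):
    """Return advertised agents offering a capability, optionally
    filtered by technology domain. Registry entries are hypothetical
    metadata records, not a standardized format."""
    return [
        entry["name"]
        for entry in registry
        if needed_capability in entry["capabilities"]
        and (domain is None or domain in entry["domains"])
    ]

registry = [
    {"name": "rca-agent", "capabilities": ["root-cause-analysis"],
     "domains": ["ip", "optical"]},
    {"name": "ran-telemetry", "capabilities": ["telemetry-export"],
     "domains": ["ran"]},
]
```

In a decentralized deployment, the registry would itself be discovered and queried over a standardized protocol, and each entry would carry the policy constraints mentioned above.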

Furthermore, governance, security, and trust frameworks are fundamental considerations. AI-driven agents capable of recommending or executing remediation actions introduce new requirements for authentication, authorization, accountability, and auditability. Secure communication channels, role-based access control, policy enforcement, and explainability mechanisms are necessary to prevent misuse, contain faults, and ensure compliance with operational and regulatory constraints.

4.4. AI-Based Operation Models

4.4.1. Agentic AI for Improved User Experience

AI agents have the potential to enhance the future user experience by being integrated, individually or as collaborating groups, into telecom networks to deliver user-facing services. Such services may include autonomous multi-level Internet/Intranet search, coordination of calendar and email tasks, and execution of multi-step workflows involving multiple agents, as well as pre-built domain agents (e.g., HR, procurement, finance). This shift can fundamentally change enterprise operating models: agents can support decision-making and, where authorized, act on behalf of employees or the organization.

In multi-agent scenarios, agents from different vendors communicate over networks and must interoperate. These interactions require coordinated communication flows and motivate a standardized agent communication protocol and framework. Given the need to comply with regulatory requirements (beyond network regulation), an open, standardized approach is preferable to proprietary implementations; interoperability across operators and vendors implies an open ecosystem built on a standardized AI agent protocol.

4.4.2. Voice-Based Human-to-Agent Communication

With the integration of AI and AI agents into networks, voice services may see renewed importance as a natural, low-friction interface for interacting with agents. Voice-based human-to-agent communication can complement text-based chat interfaces and enable rapid task initiation and conversational control. This use case introduces additional considerations, including security, authorization/permissions, and charging/accounting. Because voice services are regulated in many jurisdictions, this further motivates a standardized framework and standardized AI agent protocol. Network-integrated AI agents can assist users through voice interaction and improve overall user experience.

5. Security Considerations

TODO Security

6. IANA Considerations

This document has no IANA actions.

7. References

7.1. Normative References

[RFC2119]
Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/rfc/rfc2119>.
[RFC8174]
Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/rfc/rfc8174>.

7.2. Informative References

[A2A]
"Agent2Agent (A2A) Protocol Specification", n.d., <https://a2a-protocol.org/latest/>.
[A2A-GITHUB]
"Agent2Agent Protocol – GitHub Repository", n.d., <https://github.com/a2aproject/A2A>.
[MCP]
"Model Context Protocol (MCP) Specification", 26 March 2025, <https://modelcontextprotocol.io/specification/2025-03-26>.
[MCP-GITHUB]
"Model Context Protocol – GitHub Organization", n.d., <https://github.com/modelcontextprotocol>.
[ODS]
"Open Deep Search", March 2025, <https://arxiv.org/abs/2503.20201>.
[ODS-GITHUB]
"OpenDeepSearch", n.d., <https://github.com/sentient-agi/OpenDeepSearch>.

Acknowledgments

TODO acknowledge.

Authors' Addresses

Roland Schott
Deutsche Telekom
Julien Maisonneuve
Nokia
L. M. Contreras
Telefonica
Jordi Ros-Giralt
Qualcomm Europe, Inc.