AI agents need more than just language — they need access. In practice, that meant custom code, duplicated APIs, and bridge integrations across Azure Functions or OpenAPI specs. It slowed teams down, fractured architectures, and blocked scalability.
Model Context Protocol (MCP) changes that by introducing a shared, open interface for exposing tools and context. Azure AI Foundry’s native support for MCP means you can now connect agents to real systems — without glue code or reinventing the wheel.
Here’s why it matters:
You stop wiring tools manually. MCP gives you auto-discovery.
You reduce maintenance debt. Tools evolve? Your agent stays in sync.
You build faster. One integration works across Azure, OpenAI, and beyond.
This removes friction from tool and agent development and helps teams move faster while staying aligned with evolving systems.
Originally proposed by Anthropic, MCP is a JSON-RPC-based open protocol. It acts as a standardized bridge between agents and tools — letting AI clients discover and call external functions, context providers, and APIs automatically.
Think of it like USB-C for AI tools: “Connect once. Integrate anywhere.”
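To make that concrete, here's a hedged sketch of the wire format: MCP clients and servers exchange JSON-RPC 2.0 messages, and tool discovery happens through the protocol's standard tools/list method. The payloads are shown as Python dicts, and the pricing tool is a made-up example:

# What an MCP client sends to discover tools (JSON-RPC 2.0, carried
# over the server's transport, e.g. stdio or HTTP):
discovery_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The shape of a typical response: each tool advertises a name, a
# description, and a JSON Schema for its inputs, so the agent can
# call it without hand-written glue code.
discovery_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_price",  # hypothetical example tool
                "description": "Return the current price for a SKU.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sku": {"type": "string"}},
                    "required": ["sku"],
                },
            }
        ]
    },
}

Because every server answers this same call with the same shape, a client written once can discover and invoke tools from any MCP server.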
You can read more about MCP, beyond the hype, with a real-world demo here: https://medium.com/next-token/mcp-behind-the-hype-with-demo-a9b7620934aa
Before MCP, enabling an AI agent to interact with real-world systems in Azure often meant building and maintaining custom Azure Functions, writing OpenAPI specifications for every internal API, or building brittle plug-ins that quickly went out of sync.
These solutions worked, but they were tedious to maintain and hard to scale. Every integration was a one-off project that created long-term friction.
With the Model Context Protocol (MCP) now supported by the Azure AI Foundry Agent Service, you can connect your agents to MCP-compatible services using a standard interface and a few lines of configuration.
This unlocks faster development, cleaner architecture, and more sustainable integrations.
[Image: Azure AI Foundry]
With the July 2025 preview release, Azure AI Foundry Agent Service now natively supports remote MCP servers — whether they’re self-hosted or offered as SaaS.
This means developers can connect their agents to external tools and data sources in seconds, without managing custom wrappers or API specs. Once connected, Foundry handles the discovery, versioning, and invocation of those tools automatically.
By removing the need to maintain OpenAPI documents or custom glue code, this capability significantly reduces integration overhead and enables more scalable agent workflows across the enterprise.
Here's what the setup looks like, step by step:
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

PROJECT_ENDPOINT = os.environ["PROJECT_ENDPOINT"]            # your Foundry project endpoint
MODEL_DEPLOYMENT_NAME = os.environ["MODEL_DEPLOYMENT_NAME"]  # e.g. a gpt-4o deployment

project_client = AIProjectClient(endpoint=PROJECT_ENDPOINT, credential=DefaultAzureCredential())

with project_client:
    # Register the remote MCP server as a tool; Foundry discovers and
    # invokes its tools automatically at run time
    agent = project_client.agents.create_agent(
        model=MODEL_DEPLOYMENT_NAME,
        name="my-mcp-agent",
        instructions="You are a helpful assistant. Use the tools provided to answer the user's questions.",
        tools=[
            {
                "type": "mcp",
                "server_label": "pricing-api",
                "server_url": "https://mcp.example.com",
                "require_approval": "never",
            }
        ],
    )
When OpenAI added MCP support to its Agents SDK earlier this year, it opened the door for shared tooling and protocol-driven integrations. Azure AI Foundry builds on that by allowing teams to bring their own deployed models — like GPT-4o — and integrate them with the same standards.
Developers can now:
Adapt existing examples (like the MCP Filesystem agent) to Azure
Run local MCP servers via Node.js (see the sketch after this list)
Use the AsyncAzureOpenAI client to wire up GPT-4o models hosted in Azure AI Foundry

The result? You get agent applications that combine OpenAI’s tooling ecosystem with Azure’s enterprise-grade infrastructure, without needing custom adapters for every use case.
It’s fast to prototype, secure by default, and works seamlessly in familiar environments like VS Code.
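For the local-server bullet above, here's a minimal sketch using the openai-agents SDK's MCPServerStdio, which launches the reference filesystem server from npm over stdio (the /data path is illustrative):

from agents.mcp import MCPServerStdio

# The SDK spawns the Node process via npx and speaks JSON-RPC to it
# over stdin/stdout; the context manager handles connect and cleanup.
async def run_with_local_server():
    async with MCPServerStdio(
        name="filesystem",
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data"],
        },
    ) as server:
        ...  # pass `server` to Agent(mcp_servers=[server]) as in the example below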
The Agent accepts the mcp_servers parameter directly, aligned with recent Azure best practices.
import os
import asyncio

from dotenv import load_dotenv
from openai import AsyncAzureOpenAI
# The classes below come from the openai-agents SDK (pip install openai-agents)
from agents import Agent, Runner, OpenAIChatCompletionsModel, set_tracing_disabled
from agents.mcp import MCPServerSse

load_dotenv()
set_tracing_disabled(True)  # tracing would otherwise expect an api.openai.com key

def get_azure_open_ai_client():
    # OpenAI-compatible client pointed at the Azure OpenAI deployment
    return AsyncAzureOpenAI(
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    )

azure_open_ai_client = get_azure_open_ai_client()

async def main():
    # Connect to the remote MCP server; the context manager handles
    # the connect/cleanup lifecycle
    async with MCPServerSse(
        name="filesystem",
        params={"url": "https://mcp.example.com"},
    ) as mcp_server:
        agent = Agent(
            name="Assistant",
            instructions="Use the tools to read the filesystem and answer questions based on those files.",
            model=OpenAIChatCompletionsModel(
                model=os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"),
                openai_client=azure_open_ai_client,
            ),
            mcp_servers=[mcp_server],
        )
        result = await Runner.run(agent, "List files in the /data directory")
        print(result.final_output)

asyncio.run(main())
This setup:
Uses OpenAI’s SDK but relies on a GPT-4o model deployed on Azure
Adds a remote MCP toolset for functionality
Demonstrates how MCP makes the backend interchangeable while maintaining full agent logic and context in the SDK (sketched below)
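To illustrate that last point, here's a hedged sketch: the same Agent definition runs against api.openai.com simply by swapping the client, while the MCP server object and the agent logic stay exactly as in the Azure example above:

from openai import AsyncOpenAI
from agents import Agent, OpenAIChatCompletionsModel

openai_client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

agent_on_openai = Agent(
    name="Assistant",
    instructions="Use the tools to read the filesystem and answer questions based on those files.",
    model=OpenAIChatCompletionsModel(model="gpt-4o", openai_client=openai_client),
    mcp_servers=[mcp_server],  # the same MCP server object as before
)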
The integration of Model Context Protocol into Azure AI Foundry is more than just an architectural improvement — it’s a practical step forward in how we build and maintain intelligent systems. It streamlines how agents discover, connect to, and use external tools, reducing integration overhead and making AI development more modular and sustainable.
If you’re working with LLMs, especially in enterprise settings, MCP is becoming a standard worth adopting — and Azure is giving you the infrastructure to do it at scale.
Whether you’re modernizing internal tooling or building AI-native services from scratch, this new stack of Azure + MCP + OpenAI SDK gives you flexibility.
Now is a great time to experiment, learn, and build.