Agentic AI refers to AI systems that can act on their own, making decisions and performing tasks with little human help. Think of it like a smart assistant that can plan a trip, book flights, and adjust plans based on real-time updates, all by itself. As of April 2025, it's becoming more common in areas like customer service, where chatbots handle queries, and supply chain management, where it optimizes logistics.
Agentic AI involves autonomous AI systems that make decisions with minimal human input, transforming industries like customer service and supply chain management.
Frameworks like LangChain, LlamaIndex, and AutoGen are key tools for building these systems, supporting both single-agent and multi-agent setups.
The evidence leans toward Agentic AI enhancing productivity, though challenges like coordination in multi-agent systems and potential biases remain debated.
A notable detail is the integration of real-time data handling, which sharpens decision-making in dynamic environments like financial trading.
Research suggests Agentic AI design patterns help build autonomous systems for complex tasks, with five key patterns: Reflection, Tool Use, ReAct, Planning, and Multi-Agent Collaboration.
These frameworks map onto the patterns: LangChain suits Reflection, Tool Use, and ReAct, while AutoGen suits Planning and Multi-Agent Collaboration.
Another notable detail is how the ReAct Pattern improves interpretability by generating human-like task-solving trajectories, building trust in AI systems.
To build these systems, developers use frameworks like:
LangChain: Great for creating chatbots that can use tools like web searches.
LlamaIndex: Helps connect AI to company data for smarter decisions.
AutoGen: Allows multiple AI agents to work together, like a team solving complex problems.
Single Agent: One AI does everything, like a chatbot answering questions. It's simpler but may struggle with complex tasks (a minimal sketch follows below).
Multi-Agent: Multiple AIs work together, like a team where one handles logistics and another customer service. It's more robust but harder to coordinate.
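To make the contrast concrete, a single-agent setup can be as small as one model behind a conversation chain. A minimal LangChain sketch (the model choice and sample query are assumptions):
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

# One agent handles every query itself: simple, but nothing is specialized
chatbot = ConversationChain(llm=OpenAI(temperature=0))
print(chatbot.predict(input="What are your support hours?"))  # hypothetical query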
For multi-agent setups, there are patterns like:
Parallel: Agents work on different parts at the same time, like processing text and images together.
Sequential: Agents take turns, like cleaning data then analyzing it.
Loop: An agent repeats a task until done, like refining a solution.
Router: One agent directs tasks to others, like routing customer queries.
Aggregator: Collects and combines results from multiple agents, like merging search results.
Network: Agents are connected like a web, sharing information.
Hierarchical: Agents have levels, like executives giving goals to managers.
These patterns help structure how agents work together, and examples show how to implement them using AutoGen and other tools.
This section provides a comprehensive exploration of Agentic AI, covering fundamentals, components, single- vs. multi-agent architectures, and design patterns. The analysis is grounded in recent research and practical implementations, reflecting the state of the field as of April 8, 2025.
Key frameworks include LangChain for tool-augmented chains, LlamaIndex for data connectivity, and AutoGen for multi-agent orchestration.
AI agents comprise several components, each critical for autonomy:
Tools, which let the agent fetch outside information, for example a web search:
from langchain.tools import DuckDuckGoSearchRun

# A web search tool the agent can call for fresh information
search = DuckDuckGoSearchRun()
result = search.run("What is the weather like today?")
Knowledge, connecting the agent to company data, here loaded with LlamaIndex:
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load local documents and build a queryable index over them
documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)
Actions, which let the agent affect the outside world, such as sending an email through a plugin tool. Sample code (the plugin manifest URL is a placeholder, not a real endpoint):
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.tools import AIPluginTool

llm = OpenAI(temperature=0)
# Placeholder manifest URL; substitute a real plugin's ai-plugin.json
email_tool = AIPluginTool.from_plugin_url("https://example.com/.well-known/ai-plugin.json")
tools = [email_tool]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Send an email to john@example.com with the subject 'Hello' and body 'How are you?'")
Memory, which lets the agent keep track of conversation context:
from langchain.memory import ConversationBufferMemory

# Buffer memory stores the full conversation history for later turns
memory = ConversationBufferMemory()
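For instance, turns can be written to and read back from the buffer (a small usage sketch; the order-number exchange is invented):
# Store one exchange, then read the accumulated history back
memory.save_context({"input": "My order number is 12345."}, {"output": "Got it, checking now."})
print(memory.load_memory_variables({}))  # e.g. {'history': 'Human: My order number is 12345.\nAI: Got it, checking now.'}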
Collaboration, where multiple agents coordinate on a task. Sample code with AutoGen (agent definitions elided):
# Up to five rounds of conversation between the user proxy and the assistant
groupchat = autogen.GroupChat(agents=[user_proxy, assistant], messages=[], max_round=5)
This section explores five key design patterns: Reflection, Tool Use, ReAct, Planning, and Multi-Agent Collaboration. For each, we'll explain what it is, why it matters, and show how to use tools like LangChain and AutoGen with sample code.
What It Is: The Reflection Pattern lets an AI review its own work to find mistakes and improve. It's like an AI checking its essay for errors before submitting.
Why It Matters: This helps the AI get better over time, reducing errors in tasks like writing code or creating content, which is crucial for reliability.
Tools and Frameworks: Use LangChain, especially its LangGraph part, for this. LangGraph lets the AI cycle through its work, reviewing and refining.
Sample Code:
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")
# Set up generator to write (and revise) an essay
generator_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an essay assistant tasked with writing excellent 5-paragraph essays."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
generator = generator_prompt | llm
# Set up reflector to grade and give feedback
reflect_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an essay grader tasked with grading essays and providing feedback."),
    ("human", "Grade the following essay and provide feedback:\n\n{essay}"),
])
reflector = reflect_prompt | llm
# Combine them in a generate-reflect cycle, stopping after 3 iterations
input_text = "Write a 5-paragraph essay on 'The Importance of AI Ethics'."
history = []
for _ in range(3):
    essay = generator.invoke({"input": input_text, "history": history})
    feedback = reflector.invoke({"essay": essay.content})
    # Carry the draft and its critique into the next round
    history.append(essay)
    history.append(HumanMessage(content=f"Revise the essay using this feedback: {feedback.content}"))
print(essay.content)
This code shows the AI writing an essay, then checking and improving it up to three times, making it better each round.
What It Is: The Tool Use Pattern lets the AI use external tools, like web search or APIs, to get information or do tasks it can't do on its own.
Why It Matters: This makes the AI more versatile, letting it handle real-world tasks like finding facts online or doing math, which it couldn't do just from memory.
Tools and Frameworks: LangChain is great here, with tools like DuckDuckGoSearchRun for web searches, making it easy to extend the AIβs abilities.
Sample Code:
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI
# Set up the AI model
llm = OpenAI(temperature=0)
# Load tools for web search and math
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Create an agent that uses these tools
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
# Ask a complex question
result = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
print(result)
Here, the AI searches the web to find Olivia Wilde's boyfriend and uses a math tool to calculate his age raised to a power, showing how tools expand its capabilities.
What It Is: The ReAct Pattern, or "Reason and Act," means the AI thinks through a problem step by step and then takes actions, like using tools, based on that thinking. It's like planning and doing at the same time.
Why It Matters: This helps the AI solve tricky problems by combining thinking and doing, making it more dynamic and able to handle tasks that need both, like answering multi-part questions.
Tools and Frameworks: LangChain supports this with its "zero-shot-react-description" agent, which reasons and uses tools in a loop, perfect for ReAct.
Sample Code:
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI
# Set up the AI model
llm = OpenAI(temperature=0)
# Load tools for web search and math
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Create a ReAct agent
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
# Ask a question needing both reasoning and action
result = agent.run("What is the current population of France? How does it compare to Germany?")
print(result)
The AI reasons about what to do (find populations and compare), then uses web search to get the numbers and reasons through the comparison, showing ReAct in action.
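Under the hood, a ReAct agent alternates Thought, Action, and Observation steps in a scratchpad. A stripped-down sketch of that loop (the llm_propose_step helper and its return format are hypothetical, not LangChain API):
# Minimal ReAct loop: reason about the next step, act with a tool, observe, repeat
def react(question, tools, max_steps=5):
    scratchpad = f"Question: {question}\n"
    for _ in range(max_steps):
        thought, action, arg = llm_propose_step(scratchpad)  # hypothetical LLM call
        scratchpad += f"Thought: {thought}\nAction: {action}[{arg}]\n"
        if action == "Finish":
            return arg  # the model decided it has the final answer
        observation = tools[action](arg)  # run the chosen tool
        scratchpad += f"Observation: {observation}\n"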
What It Is: The Planning Pattern means the AI breaks down big tasks into smaller steps and makes a plan to tackle them, like outlining a project before starting.
Why It Matters: This is key for complex tasks, ensuring the AI organizes work into manageable parts, making it efficient for long-term goals like planning a trip or diagnosing a patient.
Tools and Frameworks: AutoGen is great for this, with features for multi-agent systems where a planner agent coordinates subtasks, like in medical diagnostics.
Sample Code:
import autogen
# Define agents for each part of the task
# (the elided "..." arguments are typically each agent's system_message and llm_config)
symptom_agent = autogen.AssistantAgent(name="Symptom_Agent", ...)
history_agent = autogen.AssistantAgent(name="History_Agent", ...)
analysis_agent = autogen.AssistantAgent(name="Analysis_Agent", ...)
recommendation_agent = autogen.AssistantAgent(name="Recommendation_Agent", ...)
# Define planner to coordinate
planner = autogen.AssistantAgent(name="Planner", ...)
# Set up user proxy to start the chat
user_proxy = autogen.UserProxyAgent(name="User_Proxy", ...)
# Create a group chat with all agents
groupchat = autogen.GroupChat(agents=[user_proxy, planner, symptom_agent, history_agent, analysis_agent, recommendation_agent], ...)
# Run the chat, starting with a diagnosis task
manager = autogen.GroupChatManager(groupchat, llm_config=...)
user_proxy.initiate_chat(manager, message="Diagnose a patient with symptoms X, Y, Z")
Here, the planner outlines steps like collecting symptoms and analyzing data, with each agent handling its part, showing how planning organizes complex tasks.
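The elided arguments above are the usual AutoGen agent configuration. A hedged sketch of how the planner alone might be configured (the model name, key handling, and system message are assumptions):
import autogen

# Hypothetical model configuration; substitute your own credentials
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}
planner = autogen.AssistantAgent(
    name="Planner",
    system_message="Break the diagnosis into steps and assign each step to the right specialist agent.",
    llm_config=llm_config,
)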
What It Is: The Multi-Agent Collaboration Pattern means several AI agents work together, each with a specific role, like a team where one plans, one codes, and one checks the work.
Why It Matters: This is great for big projects, letting agents specialize and collaborate, making the system more robust for tasks like software development or customer support.
Tools and Frameworks: AutoGen is designed for this, with group chats where agents communicate and delegate tasks, perfect for team-like workflows.
Sample Code:
import autogen
# Define agents with different roles (elided arguments as above: system_message and llm_config)
planner = autogen.AssistantAgent(name="Planner", ...)
coder = autogen.AssistantAgent(name="Coder", ...)
critic = autogen.AssistantAgent(name="Critic", ...)
# Set up user proxy to interact
user_proxy = autogen.UserProxyAgent(name="User_Proxy", ...)
# Create a group chat for collaboration
groupchat = autogen.GroupChat(agents=[user_proxy, planner, coder, critic], ...)
# Run the chat, asking for a coding task
manager = autogen.GroupChatManager(groupchat, llm_config=...)
user_proxy.initiate_chat(manager, message="Write a function to calculate Fibonacci sequence")
There are further subcategories of multi-agent systems:
Parallel: Multiple agents work simultaneously on subtasks. Example: document processing with separate text and image agents. In AutoGen, use group chats for parallel execution. Sample code:
assistant1 = autogen.AssistantAgent(name="Assistant1", ...)
assistant2 = autogen.AssistantAgent(name="Assistant2", ...)
groupchat = autogen.GroupChat(agents=[user_proxy, assistant1, assistant2], ...)
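Wired up, the group chat needs a manager to drive the rounds; a hedged sketch (llm_config as in the planning example above):
# The manager selects which agent speaks each round
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
user_proxy.initiate_chat(manager, message="Process this document's text and images.")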
Sequential: Agents perform tasks in sequence. Example: a data pipeline (cleaning → analysis). In LangChain, chains run steps in order. Sample code (prompt definitions elided):
# Each LLMChain handles one stage; SimpleSequentialChain feeds one into the next
clean_chain = LLMChain(llm=llm, prompt=clean_prompt)
analyze_chain = LLMChain(llm=llm, prompt=analyze_prompt)
pipeline = SimpleSequentialChain(chains=[clean_chain, analyze_chain])
Loop: Agents repeat a task until a condition is met. Example: iterative solution refinement. Custom logic in AutoGen. Sample code:
while not condition_met:
    response = assistant.generate_reply(messages=messages)  # repeat until the result is acceptable
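Fleshed out, the stop condition is usually a check on the latest draft. A hedged sketch (the is_good_enough helper and task string are hypothetical):
# Refine the draft until an acceptance check passes or we hit a cap
draft = assistant.generate_reply(messages=[{"role": "user", "content": task}])
for _ in range(5):
    if is_good_enough(draft):  # hypothetical acceptance check
        break
    draft = assistant.generate_reply(
        messages=[{"role": "user", "content": f"Improve this draft: {draft}"}]
    )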
Router: An agent directs tasks to specialized agents. Example: customer support routing. In AutoGen, use a router agent. Sample code:
router = autogen.AssistantAgent(name="Router", ...)
worker1 = autogen.AssistantAgent(name="Worker1", ...)
Aggregator: An agent collects and combines results from multiple agents. Example: merging search results. Custom aggregator in AutoGen. Sample code:
aggregator = autogen.AssistantAgent(name="Aggregator", ...)
Network: Agents are connected in a network topology for communication. Example: sensor networks. Custom setup in LangChain or AutoGen. Sample code (illustrative pseudocode; add_tool is not a real API):
# Each agent exposes another agent as a callable tool (pseudocode)
agent1.add_tool(agent2.tool)
agent2.add_tool(agent3.tool)
Hierarchical: Agents are organized in levels, with higher levels directing lower ones. Example: a company structure. In AutoGen, use a hierarchical agent setup. Sample code:
executive = autogen.AssistantAgent(name="Executive", ...)
manager = autogen.AssistantAgent(name="Manager", ...)
worker = autogen.AssistantAgent(name="Worker", ...)
Agentic AI is transforming AI development, with frameworks like LangChain, LlamaIndex, and AutoGen providing robust tools for building autonomous systems. The choice between single and multi-agent architectures depends on task complexity, with design patterns offering structured solutions for multi-agent interactions. As of April 8, 2025, the field continues to evolve, with ongoing research addressing challenges like bias and alignment, promising significant industry impacts.