OpenAI Agents SDK core features:
· Introduction to coroutines and related agent concepts:
o Coroutines versus functions: defining a function with 'async def' in Python produces a coroutine function, not a traditional function; calling it returns a coroutine object rather than executing the code immediately (see the sketch after this outline).
o Event loop mechanism: the event loop in the asyncio library schedules and executes coroutines, pausing them while they wait for I/O and resuming others, enabling concurrency without traditional multithreading.
o Awaiting coroutines: to execute a coroutine, it must be awaited with the 'await' keyword, which hands it to the event loop and suspends the caller until the coroutine completes.
o Concurrent execution with gather: 'asyncio.gather' schedules multiple coroutines concurrently; the event loop manages their execution and collects their results into a list.
· Introduction to OpenAI Agents SDK:
o Framework characteristics: the OpenAI Agents SDK is lightweight, flexible, and unopinionated, letting users choose their preferred working style while abstracting away repetitive tasks such as JSON handling.
o Key terminology: the OpenAI Agents SDK has three core terms: agents, handoffs, and guardrails.
§ Agents: wrappers around LLMs with specific roles.
§ Handoffs: interactions in which one agent passes control to another.
§ Guardrails: checks and controls that keep agents behaving within defined limits.
· Steps to run an agent in the OpenAI Agents SDK: three steps are required:
o Agent instance creation: create an instance of an agent, which represents one role in the solution.
o Interaction logging with trace: wrapping calls in 'with trace' is recommended so that all interactions with the agent are logged and can be monitored through OpenAI's tracing tools.
o Running the agent coroutine: execute the agent by calling 'Runner.run', which is a coroutine and must be awaited for the agent to actually run.
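A minimal sketch of the coroutine behaviour described above (greet and main are illustrative names, not part of the SDK):

import asyncio

async def greet() -> str:
    # 'async def' defines a coroutine function.
    await asyncio.sleep(0.1)      # non-blocking pause
    return "hello"

async def main():
    coro = greet()                # calling it returns a coroutine object; nothing runs yet
    print(type(coro))             # <class 'coroutine'>
    print(await coro)             # awaiting hands it to the event loop -> "hello"

asyncio.run(main())               # starts the event loop (in a notebook, use 'await main()')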
asyncio:
asyncio is Python's built-in asynchronous programming framework. It lets a program make progress on multiple tasks at the same time without using threads or processes. With asyncio:
- While task A is waiting, Python can run task B
- While task B is waiting, Python can run task C
With asyncio, Python runs many waiting tasks concurrently, which makes I/O-heavy programs dramatically faster and more efficient. Use asyncio when the program spends a lot of its time waiting.
Don't use it for CPU-heavy work (math, ML, image processing). Asynchronous functions are defined with the 'async' keyword, and calls to them use the 'await' keyword.
import asyncio

async def fetch_data():
    # Simulate a slow I/O operation (e.g. a network call).
    await asyncio.sleep(2)
    return "data"

async def main():
    # All three calls run concurrently, so the total time is ~2 seconds, not 6.
    results = await asyncio.gather(
        fetch_data(),
        fetch_data(),
        fetch_data(),
    )
    print(results)  # ['data', 'data', 'data']

asyncio.run(main())  # in a notebook, use 'await main()' instead
INSTRUCTIONS = "Some instruction used as the agent's prompt."

first_agent = Agent(
    name="First agent",
    instructions=INSTRUCTIONS,
    tools=[toolname],   # tools must be passed as a list of tool objects
    model="gpt-4o-mini",
)

message = "Welcome AI Agent frameworks in 2025"

with trace("Welcome"):
    result = await Runner.run(first_agent, message)
with trace("multiple agents"):
results = await asyncio.gather(
Runner.run(agent1, message),
Runner.run(agent2, message),
Runner.run(agent3, message),
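Each Runner.run call returns a run result whose final_output holds the agent's final text, so the gathered results can be read like this (a minimal sketch, assuming agent1, agent2, and agent3 are defined as in the example further below):

outputs = [r.final_output for r in results]   # final text from each agent, in scheduling order
for output in outputs:
    print(output)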
Key concepts in asyncio

| Concept          | Meaning                                               |
|------------------|-------------------------------------------------------|
| async def        | Defines an asynchronous function                      |
| await            | Pauses the function until the awaited task completes  |
| event loop       | The engine that schedules async tasks                 |
| asyncio.gather() | Runs multiple tasks at the same time                  |
| asyncio.sleep()  | Non-blocking sleep (example of async I/O)             |
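A small sketch that ties these concepts together: three non-blocking sleeps scheduled with asyncio.gather finish in roughly one second instead of three (wait_one is an illustrative name):

import asyncio
import time

async def wait_one():
    await asyncio.sleep(1)    # non-blocking: the event loop runs the other tasks meanwhile

async def main():
    start = time.perf_counter()
    await asyncio.gather(wait_one(), wait_one(), wait_one())
    print(f"elapsed: {time.perf_counter() - start:.1f}s")   # ~1.0s, not 3.0s

asyncio.run(main())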
Agent frameworks rely on asyncio because they need to:
- run multiple agents at the same time
- handle many I/O operations (LLM calls, tools, APIs)
- wait for responses without blocking the whole system
- coordinate tasks, messages, and events
The frameworks below all use asyncio and are built around async execution:
1. LangChain
a. Tool calls are async
b. LLM calls can be async
c. Agents run async loops
d. Many integrations require async def
2. LangGraph
a. Entire architecture is async-first
b. Nodes, edges, and tool calls run concurrently
c. Event-driven execution uses asyncio tasks
3. CrewAI
a. Uses asyncio for parallel task execution
b. Agents can run concurrently
c. Async tool calls supported
4. Microsoft Autogen (new version)
a. Fully async
b. Agents communicate via async message passing
5. MCP (Model Context Protocol)
a. Server and client interactions use async
b. Streaming responses require async
c. Tool execution often async
What we will learn during this journey
- Agent workflow
- Use of tools to call functions
- Agent collaboration via Tools and Handoffs
Example: Multiple agents
Step 1: Define instructions, a description, and tools, so that multiple agents can be run in parallel.
instructions1 = "You are agent1 working for ComplAI; define the action."
instructions2 = "You are agent2 working for ComplAI; define the action."
instructions3 = "You are agent3 working for ComplAI; define the action."
description = "specific actions"
# Agent definition
agent1 = Agent(name="DeepSeek Sales Agent", instructions=instructions1, model=deepseek_model)
agent2 = Agent(name="Gemini Sales Agent", instructions=instructions2, model=gemini_model)
agent3 = Agent(name="Llama3.3 Sales Agent", instructions=instructions3, model=llama3_3_model)
# Tool definition
tool1 = agent1.as_tool(tool_name="agent1", tool_description=description)
tool2 = agent2.as_tool(tool_name="agent2", tool_description=description)
tool3 = agent3.as_tool(tool_name="agent3", tool_description=description)
Step 2: Define a function tool with the @function_tool decorator
@function_tool
def do_action(sub: str, des: str) -> Dict[str, str]:
    """Action definition."""
    return {"status": "success"}
sub1_instructions = "Write details for the action performed by the agent."
sub2_instructions = "Write details for the action performed by the agent."

agent4 = Agent(name="new_agent1", instructions=sub1_instructions, model="gpt-4o-mini")
subject1_tool = agent4.as_tool(tool_name="agent4", tool_description="Tool description for the action")

html_converter = Agent(name="new_agent2", instructions=sub2_instructions, model="gpt-4o-mini")
subject2_tool = html_converter.as_tool(tool_name="agent5", tool_description="Tool description for the action")

listOf_tools = [subject1_tool, subject2_tool, do_action]
handoff_agent = Agent(
    name="new_agent3",
    instructions="Instructions for the handoff agent.",
    tools=listOf_tools,
    model="gpt-4o-mini",
    handoff_description="Handoff agent that carries out the action as instructed")
tools = [tool1, tool2, tool3]
handoffs = [handoff_agent]
main_instructions = """
You are a Role at ComplAI. Your goal is to do ….. using the agent_name tools.
Follow these steps carefully:
1. Generate drafts: use all of the agent_name tools ….
2. Handoff for sending: pass …. to the 'Handoff Manager' agent. The Handoff Manager will do ….. """
agent_manager = Agent(
    name="agent_name",
    instructions=main_instructions,
    tools=tools,
    handoffs=handoffs,
    model="gpt-4o-mini",
)
message = "add some message"
with trace("Automated SDR"):result = await Runner.run(agent_manager, message)