Sunday, January 4, 2026

MuleSoft

MuleSoft is an integration platform with a robust architecture that connects systems, applications, and data sources to enable seamless communication and data exchange. MuleSoft’s architecture is designed to facilitate the development of integration solutions, APIs (Application Programming Interfaces), and data flows. 

Overview of MuleSoft’s features and architecture:

  • Runtime Engine (Mule Runtime): 
    • At the core of MuleSoft’s architecture is the Mule Runtime Engine. This is the execution environment where integration applications, known as Mule applications or flows, run. It is responsible for processing data and orchestrating the flow of information between different components.
  • Mule Application (Flow): 
    • A Mule application is a collection of flows, which are individual integration components that define how data is processed. Each flow consists of message processors that perform specific actions like data transformation, routing, filtering, and connectivity to external systems.
  • Connectors: 
    • Connectors are reusable components that enable Mule applications to interact with various systems and data sources. MuleSoft provides many connectors for popular databases, cloud services, protocols, and APIs. Custom connectors can also be developed to connect to specific systems.
  • Anypoint Studio: 
    • Anypoint Studio is the integrated development environment (IDE) for designing, building, and testing Mule applications. It provides a visual interface for creating and configuring flows and connectors, making it easier for developers to design integration solutions.
  • API-Led Connectivity: 
    • MuleSoft promotes API-led connectivity, which treats APIs as a central part of the integration architecture. This approach consists of three layers: 
      • System Layer: 
        • Connects to the underlying systems and data sources. It includes the connectors and Mule flows responsible for system-level integration.
      • Process Layer: 
        • Orchestrates data and functionality from various systems to create composite APIs and services.
      • Experience Layer: 
        • Exposes APIs to external consumers and developers, offering a well-defined interface for interacting with integrated systems.
  • Policies and Security: 
    • MuleSoft provides a range of policies for controlling access, securing data, and enforcing governance rules. Security features include OAuth support, encryption, and identity management.
  • Monitoring and Management: 
    • MuleSoft offers tools for monitoring and managing Mule applications and APIs. Anypoint Monitoring lets teams track performance, detect issues, and optimize integrations.
  • Deployment Options: 
    • MuleSoft supports various deployment options, including on-premises, cloud-based, and hybrid deployments. Organizations can choose the deployment model that best suits their needs.
  • Runtime Fabric: 
    • For containerized deployment, MuleSoft offers Runtime Fabric, which allows running Mule applications on Kubernetes or Docker containers for scalability and flexibility.
  • Anypoint Exchange: 
    • Anypoint Exchange is a repository where organizations can discover, reuse, and share connectors, templates, and other assets. It facilitates collaboration among developers and promotes best practices.

Still in progress

Reference:

  1. https://docs.mulesoft.com/mule-runtime/3.9/mule-application-architecture
  2. https://docs.mulesoft.com/cloudhub-2/

Friday, January 2, 2026

Agentic AI Framework - Autogen

Autogen Framework Overview

Autogen is an open-source framework from Microsoft with an asynchronous, event-driven architecture. The framework addresses issues such as observability, flexibility, control, and scalability in multi-agent systems.
  • Core Concepts and Components of Autogen: 
    • Autogen Core: 
      • Autogen Core serves as a generic, scalable runtime for multi-agent AI systems.
      • Manages messaging and interactions between agents.
      • Generic framework for building scalable multi-agent systems.
      • Agents can be distributed across different processes and machines.
      • An agent runtime for running agents together. 
      • Provides the essential architecture for 
        • Agents
        • Messaging 
        • Memory
        • Orchestration
      • It is the backbone of Microsoft's agentic AI framework.
    • Autogen Agent Chat:
      • Provides a lightweight abstraction for constructing agent-based workflows with LLMs and tool integrations.
      • Conversational single- and multi-agent applications. 
      • Similar in spirit to the OpenAI Agents SDK and CrewAI.
      • Agents can use tools and interact with each other.
      • Built on the Autogen Core platform.
      • Core Components 
        • Assistant Agent: 
          • Provides analysis, solutions, and code.
          • Represents an LLM-powered autonomous agent whose job is to reason, respond, and collaborate with other agents. 
          • Generates responses: uses an LLM to produce messages, solutions, or reasoning steps.
          • Maintains conversation state: tracks the dialogue history and context across turns.
          • Collaborates with other agents: asks the User Proxy Agent for clarification, requests actions from a Tool Agent, and coordinates with other Assistant Agents.
          • Executes reasoning loops: self-reflects, revises answers, chains multiple reasoning steps, and follows system rules defined in its configuration.
          • Enforces constraints: system prompts, tool access, termination conditions, memory behavior, etc.
        • User Proxy Agent:
          • The human’s representative agent that sends user instructions into the AutoGen system and receives responses back. It is the “bridge” between the human and the multi‑agent system
          • Injects user messages into the agent conversation
          • Approves or rejects actions (if configured)
          • Acts as the human in multi‑agent workflows
        • Critic Agent:
          • Suggests improvements. 
          • Reviews another agent's output and provides corrections, feedback, or refinements.
          • Ensures the AssistantAgent's answer is logically valid, accurate, and aligned with constraints.
        • Messenger Layer:
          • Handles back-and-forth communication between the agents. It is the communication channels between agents for AgentChat system 
          • A transport system that moves messages between agents in AgentChat.
          • Routes messages between UserProxyAgent, AssistantAgent, CriticAgent, ToolAgents, etc
        • Memory:
          • Stores conversation history and context 
          • stores, retrieves, and manages conversation memory so agents can remember past interactions and use them in future reasoning
          • It gives agents short‑term or long‑term memory.
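A small sketch of the memory piece, assuming the ListMemory implementation from autogen_core; the stored preference and agent names are illustrative:

from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

# Store a fact the agent should remember on later turns
memory = ListMemory()
await memory.add(MemoryContent(content="User prefers short answers.",
                               mime_type=MemoryMimeType.TEXT))

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    memory=[memory],   # injected into the model context on each turn
)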
    • Studio: 
      • Studio is a low-code/no-code visual app for building agent workflows.
      • For prototyping and managing AI agents.
      • A web-based UI for quickly prototyping, configuring, and managing agents without writing code.
      • Built on AutoGen AgentChat, the conversational framework for single and multi-agent systems.
    • Magentic One CLI: 
      • Magentic One is a command-line application for running agents; like Studio, it is positioned as a research tool rather than a production-ready solution.
      • A console-based assistant. 
      • Command-line tool.
      • It can run multi-agent systems from the command line/terminal.
      • Built on AutoGen AgentChat.
      • Provides a command-line utility. 
      • Runs Magentic One agents directly from the local terminal.
    • Open Source and Research Focus: Autogen is developed as a Microsoft Research community project, with contributions from a broad base and a focus on open-source research rather than commercialization.
    • Key focus areas: these notes primarily work with Autogen Core and AgentChat, avoiding the low-code/no-code tools.
  • Building Blocks: Models, Messages, and Agents: 
    • Model Abstraction: The model abstraction in Autogen wraps LLMs such as GPT-4o mini or other models like Llama.
    • Example
from autogen_ext.models.openai import OpenAIChatCompletionClient
model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
 
from autogen_ext.models.ollama import OllamaChatCompletionClient
ollamamodel_client = OllamaChatCompletionClient(model="llama3.2")
    • Message Objects: Messages are core objects representing communication between agents, users, or internal tool calls, and can be simple text or multimodal (including images).
from autogen_agentchat.messages import TextMessage
message = TextMessage(content="I'd like to go to London", source="user")
    • Agent Creation: Agents are instantiated with a model client, a name, and a system message, and can be configured to stream results. The Assistant Agent class is the primary agent type used.
from autogen_agentchat.agents import AssistantAgent
agent = AssistantAgent(
    name="airline_agent",
    model_client=model_client,
    system_message="act as a helpful assistant for an airline. give short, humorous answers.",
    model_client_stream=True
)
    • Agent Interaction via on_messages: The on_messages method is used to pass messages to agents asynchronously.
from autogen_core import CancellationToken
response = await agent.on_messages([message], cancellation_token=CancellationToken())
response.chat_message.content
  • Tool Integration and Database Access: 
    • Database Setup and Query Tool: A SQLite database was created and populated with city and ticket price data. A Python function was implemented to query ticket prices by city, serving as a tool for the agent.
    • Tool Integration Simplicity: Autogen allows direct passing of Python functions as tools without decorators or wrappers, simplifying the process and reducing boilerplate.
    • Agent Tool Usage Example: An agent was configured with the ticket price lookup tool and demonstrated querying the database and returning a humorous, context-aware response to a user message.
    • Reflect on Tool Use Attribute: The reflect_on_tool_use attribute ensures that agents can process tool results and continue the conversation, rather than stopping after a tool call.
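A minimal sketch of this pattern, assuming a local SQLite file tickets.db with a prices(city, price) table (both hypothetical); the plain Python function goes straight into the tools list:

import sqlite3
from autogen_agentchat.agents import AssistantAgent

def get_ticket_price(city: str) -> str:
    """Look up the ticket price for a city in the local database."""
    conn = sqlite3.connect("tickets.db")
    row = conn.execute("SELECT price FROM prices WHERE city = ?", (city,)).fetchone()
    conn.close()
    return f"A ticket to {city} costs ${row[0]}" if row else f"No price found for {city}"

# No decorator or wrapper needed: the function is passed directly as a tool
agent = AssistantAgent(
    name="airline_agent",
    model_client=model_client,       # the client created earlier
    tools=[get_ticket_price],
    system_message="act as a helpful assistant for an airline. give short, humorous answers.",
    reflect_on_tool_use=True,        # process the tool result and keep the conversation going
)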
  • Advanced Features: Multimodal Messages and Structured Outputs: 
    • Multimodal Message Handling: Autogen supports multimodal messages, allowing users to send images alongside text.
    • Structured Output with Pydantic: Structured outputs are easily achieved by specifying a Pydantic model as the expected output type.
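Two hedged sketches of these features, reusing the model client from earlier: a multimodal message carrying an image, and an agent constrained to a Pydantic schema via output_content_type (the file name and FlightPick model are illustrative):

from PIL import Image
from pydantic import BaseModel
from autogen_core import Image as AGImage
from autogen_agentchat.messages import MultiModalMessage
from autogen_agentchat.agents import AssistantAgent

# Multimodal message: text plus an image
img = AGImage(Image.open("boarding_pass.png"))
mm_message = MultiModalMessage(content=["What city is this ticket for?", img], source="user")

# Structured output: responses are parsed into this schema
class FlightPick(BaseModel):
    city: str
    price: float
    reasoning: str

structured_agent = AssistantAgent(
    name="structured_agent",
    model_client=model_client,
    system_message="Recommend one flight and explain why.",
    output_content_type=FlightPick,
)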
  • Langchain Tool Integration:
    • Langchain Tool Adapter Usage: The Langchain tool adapter allows any Langchain tool to be wrapped and used as an Autogen tool, facilitating seamless integration between the two ecosystems.
    • Agent Task Execution: An agent was tasked with finding flights, using the integrated tools to search online, write results to a file, and select the best option, demonstrating the practical utility of tool integration. 
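A sketch of the adapter in use, assuming langchain_community is installed; the WriteFileTool here stands in for whichever Langchain tool is being reused:

from autogen_ext.tools.langchain import LangChainToolAdapter
from langchain_community.tools.file_management import WriteFileTool
from autogen_agentchat.agents import AssistantAgent

# Wrap a Langchain tool so AgentChat can call it like a native Autogen tool
write_file = LangChainToolAdapter(WriteFileTool())

agent = AssistantAgent(
    name="flight_finder",
    model_client=model_client,       # the client created earlier
    tools=[write_file],
    system_message="Find flights and write the results to a file.",
)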
  • Introduction to MCP Tools in Autogen: 
    • Integration with Autogen: Autogen provides wrappers that enable users to easily incorporate any MCP-compliant tool, such as MCP Server Fetch, into their workflows, allowing for seamless tool usage without requiring additional glue code.
    • MCP Server Fetch Example: The session included a practical example where the MCP Server Fetch tool, which runs a headless Playwright browser to scrape web pages, was run locally and used within Autogen to review and summarize a website, with the assistant replying in Markdown.
    • Open Ecosystem and Community Tools: MCP's open standard allows anyone to write and share tools, creating a large, public, and open-source ecosystem accessible from within Autogen.
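A sketch of wiring in an MCP server, assuming mcp-server-fetch can be launched locally via uvx; mcp_server_tools discovers the server's tools over stdio:

from autogen_ext.tools.mcp import StdioServerParams, mcp_server_tools
from autogen_agentchat.agents import AssistantAgent

# Launch the MCP fetch server as a local subprocess
fetch_server = StdioServerParams(command="uvx", args=["mcp-server-fetch"])
tools = await mcp_server_tools(fetch_server)

agent = AssistantAgent(
    name="web_reviewer",
    model_client=model_client,       # the client created earlier
    tools=tools,
    system_message="Review web pages and summarize them in Markdown.",
    reflect_on_tool_use=True,
)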
  • Comparison of Microsoft Semantic Kernel and Autogen Core: 
    • Semantic Kernel Overview: Microsoft Semantic Kernel is a framework similar to Langchain, focusing on wrapping calls to large language models (LLMs), handling memory, tool calling, plugins, and prompt templating for business-oriented tasks.
    • Autogen Core Focus: Autogen Core is more agent-focused, designed for building autonomous agentic applications; it is distinct from Semantic Kernel, which orchestrates LLM calls for business logic.
    • Overlap and Use Cases: There is some overlap between Semantic Kernel and Autogen Core; both can call models and tools, but they target different kinds of applications.
  • Interactions and Multi-Agent Workflows: 
    • Agent Roles: Multiple agents (e.g., Primary and Evaluator) were created with distinct roles and prompts, collaborating to find and evaluate flight options in a round-robin group chat.
    • Termination Conditions: termination was set based on the evaluator agent replying with 'approve', which signals the end of the workflow. More robust conditions are advisable for production use.
    • Managing Conversation Flow: agents risk entering infinite loops or excessive back-and-forth; prompt tuning and kernel restarts help manage runaway conversations, as Autogen lacks built-in recursion limits (see the sketch below).
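A minimal sketch of this setup, with illustrative prompts; TextMentionTermination ends the chat once the evaluator says 'approve':

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination

primary = AssistantAgent(
    name="primary",
    model_client=model_client,
    system_message="Find flight options for the user's request.",
)
evaluator = AssistantAgent(
    name="evaluator",
    model_client=model_client,
    system_message="Critique the options. Reply with 'approve' only when satisfied.",
)

# Agents take turns; the chat stops when 'approve' appears in a message
team = RoundRobinGroupChat(
    [primary, evaluator],
    termination_condition=TextMentionTermination("approve"),
)
result = await team.run(task="Find a cheap flight from NYC to London")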
  • Introduction to Autogen Core and Its Architecture: 
    • Agent Interaction Framework: Autogen Core is a framework for managing interactions between agents, regardless of the underlying platform, programming language, or abstraction used to implement the agents.
    • Standalone vs Distributed Runtimes: Autogen Core supports two runtime types:
      • standalone (local, single-machine)
      • distributed (enabling remote, cross-process agent interactions).
    • Decoupling Logic and Messaging: The framework separates agent logic from message delivery, handling agent lifecycle and communication, while developers are responsible for the agent's internal logic.
    • Comparison between Autogen Core and LangGraph
      • Both manage agent interactions
      • LangGraph emphasizes robustness and replayability.
      • Autogen Core focuses on supporting diverse, distributed agent interactions.
  • Overview of Autogen Core Distributed Runtime: 
    • Distributed Runtime Architecture: The distributed runtime comprises a host service that manages connections and message delivery, and one or more worker runtimes that register and execute agents.
    • Session and Message Management: Direct messages are handled via GRPC sessions, with the framework managing the complexities of remote message delivery between processes, potentially in different languages.
    • Experimental Nature and Use Cases: the distributed runtime is experimental, an architectural preview rather than a production-ready system.
  • Autogen Core Distributed Runtime: 
    • It is not ready for production. 
    • It is still a conceptual model for handling multiple processes.
    • Handles messaging across process boundaries.
    • Unlike the standalone runtime, it is not single-threaded on one machine.
    • It has two core components: 
      • Host Service:
        • Connects to the Worker Runtimes. 
        • Handles message delivery. 
        • Creates a session for direct messaging. 
          • Sessions are handled through gRPC (the host manages the session).
          • Sending a message from one system or process to another is taken care of by the Autogen framework.
        • Works as a central orchestrator. 
        • Runs on one machine and knows all the agents in the system.
        • Keeps track of all registered agents, where they live, and how to route messages to any agent.
      • Worker Runtime:
        • Advertises its agents to the Host Service.
        • Handles executing agents' code. 
        • Plays the role of the runtime in the single-threaded case. 
        • Hosts the different agents that are registered with it.
        • It can live on the local machine, a separate machine, or in its own process.
        • Hosts one or more agents locally.
        • Connects back to the Host Service so it can send and receive messages. 
        • Worker Runtimes don't directly call other Worker Runtimes.
          • They communicate through the Host Service. 
          • The Host Service works as the central orchestrator. 
        • Executes the code. 
          • Ex 1: Search agent 
            • A Worker Runtime running in one container. 
          • Ex 2: Summarize agent 
            • Another Worker Runtime running in a different container. 
      • Agents:
        • The actual workers that do the tasks.
        • Each agent is registered with a Worker Runtime, which advertises it to the Host Service.
        • The Worker Runtime handles communication on the agent's behalf.

    • Explanation:
      • There are 3 agents: Agent A, Agent B, and Agent C.
        • Agent A - hosted in Worker Runtime 1
        • Agent B - hosted in Worker Runtime 1
        • Agent C - hosted in Worker Runtime 2
      • Now Agent A wants to send a message to Agent C. The message goes from Worker Runtime 1 to the Host Service.
      • The Host Service looks up where Agent C is and forwards the message to Worker Runtime 2 for Agent C.
    • Notes:
      • Even if multiple agents live inside the same Worker Runtime, all communication still goes through the Host Service.
      • Whether two agents are in the same Worker Runtime, in different Worker Runtimes, or on different machines, the communication path is always:
        • Agent A → Worker Runtime → Host Service → Worker Runtime → Agent B
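A hedged sketch of starting the experimental distributed runtime, based on the gRPC classes in autogen_ext; since this is an architectural preview, treat the exact class names and call signatures as subject to change:

from autogen_ext.runtimes.grpc import (
    GrpcWorkerAgentRuntimeHost,
    GrpcWorkerAgentRuntime,
)

# Host Service: the central orchestrator that routes messages between workers
host = GrpcWorkerAgentRuntimeHost(address="localhost:50051")
host.start()

# Worker Runtime: hosts agents locally and connects back to the Host Service
worker = GrpcWorkerAgentRuntime(host_address="localhost:50051")
await worker.start()
# ... register agents with the worker, then send messages through the runtime ...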
  • Autogen Documentation Confusion: confusion arises from the differences between the AG2 and Microsoft Autogen documentation for users.

 Still in progress 

Sunday, December 28, 2025

OpenAI Agents

OpenAI Agents SDK core features:


  • Introduction to coroutines and related background: 
    • Coroutines versus functions: when using 'async def' in Python, the resulting object is a coroutine, not a traditional function, and calling a coroutine returns a coroutine object rather than executing the code immediately.
    • Event loop mechanism: the event loop in the asyncio library schedules and executes coroutines, pausing them while they wait for I/O and resuming others, enabling concurrency without traditional multithreading.
    • Awaiting coroutines: to execute a coroutine, it must be awaited using the 'await' keyword, which schedules it for execution on the event loop and suspends the caller until the coroutine completes.
    • Concurrent execution with gather: 'asyncio.gather' allows multiple coroutines to be scheduled concurrently, with the event loop managing their execution and collecting their results as a list.
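A tiny illustration of the difference, using only the standard library; calling the coroutine produces an object, and awaiting it actually runs the code:

import asyncio

async def greet() -> str:
    return "hello"

async def main():
    obj = greet()        # a coroutine object; nothing has executed yet
    print(type(obj))     # <class 'coroutine'>
    print(await obj)     # awaiting schedules it on the event loop: prints 'hello'

asyncio.run(main())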

  • Introduction to the OpenAI Agents SDK: 
    • Framework characteristics: the OpenAI Agents SDK is lightweight, flexible, and not prescriptive, allowing users to choose their preferred working style while abstracting away repetitive tasks such as JSON handling.
    • Key terminology: the three core terms in the OpenAI Agents SDK are agents, handoffs, and guardrails.
      • Agents: wrappers around LLMs with specific roles.
      • Handoffs: interactions in which one agent passes control to another.
      • Guardrails: checks and controls that keep agents behaving within defined limits (a sketch follows below).
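A hedged sketch of an input guardrail, assuming the @input_guardrail decorator and GuardrailFunctionOutput from the SDK; the competitor-check logic and all names are illustrative:

from agents import Agent, GuardrailFunctionOutput, input_guardrail

@input_guardrail
async def block_competitors(ctx, agent, message) -> GuardrailFunctionOutput:
    # Trip the guardrail if the input mentions a competitor (illustrative check)
    flagged = "competitor" in str(message).lower()
    return GuardrailFunctionOutput(output_info={"flagged": flagged},
                                   tripwire_triggered=flagged)

guarded_agent = Agent(
    name="guarded_agent",
    instructions="Answer questions about our own products only.",
    input_guardrails=[block_competitors],
    model="gpt-4o-mini",
)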

  • Steps to run an agent in the OpenAI Agents SDK: 
    • Agent instance creation: create an instance of an agent, which represents a role in the solution.
    • Interaction logging with trace: using 'with trace' is recommended for logging all interactions with the agent, enabling monitoring through OpenAI's tools.
    • Running the agent coroutine: the agent is executed by calling 'Runner.run', which is a coroutine and must be awaited to actually run the agent.

asyncio: 

asyncio is Python’s built‑in asynchronous programming framework. It lets a program do multiple things at the same time without using threads or processes. asyncio offers:

  • While task A is waiting, Python can run task B
  • While task B is waiting, Python can run task C

With asyncio, Python runs many waiting tasks at the same time, making I/O-heavy programs dramatically faster and more efficient. Use asyncio when the program does a lot of waiting.

Don’t use it for CPU-heavy tasks (math, ML, image processing). Asynchronous functions are defined with the async keyword, and calls to them use the await keyword.

import asyncio

async def fetch_data():
    await asyncio.sleep(2)   # non-blocking wait, simulating slow I/O
    return "data"

async def main():
    # All three coroutines wait concurrently, so this takes ~2s, not ~6s
    results = await asyncio.gather(
        fetch_data(),
        fetch_data(),
        fetch_data()
    )
    print(results)

asyncio.run(main())
 
from agents import Agent, Runner, trace  # OpenAI Agents SDK

INSTRUCTIONS = "some instruction as prompt."

first_agent = Agent(
    name="First agent",
    instructions=INSTRUCTIONS,
    tools=[toolname],        # tools are passed as a list
    model="gpt-4o-mini",
)

message = "Welcome AI Agent frameworks in 2025"
with trace("Welcome"):
    result = await Runner.run(first_agent, message)

# Run several agents concurrently under a single trace
with trace("multiple agents"):
    results = await asyncio.gather(
        Runner.run(agent1, message),
        Runner.run(agent2, message),
        Runner.run(agent3, message),
    )

Key concepts in asyncio:

  • async def: defines an asynchronous function
  • await: pauses the function until the awaited task completes
  • event loop: the engine that schedules async tasks
  • asyncio.gather(): runs multiple tasks at the same time
  • asyncio.sleep(): non-blocking sleep (an example of async I/O)

Agent frameworks need to:

  • run multiple agents at the same time
  • handle many I/O operations (LLM calls, tools, APIs)
  • wait for responses without blocking the whole system
  • coordinate tasks, messages, and events

To do all of this, agent frameworks rely on asyncio. The frameworks below are built around async execution:

1. LangChain
   a. Tool calls are async
   b. LLM calls can be async
   c. Agents run async loops
   d. Many integrations require async def
2. LangGraph
   a. Entire architecture is async-first
   b. Nodes, edges, and tool calls run concurrently
   c. Event-driven execution uses asyncio tasks
3. CrewAI
   a. Uses asyncio for parallel task execution
   b. Agents can run concurrently
   c. Async tool calls supported
4. Microsoft Autogen (new version)
   a. Fully async
   b. Agents communicate via async message passing
5. MCP (Model Context Protocol)
   a. Server and client interactions use async
   b. Streaming responses require async
   c. Tool execution is often async

Learning during this journey

  • Agent workflow
  • Use of tools to call functions
  • Agent collaboration via Tools and Handoffs

Example: Multiple Agents

    Step 1: Define instructions, descriptions, and tools, with the aim of running multiple agents in parallel. 

instructions1 = "You are a agent1 working for ComplAI, define the action."
instructions2 = "You are a agent2 working for ComplAI, define the action."
instructions3 = "You are a agent3 working for ComplAI define the action."
 
description = "specific actions"
 
# Agent definition
agent1 = Agent(name="DeepSeek Sales Agent", instructions=instructions1, model=deepseek_model)
agent2 =  Agent(name="Gemini Sales Agent", instructions=instructions2, model=gemini_model)
agent3  = Agent(name="Llama3.3 Sales Agent",instructions=instructions3,model=llama3_3_model)
 
# tool definition
tool1 = agent1.as_tool(tool_name="agent1", tool_description=description)                    
tool2 = agent2.as_tool(tool_name="agent2", tool_description=description)
tool3 = agent3.as_tool(tool_name="agent3", tool_description=description)

    Step 2: Define a function tool using the @function_tool decorator. 

from typing import Dict
from agents import function_tool

@function_tool
def do_action(sub: str, des: str) -> Dict[str, str]:
    """Action definition"""
    return {"status": "success"}
 
Instructions to the agents: 

sub1_instructions = "write a detail for the action performed by the agent."
sub2_instructions = "write a detail for the action performed by the agent."

Define agents and convert them into tools: 

agent4 = Agent(name="new_agent1", instructions=sub1_instructions, model="gpt-4o-mini")
subject1_tool = agent4.as_tool(tool_name="agent4", tool_description="tool description as action")

agent5 = Agent(name="new_agent2", instructions=sub2_instructions, model="gpt-4o-mini")
subject2_tool = agent5.as_tool(tool_name="agent5", tool_description="tool description as action")

listOf_tools = [subject1_tool, subject2_tool, do_action]
Referencing a handoff agent: 

handoff_agent = Agent(
    name="new_agent3",
    instructions=instructions,
    tools=listOf_tools,
    model="gpt-4o-mini",
    handoff_description="handoff agent will take care of the work as instructed")

tools = [tool1, tool2, tool3]
handoffs = [handoff_agent]
Runs the entire multi‑agent system:
main_instructions = """
Role is ComplAI and goal is to do ….. using the agent_name tools.
Follow these steps carefully:
1. Generate Drafts: Use all agents agent_name tools ….
2. Handoff for Sending: Pass…. to the Handoff Manager' agent. The Handoff Manager will do….. """
 
agent_manager = Agent(
    name="agent_name",
    instructions=main_instructions,
    tools=tools,
    handoffs=handoffs,
    model="gpt-4o-mini")
 
message = "add some message"
with trace("Automated SDR"):
    result = await Runner.run(agent_manager, message)