Agentic AI Category - MarkTechPost

Implementing an AgentQL Model Context Protocol (MCP) Server

AgentQL allows you to scrape any website with unstructured data by defining the exact shape of the information you want. It gives you consistent, structured results—even from pages with dynamic content or frequently changing layouts.

In this tutorial, we’ll implement an AgentQL MCP server inside Claude Desktop, and use Claude’s built-in visualization capabilities to explore the data. Specifically, we’ll scrape an Amazon search results page for AI books, extracting details like price, rating, and number of reviews.

Step 1: Setting up dependencies

Node.js

We need npx, which ships with Node.js, to run the AgentQL server.

  • Download the latest version of Node.js from nodejs.org.
  • Run the installer.
  • Leave all settings as default and complete the installation.
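
To confirm the installation, you can verify both tools from a terminal; each command should print a version number:

node -v
npx -v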

Claude Desktop

Download Claude Desktop from https://claude.ai/download and run the installer.

AgentQL API

Create your AgentQL API key at dev.agentql.com/api-keys and store it securely — you’ll need it later in this tutorial.

Step 2: Installing the packages

Once Node.js is installed, open your terminal and run the following command:

npm install -g agentql-mcp

Step 3: Configuring the MCP Server

Next, configure Claude to connect to your MCP server. Open the claude_desktop_config.json file in Claude Desktop's configuration directory (on macOS, ~/Library/Application Support/Claude/; on Windows, %APPDATA%\Claude\) using any text editor. If the file doesn't exist, you can create it manually. Once opened, enter the following code:

{
  "mcpServers": {
    "agentql": {
      "command": "npx",
      "args": ["-y", "agentql-mcp"],
      "env": {
        "AGENTQL_API_KEY": "<YOUR_API_KEY>"
      }
    }
  }
}

Replace <YOUR_API_KEY> with the key you generated, then restart Claude Desktop so it picks up the new server configuration.

Step 4: Running the server

Once the MCP configuration is complete, your server should appear in Claude. The AgentQL server includes a single powerful tool — extract_web_data — which takes a URL and a natural language description of the data structure you want to extract.
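
For example, once the server is connected, you can drive the tool with a plain-English request in Claude. The prompt below is illustrative; the URL and field names are placeholders, not a format AgentQL prescribes:

Use extract_web_data on https://www.amazon.com/s?k=ai+books and return a list of books, each with title, price, rating, and number of reviews.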

You can use any URL you want to scrape. For this tutorial, I used an Amazon search results page for AI books and asked Claude to visualize the extracted data. Claude provides an interactive terminal where it generates code to process and visualize the data, and you can edit that code as needed. Once the code is finalized, Claude presents a bar chart with interactive options to explore prices, ratings, and review counts, a price vs. rating scatter plot, and key summary statistics.

AgentQL can be used to scrape websites, and we can connect it with other servers like Notion or GitHub to automatically send structured data for documentation, tracking, or further automation.

This makes AgentQL a powerful tool for turning unstructured web content into actionable insights — all within a simple, natural language workflow.


Google Releases 76-Page Whitepaper on AI Agents: A Deep Technical Dive into Agentic RAG, Evaluation Frameworks, and Real-World Architectures

Google has published the second installment in its Agents Companion series—an in-depth 76-page whitepaper aimed at professionals developing advanced AI agent systems. Building on foundational concepts from the first release, this new edition focuses on operationalizing agents at scale, with specific emphasis on agent evaluation, multi-agent collaboration, and the evolution of Retrieval-Augmented Generation (RAG) into more adaptive, intelligent pipelines.

Agentic RAG: From Static Retrieval to Iterative Reasoning

At the center of this release is the evolution of RAG architectures. Traditional RAG pipelines typically involve static queries to vector stores followed by synthesis via large language models. However, this linear approach often fails in multi-perspective or multi-hop information retrieval.

Agentic RAG reframes the process by introducing autonomous retrieval agents that reason iteratively and adjust their behavior based on intermediate results. These agents improve retrieval precision and adaptability through:

  • Context-Aware Query Expansion: Agents reformulate search queries dynamically based on evolving task context.
  • Multi-Step Decomposition: Complex queries are broken into logical subtasks, each addressed in sequence.
  • Adaptive Source Selection: Instead of querying a fixed vector store, agents select optimal sources contextually.
  • Fact Verification: Dedicated evaluator agents validate retrieved content for consistency and grounding before synthesis.

The net result is a more intelligent RAG pipeline, capable of responding to nuanced information needs in high-stakes domains such as healthcare, legal compliance, and financial intelligence.
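
The control flow behind such a pipeline can be sketched in a few lines of Python. This is a minimal illustration of the iterative retrieve-verify-refine loop described above; the helper functions are toy stand-ins for real vector stores, evaluator agents, and LLM synthesis:

# Toy stand-ins for real components (vector store, evaluator agent, LLM).
def retrieve(query):            return [f"doc about {query}"]
def is_grounded(doc, question): return True          # fact-verification agent
def covers(question, evidence): return len(evidence) >= 2
def reformulate(q, evidence):   return q + " (refined)"  # query expansion
def synthesize(q, evidence):    return f"Answer to {q!r} from {len(evidence)} docs"

def agentic_rag(question, max_steps=3):
    """Iteratively retrieve, verify, and refine until the evidence suffices."""
    query, evidence = question, []
    for _ in range(max_steps):
        verified = [d for d in retrieve(query) if is_grounded(d, question)]
        evidence.extend(verified)
        if covers(question, evidence):  # enough verified context gathered
            break
        query = reformulate(question, evidence)  # context-aware query expansion
    return synthesize(question, evidence)

print(agentic_rag("What drives agentic RAG adoption?"))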

Rigorous Evaluation of Agent Behavior

Evaluating the performance of AI agents requires a distinct methodology from that used for static LLM outputs. Google’s framework separates agent evaluation into three primary dimensions:

  1. Capability Assessment: Benchmarking the agent’s ability to follow instructions, plan, reason, and use tools. Tools like AgentBench, PlanBench, and BFCL are highlighted for this purpose.
  2. Trajectory and Tool Use Analysis: Instead of focusing solely on outcomes, developers are encouraged to trace the agent’s action sequence (trajectory) and compare it to expected behavior using precision, recall, and match-based metrics.
  3. Final Response Evaluation: Evaluation of the agent’s output through autoraters—LLMs acting as evaluators—and human-in-the-loop methods. This ensures that assessments include both objective metrics and human-judged qualities like helpfulness and tone.

This process enables observability across both the reasoning and execution layers of agents, which is critical for production deployments.
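
Trajectory comparison reduces to familiar set metrics once the expected and observed tool-call sequences are written down. A small sketch (the tool names are invented for illustration):

def trajectory_metrics(expected, actual):
    """Precision/recall over tool calls, plus an exact in-order match check."""
    exp, act = set(expected), set(actual)
    precision = len(exp & act) / len(act) if act else 0.0  # taken actions that were expected
    recall = len(exp & act) / len(exp) if exp else 0.0     # expected actions actually taken
    return {"precision": precision, "recall": recall, "exact_match": expected == actual}

expected = ["search_flights", "check_visa_rules", "book_flight"]
actual = ["search_flights", "search_hotels", "book_flight"]
print(trajectory_metrics(expected, actual))  # precision and recall are both 2/3 here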

Scaling to Multi-Agent Architectures

As real-world systems grow in complexity, Google’s whitepaper emphasizes a shift toward multi-agent architectures, where specialized agents collaborate, communicate, and self-correct.

Key benefits include:

  • Modular Reasoning: Tasks are decomposed across planner, retriever, executor, and validator agents.
  • Fault Tolerance: Redundant checks and peer hand-offs increase system reliability.
  • Improved Scalability: Specialized agents can be independently scaled or replaced.

Evaluation strategies adapt accordingly. Developers must track not only final task success but also coordination quality, adherence to delegated plans, and agent utilization efficiency. Trajectory analysis remains the primary lens, extended across multiple agents for system-level evaluation.

Real-World Applications: From Enterprise Automation to Automotive AI

The second half of the whitepaper focuses on real-world implementation patterns:

AgentSpace and NotebookLM Enterprise

Google’s AgentSpace is introduced as an enterprise-grade orchestration and governance platform for agent systems. It supports agent creation, deployment, and monitoring, incorporating Google Cloud’s security and IAM primitives. NotebookLM Enterprise, a research assistant framework, enables contextual summarization, multimodal interaction, and audio-based information synthesis.

Automotive AI Case Study

A highlight of the paper is a fully implemented multi-agent system within a connected vehicle context. Here, agents are designed for specialized tasks—navigation, messaging, media control, and user support—organized using design patterns such as:

  • Hierarchical Orchestration: Central agent routes tasks to domain experts.
  • Diamond Pattern: Responses are refined post-hoc by moderation agents.
  • Peer-to-Peer Handoff: Agents detect misclassification and reroute queries autonomously.
  • Collaborative Synthesis: Responses are merged across agents via a Response Mixer.
  • Adaptive Looping: Agents iteratively refine results until satisfactory outputs are achieved.

This modular design allows automotive systems to balance low-latency, on-device tasks (e.g., climate control) with more resource-intensive, cloud-based reasoning (e.g., restaurant recommendations).
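
As a concrete illustration of the hierarchical orchestration pattern, the central agent can be as simple as an intent classifier in front of a table of domain agents. The intents and handlers below are invented for illustration; a production system would use an LLM-based router:

# Domain agents, each handling one specialty, mirroring the automotive example.
def navigation_agent(q): return f"Planning a route for: {q}"
def media_agent(q):      return f"Adjusting playback for: {q}"
def support_agent(q):    return f"Escalating to user support: {q}"

AGENTS = {"navigation": navigation_agent, "media": media_agent}

def classify_intent(query):
    # Toy keyword router standing in for an LLM or trained intent classifier.
    if any(w in query.lower() for w in ("play", "music")):
        return "media"
    if any(w in query.lower() for w in ("route", "drive", "navigate")):
        return "navigation"
    return "support"

def orchestrate(query):
    # The central agent routes each task to a domain expert, falling back to support.
    return AGENTS.get(classify_intent(query), support_agent)(query)

print(orchestrate("Play some jazz"))
print(orchestrate("Find a route to the airport"))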


NVIDIA Open Sources Parakeet TDT 0.6B: Achieving a New Standard for Automatic Speech Recognition (ASR) and Transcribing an Hour of Audio in One Second

NVIDIA has unveiled Parakeet TDT 0.6B, a state-of-the-art automatic speech recognition (ASR) model that is now fully open-sourced on Hugging Face. With 600 million parameters, a commercially permissive CC-BY-4.0 license, and a staggering inverse real-time factor (RTFx) of 3386, this model sets a new benchmark for performance and accessibility in speech AI.

Blazing Speed and Accuracy

At the heart of Parakeet TDT 0.6B’s appeal is its unmatched speed and transcription quality. The model can transcribe 60 minutes of audio in just one second, a performance that’s over 50x faster than many existing open ASR models. On Hugging Face’s Open ASR Leaderboard, Parakeet V2 achieves a 6.05% word error rate (WER)—the best-in-class among open models.

This performance represents a significant leap forward for enterprise-grade speech applications, including real-time transcription, voice-based analytics, call center intelligence, and audio content indexing.

Technical Overview

Parakeet TDT 0.6B builds on a transformer-based architecture fine-tuned with high-quality transcription data and optimized for inference on NVIDIA hardware. Here are the key highlights:

  • 600M parameter encoder-decoder model
  • Quantized and fused kernels for maximum inference efficiency
  • Optimized for the TDT (Token-and-Duration Transducer) architecture
  • Supports accurate timestamp formatting, numerical formatting, and punctuation restoration
  • Pioneers song-to-lyrics transcription, a rare capability in ASR models

The model’s high-speed inference is powered by NVIDIA’s TensorRT and FP8 quantization, enabling it to reach an inverse real-time factor of RTFx = 3386, meaning it processes audio 3,386 times faster than real time.

Benchmark Leadership

On the Hugging Face Open ASR Leaderboard—a standardized benchmark for evaluating speech models across public datasets—Parakeet TDT 0.6B leads with the lowest WER recorded among open-source models. This positions it well above comparable models like Whisper from OpenAI and other community-driven efforts.

(Benchmark data as of May 5, 2025.)

This performance makes Parakeet V2 not only a leader in quality but also in deployment readiness for latency-sensitive applications.

Beyond Conventional Transcription

Parakeet is not just about speed and word error rate. NVIDIA has embedded unique capabilities into the model:

  • Song-to-lyrics transcription: Unlocks transcription for sung content, expanding use cases into music indexing and media platforms.
  • Numerical and timestamp formatting: Improves readability and usability in structured contexts like meeting notes, legal transcripts, and health records.
  • Punctuation restoration: Enhances natural readability for downstream NLP applications.

These features elevate the quality of transcripts and reduce the burden on post-processing or human editing, especially in enterprise-grade deployments.

Strategic Implications

The release of Parakeet TDT 0.6B represents another step in NVIDIA’s strategic investment in AI infrastructure and open ecosystem leadership. With strong momentum in foundational models (e.g., Nemotron for language and BioNeMo for protein design), NVIDIA is positioning itself as a full-stack AI company—from GPUs to state-of-the-art models.

For the AI developer community, this open release could become the new foundation for building speech interfaces in everything from smart devices and virtual assistants to multimodal AI agents.

Getting Started

Parakeet TDT 0.6B is available now on Hugging Face, complete with model weights, tokenizer, and inference scripts. It runs optimally on NVIDIA GPUs with TensorRT, but support is also available for CPU environments with reduced throughput.
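
As a quick start, the model can be loaded through NVIDIA's NeMo toolkit. The snippet below is a minimal sketch; it assumes the Hugging Face model id nvidia/parakeet-tdt-0.6b-v2 and a local 16 kHz mono WAV file:

# pip install -U "nemo_toolkit[asr]"
import nemo.collections.asr as nemo_asr

# Load the released checkpoint from Hugging Face (model id assumed).
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="nvidia/parakeet-tdt-0.6b-v2"
)

# Transcribe one or more audio files; returns one transcription per file.
output = asr_model.transcribe(["sample_audio.wav"])
print(output[0])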

Whether you’re building transcription services, annotating massive audio datasets, or integrating voice into your product, Parakeet TDT 0.6B offers a compelling open-source alternative to commercial APIs.


OpenAI Releases a Strategic Guide for Enterprise AI Adoption: Practical Lessons from the Field

OpenAI has published a comprehensive 24-page document titled AI in the Enterprise, offering a pragmatic framework for organizations navigating the complexities of large-scale AI deployment. Rather than focusing on abstract theories, the report presents seven implementation strategies based on field-tested insights from collaborations with leading companies including Morgan Stanley, Klarna, Lowe’s, and Mercado Libre.

The document reads less like promotional material and more like an operational guidebook—emphasizing systematic evaluation, infrastructure readiness, and domain-specific integration.

1. Establish a Rigorous Evaluation Process

The first recommendation is to initiate AI adoption through well-defined evaluations (“evals”) that benchmark model performance against targeted use cases. Morgan Stanley applied this approach by assessing language translation, summarization, and knowledge retrieval in financial advisory contexts. The outcome was measurable: improved document access, reduced search latency, and broader AI adoption among advisors.

Evals not only validate models for deployment but also help refine workflows with empirical feedback loops, enhancing both safety and model alignment.

2. Integrate AI at the Product Layer

Rather than treating AI as an auxiliary function, the report stresses embedding it directly into user-facing experiences. For instance, Indeed utilized GPT-4o mini to personalize job matching, supplementing recommendations with contextual “why” statements. This increased user engagement and hiring success rates while maintaining cost-efficiency through fine-tuned, token-optimized models.

The key takeaway: model performance alone is insufficient—impact scales when AI is embedded into product logic and tailored to domain-specific needs.

3. Invest Early to Capture Compounding Returns

Klarna’s early investment in AI yielded substantial gains in operational efficiency. A GPT-powered assistant now handles two-thirds of support chats, reducing resolution times from 11 minutes to 2. The company also reports that 90% of employees are using AI in their workflows, a level of adoption that enables rapid iteration and organizational learning.

This illustrates how early engagement not only improves tooling but accelerates institutional adaptation and compound value capture.

4. Leverage Fine-Tuning for Contextual Precision

Generic models can deliver strong baselines, but domain adaptation often requires customization. Lowe’s achieved notable improvements in product search relevance by fine-tuning GPT models on their internal product data. The result: a 20% increase in tagging accuracy and a 60% improvement in error detection.

OpenAI highlights this approach as a low-latency pathway to achieve brand consistency, domain fluency, and efficiency across content generation and search tasks.

5. Empower Internal Experts, Not Just Technologists

BBVA exemplifies a decentralized AI adoption model by enabling non-technical employees to build custom GPT-based tools. In just five months, over 2,900 internal GPTs were created, addressing legal, compliance, and customer service needs without requiring engineering support.

This bottom-up strategy empowers subject-matter experts to iterate directly on their workflows, yielding more relevant solutions and reducing development cycles.

6. Streamline Developer Workflows with Dedicated Platforms

Engineering bandwidth remains a bottleneck in many organizations. Mercado Libre addressed this by building Verdi, a platform powered by GPT-4o mini, enabling 17,000 developers to prototype and deploy AI applications using natural language interfaces. The system integrates guardrails, APIs, and reusable components—allowing faster, standardized development.

The platform now supports high-value functions such as fraud detection, multilingual translation, and automated content tagging, demonstrating how internal infrastructure can accelerate AI velocity.

7. Automate Deliberately and Systematically

OpenAI emphasizes setting clear automation targets. Internally, they developed an automation platform that integrates with tools like Gmail to draft support responses and trigger actions. This system now handles hundreds of thousands of tasks monthly, reducing manual workload and enhancing responsiveness.

Their broader vision includes Operator, a browser agent capable of autonomously interacting with web-based interfaces to complete multi-step processes—signaling a move toward agent-based, API-free automation.

Final Observations

The report concludes with a central theme: effective AI adoption requires iterative deployment, cross-functional alignment, and a willingness to refine strategies through experimentation. While the examples are enterprise-scale, the core principles—starting with evals, integrating deeply, and customizing with context—are broadly applicable.

Security and data governance are also addressed explicitly. OpenAI reiterates that enterprise data is not used for training, offers SOC 2 and CSA STAR compliance, and provides granular access control for regulated environments.

In an increasingly AI-driven landscape, OpenAI’s guide serves as both a mirror and a map—reflecting current best practices and helping enterprises chart a more structured, sustainable path forward.


8 Comprehensive Open-Source and Hosted Solutions to Seamlessly Convert Any API into AI-Ready MCP Servers

The Model Context Protocol (MCP) is an emerging open standard that allows AI agents to interact with external services through a uniform interface. Instead of writing custom integrations for each API, an MCP server exposes a set of tools that a client AI can discover and invoke dynamically. This decoupling means API providers can evolve their back ends or add new operations without breaking existing AI clients. At the same time, AI developers gain a consistent protocol to call, inspect, and combine external capabilities. Below are eight solutions for converting existing APIs into MCP servers. This article explains each solution’s purpose, technical approach, implementation steps or requirements, unique features, deployment strategies, and suitability for different development workflows.

FastAPI-MCP: Native FastAPI Extension

FastAPI-MCP is an open-source library that integrates directly with Python’s FastAPI framework. All existing REST routes become MCP tools by instantiating a single class and mounting it on your FastAPI app. Input and output schemas defined via Pydantic models carry over automatically, and the tool descriptions derive from your route documentation. Authentication and dependency injection behave exactly as in normal FastAPI endpoints, ensuring that any security or validation logic you already have remains effective.

Under the hood, FastAPI-MCP hooks into the ASGI application and routes MCP protocol calls to the appropriate FastAPI handlers in-process. This avoids extra HTTP overhead and keeps performance high. Developers install it via pip and add a minimal snippet such as:

from fastapi import FastAPI
from fastapi_mcp import FastApiMCP

app = FastAPI()
mcp = FastApiMCP(app)   # wrap the existing app; REST routes become MCP tools
mcp.mount(path="/mcp")  # serve the MCP endpoint alongside the regular API

The resulting MCP server can run on the same Uvicorn process or separately. Because it is fully open-source under the MIT license, teams can audit, extend, or customize it as needed.

RapidMCP: Zero-Code REST-to-MCP Conversion Service

RapidMCP provides a hosted, no-code pathway to transform existing REST APIs, particularly those with OpenAPI specifications, into MCP servers without changing backend code. After registering an account, a developer points RapidMCP at their API’s base URL or uploads an OpenAPI document. RapidMCP then spins up an MCP server in the cloud that proxies tool calls back to the original API.

Each route becomes an MCP tool whose arguments and return types reflect the API’s parameters and responses. Because RapidMCP sits in front of your service, it can supply usage analytics, live tracing of AI calls, and built-in rate limiting. The platform also plans self-hosting options for enterprises that require on-premises deployments. Teams who prefer a managed experience can go from API to AI-agent compatibility in under an hour, at the expense of trusting a third-party proxy.

MCPify: No-Code MCP Server Builder with AI Assistant

MCPify is a fully managed, no-code environment where users describe desired functionality in natural language, such as “fetch current weather for a given city”, and an AI assistant generates and hosts the corresponding MCP tools. The service hides all code generation, infrastructure provisioning, and deployment details. Users interact via a chat or form interface, review automatically generated tool descriptions, and deploy with a click.

Because MCPify leverages large language models to assemble integrations on the fly, it excels at rapid prototyping and empowers non-developers to craft AI-accessible services. It supports common third-party APIs, offers one-click sharing of created servers with other platform users, and automatically handles protocol details such as streaming responses and authentication. The trade-off is less direct control over the code and reliance on a closed-source hosted platform.

Speakeasy: OpenAPI-Driven SDK and MCP Server Generator

Speakeasy is known for generating strongly typed client SDKs from OpenAPI specifications, and it extends this capability to MCP by producing a fully functional TypeScript MCP server alongside each SDK. After supplying an OpenAPI 3.x spec to Speakeasy’s code generator, teams receive:

  • A typed client library for calling the API
  • Documentation derived directly from the spec
  • A standalone MCP server implementation in TypeScript

The generated server wraps each API endpoint as an MCP tool, preserving descriptions and models. Developers can run the server via a provided CLI or compile it to a standalone binary. Because the output is actual code, teams have full visibility and can customize behavior, add composite tools, enforce scopes or permissions, and integrate custom middleware. This approach is ideal for organizations with mature OpenAPI workflows that want to offer AI-ready access in a controlled, maintainable way.

Higress MCP Marketplace: Open-Source API Gateway at Scale

Higress is an open-source API gateway built atop Envoy and Istio, extended to support the MCP protocol. Its conversion tool takes an OpenAPI spec and generates a declarative YAML configuration that the gateway uses to host an MCP server. Each API operation becomes a tool with templates for HTTP requests and response formatting, all defined in configuration rather than code. Higress powers a public “MCP Marketplace” where multiple APIs are published as MCP servers, enabling AI clients to discover and consume them centrally. Enterprises can self-host the same infrastructure to expose hundreds of internal services via MCP. The gateway handles protocol version upgrades, rate limiting, authentication, and observability. It is particularly well suited for large-scale or multi-API environments, turning API-MCP conversions into a configuration-driven process that integrates seamlessly with infrastructure-as-code pipelines.

Django-MCP: Plugin for Django REST Framework

Django-MCP is an open-source plugin that brings MCP support to the Django REST Framework (DRF). By applying a mixin to your view sets or registering an MCP router, it automatically exposes DRF endpoints as MCP tools. It introspects serializers to derive input schemas and uses your existing authentication backends to secure tool invocations. Underneath, MCP calls are translated into normal DRF viewset actions, preserving pagination, filtering, and validation logic.

Installation requires adding the package to your requirements, including the Django-MCP application, and configuring a route:

from django.urls import include, path
from django_mcp.router import MCPRouter

router = MCPRouter()
router.register_viewset('mcp', MyModelViewSet)  # MyModelViewSet: your existing DRF viewset

urlpatterns = [
    path('api/', include(router.urls)),
]

This approach allows teams already invested in Django to add AI-agent compatibility without duplicating code. It also supports custom tool annotations via decorators for fine-tuned naming or documentation.

GraphQL-MCP: Converting GraphQL Endpoints to MCP

GraphQL-MCP is a community-driven library that wraps a GraphQL server and exposes its queries and mutations as individual MCP tools. It parses the GraphQL schema to generate tool manifests, mapping each operation to a tool name and input type. When an AI agent invokes a tool, GraphQL-MCP constructs and executes the corresponding GraphQL query or mutation, then returns the results in a standardized JSON format expected by MCP clients. This solution is valuable for organizations using GraphQL who want to leverage AI agents without settling on a REST convention or writing bespoke GraphQL calls. It supports features like batching, authentication via existing GraphQL context mechanisms, and schema stitching to combine GraphQL services under one MCP server.

gRPC-MCP: Bridging gRPC Services for AI Agents

gRPC-MCP focuses on exposing high-performance gRPC services to AI agents through MCP. It uses protocol buffers’ service definitions to generate an MCP server that accepts JSON-RPC-style calls, internally marshals them to gRPC requests, and streams responses. Developers include a small adapter in their gRPC server code:

import "google.golang.org/grpc"
import "grpc-mcp-adapter"

func main() {
  srv := grpc.NewServer()
  myService.RegisterMyServiceServer(srv, &MyServiceImpl{})
  mcpAdapter := mcp.NewAdapter(srv)
  http.Handle("/mcp", mcpAdapter.Handler())
  log.Fatal(http.ListenAndServe(":8080", nil))
}

This makes it easy to bring low-latency, strongly typed services into the MCP ecosystem, opening the door for AI agents to call business-critical gRPC methods directly.

Choosing the Right Tool

Selecting among these eight solutions depends on several factors:

  • Preferred development workflow: FastAPI-MCP and Django-MCP for code-first integration, Speakeasy for spec-driven code generation, GraphQL-MCP or gRPC-MCP for non-REST paradigms.
  • Control versus convenience: Libraries like FastAPI-MCP, Django-MCP, and Speakeasy give full code control, while hosted platforms like RapidMCP and MCPify trade off some control for speed and ease.
  • Scale and governance: Higress shines when converting and managing large numbers of APIs in a unified gateway, with built-in routing, security, and protocol upgrades.
  • Rapid prototyping: MCPify’s AI assistant allows non-developers to spin up MCP servers instantly, which is ideal for experimentation and internal automation.

All these tools adhere to the evolving MCP specification, ensuring interoperability among AI agents and services. By choosing the right converter, API providers can accelerate the adoption of AI-driven workflows and empower agents to orchestrate real-world capabilities safely and efficiently.

How the Model Context Protocol (MCP) Standardizes, Simplifies, and Future-Proofs AI Agent Tool Calling Across Models for Scalable, Secure, Interoperable Workflows

Traditional Approaches to AI–Tool Integration

Before MCP, LLMs relied on ad-hoc, model-specific integrations to access external tools. Approaches like ReAct interleave chain-of-thought reasoning with explicit function calls, while Toolformer trains the model to learn when and how to invoke APIs. Libraries such as LangChain and LlamaIndex provide agent frameworks that wrap LLM prompts around custom Python or REST connectors, and systems like Auto-GPT decompose goals into sub-tasks by repeatedly calling bespoke services. Because each new data source or API requires its own wrapper, and the agent must be trained to use it, these methods produce fragmented, difficult-to-maintain codebases. In short, prior paradigms enable tool calling but impose isolated, non-standard workflows, motivating the search for a unified solution.

Model Context Protocol (MCP): An Overview  

The Model Context Protocol (MCP) was introduced to standardize how AI agents discover and invoke external tools and data sources. MCP is an open protocol that defines a common JSON-RPC-based API layer between LLM hosts and servers. In effect, MCP acts like a “USB-C port for AI applications”, a universal interface that any model can use to access tools. MCP enables secure, two-way connections between an organization’s data sources and AI-powered tools, replacing the piecemeal connectors of the past. Crucially, MCP decouples the model from the tools. Instead of writing model-specific prompts or hard-coding function calls, an agent simply connects to one or more MCP servers, each of which exposes data or capabilities in a standardized way. The agent (or host) retrieves a list of available tools, including their names, descriptions, and input/output schemas, from the server. The model can then invoke any tool by name. This standardization and reuse are a core advantage over prior approaches.

MCP’s open specification defines three core roles:

  • Host – The LLM application or user interface (e.g., a chat UI, IDE, or agent orchestration engine) that the user interacts with. The host embeds the LLM and acts as an MCP client.
  • Client – The software module within the host that implements the MCP protocol (typically via SDKs). The client handles messaging, authentication, and marshalling model prompts and responses.
  • Server – A service (local or remote) that provides context and tools. Each MCP server may wrap a database, API, codebase, or other system, and it advertises its capabilities to the client.

MCP was explicitly inspired by the Language Server Protocol (LSP) used in IDEs: just as LSP standardizes how editors query language features, MCP standardizes how LLMs query contextual tools. By using a common JSON-RPC 2.0 message format, any client and server that adhere to MCP can interoperate, regardless of the programming language or LLM used.

Technical Design and Architecture of MCP  

MCP relies on JSON-RPC 2.0 to carry three types of messages (requests, responses, and notifications), allowing agents to both make synchronous tool calls and receive asynchronous updates. In local deployments, the client often spawns a subprocess and communicates over stdin/stdout (the stdio transport). In contrast, remote servers typically use HTTP with Server-Sent Events (SSE) to stream messages in real time. This flexible messaging layer ensures that tools can be invoked and results delivered without blocking the host application’s main workflow.

Under the MCP specification, every server exposes three standardized entities: resources, tools, and prompts. Resources are fetchable pieces of context, such as text files, database tables, or cached documents, that the client can retrieve by ID. Tools are named functions with well-defined input and output schemas, whether that’s a search API, a calculator, or a custom data-processing routine. Prompts are optional, higher-level templates or workflows that guide the model through multi-step interactions. By providing JSON schemas for each entity, MCP enables any capable large language model (LLM) to interpret and invoke these capabilities without requiring bespoke parsing or hard-coded integrations. 
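
A tool advertisement, for instance, is just a JSON Schema that the client can hand to any capable model. Expressed as a Python dict, a single entry returned by a server's tools/list call might look like the following (the tool itself is invented for illustration):

# Illustrative shape of one tool entry from an MCP server's tools/list response.
weather_tool = {
    "name": "get_weather",
    "description": "Fetch the current weather for a city.",
    "inputSchema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
        },
        "required": ["city"],
    },
}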

The MCP architecture cleanly separates concerns across three roles. The host embeds the LLM and orchestrates conversation flow, passing user queries into the model and handling its outputs. The client implements the MCP protocol itself, managing all message marshalling, authentication, and transport details. The server advertises available resources and tools, executes incoming requests (for example, listing tools or performing a query), and returns structured results. This modular design, encompassing AI and UI in the host, protocol logic in the client, and execution in the server, ensures that systems remain maintainable, extensible, and easy to evolve.

Interaction Model and Agent Workflows  

Using MCP in an agent follows a simple pattern of discovery and execution. When the agent connects to an MCP server, it first calls the ‘list_tools()’ method to retrieve all available tools and resources. The client then integrates these descriptions into the LLM’s context (e.g., by formatting them into the prompt). The model now knows that these tools exist and what parameters they take. When the agent decides to use a tool (often prompted by a user’s query), the LLM emits a structured call (e.g., a JSON object with ‘”call”: “tool_name”, “args”: {…}’). The host recognizes this as a tool invocation, and the client issues a corresponding ‘call_tool()’ request to the server. The server executes the tool and sends back the result. The client then feeds this result into the model’s next prompt, making it appear as additional context.

This workflow replaces brittle ad-hoc parsing. The Agents SDK will call ‘list_tools()’ on MCP servers each time the agent is run, making the LLM aware of the server’s tools. When the LLM calls a tool, the SDK calls the ‘call_tool()’ function on the server behind the scenes. This protocol transparently handles the loop of discover→prompt→tool→respond. Furthermore, MCP supports composable workflows. Servers can define multi-step prompt templates, where the output of one tool serves as the input for another, enabling the agent to execute complex sequences. Future versions of MCP and related SDKs are already adding features such as long-running sessions, stateful interactions, and scheduled tasks.
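
With the official MCP Python SDK, the discover→prompt→tool→respond loop looks roughly like the sketch below. The server command and tool arguments are placeholders, and error handling is omitted for brevity:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Spawn a local MCP server as a subprocess over stdio (command is a placeholder).
    params = StdioServerParameters(command="npx", args=["-y", "some-mcp-server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discovery step
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(   # invocation step
                "get_weather", arguments={"city": "Paris"}
            )
            print(result.content)  # fed back into the model's next prompt

asyncio.run(main())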

Implementations and Ecosystem  

MCP is implementation-agnostic. The official specification is maintained on GitHub, and multiple language SDKs are available, including TypeScript, Python, Java, Kotlin, and C#. Developers can write MCP clients or servers in their preferred stack. For example, the OpenAI Agents SDK includes classes that enable easy connection to standard MCP servers from Python. InfraCloud’s tutorial demonstrates setting up a Node.js-based file-system MCP server to allow an LLM to browse local files.

A growing number of MCP servers have been published as open source. Anthropic has released connectors for many popular services, including Google Drive, Slack, GitHub, Postgres, MongoDB, and web browsing with Puppeteer, among others. Once one team builds a server for Jira or Salesforce, any compliant agent can use it without rework. On the client/host side, many agent platforms have integrated MCP support. Claude Desktop can attach to MCP servers. Google’s Agent Development Kit treats MCP servers as tool providers for Gemini models. Cloudflare’s Agents SDK added an McpAgent class so that agents deployed on its platform can act as MCP clients with built-in auth support. Even auto-agents like Auto-GPT can plug into MCP: instead of coding a specific function for each API, the agent uses an MCP client library to call tools. This trend toward universal connectors promises a more modular autonomous agent architecture.

In practice, this ecosystem enables any given AI assistant to connect to multiple data sources simultaneously. One can imagine an agent that, in one session, uses an MCP server for corporate docs, another for CRM queries, and yet another for on-device file search. MCP even handles naming collisions gracefully: if two servers each have a tool called ‘analyze’, clients can namespace them (e.g., ‘ImageServer.analyze’ vs ‘CodeServer.analyze’) so both remain available without conflict.

Advantages of MCP Over Prior Paradigms  

MCP brings several key benefits that earlier methods lack:

  • Standardized Integration: MCP provides a single protocol for all tools. Whereas each framework or model previously had its way of defining tools, MCP means that the tool servers and clients agree on JSON schemas. This eliminates the need for separate connectors per model or per agent, streamlining development and eliminating the need for custom parsing logic for each tool’s output.
  • Dynamic Tool Discovery: Agents can discover tools at runtime by calling ‘list_tools()’ and dynamically learning about available capabilities. There is no need to restart or reprogram the model when a new tool is added. This flexibility stands in contrast to frameworks where available tools are hardcoded at startup.
  • Interoperability and Reuse: Because MCP is model-agnostic, the same tool server can serve multiple LLM clients. With MCP, an organization can implement a single connector for a service and have it work with any compliant LLM, thereby avoiding vendor lock-in and reducing duplicate engineering efforts.
  • Scalability and Maintenance: MCP dramatically reduces duplicated work. Rather than writing ten different file-search functions for ten models, developers write one MCP file-search server. Updates and bug fixes to that server benefit all agents across all models.
  • Composable Ecosystem: MCP enables a marketplace of independently developed servers. Companies can publish MCP connectors for their software, allowing any AI to integrate with their data. This encourages an open ecosystem of connectors analogous to web APIs.
  • Security and Control: The protocol supports clear authorization flows. MCP servers describe their tools and required scopes, and hosts must obtain user consent before exposing data. This explicit approach improves auditability and security compared to free-form prompting.

Industry Impact and Real-World Applications  

MCP adoption is growing rapidly. Major vendors and frameworks have publicly invested in MCP or related agent standards. Organizations are exploring MCP to integrate internal systems, such as CRM, knowledge bases, and analytics platforms, into AI assistants.

Concrete use cases include:

  • Developer Tools: Code editors and search platforms (e.g., Zed, Replit, Sourcegraph) utilize MCP to enable assistants to query code repositories, documentation, and commit history, resulting in richer code completion and refactoring suggestions.
  • Enterprise Knowledge & Chatbots: Helpdesk bots can access Zendesk or SAP data via MCP servers, answering questions about open tickets or generating reports based on real-time enterprise data, all with built-in authorization and audit trails.
  • Enhanced Retrieval-Augmented Generation: RAG agents can combine embedding-based retrieval with specialized MCP tools for database queries or graph searches, thereby overcoming the limitations of LLMs in terms of factual accuracy and arithmetic.
  • Proactive Assistants: Event-driven agents monitor email or task streams and autonomously schedule meetings or summarize action items by calling calendar and note-taking tools through MCP.

In each scenario, MCP enables agents to scale across diverse systems without requiring the rewriting of integration code, delivering maintainable, secure, and interoperable AI solutions.

Comparisons with Prior Paradigms  

  • Versus ReAct: ReAct-style prompting embeds action instructions directly into free text, requiring developers to parse model outputs and manually handle each action. MCP provides the model with a formal interface using JSON schemas, enabling clients to manage execution seamlessly.
  • Versus Toolformer: Toolformer ties tool knowledge to the model’s training data, necessitating retraining for new tools. MCP externalizes tool interfaces entirely from the model, enabling zero-shot support for any registered tool without retraining.
  • Versus Framework Libraries: Libraries like LangChain simplify building agent loops but still require hardcoded connectors. MCP shifts integration logic into a reusable protocol, making agents more flexible and reducing code duplication.
  • Versus Autonomous Agents: Auto-GPT agents typically bake tool wrappers and loop logic into Python scripts. By using MCP clients, such agents need no bespoke code for new services, instead relying on dynamic discovery and JSON-RPC calls.
  • Versus Function-Calling APIs: While modern LLM APIs offer function-calling capabilities, they remain model-specific and are limited to single turns. MCP generalizes function calling across any client and server, with support for streaming, discovery, and multiplexed services.

MCP thus unifies and extends previous approaches, offering dynamic discovery, standardized schemas, and cross-model interoperability in a single protocol.

Limitations and Challenges  

Despite its promise, MCP is still maturing:

  • Authentication and Authorization: The spec leaves auth schemes to implementations. Current solutions require layering OAuth or API keys externally, which can complicate deployments without a unified auth standard.
  • Multi-step Workflows: MCP focuses on discrete tool calls. Orchestrating long-running, stateful workflows often still relies on external schedulers or prompt chaining, as the protocol lacks a built-in session concept.
  • Discovery at Scale: Managing many MCP server endpoints can be burdensome in large environments. Proposed solutions include well-known URLs, service registries, and a central connector marketplace, but these are not yet standardized.
  • Ecosystem Maturity: MCP is new, so not every tool or data source has an existing connector. Developers may need to build custom servers for niche systems, although the protocol’s simplicity keeps that effort relatively low.
  • Development Overhead: For single, simple tool calls, the MCP setup can feel heavyweight compared to a quick, direct API call. MCP’s benefits accrue most in multi-tool, long-lived production systems rather than short experiments.

Many of these gaps are already being addressed by contributors and vendors, with plans to add standardized auth extensions, session management, and discovery infrastructure.

In conclusion, the Model Context Protocol represents a significant milestone in AI agent design, offering a unified, extensible, and interoperable approach for LLMs to access external tools and data sources. By standardizing discovery, invocation, and messaging, MCP eliminates the need for custom connectors per model or framework, enabling agents to integrate diverse services seamlessly. Early adopters across development tools, enterprise chatbots, and proactive assistants are already reaping the benefits of maintainability, scalability, and security that MCP offers. As MCP evolves, adding richer auth, session support, and registry services, it is poised to become the universal standard for AI connectivity, much like HTTP did for the web. For researchers, developers, and technology leaders alike, MCP opens the door to more powerful, flexible, and future-proof AI solutions.

Building AI Agents Using Agno’s Multi-Agent Teaming Framework for Comprehensive Market Analysis and Risk Reporting

In today’s fast-paced financial landscape, leveraging specialized AI agents to handle discrete aspects of analysis is key to delivering timely, accurate insights. Agno’s lightweight, model-agnostic framework empowers developers to rapidly spin up purpose-built agents, such as our Finance Agent for structured market data and Risk Assessment Agent for volatility and sentiment analysis, without boilerplate or complex orchestration code. By defining clear instructions and composing a multi-agent “Finance-Risk Team,” Agno handles the coordination, tool invocation, and context management behind the scenes, enabling each agent to focus on its domain expertise while seamlessly collaborating to produce a unified report.

!pip install -U agno google-genai duckduckgo-search yfinance

We install and upgrade the core Agno framework, Google’s GenAI SDK for Gemini integration, the DuckDuckGo search library for querying live information, and YFinance for seamless access to stock market data. By running this command at the start of our Colab session, we ensure all necessary dependencies are available and up to date for building and running our finance and risk assessment agents.

from getpass import getpass
import os


os.environ["GOOGLE_API_KEY"] = getpass("Enter your Google API key: ")

The above code securely prompts you to enter your Google API key in Colab without echoing it to the screen, then stores it in the GOOGLE_API_KEY environment variable. With this variable set, Agno’s Gemini model wrapper and the Google GenAI SDK can automatically authenticate subsequent API calls.

from agno.agent import Agent
from agno.models.google import Gemini
from agno.tools.reasoning import ReasoningTools
from agno.tools.yfinance import YFinanceTools


agent = Agent(
    model=Gemini(id="gemini-1.5-flash"),  
    tools=[
        ReasoningTools(add_instructions=True),
        YFinanceTools(
            stock_price=True,
            analyst_recommendations=True,
            company_info=True,
            company_news=True
        ),
    ],
    instructions=[
        "Use tables to display data",
        "Only output the report, no other text",
    ],
    markdown=True,
)


agent.print_response(
    "Write a report on AAPL",
    stream=True,
    show_full_reasoning=True,
    stream_intermediate_steps=True
)

We initialize an Agno agent powered by Google’s Gemini (1.5 Flash) model, equip it with reasoning capabilities and YFinance tools to fetch stock data, analyst recommendations, company information, and news, and then stream a step-by-step, fully transparent report on AAPL, complete with chained reasoning and intermediate tool calls, directly to the Colab output.

finance_agent = Agent(
    name="Finance Agent",
    model=Gemini(id="gemini-1.5-flash"),
    tools=[
        YFinanceTools(
            stock_price=True,
            analyst_recommendations=True,
            company_info=True,
            company_news=True
        )
    ],
    instructions=[
        "Use tables to display stock price, analyst recommendations, and company info.",
        "Only output the financial report without additional commentary."
    ],
    markdown=True
)


risk_agent = Agent(
    name="Risk Assessment Agent",
    model=Gemini(id="gemini-1.5-flash"),
    tools=[
        YFinanceTools(
            stock_price=True,
            company_news=True
        ),
        ReasoningTools(add_instructions=True)
    ],
    instructions=[
        "Analyze recent price volatility and news sentiment to provide a risk assessment.",
        "Use tables where appropriate and only output the risk assessment section."
    ],
    markdown=True
)

These definitions create two specialized Agno agents using Google’s Gemini (1.5 Flash) model: the Finance Agent fetches and tabulates stock prices, analyst recommendations, company info, and news to deliver a concise financial report, while the Risk Assessment Agent analyzes price volatility and news sentiment, leveraging reasoning tools where needed, to generate a focused risk assessment section.

from agno.team.team import Team
from textwrap import dedent


team = Team(
    name="Finance-Risk Team",
    mode="coordinate",                    # the team leader delegates tasks and merges results
    model=Gemini(id="gemini-1.5-flash"),  # model used for the coordination layer itself
    members=[finance_agent, risk_agent],
    tools=[ReasoningTools(add_instructions=True)],
    instructions=[
        "Delegate financial analysis requests to the Finance Agent.",
        "Delegate risk assessment requests to the Risk Assessment Agent.",
        "Combine their outputs into one comprehensive report."
    ],
    markdown=True,
    show_members_responses=True,   # include each member's response in the output
    enable_agentic_context=True    # maintain shared context across team members
)


task = dedent("""
1. Provide a financial overview of AAPL.
2. Provide a risk assessment for AAPL based on volatility and recent news.
""")


response = team.run(task)
print(response.content)

We assemble a coordinated “Finance-Risk Team” using Agno and Google Gemini. It delegates financial analyses to the Finance Agent and volatility/news assessments to the Risk Assessment Agent, then synthesizes their outputs into a single, comprehensive report. By calling team.run on a two-part AAPL task, it transparently orchestrates each expert agent and prints the unified result.
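Because team.run returns the finished report as plain text in response.content, persisting it takes only a couple of lines; the file name below is our own choice:

# Save the combined report for sharing outside the notebook.
with open("aapl_finance_risk_report.md", "w") as f:
    f.write(response.content)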

team.print_response(
    task,
    stream=True,
    stream_intermediate_steps=True,
    show_full_reasoning=True
)

We instruct the Finance-Risk Team to execute the AAPL task in real time, streaming each agent’s internal reasoning, tool invocations, and partial outputs as they happen. By enabling stream_intermediate_steps and show_full_reasoning, we’ll see exactly how Agno coordinates the Finance and Risk Assessment Agents step-by-step before delivering the final, combined report.

In conclusion, harnessing Agno’s multi-agent teaming capabilities transforms what would traditionally be a monolithic AI workflow into a modular, maintainable system of experts. Each agent in the team specializes in fetching financial metrics, parsing analyst sentiment, or evaluating risk factors, while Agno’s Team API orchestrates delegation, context-sharing, and final synthesis. The result is a robust, extensible architecture that scales from simple two-agent setups to complex ensembles with minimal code changes and maximal clarity.


Check out the Colab Notebook.


The post Building AI Agents Using Agno’s Multi-Agent Teaming Framework for Comprehensive Market Analysis and Risk Reporting appeared first on MarkTechPost.

A Step-by-Step Tutorial on Connecting Claude Desktop to Real-Time Web Search and Content Extraction via Tavily AI and Smithery using Model Context Protocol (MCP) https://www.marktechpost.com/2025/05/03/a-step-by-step-tutorial-on-connecting-claude-desktop-to-real-time-web-search-and-content-extraction-via-tavily-ai-and-smithery-using-model-context-protocol-mcp/ Sun, 04 May 2025 03:53:01 +0000
In this hands-on tutorial, we’ll learn how to seamlessly connect Claude Desktop to real-time web search and content-extraction capabilities using Tavily AI’s Model Context Protocol (MCP) server and the Smithery client. We’ll begin by reviewing the Tavily homepage and dashboard, where you’ll generate your Developer API key. Next, we’ll explore the Tavily MCP server in Smithery’s interface, install and configure the tavily-mcp package for Claude via the Smithery “Add Server” flow, and verify the installation with a simple PowerShell command. Finally, you’ll see how Claude can invoke Tavily tools, tavily-search and tavily-extract, to fetch and parse live content from sites. By the end of this tutorial, we’ll have a fully integrated pipeline that empowers your AI workflows with up-to-the-minute information directly from the web.

Step 1: Go to the Tavily AI homepage, sign up, and access the Tavily API so you can set up the MCP server in Claude Desktop.

Step 2: Here you see the Tavily dashboard under the “Researcher” plan, with an API usage bar (0/1,000 credits) and the generated dev key (tvly-dev-…) ready to be copied for authenticating your requests.

Step 3: In Smithery’s server list, the Tavily MCP Server appears as a remote, scanned integration, with its two primary tools, tavily-search and tavily-extract, detailed under the Tools section.

Step 4: Clicking “Add Server” opens Smithery’s client selector in Auto mode, listing supported integrations such as Claude Desktop, Cursor, VS Code, and more.

Step 5: The Claude Desktop configuration modal shows the “Personal” profile selected by default and prompts you to enter your Tavily API key to enable the MCP connection.

Step 6: A Windows PowerShell window confirms successful resolution and installation of the Tavily MCP package for the Claude client, indicating you can now trust and use this server integration.

Step 7: Tavily MCP should now be set up in Claude. Close Claude Desktop completely and restart it to see the server listed in settings.

Step 8: The tool-toggle menu in Claude lets you enable or disable tavily-search and tavily-extract on the fly, offering granular control over which MCP tools the assistant may call.

Step 9: Within Claude’s chat UI, you can observe the assistant invoking the tavily-search and tavily-extract tool calls inline as it searches marktechpost.com for recent AI articles and extracts their content.

In conclusion, integrating Tavily’s MCP server with Claude Desktop via Smithery unlocks a powerful synergy of real-time web search and content extraction within your AI workflows. This setup doesn’t just keep your models up to date; it empowers them to source, analyze, and synthesize fresh information on the fly, whether you’re conducting market research, fueling a RAG pipeline, or automating domain-specific insights. To take full advantage, revisit the Tavily dashboard and Smithery tool configuration to fine-tune query parameters, combine tavily-search and tavily-extract in your prompts, and explore advanced features like custom filters or scheduled queries.




The post A Step-by-Step Tutorial on Connecting Claude Desktop to Real-Time Web Search and Content Extraction via Tavily AI and Smithery using Model Context Protocol (MCP) appeared first on MarkTechPost.

Implementing An Airbnb and Excel MCP Server https://www.marktechpost.com/2025/05/02/implementing-an-airbnb-and-excel-mcp-server/ Sat, 03 May 2025 05:42:49 +0000
In this tutorial, we’ll build an MCP server that integrates Airbnb and Excel, and connect it with Cursor IDE. Using natural language, you’ll be able to fetch Airbnb listings for a specific date range and location, and automatically store them in an Excel file.

Step 1: Installing the dependencies

To run the Airbnb MCP server and connect it to Excel, we’ll need to install a few tools: Node.js, the uv package manager, Git, and Cursor IDE. We use Cursor here because Claude Desktop does not support SSE-based MCP servers.

Node JS

We need npx to run the Airbnb MCP server, which comes with Node.js.

  • Download the latest version of Node.js from nodejs.org
  • Run the installer.
  • Leave all settings as default and complete the installation

UV package manager

To install the uv package manager, use the following commands based on your operating system:

For Mac/Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

For Windows (PowerShell):

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
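
Either installer places uv on your PATH; you can confirm the installation from a fresh terminal with:

uv --version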

Git

Git is required to clone the Excel MCP server repository.

Download Git from https://git-scm.com/downloads and complete the installation.

Open your terminal, navigate to your desired directory, and run:

git clone https://github.com/haris-musa/excel-mcp-server.git
cd excel-mcp-server

If you prefer not to use Git, you can download the repository manually: go to https://github.com/haris-musa/excel-mcp-server, click the “Code” button, and choose “Download ZIP”. Once downloaded, extract the folder to your working directory.

Cursor IDE

  • Download Cursor IDE from https://cursor.com.
  • It’s free to download and comes with a 14-day free trial.

Cursor is an AI-powered development environment built on top of VS Code, and it will help us connect to the MCP servers and generate code using natural language prompts.

Python dependencies

Once you are in the excel-mcp-server directory (the one you cloned with Git or downloaded), run the following command:
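
(The command itself appears to have been lost when this page was extracted. Given the repository’s uv-based setup, the dependency install is most likely an editable install with uv; treat the exact invocation below as an assumption and confirm it against the project’s README.)

# Assumed install command; verify against the excel-mcp-server README.
uv pip install -e .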

Step 2: Configuring mcp.json file

  1. Open Cursor IDE.
  2. Go to the menu and navigate to: File > Preferences > Cursor Settings > MCP
  3. Click on “Add a new global MCP server.”
  4. This will open the mcp.json configuration file. Paste the following code there:
{
    "mcpServers": {
      "airbnb": {
        "command": "npx",
        "args": [
          "-y",
          "@openbnb/mcp-server-airbnb",
          "--ignore-robots-txt"
        ]
      },
      "excel": {
        "url": "http://localhost:8000/sse"
      }
    }
}

Step 3: Running the MCP Servers

The Excel MCP server is an SSE-based (Server-Sent Events) server, which means it needs to be running in your terminal for Cursor IDE to interact with it. If the server is stopped or the terminal is closed, the connection will no longer work.
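Before pointing Cursor at the server, you can verify the SSE endpoint is live from another terminal; the URL matches the mcp.json above, and curl will hold the connection open on success, so exit with Ctrl+C:

# Should connect and begin streaming SSE events; press Ctrl+C to stop.
curl -N http://localhost:8000/sse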

To start the server:

  • Open your terminal.
  • Navigate to the excel-mcp-server directory (if you’re not already there).
  • Run the following command:
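
(As with the install step, the command was dropped during extraction. The server is typically launched through uv; the entry-point name below is an assumption, so confirm it in the repository’s README.)

# Assumed launch command; verify the entry-point name in the README.
uv run excel-mcp-server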

Once running, both servers should be visible in Cursor’s MCP settings:

Step 4: Using the Setup in Cursor

You can now use the chat panel in Cursor IDE to interact with the server using natural language. Simply ask for Airbnb listings for a specific date range and location, and request the data to be pasted into Excel for your analysis.

For example:

“Get me Airbnb listings in Bengaluru for the first week of June and add them to an Excel sheet.”

Note:

All Excel files generated through the MCP server will be saved in the excel_files folder located inside the excel-mcp-server directory.
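
If you want to inspect a generated workbook outside of Cursor, a few lines of pandas will do; the file name here is hypothetical, and reading .xlsx files also requires openpyxl:

import pandas as pd

# Hypothetical file name; substitute whatever the server actually created.
df = pd.read_excel("excel-mcp-server/excel_files/bengaluru_listings.xlsx")
print(df.head())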

The Excel MCP server also supports running basic data analysis on the Excel file directly through chat prompts. However, we won’t be covering that part in this tutorial.

Troubleshooting

If the Airbnb server isn’t responding correctly or fails to fetch listings, the issue is likely related to the ignoreRobotsText setting.

To resolve this, simply include the following argument in your natural language prompt:

Example:

“Get Airbnb listings for Bengaluru from 5th May to 10th May for 2 adults. Use “ignoreRobotsText”: true.”

This allows the server to bypass website restrictions that might otherwise block automated access.




The post Implementing An Airbnb and Excel MCP Server appeared first on MarkTechPost.

Building a Zapier AI-Powered Cursor Agent to Read, Search, and Send Gmail Messages using Model Context Protocol (MCP) Server https://www.marktechpost.com/2025/05/02/building-a-zapier-ai-powered-cursor-agent-to-read-search-and-send-gmail-messages-using-model-context-protocol-mcp-server/ Fri, 02 May 2025 21:13:19 +0000
In this tutorial, we’ll learn how to harness the power of the Model Context Protocol (MCP) alongside Zapier AI to build a responsive email agent directly in Cursor, no complex coding required. We’ll walk through configuring MCP connectors to bridge Cursor and Zapier AI, connecting your Gmail account, defining intents for reading, searching, and sending messages, and training the agent to recognize and act on your commands via MCP’s unified interface. By the end of this guide, you’ll have a fully functional MCP-enabled Cursor AI agent that can automatically draft replies, fetch important threads, and dispatch emails on your behalf, streamlining your day-to-day communication so you can focus on what truly matters.

Step 1: Download and install the Cursor application on your desktop.

Step 3: Go to the left pane in Cursor and click on MCP.

Step 4: Then, click on Add new global MCP Server.

Step 5: Add the copied code from the Zapier site and save the file.

{
  "mcpServers": {
    "Zapier MCP": {
      "url": "Add your URL here"
    }
  }
}


Step 6: Now, go to My Actions on Zapier’s actions page and click on the edit actions option for MCP.

Step 7: Add the action you want your MCP server to perform here.

Step 8: Select the options from the drop-down menu to add the action and provide the permissions for these actions by giving access to the Google account.

Step 9: Refresh your MCP server in Cursor to see the added Zapier actions that your agent can perform.

Step 10: Finally, type into Cursor’s chat whatever action you want your MCP server to perform from the ones you added. In our case, we sent an email.

In conclusion, by integrating MCP into your Zapier AI and Cursor setup, you’ve created an email agent that speaks the same protocol language across all services, ensuring reliable, scalable automation. With your MCP-powered agent in place, you’ll enjoy greater efficiency, faster response times, and seamless communication, all without lifting a finger. Keep refining your MCP triggers and Zapier workflows to adapt to evolving needs, and watch as your email management becomes smarter, more consistent, and entirely hands-off.




The post Building a Zapier AI-Powered Cursor Agent to Read, Search, and Send Gmail Messages using Model Context Protocol (MCP) Server appeared first on MarkTechPost.
