Documentation
ServAgent API – tool discovery, smart retrieval, and multi-protocol deployment.
Overview
ServAgent is an AI agent tool discovery and optimization platform. It indexes 2000+ open-source tools from GitHub and provides smart retrieval, multi-protocol format conversion, and a browsable marketplace.
Base URL: https://servagent.ai/api/v1
All responses follow the envelope format: { success, data, meta? }
// Success
{ "success": true, "data": { ... }, "meta": { "total": 2125 } }
// Error
{ "success": false, "error": { "code": "TOOL_NOT_FOUND", "message": "..." } }
Quick Start
Find the right tool for your task and get a ready-to-use definition in 2 calls:
# 1. Recommend tools for a task
curl -X POST https://servagent.ai/api/v1/tools/recommend \
-H "Content-Type: application/json" \
-d '{"task": "search the web for news", "maxResults": 3}'
# 2. Get the OpenAI function-calling definition
curl https://servagent.ai/api/v1/deploy/{toolId}/openai
Health Check
/health
curl https://servagent.ai/api/v1/health
// Response
{
"status": "healthy",
"version": "0.1.0",
"timestamp": "2026-04-15T11:00:00.000Z",
"uptime": 3600,
"services": { "registry": "up", "recommendations": "up" }
}
List Tools
/tools
Returns a paginated list of tools. Supports full-text search and multi-dimensional filtering.
Query Parameters
| Parameter | Type | Description |
|---|---|---|
| search | string | Full-text search on name, description, tags |
| category | string | Filter by category (see categories below) |
| protocol | string | Filter by protocol: mcp, openai-functions, google-agent, langchain |
| tags | string | Comma-separated tags to filter by |
| pricing | string | Pricing model: free, freemium, pay-per-use, subscription |
| page | number | Page number, default 1 |
| pageSize | number | Results per page, default 20, max 100 |
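For scripted clients, the listing URL can be assembled with URLSearchParams; a sketch (buildToolsUrl is an illustrative helper, not part of the API):

```typescript
const BASE = "https://servagent.ai/api/v1";

// Build a /tools listing URL from the query parameters in the table above.
// Only parameters that are set get included; page/pageSize fall back to server defaults.
function buildToolsUrl(params: {
  search?: string;
  category?: string;
  protocol?: string;
  tags?: string[];
  pricing?: string;
  page?: number;
  pageSize?: number;
}): string {
  const qs = new URLSearchParams();
  if (params.search) qs.set("search", params.search);
  if (params.category) qs.set("category", params.category);
  if (params.protocol) qs.set("protocol", params.protocol);
  if (params.tags?.length) qs.set("tags", params.tags.join(",")); // comma-separated per the table
  if (params.pricing) qs.set("pricing", params.pricing);
  if (params.page) qs.set("page", String(params.page));
  if (params.pageSize) qs.set("pageSize", String(Math.min(params.pageSize, 100))); // max 100
  const query = qs.toString();
  return query ? `${BASE}/tools?${query}` : `${BASE}/tools`;
}
```

buildToolsUrl({ search: "github", category: "devops", page: 1, pageSize: 10 }) produces the same URL as the curl example below.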
curl "https://servagent.ai/api/v1/tools?search=github&category=devops&page=1&pageSize=10"
// Response
{
"success": true,
"data": [
{
"id": "skill-gh-...",
"name": "act",
"description": "Run GitHub Actions locally",
"category": "devops",
"tags": ["github", "ci", "actions"],
"provider": { "name": "nektos", "url": "https://github.com/nektos/act", "verified": false },
"pricing": { "model": "free" },
"protocols": ["mcp", "openai-functions"]
}
],
"meta": { "total": 47, "page": 1, "pageSize": 10 }
}
Categories
code-analysis, web-scraping, data-processing, translation, image-generation, search, communication, storage, monitoring, security, ai-ml, devops
Get Tool
/tools/:id
Returns the full tool definition including parameters, endpoints, GitHub source, and quality scores.
curl https://servagent.ai/api/v1/tools/skill-gh-8b7ed1aa
// Response (abridged)
{
"success": true,
"data": {
"id": "skill-gh-8b7ed1aa",
"name": "JeecgBoot",
"description": "...",
"version": "1.0.0",
"category": "code-execution",
"parameters": [],
"protocols": ["mcp", "openai-functions", "langchain"],
"source": "github",
"github": {
"url": "https://github.com/jeecgboot/JeecgBoot",
"owner": "jeecgboot",
"repo": "JeecgBoot",
"stars": 45848,
"license": "Apache-2.0"
},
"quality_score": 94
}
}
Recommend Tools
/tools/recommend
AI-powered tool recommendation based on a task description. Uses token-level bidirectional matching with intent expansion – reduces token usage by 401× vs. sending the full registry.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| task | string | required | Natural language description of what you need to do |
| context | string | optional | Additional context about your use case |
| maxResults | number | optional | Max tools to return, default 5 |
| preferredProtocols | string[] | optional | Prioritize tools supporting these protocols |
| excludeCategories | string[] | optional | Exclude entire categories from results |
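A typed request body matching the table, plus a normalizer that enforces the required field and fills in the documented default; a sketch with illustrative names:

```typescript
// Shape of the /tools/recommend request body, per the table above.
interface RecommendRequest {
  task: string;                 // required
  context?: string;
  maxResults?: number;          // default 5
  preferredProtocols?: string[];
  excludeCategories?: string[];
}

// Reject an empty task and apply the documented maxResults default of 5.
function normalizeRecommendRequest(req: RecommendRequest): RecommendRequest {
  if (!req.task?.trim()) throw new Error("task is required");
  return { maxResults: 5, ...req }; // caller-provided maxResults wins over the default
}
```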
curl -X POST https://servagent.ai/api/v1/tools/recommend \
-H "Content-Type: application/json" \
-d '{
"task": "send a Slack message when a GitHub PR is merged",
"maxResults": 5,
"preferredProtocols": ["mcp"]
}'
// Response
{
"success": true,
"data": [
{
"tool": { "id": "...", "name": "slack-mcp", ... },
"relevanceScore": 87,
"reason": "Direct match on communication + slack intent"
}
],
"meta": { "total": 5 }
}
Deploy Tool
Single Tool
/deploy/:id/:protocol
Returns the tool definition converted to the target protocol format plus a ready-to-paste usage snippet. Supported protocols: openai, anthropic, google, mcp.
curl https://servagent.ai/api/v1/deploy/skill-gh-8b7ed1aa/openai
// Response
{
"success": true,
"data": {
"protocol": "openai",
"toolId": "skill-gh-8b7ed1aa",
"toolName": "JeecgBoot",
"definition": {
"name": "JeecgBoot",
"description": "...",
"parameters": { "type": "object", "properties": {} }
},
"installSnippet": "const response = await openai.chat.completions.create({...})"
}
}
Batch Deploy
/deploy/batch
Convert up to 50 tools at once to the same protocol. Useful for building a tool registry for your agent.
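Since each request is capped at 50 tools, larger registries need client-side chunking; a sketch (toBatchRequests is an illustrative helper):

```typescript
const BATCH_LIMIT = 50; // documented per-request maximum

// Split a list of tool IDs into batch-deploy request bodies of at most 50 IDs each.
function toBatchRequests(
  toolIds: string[],
  protocol: string
): { toolIds: string[]; protocol: string }[] {
  const requests: { toolIds: string[]; protocol: string }[] = [];
  for (let i = 0; i < toolIds.length; i += BATCH_LIMIT) {
    requests.push({ toolIds: toolIds.slice(i, i + BATCH_LIMIT), protocol });
  }
  return requests;
}
```

Each element of the returned array is a complete body for one POST to /deploy/batch.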
curl -X POST https://servagent.ai/api/v1/deploy/batch \
-H "Content-Type: application/json" \
-d '{
"toolIds": ["skill-gh-abc", "skill-gh-def"],
"protocol": "anthropic"
}'
Protocols
Each protocol produces a different schema format that matches the respective AI SDK:
| Protocol | Format | Used by |
|---|---|---|
| openai | function | OpenAI chat completions tools array |
| anthropic | tool | Anthropic messages API tools array |
| google | functionDeclaration | Gemini function calling |
| mcp | tool | Model Context Protocol ListToolsRequestSchema |
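When you splice a deployed definition into an SDK call, each protocol wants a different wrapper object. A sketch of the two most common wrappers – the shapes follow the public OpenAI and Anthropic tool formats; ToolDefinition is an illustrative type for the `definition` field returned by /deploy:

```typescript
// A deploy response's `definition` field, as returned by /deploy/:id/openai.
interface ToolDefinition {
  name: string;
  description: string;
  parameters: { type: "object"; properties: Record<string, unknown>; required?: string[] };
}

// OpenAI chat completions expect { type: "function", function: <definition> }.
function toOpenAiTool(def: ToolDefinition) {
  return { type: "function" as const, function: def };
}

// Anthropic messages expect { name, description, input_schema } at the top level.
function toAnthropicTool(def: ToolDefinition) {
  return { name: def.name, description: def.description, input_schema: def.parameters };
}
```

The /deploy endpoints already emit protocol-specific definitions, so these wrappers only matter if you convert one format into another client-side.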
Bulk Import
/tools/bulk-import
Import your own tools into the registry. Max 500 per request. Data is persisted to SQLite and survives restarts.
curl -X POST https://servagent.ai/api/v1/tools/bulk-import \
-H "Content-Type: application/json" \
-d '{
"tools": [{
"id": "my-tool-1",
"name": "my-tool",
"description": "Does something useful",
"version": "1.0.0",
"category": "search",
"tags": ["api", "data"],
"provider": { "name": "acme", "url": "https://acme.com", "verified": false },
"parameters": [],
"pricing": { "model": "free" },
"endpoints": { "base": "https://acme.com/api" },
"protocols": ["openai-functions"],
"createdAt": "2026-01-01T00:00:00Z",
"updatedAt": "2026-01-01T00:00:00Z"
}]
}'
MCP Gateway
ServAgent exposes a standard Model Context Protocol (MCP) server at https://servagent.ai/api/mcp. Connect any MCP-compatible client to access all 2000+ indexed tools without installing them individually.
The gateway uses JSON-RPC 2.0 over HTTP POST. No auth required.
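Every gateway call is a plain JSON-RPC 2.0 request object; a sketch of a builder (an illustrative helper, not an SDK function):

```typescript
let nextId = 1;

// Build a JSON-RPC 2.0 request body for the gateway, with an auto-incrementing id.
function jsonRpcRequest(method: string, params: Record<string, unknown> = {}) {
  return { jsonrpc: "2.0" as const, id: nextId++, method, params };
}

// POST body for the initialize handshake:
const body = JSON.stringify(jsonRpcRequest("initialize"));
```

The curl walkthrough below sends exactly these objects by hand.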
Supported Methods
| Method | Description |
|---|---|
| initialize | Handshake – returns server capabilities |
| tools/list | All tools in MCP format, paginated (cursor-based, 100/page) |
| tools/call | Route a tool call – returns setup instructions + metadata |
| ping | Health check |
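The cursor pagination of tools/list can be drained generically; a sketch where fetchPage stands in for the HTTP call (injected so the logic is testable offline):

```typescript
interface ToolsPage {
  tools: unknown[];
  nextCursor?: string;
}

// Follow nextCursor until the gateway stops returning one, collecting every tool.
async function listAllTools(
  fetchPage: (cursor?: string) => Promise<ToolsPage>
): Promise<unknown[]> {
  const all: unknown[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.tools);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return all;
}
```

In production, fetchPage would POST a tools/list request with `{"cursor": ...}`, as in steps 2–3 below.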
# Step 1: Handshake
curl -X POST https://servagent.ai/api/mcp -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}'
# Step 2: List tools
curl -X POST https://servagent.ai/api/mcp -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'
# → { "tools": [...100 tools], "nextCursor": "100" }
# Step 3: Next page
curl -X POST https://servagent.ai/api/mcp -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":3,"method":"tools/list","params":{"cursor":"100"}}'
# Step 4: Call a tool
curl -X POST https://servagent.ai/api/mcp -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":4,"method":"tools/call","params":{"name":"skill-gh-8b7ed1aa","arguments":{}}}'
Agent Integration
ServAgent works as a discovery + format relay between your agent and the tools. Two integration patterns:
Pattern 1 – Dynamic Discovery (Recommended)
Call ServAgent at runtime to find the right tool for each task, then inject the schema into your LLM call:
// Node.js example – OpenAI function calling via ServAgent
const BASE = "https://servagent.ai/api/v1";
async function runWithServAgent(userTask: string) {
// 1. Find the best tool for this task
const rec = await fetch(`${BASE}/tools/recommend`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ task: userTask, maxResults: 3 }),
}).then(r => r.json());
const toolId = rec.data[0].tool.id;
// 2. Get the OpenAI-compatible schema
const deploy = await fetch(`${BASE}/deploy/${toolId}/openai`).then(r => r.json());
const { definition, installSnippet } = deploy.data;
// 3. Inject into your LLM call
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [{ role: "user", content: userTask }],
tools: [{ type: "function", function: definition }],
});
return response;
}
Pattern 2 – MCP Client
Connect any MCP client directly to ServAgent. The gateway exposes all tools as a single MCP server. Compatible with Claude Desktop, Cursor, Cline, and any MCP SDK.
// Claude Desktop config (~/.claude_desktop_config.json or equivalent)
{
"mcpServers": {
"servagent": {
"url": "https://servagent.ai/api/mcp",
"transport": "http"
}
}
}
// Or with MCP TypeScript SDK:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
const client = new Client({ name: "my-agent", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(
new URL("https://servagent.ai/api/mcp")
));
const { tools } = await client.listTools();
console.log(`Connected to ServAgent: ${tools.length} tools available`);
Pattern 3 – Anthropic Tool Use
import Anthropic from "@anthropic-ai/sdk";
const BASE = "https://servagent.ai/api/v1";
const anthropic = new Anthropic();
// Fetch top tools for the task and get Anthropic-format schemas
const { data: recs } = await fetch(`${BASE}/tools/recommend`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ task: "search the web for news", maxResults: 5 }),
}).then(r => r.json());
const tools = await Promise.all(
recs.map((r: { tool: { id: string } }) =>
fetch(`${BASE}/deploy/${r.tool.id}/anthropic`)
.then(res => res.json())
.then(d => d.data.definition)
)
);
const response = await anthropic.messages.create({
model: "claude-opus-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "What's happening in tech news today?" }],
tools,
});
Pipeline – Smart Tool Resolution
The pipeline is the high-performance hot path for agents. It returns the most relevant tools for a query using TF-IDF + inverted index search with LRU caching. Cache hit: <1ms. Cache miss: <5ms (3000+ tools).
Resolve
/pipeline/resolve
| Field | Type | Required | Description |
|---|---|---|---|
| query | string | required | Natural language task description (max 2000 chars) |
| topK | number | optional | Max tools to return (default 10) |
| protocol | string | optional | Output format: openai, anthropic, google, mcp |
| compress | boolean | optional | Strip verbose fields to reduce token usage (default true) |
| sessionId | string | optional | Session ID for feedback correlation |
| categories | string[] | optional | Restrict results to specific categories |
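The tokens field in the resolve response lets you check the savings claim per call; for the sample response below, the arithmetic works out to roughly 400× (savingsFactor is an illustrative helper):

```typescript
// Reduction factor achieved by resolving instead of shipping the full registry.
function savingsFactor(tokens: { fullPayload: number; optimized: number }): number {
  return tokens.fullPayload / tokens.optimized;
}

// Values from the sample /pipeline/resolve response below.
const factor = savingsFactor({ fullPayload: 165000, optimized: 412 });
// ≈ 400× fewer tokens sent to the model
```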
curl -X POST https://servagent.ai/api/v1/pipeline/resolve \
-H "Content-Type: application/json" \
-d '{"query": "send email via Gmail", "topK": 5, "protocol": "anthropic"}'
// Response
{
"success": true,
"data": {
"tools": [ { "name": "gmail-mcp", "description": "...", ... } ],
"tokens": { "fullPayload": 165000, "optimized": 412 },
"cacheHit": false,
"latencyMs": 2.4,
"sessionId": "sess_abc123"
}
}
Feedback
/pipeline/feedback
Record whether a resolved tool was successfully used. Closes the data loop for quality scoring.
curl -X POST https://servagent.ai/api/v1/pipeline/feedback \
-H "Content-Type: application/json" \
-d '{"sessionId": "sess_abc123", "toolId": "skill-gh-xyz", "success": true, "latencyMs": 320}'
Stats
/pipeline/stats
curl https://servagent.ai/api/v1/pipeline/stats
// Response
{
"success": true,
"data": {
"cache": { "size": 412, "hits": 8931, "misses": 204, "hitRate": 0.978 },
"telemetry": { "totalCalls": 9135, "circuitsBroken": 0 },
"toolScores": [ { "toolId": "...", "successRate": 0.97, "totalCalls": 340 } ]
}
}
Built-in Executable Tools
No installation required. These tools run server-side and return structured results directly. Private/internal hosts are blocked (SSRF protection).
Web Search
/exec/search
| Parameter | Type | Required | Description |
|---|---|---|---|
| q | string | required | Search query |
| limit | number | optional | Max results (default 5, max 10) |
curl "https://servagent.ai/api/v1/exec/search?q=model+context+protocol&limit=3"
// Response
{
"success": true,
"data": {
"query": "model context protocol",
"results": [
{ "title": "Wikipedia", "url": "https://en.wikipedia.org/...", "snippet": "..." }
]
},
"meta": { "total": 3 }
}
URL Fetch
/exec/fetch
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | required | Public URL to fetch (http/https only) |
curl "https://servagent.ai/api/v1/exec/fetch?url=https://httpbin.org/json"
// Response
{
"success": true,
"data": {
"url": "https://httpbin.org/json",
"status": 200,
"contentType": "application/json",
"body": { ... }
}
}
Free Public API Proxy
/exec/call
Proxy to free public APIs – no API key required. Supported api values: open-meteo (weather), frankfurter (currency), ip-api (geolocation), worldtime (timezone), numbersapi (number facts), qrcode (QR generation).
| Parameter | Type | Required | Description |
|---|---|---|---|
| api | string | required | API name (see list above) |
| params | string | required | URL-encoded JSON params forwarded to the API |
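The params value is just JSON run through percent-encoding, which is what produces the escaped string in the open-meteo curl example below; a sketch (encodeParams is an illustrative helper):

```typescript
// Encode a params object for /exec/call: serialize to JSON, then percent-encode.
function encodeParams(params: Record<string, unknown>): string {
  return encodeURIComponent(JSON.stringify(params));
}

const encoded = encodeParams({ latitude: 51.5, longitude: -0.1 });
// → "%7B%22latitude%22%3A51.5%2C%22longitude%22%3A-0.1%7D"
const url = `https://servagent.ai/api/v1/exec/call?api=open-meteo&params=${encoded}`;
```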
curl "https://servagent.ai/api/v1/exec/call?api=open-meteo¶ms=%7B%22latitude%22%3A51.5%2C%22longitude%22%3A-0.1%7D"
// Response
{
"success": true,
"api": "open-meteo",
"data": { "current_weather": { "temperature": 18.4, ... } }
}
OpenAPI Spec
Download the full OpenAPI 3.1 spec for ChatGPT GPT Actions or any OpenAPI-compatible client.
curl https://servagent.ai/api/v1/openapi.json
Shadow SDK Telemetry
The Shadow SDK intercepts agent tool calls without changing your code and forwards usage events to ServAgent. Data is aggregated into per-tool success rates, p95 latency, and quality scores visible in the dashboard.
Ingest Event
/telemetry/ingest
curl -X POST https://servagent.ai/api/v1/telemetry/ingest \
-H "Content-Type: application/json" \
-d '{
"tool": "skill-gh-8b7ed1aa",
"success": true,
"latencyMs": 240,
"query": "analyze code for bugs",
"paramKeys": ["source_code", "language"]
}'
Report
/telemetry/report
curl https://servagent.ai/api/v1/telemetry/report
// Response
{
"success": true,
"data": {
"pipeline": {
"totalResolves": 9135,
"avgTokensSaved": 67199,
"p95ResolveLatencyMs": 4.1,
"unmetDemandRate": 0.02,
"recentUnmetQueries": ["...", "..."]
},
"tools": [
{
"id": "skill-gh-8b7ed1aa",
"calls": 340,
"successRate": 0.97,
"avgLatencyMs": 180,
"p95LatencyMs": 410
}
],
"topChains": [ { "pair": "search→summarize", "count": 47 } ]
}
}
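The per-tool aggregates in this report can be reproduced from raw ingest events; a sketch of the arithmetic using a nearest-rank p95 (aggregate is an illustrative helper, not necessarily how the server computes it):

```typescript
// One raw event, matching the /telemetry/ingest body fields used here.
interface ToolEvent {
  tool: string;
  success: boolean;
  latencyMs: number;
}

// Aggregate events into the { calls, successRate, p95LatencyMs } shape of the report.
function aggregate(events: ToolEvent[]): { calls: number; successRate: number; p95LatencyMs: number } {
  const latencies = events.map((e) => e.latencyMs).sort((a, b) => a - b);
  const successes = events.filter((e) => e.success).length;
  // Nearest-rank percentile: the smallest latency covering 95% of calls.
  const rank = Math.ceil(0.95 * latencies.length);
  return {
    calls: events.length,
    successRate: successes / events.length,
    p95LatencyMs: latencies[Math.max(rank - 1, 0)],
  };
}
```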