MCP (Model Context Protocol) is a standard for connecting AI agents to external tool servers. Instead of building every tool into Hermes, you can connect to any MCP-compatible server and instantly gain its capabilities.
## What MCP Enables
- GitHub integration — Issues, PRs, repo management
- Database access — SQL queries, schema exploration
- File system tools — Beyond Hermes's built-in file ops
- Browser automation — Playwright, Puppeteer servers
- Custom APIs — Your internal tools and services
## How MCP Works

```
┌─────────────┐       ┌──────────────┐       ┌─────────────┐
│   Hermes    │──────▶│  MCP Server  │──────▶│   Service   │
│   Agent     │◀──────│   (GitHub)   │◀──────│   (GitHub   │
│             │       │              │       │    API)     │
└─────────────┘       └──────────────┘       └─────────────┘
```
Hermes connects to MCP servers via stdio (as a spawned subprocess) or over HTTP. Each server advertises a set of tools, which Hermes can then call exactly as it calls its native tools.
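On the wire, both transports carry JSON-RPC 2.0 messages. A minimal sketch of the `initialize` request a client sends on connect (field values such as the protocol version and client name are illustrative, not Hermes's actual identifiers):

```python
import json

# Sketch of the JSON-RPC 2.0 handshake an MCP client sends when it first
# connects to a server. Field values (protocol version, client name) are
# illustrative placeholders.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0"},
    },
}

# Over the stdio transport, each message is one line of JSON written to the
# server subprocess's stdin.
wire_line = json.dumps(initialize_request) + "\n"
```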
## Adding an MCP Server

### Example: GitHub MCP Server
```yaml
# ~/.hermes/config.yaml
mcp_servers:
  github:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: ghp_xxx
```
After a restart, Hermes gains the GitHub tools: creating issues, managing PRs, and searching repos.
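The new tools arrive through MCP's standard discovery call. A sketch of the `tools/list` exchange, with an illustrative response payload (the real GitHub server exposes many more tools):

```python
# Sketch of MCP tool discovery: the client sends a tools/list request and
# the server answers with tool descriptors (name, description, inputSchema).
# The payload below is illustrative, not the server's actual response.
request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "tools": [
            {
                "name": "create_issue",
                "description": "Create a GitHub issue",
                "inputSchema": {
                    "type": "object",
                    "properties": {"title": {"type": "string"}},
                },
            },
        ]
    },
}

# The client registers each descriptor as a callable tool.
tool_names = [tool["name"] for tool in response["result"]["tools"]]
```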
## Server Types

### Stdio Servers (Subprocess)

The most common type. Hermes spawns the server as a subprocess and communicates over its stdin/stdout.
```yaml
mcp_servers:
  filesystem:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]
```
### HTTP Servers (Network)
Connect to remote MCP servers.
```yaml
mcp_servers:
  notion:
    url: https://mcp.notion.com/mcp
    headers:
      Authorization: Bearer ntn_xxx
```
## Security: Environment Isolation

**Critical:** MCP servers don't receive your full shell environment; only the variables you explicitly configure are passed.
```yaml
mcp_servers:
  github:
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: ghp_xxx
      # Only these vars are passed to the server
```
This prevents accidental secret leakage to third-party servers.
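A launcher can enforce this by replacing, rather than extending, the inherited environment when it spawns the subprocess. A minimal sketch (the function name is illustrative, not Hermes's actual internals):

```python
import subprocess

def spawn_mcp_server(command: str, args: list, env_allowlist: dict):
    """Spawn an MCP server with ONLY the explicitly configured env vars.

    Passing env= to Popen replaces (does not extend) the parent's
    environment, so shell secrets never reach the child process.
    """
    return subprocess.Popen(
        [command, *args],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        env=env_allowlist,  # only these vars, nothing inherited
    )
```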
## Popular MCP Servers
| Server | Package | Purpose |
|---|---|---|
| GitHub | `@modelcontextprotocol/server-github` | Repo management |
| Filesystem | `@modelcontextprotocol/server-filesystem` | File operations |
| Postgres | `@modelcontextprotocol/server-postgres` | Database queries |
| Slack | `@modelcontextprotocol/server-slack` | Workspace messaging |
| Memory | `@modelcontextprotocol/server-memory` | Persistent key-value store |
## Sampling: Server-Initiated LLM Calls
MCP servers can ask Hermes to run LLM calls on their behalf (the protocol's "sampling" capability):
```yaml
mcp_servers:
  analysis:
    command: npx
    args: ["-y", "analysis-server"]
    sampling:
      enabled: true
      model: gemini-3-flash
      max_tokens_cap: 4096
      timeout: 30
```
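Under the hood, sampling is a server-to-client request (`sampling/createMessage` in the MCP spec). A sketch of how a client could clamp the server's requested token budget to the configured `max_tokens_cap` (the helper name is illustrative):

```python
# Sketch of enforcing max_tokens_cap on a server-initiated
# sampling/createMessage request. The helper name is illustrative; the
# maxTokens field comes from the MCP sampling request schema.
def apply_sampling_limits(params: dict, max_tokens_cap: int) -> dict:
    """Clamp the server's requested maxTokens to the configured cap."""
    requested = params.get("maxTokens", max_tokens_cap)
    params["maxTokens"] = min(requested, max_tokens_cap)
    return params

# A server asks for far more tokens than the cap allows.
incoming = {
    "messages": [{"role": "user",
                  "content": {"type": "text", "text": "Summarize this"}}],
    "maxTokens": 99999,
}
capped = apply_sampling_limits(incoming, 4096)
```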
## Timeouts and Limits
```yaml
mcp_servers:
  slow-server:
    command: python
    args: ["slow_mcp.py"]
    timeout: 180          # Tool call timeout (seconds)
    connect_timeout: 60   # Initial connection timeout (seconds)
```
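A per-call timeout like the one above can be enforced client-side by racing the tool call against a deadline. A sketch using `asyncio.wait_for` (`call_tool` is a stand-in for the real dispatch function):

```python
import asyncio

# Sketch of client-side tool-call timeout enforcement; call_tool is a
# stand-in for the real async dispatch into the MCP server.
async def call_with_timeout(call_tool, name: str, args: dict, timeout: float):
    """Run a tool call, returning an error payload if it exceeds timeout."""
    try:
        return await asyncio.wait_for(call_tool(name, args), timeout=timeout)
    except asyncio.TimeoutError:
        return {"error": f"tool call '{name}' exceeded {timeout}s"}
```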
## Tool Filtering
Restrict which tools are exposed:
```yaml
mcp_servers:
  github:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    allowed_tools:
      - create_issue
      - list_issues
    # Other tools hidden from Hermes
```
## Building Custom MCP Servers
MCP is an open protocol. Build servers in any language:
```python
# Python MCP server example, using the official `mcp` SDK's FastMCP helper
from mcp.server.fastmcp import FastMCP

server = FastMCP("my-tools")

@server.tool()
def my_custom_tool(param: str) -> str:
    """Echo back a processed parameter."""
    return f"Processed: {param}"

if __name__ == "__main__":
    server.run()  # serves over stdio by default
```