Nous Research · Hermes Agent

LLM Provider Setup


Configure LLM providers for Hermes Agent — OpenAI, Anthropic, OpenRouter, Kimi, DeepSeek, and how to set up provider fallback chains.


Hermes' flexibility comes from supporting many LLM providers. Here is which keys you need, where to get them, and how to configure each.

You Need One (or More)

You need at least one LLM provider configured. Choose based on your budget, privacy needs, and capability requirements.

Provider Options

1. OpenRouter (Recommended for Flexibility)

  • 200+ models, single API key
  • Pay-per-use
  • Get key: openrouter.ai
hermes model
# Select OpenRouter
# Enter key when prompted

2. Nous Portal (Recommended for Beginners)

  • OAuth login, zero config
  • Subscription-based
  • Run: hermes model then select Nous Portal

3. Anthropic (Claude)

  • Direct to Anthropic API
  • Requires account at anthropic.com
  • Pricing: $3-15/M tokens
hermes model
# Select Anthropic
# Enter key from dashboard

4. OpenAI (GPT-4)

  • Direct to OpenAI
  • More expensive
  • Get key: platform.openai.com

5. Ollama (Local, Free)

  • No API key needed
  • Runs local models
  • Install Ollama separately, configure in Hermes as custom endpoint
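One way the custom-endpoint config might look. The exact key names below are assumptions about the Hermes config schema; Ollama's OpenAI-compatible API at localhost:11434/v1 is real.

```yaml
# Hypothetical custom-endpoint config; adjust key names to match
# your Hermes version's config schema.
model:
  provider: custom
  base_url: http://localhost:11434/v1   # Ollama's OpenAI-compatible API
  model: llama3.1                       # any model pulled via `ollama pull`
  api_key: ollama                       # Ollama ignores the key; some clients require a non-empty value
```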

6. MiniMax ($10/mo flat)

  • Fixed monthly, predictable cost
  • Has dedicated Hermes setup page
  • $10/month unlimited on M2.7

7. Kimi/Moonshot (Cheap, Popular)

  • Very affordable
  • Community favorite for cost-performance
  • Get credits at platform.moonshot.cn

8. DeepSeek ($0.30/M)

  • Among the cheapest
  • 90% cache discount
  • ~$2/month typical use
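The ~$2 figure is easy to sanity-check with back-of-envelope arithmetic (token counts below are illustrative, not measured usage):

```shell
# Back-of-envelope cost check, in integer cents.
PRICE_CENTS_PER_M=30      # $0.30 per 1M tokens
BUDGET_CENTS=200          # $2/month
TOKENS_M=$((BUDGET_CENTS / PRICE_CENTS_PER_M))
echo "~${TOKENS_M}M full-price tokens/month"   # -> ~6M full-price tokens/month
# With a 90% discount on cache hits, cached tokens cost a tenth as much,
# so a cache-heavy workload stretches the same budget roughly 10x further.
```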

Setting Keys

hermes model

Select a provider and enter your key when prompted. Keys are stored encrypted in your Hermes config.

Switching Providers

hermes model

Change anytime. No code changes needed.

Fallback Chain

v0.6.0 and later support configuring multiple providers as a fallback chain in ~/.hermes/config.yaml:

model:
  provider: openrouter
  fallback_providers:
    - anthropic
    - minimax

If the primary provider fails, Hermes automatically tries the next one in the chain.

Cost Estimates by Provider

| Provider | Typical Monthly Cost |
| --- | --- |
| Ollama | Hardware only |
| DeepSeek | ~$2 |
| Kimi | ~$3-5 |
| MiniMax | $10 flat |
| OpenRouter (mixed) | $5-25 |
| Anthropic | $3-15 |
| OpenAI | $5-20 |




Frequently Asked Questions

Which LLM provider should I choose for the best cost-capability balance?

For most users: Kimi K2.5 from Moonshot or MiniMax as a daily driver — both are fast, capable, and inexpensive. Use Claude Sonnet or GPT-4 only for complex reasoning tasks where the extra capability is worth the significantly higher per-token cost.

How do I set up a fallback provider chain so Hermes doesn't get stuck?

In ~/.hermes/config.yaml, configure multiple providers: set your primary (e.g., Kimi K2.5) and add fallback providers (e.g., Claude Sonnet, MiniMax). When the primary returns errors or rate limits, Hermes automatically tries the next in chain. This feature requires v0.6.0 or later.

Why was my API key rejected even though I copied it correctly?

Two common causes: trailing whitespace from copy/paste (re-copy directly from the provider dashboard), or URL-encoded characters at the end of the key (e.g., %3D instead of =). Also check that you're setting the key for the correct provider — OPENAI_API_KEY and ANTHROPIC_API_KEY are stored separately.
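A quick way to check for both problems from a shell (the key below is a made-up example):

```shell
# A pasted key with a trailing newline and URL-encoded '=' padding:
KEY='sk-example-key%3D
'
# Strip all whitespace and decode %3D back to '=':
CLEAN=$(printf '%s' "$KEY" | tr -d '[:space:]' | sed 's/%3D/=/g')
printf '%s\n' "$CLEAN"   # -> sk-example-key=
```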

Can I use multiple providers simultaneously in Hermes?

Yes. Configure a fallback provider chain in config.yaml so when your primary fails, Hermes automatically switches to the next. You can also switch manually with `hermes model` whenever you want to try a different provider for a specific task.

What is OpenRouter and why is it recommended for flexibility?

OpenRouter provides a single API key that gives access to 200+ models from dozens of providers — OpenAI, Anthropic, Meta, Mistral, and many more. You manage one billing relationship and one API key while having the flexibility to switch models with a single command or config change.

Ready to Run Your Own AI Agent?

Self-host Hermes in 60 seconds. No credit card, no cloud lock-in.

Deploy Hermes Free →
