Fix Hermes Agent Not Using Tools (Acting Like Basic LLM)

Troubleshoot when Hermes acts like a basic chatbot instead of an agent — not executing tools, commands, or actions.

When Hermes responds like a basic chatbot — answering questions but never executing tools, running commands, or taking actions — the issue is almost always model-related. Not all models support tool/function calling, and smaller models may not have enough context to process the tool definitions.


Before you start:

  • Hermes Agent installed and running
  • A model that supports function/tool calling

Steps

  1. Check your model

    Verify your model supports function/tool calling — not all models do

  2. Test with a known-good model

    Try 'hermes model hermes3' or 'gpt-4o' to confirm tools work with a supported model

  3. Check context length

    Ensure your model's context window is large enough to hold the tool definitions — small models may not fit them

  4. Verify tool loading

    Run 'hermes tools list' to see what tools are available to the agent

  5. Check model configuration

    Verify 'model: provider:' and 'model: name:' are correctly set in config.yaml
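Step 5 can be checked mechanically. Below is a minimal sketch that confirms a config.yaml sets both 'model: provider:' and 'model: name:'. The sample file it writes is illustrative (the key layout is an assumption based on the paths named above); point CONFIG at your real config.yaml instead.

```shell
# Sketch: verify a Hermes-style config.yaml sets model.provider and
# model.name (step 5). The sample file below is illustrative; set
# CONFIG to your real config.yaml path instead.
CONFIG="${CONFIG:-/tmp/hermes-config-sample.yaml}"
cat > "$CONFIG" <<'EOF'
model:
  provider: ollama    # e.g. ollama, openai
  name: hermes3       # must be a model that supports tool calling
EOF

for key in provider name; do
  if grep -Eq "^[[:space:]]+$key:" "$CONFIG"; then
    echo "ok: model.$key is set"
  else
    echo "missing: model.$key" >&2
  fi
done
```

If either key is missing, fix config.yaml before testing anything else — a wrong or absent model setting produces exactly the "basic chatbot" behavior this guide covers.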

Pro Tips

  • 💡 Hermes 3 (hermes3), GPT-4o, Claude, and Gemini Pro all support tool calling — start with one of these when debugging
  • 💡 Very small models (7B and under) often struggle with tool calling, especially quantized versions
  • 💡 If using Ollama, try 'ollama pull hermes3' — it's specifically optimized for tool use with Hermes
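The model names in the tips above can be folded into a quick pre-flight check. This is a sketch: the helper name is hypothetical and the list is illustrative, not exhaustive.

```shell
# Hypothetical helper: is this model on the tool-capable list from the
# Pro Tips (hermes3, gpt-4o, Claude, Gemini)? Illustrative only —
# extend the patterns for other models you know support tool calling.
supports_tools() {
  case "$1" in
    hermes3|gpt-4o|claude*|gemini*) return 0 ;;
    *) return 1 ;;
  esac
}

supports_tools hermes3    && echo "hermes3: tool calling supported"
supports_tools tiny-7b-q4 || echo "tiny-7b-q4: pick a different model"
```

A check like this is handy in setup scripts, so a teammate can't accidentally point the agent at a model that silently ignores tools.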

Troubleshooting

Model responds with 'I don't have access to tools'

Your model doesn't support function calling. Switch to a model that does: hermes3, gpt-4o, claude-3, or gemini-pro.

Tools work sometimes but not consistently

You may be hitting context limits. Large tool sets can exceed a small model's context window. Try reducing the number of active tools or switching to a model with a larger context window.
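One way to sanity-check the context-limit theory is to estimate the token footprint of your tool definitions from their JSON size. The ~4-characters-per-token rule of thumb and the file path below are both assumptions — actual tokenization varies by model.

```shell
# Rough token estimate for tool definitions, using the common
# ~4 chars/token heuristic. TOOLS_FILE is hypothetical; point it at
# your agent's exported tool-definition JSON. The sample written here
# is only so the script runs standalone.
TOOLS_FILE="${TOOLS_FILE:-/tmp/tools-sample.json}"
printf '%s' '{"tools":[{"name":"read_file","description":"Read a file from disk"}]}' > "$TOOLS_FILE"

chars=$(wc -c < "$TOOLS_FILE")
echo "approx tokens for tool definitions: $((chars / 4))"
```

Compare the estimate against your model's context window, leaving room for the system prompt, conversation history, and replies — if the definitions alone eat most of the window, intermittent tool failures are expected.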

Model says it has tools but refuses to use them

Some models have safety refusals. Try a less restricted model, or rephrase your request to be more specific about what action you want.
