NAME
Langertha::Role::Tools - Role for MCP tool calling support
VERSION
version 0.202
SYNOPSIS
use IO::Async::Loop;
use Net::Async::MCP;
use Future::AsyncAwait;
use Langertha::Engine::Anthropic;
use Langertha::Engine::NousResearch;
my $loop = IO::Async::Loop->new;
# Set up an MCP server with tools
my $mcp = Net::Async::MCP->new(server => $my_mcp_server);
$loop->add($mcp);
await $mcp->initialize;
# Create engine with MCP servers
my $engine = Langertha::Engine::Anthropic->new(
api_key => $ENV{ANTHROPIC_API_KEY},
model => 'claude-sonnet-4-6',
mcp_servers => [$mcp],
);
# Async tool-calling chat loop
my $response = await $engine->chat_with_tools_f(
'Use the available tools to answer my question'
);
# Hermes-native tool calling (for models without native tool support)
my $hermes_engine = Langertha::Engine::NousResearch->new(
api_key => $ENV{NOUSRESEARCH_API_KEY},
hermes_tools => 1,
mcp_servers => [$mcp],
);
DESCRIPTION
This role adds MCP (Model Context Protocol) tool calling support to Langertha engines. It provides the "chat_with_tools_f" method which implements the full async tool-calling loop:
1. Gather available tools from all configured MCP servers.
2. Send a chat request with the tool definitions to the LLM.
3. If the LLM returns tool calls, execute them via MCP.
4. Feed the tool results back to the LLM and repeat.
5. When the LLM returns a final text response, return it.
Engines composing this role must implement five methods to handle engine-specific tool format conversion: format_tools, response_tool_calls, extract_tool_call, format_tool_results, and response_text_content.
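As a sketch only (the method bodies here are placeholders, not the actual engine implementations), an engine composing this role provides the five hooks roughly like this:

```perl
package My::Engine;
use Moo;   # assumption: whichever object system Langertha engines use
with 'Langertha::Role::Tools';

# Translate MCP tool definitions into the request's tool parameter.
sub format_tools          { my ( $self, @tools ) = @_; ... }

# Extract the list of tool-call entries from a parsed API response.
sub response_tool_calls   { my ( $self, $data ) = @_; ... }

# Pull the name and arguments out of a single tool-call entry.
sub extract_tool_call     { my ( $self, $call ) = @_; ... }

# Wrap executed tool results as messages for the next request.
sub format_tool_results   { my ( $self, @results ) = @_; ... }

# Get the final text content from a response without tool calls.
sub response_text_content { my ( $self, $data ) = @_; ... }
```

The `...` bodies are Perl's yada-yada placeholder; each hook maps between the engine's wire format and the role's engine-neutral tool-calling loop.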
For models and APIs that do not support a native tools parameter (such as Nous Research Hermes models), set hermes_tools => 1 to enable Hermes-native tool calling via XML tags. When enabled, tools are injected into the system prompt as <tools> XML and <tool_call> tags are parsed from the model's text output instead.
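To illustrate the Hermes convention, the model's raw text output carries calls inside the configured tag, with a JSON body. A minimal stand-alone parse (not the role's actual parser; the tool name and JSON shape here are invented for the example) could look like:

```perl
use strict;
use warnings;
use JSON::PP qw( decode_json );   # core module

my $text = <<'END';
I will look that up for you.
<tool_call>
{"name":"get_weather","arguments":{"city":"Berlin"}}
</tool_call>
END

my $tag = 'tool_call';   # matches the default hermes_call_tag
while ( $text =~ m{<\Q$tag\E>\s*(.+?)\s*</\Q$tag\E>}gs ) {
  my $call = decode_json($1);
  printf "model requested %s(%s)\n",
    $call->{name}, $call->{arguments}{city};
}
```

The role performs the same kind of extraction internally, then executes the named tool via MCP and sends the result back inside the "hermes_response_tag" element.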
mcp_servers
mcp_servers => [$mcp1, $mcp2]
ArrayRef of Net::Async::MCP instances to use as tool providers. Defaults to an empty ArrayRef. At least one server must be configured before calling "chat_with_tools_f".
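Multiple servers are simply listed together; their tools are merged into one set offered to the LLM. A sketch, assuming this runs inside an async sub and that $file_server and $search_server are your own MCP server handles:

```perl
my $files  = Net::Async::MCP->new(server => $file_server);
my $search = Net::Async::MCP->new(server => $search_server);
$loop->add($_) for $files, $search;
await $files->initialize;
await $search->initialize;

my $engine = Langertha::Engine::Anthropic->new(
  api_key     => $ENV{ANTHROPIC_API_KEY},
  model       => 'claude-sonnet-4-6',
  mcp_servers => [ $files, $search ],
);
```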
tool_max_iterations
tool_max_iterations => 20
Maximum number of tool-calling round trips before aborting with an error. Defaults to 10. Increase for complex multi-step tool workflows.
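For example, a workflow that chains many dependent tool calls can raise the limit at construction time:

```perl
my $engine = Langertha::Engine::Anthropic->new(
  api_key             => $ENV{ANTHROPIC_API_KEY},
  model               => 'claude-sonnet-4-6',
  mcp_servers         => [$mcp],
  tool_max_iterations => 25,   # default is 10
);
```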
hermes_tools
hermes_tools => 1
Enable Hermes-native tool calling via <tool_call> XML tags. When true, tools are injected into the system prompt and parsed from the model's text output instead of using the API's native tool parameter. Defaults to 0 (disabled).
hermes_call_tag
hermes_call_tag => 'function_call'
The XML tag name used for tool calls in the model's output. Both the prompt template and the response parser use this tag. Defaults to 'tool_call'.
hermes_response_tag
hermes_response_tag => 'function_response'
The XML tag name used when sending tool results back to the model. Defaults to 'tool_response'.
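Some Hermes-style fine-tunes are trained on different tag names; both tags can be overridden together:

```perl
my $engine = Langertha::Engine::NousResearch->new(
  api_key             => $ENV{NOUSRESEARCH_API_KEY},
  hermes_tools        => 1,
  hermes_call_tag     => 'function_call',
  hermes_response_tag => 'function_response',
  mcp_servers         => [$mcp],
);
```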
hermes_tool_instructions
hermes_tool_instructions => 'You are a helpful assistant that can call functions.'
The instruction text prepended to the Hermes tool system prompt. Customize this to change the model's behavior without altering the structural XML template. The default instructs the model to call functions without making assumptions about argument values.
hermes_tool_prompt
The full system prompt template used for Hermes tool calling. Must contain a %s placeholder where the tools JSON will be inserted. Built automatically from "hermes_tool_instructions" and "hermes_call_tag". Override this only if you need full control over the prompt structure.
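An illustrative override (the wording is an example, not the shipped template; the single %s receives the tools JSON):

```perl
my $engine = Langertha::Engine::NousResearch->new(
  api_key      => $ENV{NOUSRESEARCH_API_KEY},
  hermes_tools => 1,
  mcp_servers  => [$mcp],
  hermes_tool_prompt => <<'END',
You may call tools. The available tools are given as JSON schemas:
<tools>
%s
</tools>
To call a tool, reply with a <tool_call> element containing
{"name": <tool name>, "arguments": <arguments object>}.
END
);
```

Note that if you also change "hermes_call_tag", the tag named in your custom template must match it, since the parser uses the configured tag.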
hermes_extract_content
my $content = $self->hermes_extract_content($data);
Extracts raw text content from a parsed LLM response for Hermes tool call parsing. Defaults to OpenAI response format (choices[0].message.content). Override this method in engines with non-OpenAI response structures.
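As a sketch of such an override, for a hypothetical engine whose responses carry an Anthropic-style content array rather than the default choices[0].message.content path:

```perl
package My::Engine;
use Moo;
with 'Langertha::Role::Tools';

# Hypothetical response shape:
#   { content => [ { type => 'text', text => '...' }, ... ] }
sub hermes_extract_content {
  my ( $self, $data ) = @_;
  return join '', map { $_->{text} // '' }
    grep { ( $_->{type} // '' ) eq 'text' }
    @{ $data->{content} // [] };
}
```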
chat_with_tools_f
my $response = await $engine->chat_with_tools_f(@messages);
Async tool-calling chat loop. Accepts the same message arguments as "simple_chat" in Langertha::Role::Chat. Gathers tools from all "mcp_servers", sends the request, executes any tool calls returned by the LLM, and repeats until the LLM returns a final text response or "tool_max_iterations" is exceeded. Returns a Future that resolves to the final text response.
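A typical call site, sketched under the assumption that any try/catch mechanism (here Syntax::Keyword::Try) is available for handling a failed Future, including the failure raised when "tool_max_iterations" is exceeded:

```perl
use Future::AsyncAwait;
use Syntax::Keyword::Try;

async sub ask {
  my ( $engine, $question ) = @_;
  try {
    return await $engine->chat_with_tools_f($question);
  }
  catch ( $e ) {
    warn "tool loop failed: $e";
    return undef;
  }
}
```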
SEE ALSO
Langertha::Role::Chat - Chat role this is built on top of
Langertha::Raider - Autonomous agent with persistent history using tools
Net::Async::MCP - MCP client used as tool provider
Langertha::Engine::Anthropic - Engine with native tool support
Langertha::Engine::NousResearch - Engine using Hermes tool calling
SUPPORT
Issues
Please report bugs and feature requests on GitHub at https://github.com/Getty/langertha/issues.
CONTRIBUTING
Contributions are welcome! Please fork the repository and submit a pull request.
AUTHOR
Torsten Raudssus <torsten@raudssus.de> https://raudss.us/
COPYRIGHT AND LICENSE
This software is copyright (c) 2026 by Torsten Raudssus.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.