Revision history for Langertha
0.200 2026-02-22 21:53:36Z
- Add Langertha::Response: metadata container wrapping LLM text content
with id, model, finish_reason, usage (token counts), timing, and created
fields. Uses overload stringification for backward compatibility —
existing code treating responses as strings continues to work.
- All chat_response methods now return Langertha::Response objects:
- Role::OpenAICompatible: extracts id, model, created, finish_reason, usage
- Engine::Anthropic: extracts id, model, stop_reason, input/output_tokens
- Engine::Gemini: extracts modelVersion, finishReason, usageMetadata
(normalized to prompt_tokens/completion_tokens/total_tokens)
- Engine::Ollama: extracts model, done_reason, eval counts, timing fields
- Engine::AKI: extracts model_name, total_duration
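A minimal sketch of what the stringification overload buys existing callers (accessor names are taken from the field list above; treating usage as a hashref of the normalized token-count keys is an assumption):

```perl
my $response = $engine->simple_chat('Say hi');

# Old code keeps working: the object stringifies to the text content.
print "$response\n";

# New code can read the metadata (accessor names assumed from the
# field list above; usage assumed to be a hashref of token counts).
printf "model=%s finish=%s tokens=%d\n",
  $response->model, $response->finish_reason,
  $response->usage->{total_tokens};
```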
- Add Langertha::Raider: autonomous agent with conversation history and
MCP tool calling. Features mission (system prompt), persistent history
across raids, cumulative metrics (raids, iterations, tool_calls, time_ms),
clear_history and reset methods. Supports Hermes tool calling.
Auto-instruments raids with Langfuse traces and per-iteration
generation events when Langfuse is enabled on the engine.
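A hedged usage sketch for the agent; the mission attribute and the clear_history/reset methods are named above, while the raid() method name and the mcp attribute are assumptions inferred from this entry:

```perl
use Langertha::Raider;

my $raider = Langertha::Raider->new(
  engine  => $engine,
  mission => 'Explore the repository and summarize its layout',
  mcp     => $mcp,   # MCP server for tool calling (assumed attribute)
);

my $answer = $raider->raid('What does lib/Langertha.pm provide?');
# History persists across raids; metrics accumulate per raid.
my $more   = $raider->raid('And which roles does it compose?');

$raider->clear_history;  # drop conversation history
$raider->reset;          # back to a fresh state
```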
- Add Langertha::Role::Langfuse: observability integration with Langfuse
REST API. Composed into Role::Chat — every engine has Langfuse support
built in. Auto-instruments simple_chat with trace and generation events.
Batched ingestion via POST /api/public/ingestion with Basic Auth.
Disabled by default — active when langfuse_public_key and
langfuse_secret_key are set (via constructor or LANGFUSE_PUBLIC_KEY /
LANGFUSE_SECRET_KEY / LANGFUSE_URL env vars).
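A sketch of the two ways to enable it; the key attributes are named above, while langfuse_url as a constructor attribute is an assumption mirroring the LANGFUSE_URL env var:

```perl
# Via constructor:
my $engine = Langertha::Engine::OpenAI->new(
  api_key             => $ENV{OPENAI_API_KEY},
  langfuse_public_key => 'pk-lf-...',
  langfuse_secret_key => 'sk-lf-...',
  langfuse_url        => 'http://localhost:3000',  # assumed attribute name
);

# Or via environment, no code changes:
#   LANGFUSE_PUBLIC_KEY=pk-lf-... LANGFUSE_SECRET_KEY=sk-lf-... perl app.pl
```

Because the role is composed into Role::Chat and disabled by default, engines without keys configured behave exactly as before.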
- Add ex/response.pl: Response metadata showcase (tokens, model, timing)
- Add ex/raider.pl: autonomous file explorer agent example
- Add ex/langfuse.pl: Langfuse observability example
- Add ex/langfuse-k8s.yaml: Kubernetes manifest for self-hosted Langfuse
with pre-configured project and API keys (zero setup)
- Add t/70_response.t: Response unit tests across all engine formats
- Add t/72_langfuse.t: Langfuse integration tests with mock HTTP
- Add t/82_live_raider.t: live Raider integration test
- Add Langertha::Role::OpenAICompatible: extracted OpenAI API format
methods into a reusable role. Engines that use the OpenAI-compatible
API format now compose this role instead of duplicating methods.
Engine::OpenAI and all subclasses continue to work unchanged.
- Add Langertha::Engine::OllamaOpenAI: first-class engine for Ollama's
OpenAI-compatible /v1 endpoint. Ollama's openai() method now returns
this engine instead of a raw Engine::OpenAI instance.
- Add Langertha::Engine::AKI for AKI.IO native API
(chat completions with key-in-body auth, synchronous mode,
dynamic endpoint listing via list_models and endpoint_details)
- Add Langertha::Engine::AKIOpenAI for AKI.IO via OpenAI-compatible API
(chat, streaming, tool calling via Role::OpenAICompatible)
- Add Langertha::Engine::NousResearch for Nous Research Inference API
with Hermes-native tool calling via <tool_call> XML tags
- Add Langertha::Engine::Perplexity for Perplexity Sonar API
(chat and streaming only, no tool calling)
- Add hermes_tools feature flag to Langertha::Role::Tools for
Hermes-native tool calling via <tool_call>/<tool_response> XML tags;
enables MCP tool calling on any model that supports the Hermes
prompt format, even without API-level tool support
- Add hermes_call_tag, hermes_response_tag attributes for custom
XML tag names (default: tool_call, tool_response)
- Add hermes_tool_instructions attribute for customizing the
instruction text without changing the structural XML template
- Add hermes_tool_prompt attribute for full system prompt override
- Add hermes_extract_content() method for engines to override
response content extraction in Hermes mode
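A sketch of enabling Hermes-mode tool calling on an engine; the hermes_tools flag and tag attributes are named above, while the Ollama constructor arguments and model name are illustrative assumptions:

```perl
# With hermes_tools enabled, the tool list is injected into the system
# prompt and the model answers tool requests as XML, e.g.:
#   <tool_call>{"name":"get_weather","arguments":{"city":"Oslo"}}</tool_call>
my $engine = Langertha::Engine::Ollama->new(
  url          => 'http://localhost:11434',
  model        => 'hermes3',
  hermes_tools => 1,
  # hermes_call_tag     => 'tool_call',      # defaults shown
  # hermes_response_tag => 'tool_response',
);
```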
- MCP tool calling now supported on ALL engines:
- OpenAI (inherited by Groq, vLLM, Mistral, DeepSeek)
- Anthropic (with Anthropic-native tool format)
- Gemini (with Gemini-native functionDeclarations format)
- Ollama (OpenAI-compatible tool format)
- NousResearch (Hermes-native via <tool_call> XML tags)
- Add extract_tool_call() to Role::Tools for engine-agnostic
tool call parsing across all provider formats
- Fix Gemini tool calling: pass-through native message formats,
convert MCP tool results to Gemini's functionResponse object
- Fix Gemini chat_request to preserve native parts in messages
from tool result round-trips
- Remove hardcoded all_models() lists from all engines; model
discovery is now exclusively dynamic via list_models()
- Update default models:
- Anthropic: claude-sonnet-4-6 (short alias)
- Gemini: gemini-2.5-flash (2.0-flash deprecated for new users)
- Add Hermes tool calling unit test with mock round-trip
(t/66_tool_calling_hermes.t)
- Add vLLM tool calling unit test (t/65_tool_calling_vllm.t)
- Add live integration test for all engines including Ollama, vLLM,
and NousResearch (t/80_live_tool_calling.t) with multi-model support
- Add mock round-trip test for Ollama tool calling
(t/64_tool_calling_ollama_mock.t) using fixture data
- Add shared Test::MockAsyncHTTP test helper (t/lib/)
for mocking async HTTP in engine tests
- Normalize test API key env vars to the TEST_LANGERTHA_*_API_KEY
naming scheme to prevent accidental use of production keys
- Add TEST_LANGERTHA_OLLAMA_URL and TEST_LANGERTHA_OLLAMA_MODELS
env vars for Ollama live testing
- Add TEST_LANGERTHA_VLLM_URL, TEST_LANGERTHA_VLLM_MODEL, and
TEST_LANGERTHA_VLLM_TOOL_CALL_PARSER env vars for vLLM live testing
- Add AKI.IO native API unit test (t/25_aki_requests.t) with mock
response parsing for chat, list_models, and endpoint_details
- Add AKI.IO live integration test (t/81_live_aki.t) for
list_models, endpoint_details, and simple_chat
- Add AKI.IO to live tool calling test (t/80_live_tool_calling.t)
via OpenAI-compatible API
- Add TEST_LANGERTHA_AKI_API_KEY and TEST_LANGERTHA_AKI_MODEL
env vars for AKI.IO live testing
- Use RFC 2606 test.invalid domain for dummy URLs in unit tests
- Add ex/hermes_tools.pl example for Hermes-native tool calling
- Rewrite all POD to inline style across all 37 modules —
=attr directly after has, =method directly after sub.
Add POD to 18 previously undocumented modules.
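The inline style keeps documentation next to the code it describes; an illustrative attribute (hypothetical name) looks like:

```perl
has temperature => (
  is => 'ro',
);

=attr temperature

Sampling temperature forwarded to the provider API.

=cut
```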
0.100 2026-02-20 05:33:44Z
- Add MCP (Model Context Protocol) tool calling support
- New Langertha::Role::Tools for engine-agnostic tool calling
- Anthropic engine: full tool calling support (format_tools,
response_tool_calls, format_tool_results, response_text_content)
- Async chat_with_tools_f() method for automatic multi-round
tool-calling loop with configurable max iterations
- Requires Net::Async::MCP for MCP server communication
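A hedged sketch of the multi-round loop; the exact chat_with_tools_f signature (tool source first, then the prompt) is an assumption:

```perl
use Future::AsyncAwait;

async sub ask {
  my ($engine, $mcp) = @_;
  # Rounds continue until the model stops requesting tools
  # or the configured max iterations is reached.
  my $answer = await $engine->chat_with_tools_f(
    $mcp, 'What is the weather in Oslo?',
  );
  return $answer;
}
```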
- Add Future::AsyncAwait support for async/await syntax
- All _f methods (simple_chat_f, simple_chat_stream_f, etc.)
- Streaming with real-time async callbacks
- Add streaming support
- Synchronous callback, iterator, and Future-based APIs
- SSE parsing for OpenAI/Anthropic/Groq/Mistral/DeepSeek
- NDJSON parsing for Ollama
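One plausible invocation of the Future-based streaming API; simple_chat_stream_f is named above, but placing the chunk callback after the prompt is an assumption:

```perl
my $f = $engine->simple_chat_stream_f('Tell me a story', sub {
  my ($chunk) = @_;
  print $chunk;   # chunks arrive as the model generates them
});
my $full = $f->get;  # block until the stream completes
```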
- Add Gemini engine (Google AI Studio)
- Add dynamic model listing via provider APIs with caching
- Add Anthropic extended parameters (effort, inference_geo)
- Improve POD documentation across all modules
0.008 2025-03-30 04:55:38Z
- Add Mistral engine integration
- Adapt Mistral OpenAPI spec for our parser
0.007 2025-01-25 19:29:51Z
- Add DeepSeek engine
0.006 2024-09-30 14:07:25Z
- Add Structured Output support
- Add Groq engine and Groq Whisper support
- Add TEST_WITHOUT_STRUCTURED_OUTPUT env var
0.005 2024-08-22 13:43:31Z
- Fix data type on keep_alive and remove POSIX round usage
0.004 2024-08-13 23:10:57Z
- Fix interpretation of max_tokens on Anthropic (response size, not context)
0.003 2024-08-11 00:21:01Z
- Add context size and temperature controls
0.002 2024-08-10 02:22:12Z
- Add Whisper Transcription API
- Add more engines
- Fix encoding issues
0.001 2024-08-03 22:47:33Z
- Initial release
- Unified Perl interface for LLM APIs
- Engines: OpenAI, Anthropic, Ollama
- Role-based architecture (Chat, HTTP, Models, JSON, Embedding)
- OpenAPI spec-driven request generation
- Embedding support