Revision history for Langertha
0.100 2026-02-20 05:33:44Z
- Add MCP (Model Context Protocol) tool calling support
  - New Langertha::Role::Tools for engine-agnostic tool calling
  - Anthropic engine: full tool calling support (format_tools,
    response_tool_calls, format_tool_results, response_text_content)
  - Async chat_with_tools_f() method for an automatic multi-round
    tool-calling loop with a configurable maximum number of iterations
  - Requires Net::Async::MCP for MCP server communication
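The new tool-calling loop might be driven roughly as follows. This is a hedged sketch only: the constructor arguments, the exact chat_with_tools_f() signature, and the max_iterations parameter name are assumptions inferred from the entry above, not verified API.

```perl
# Sketch only: argument names and calling convention are assumptions
# based on the changelog entry, not confirmed Langertha API.
use strict;
use warnings;
use Langertha::Engine::Anthropic;

my $engine = Langertha::Engine::Anthropic->new(
  api_key => $ENV{ANTHROPIC_API_KEY},
  model   => 'claude-3-5-sonnet-latest',  # hypothetical model choice
);

my $mcp_tools;  # tool definitions obtained via Net::Async::MCP (setup omitted)

# Runs the multi-round loop: send the prompt, execute any tools the
# model requests, feed the results back, and repeat until the model
# stops calling tools or the iteration cap is reached.
my $future = $engine->chat_with_tools_f(
  'What is the weather in Oslo?',
  tools          => $mcp_tools,
  max_iterations => 5,
);
my $answer = $future->get;
```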
- Add Future::AsyncAwait support for async/await syntax
  - All _f methods (simple_chat_f, simple_chat_stream_f, etc.)
  - Streaming with real-time async callbacks
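Since the _f methods return Futures, they should compose with Future::AsyncAwait. A minimal sketch, assuming simple_chat_f() takes a prompt string and returns a Future that resolves to the reply text:

```perl
# Sketch only: assumes simple_chat_f returns a Future, per this entry.
use strict;
use warnings;
use Future::AsyncAwait;
use Langertha::Engine::OpenAI;

async sub summarize {
  my ($engine, $text) = @_;
  # await suspends this sub until the Future resolves
  my $reply = await $engine->simple_chat_f("Summarize: $text");
  return $reply;
}

my $engine = Langertha::Engine::OpenAI->new(
  api_key => $ENV{OPENAI_API_KEY},
);
print summarize($engine, 'Some long text ...')->get, "\n";
```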
- Add streaming support
  - Synchronous callback, iterator, and Future-based APIs
  - SSE parsing for OpenAI/Anthropic/Groq/Mistral/DeepSeek
  - NDJSON parsing for Ollama
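Callback-style streaming might look like the sketch below. The argument order of simple_chat_stream_f() and the shape of the chunk passed to the callback are assumptions based on the method name listed above, not verified API.

```perl
# Sketch only: signature and callback payload are assumed.
use strict;
use warnings;
use Langertha::Engine::Ollama;

my $engine = Langertha::Engine::Ollama->new(
  url   => 'http://localhost:11434',
  model => 'llama3.1',
);

# Each NDJSON chunk from Ollama is parsed and handed to the callback
# as it arrives, so output appears incrementally.
$engine->simple_chat_stream_f(sub {
  my ($chunk) = @_;
  print $chunk;
}, 'Tell me a short story')->get;
```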
- Add Gemini engine (Google AI Studio)
- Add dynamic model listing via provider APIs with caching
- Add Anthropic extended parameters (effort, inference_geo)
- Improve POD documentation across all modules
0.008 2025-03-30 04:55:38Z
- Add Mistral engine integration
- Adapt Mistral OpenAPI spec for our parser
0.007 2025-01-25 19:29:51Z
- Add DeepSeek engine
0.006 2024-09-30 14:07:25Z
- Add Structured Output support
- Add Groq engine and Groq Whisper support
- Add TEST_WITHOUT_STRUCTURED_OUTPUT environment variable
0.005 2024-08-22 13:43:31Z
- Fix data type on keep_alive and remove POSIX round usage
0.004 2024-08-13 23:10:57Z
- Fix interpretation of max_tokens on Anthropic (response size, not context)
0.003 2024-08-11 00:21:01Z
- Add context size and temperature controls
0.002 2024-08-10 02:22:12Z
- Add Whisper Transcription API
- Add more engines
- Fix encoding issues
0.001 2024-08-03 22:47:33Z
- Initial release
- Unified Perl interface for LLM APIs
- Engines: OpenAI, Anthropic, Ollama
- Role-based architecture (Chat, HTTP, Models, JSON, Embedding)
- OpenAPI spec-driven request generation
- Embedding support