NAME
Langertha - The clan of fierce vikings with 🪓 and 🛡️ to AId your rAId
VERSION
version 0.201
SYNOPSIS
my $system_prompt = 'You are a helpful assistant.';

# Local models via Ollama
use Langertha::Engine::Ollama;

my $ollama = Langertha::Engine::Ollama->new(
  url => 'http://127.0.0.1:11434',
  model => 'llama3.1',
  system_prompt => $system_prompt,
);

print $ollama->simple_chat('Do you wanna build a snowman?');

# OpenAI
use Langertha::Engine::OpenAI;

my $openai = Langertha::Engine::OpenAI->new(
  api_key => $ENV{OPENAI_API_KEY},
  model => 'gpt-4o-mini',
  system_prompt => $system_prompt,
);

print $openai->simple_chat('Do you wanna build a snowman?');

# Anthropic Claude
use Langertha::Engine::Anthropic;

my $claude = Langertha::Engine::Anthropic->new(
  api_key => $ENV{ANTHROPIC_API_KEY},
  model => 'claude-sonnet-4-6',
);

print $claude->simple_chat('Generate Perl Moose classes to represent GeoJSON data.');

# Google Gemini
use Langertha::Engine::Gemini;

my $gemini = Langertha::Engine::Gemini->new(
  api_key => $ENV{GEMINI_API_KEY},
  model => 'gemini-2.5-flash',
);

print $gemini->simple_chat('Explain the difference between Moose and Moo.');
DESCRIPTION
Langertha provides a unified Perl interface for interacting with various Large Language Model (LLM) APIs. It abstracts away provider-specific differences, giving you a consistent API whether you're using OpenAI, Anthropic Claude, Ollama, Groq, Mistral, or other providers.
THIS API IS WORK IN PROGRESS.
Key Features
Unified API across multiple LLM providers
Streaming support via callback, iterator, or Future
Async/await syntax via Future::AsyncAwait
Role-based architecture for easy extensibility
JSON response handling
Temperature, response size, and other parameter controls
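The parameter roles listed below (see Roles) surface as constructor attributes on the engines. A minimal sketch, assuming the attribute names `temperature` and `seed` follow their role names (Langertha::Role::Temperature, Langertha::Role::Seed) — check each engine's documentation for the exact spelling:

```perl
use Langertha::Engine::Ollama;

# Hypothetical sketch: 'temperature' and 'seed' are assumed attribute
# names derived from the role names and may differ per engine.
my $ollama = Langertha::Engine::Ollama->new(
  url => 'http://127.0.0.1:11434',
  model => 'llama3.1',
  temperature => 0.2, # lower values give more deterministic output
  seed => 42,         # fixed seed for reproducible sampling
);

print $ollama->simple_chat('Summarize Moose in one sentence.');
```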
Engine Modules
Langertha::Engine::Anthropic - Claude models (Sonnet, Opus, Haiku)
Langertha::Engine::OpenAI - GPT-4, GPT-4o, GPT-3.5, o1, embeddings
Langertha::Engine::Ollama - Local LLM hosting via https://ollama.com/
Langertha::Engine::Groq - Fast inference API
Langertha::Engine::Mistral - Mistral AI models
Langertha::Engine::DeepSeek - DeepSeek models
Langertha::Engine::MiniMax - MiniMax AI models (M2.5, M2.1)
Langertha::Engine::Gemini - Google Gemini models (Flash, Pro)
Langertha::Engine::vLLM - vLLM inference server
Langertha::Engine::Perplexity - Perplexity AI models
Langertha::Engine::NousResearch - Nous Research models
Langertha::Engine::OllamaOpenAI - Ollama via OpenAI-compatible API
Langertha::Engine::AKI - AKI engine
Langertha::Engine::AKIOpenAI - AKI via OpenAI-compatible API
Langertha::Engine::Whisper - OpenAI Whisper speech-to-text
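As an illustration of the non-chat engines above, a speech-to-text call via the Whisper engine might look like the sketch below. The `simple_transcription` method name is an assumption modeled on `simple_chat` and Langertha::Role::Transcription; consult Langertha::Engine::Whisper for the actual interface:

```perl
use Langertha::Engine::Whisper;

# Assumed setup: a locally reachable Whisper-compatible endpoint.
my $whisper = Langertha::Engine::Whisper->new(
  url => 'http://127.0.0.1:9000',
);

# 'simple_transcription' is a hypothetical method name derived from
# Langertha::Role::Transcription; verify against the engine's docs.
print $whisper->simple_transcription('meeting.mp3');
```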
Roles
Roles provide composable functionality to engines:
Langertha::Role::Chat - Synchronous and async chat methods
Langertha::Role::HTTP - HTTP request/response handling
Langertha::Role::Streaming - Streaming response processing
Langertha::Role::JSON - JSON encode/decode
Langertha::Role::OpenAICompatible - OpenAI-compatible API behaviour
Langertha::Role::SystemPrompt - System prompt attribute
Langertha::Role::Temperature - Temperature parameter
Langertha::Role::ResponseSize - Max response size parameter
Langertha::Role::ResponseFormat - Response format (JSON mode)
Langertha::Role::ContextSize - Context window size parameter
Langertha::Role::Seed - Deterministic seed parameter
Langertha::Role::Models - Model listing
Langertha::Role::Embedding - Embedding generation
Langertha::Role::Transcription - Audio transcription
Langertha::Role::Tools - Tool/function calling
Langertha::Role::Langfuse - Langfuse observability integration
Langertha::Role::OpenAPI - OpenAPI spec support
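Engines that consume Langertha::Role::Embedding can turn text into vector embeddings. A hedged sketch, assuming an `embedding` method returning an arrayref of floats (the exact name and return shape may differ; see the role's documentation):

```perl
use Langertha::Engine::OpenAI;

my $openai = Langertha::Engine::OpenAI->new(
  api_key => $ENV{OPENAI_API_KEY},
  model => 'gpt-4o-mini',
);

# 'embedding' is an assumed method name based on Langertha::Role::Embedding.
my $vector = $openai->embedding('Fierce vikings row across the fjord.');
print scalar(@$vector), " dimensions\n"; # assuming an arrayref of floats
```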
Data Objects
Langertha::Response - LLM response with content and metadata
Langertha::Stream - Iterator over streaming chunks
Langertha::Stream::Chunk - A single chunk from a streaming response
Langertha::Request::HTTP - Internal HTTP request object
Langertha::Raider - Raider orchestration object
Streaming
All engines that implement Langertha::Role::Chat support streaming. There are several ways to consume a stream:
Synchronous with callback:
$engine->simple_chat_stream(sub {
  my ($chunk) = @_;
  print $chunk->content;
}, 'Tell me a story');
Synchronous with iterator (Langertha::Stream):
my $stream = $engine->simple_chat_stream_iterator('Tell me a story');

while (my $chunk = $stream->next) {
  print $chunk->content;
}
Async with Future (traditional):
my $future = $engine->simple_chat_f('Hello');
my $response = $future->get;

my $stream_future = $engine->simple_chat_stream_f('Tell me a story');
my ($content, $chunks) = $stream_future->get;
Async with Future::AsyncAwait (recommended):
use feature 'say';
use Future::AsyncAwait;

async sub chat_with_ai {
  my ($engine) = @_;
  my $response = await $engine->simple_chat_f('Hello');
  say "AI says: $response";
  return $response;
}

async sub stream_chat {
  my ($engine) = @_;
  my ($content, $chunks) = await $engine->simple_chat_stream_realtime_f(
    sub { print shift->content },
    'Tell me a story',
  );
  say "\nReceived ", scalar(@$chunks), " chunks";
  return $content;
}

chat_with_ai($engine)->get;
stream_chat($engine)->get;
The _f methods use IO::Async and Net::Async::HTTP internally, loaded lazily only when you call them. See examples/async_await_example.pl for complete working examples.
Using with Mojolicious:
use Mojo::Base -strict;
use Mojo::IOLoop;
use Future::Mojo;
use Langertha::Engine::OpenAI;

my $openai = Langertha::Engine::OpenAI->new(
  api_key => $ENV{OPENAI_API_KEY},
  model => 'gpt-4o-mini',
);

my $future = $openai->simple_chat_stream_realtime_f(
  sub { print shift->content },
  'Hello!',
);

$future->on_done(sub {
  my ($content, $chunks) = @_;
  say "Done: $content";
});

Mojo::IOLoop->start;
Extensions
The LangerthaX namespace is reserved for third-party extensions. See LangerthaX.
SUPPORT
Issues
Please report bugs and feature requests on GitHub at https://github.com/Getty/langertha/issues.
CONTRIBUTING
Contributions are welcome! Please fork the repository and submit a pull request.
AUTHOR
Torsten Raudssus <torsten@raudssus.de> https://raudss.us/
COPYRIGHT AND LICENSE
This software is copyright (c) 2026 by Torsten Raudssus.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.