NAME

Langertha::Role::Chat - Role for APIs with normal chat functionality

VERSION

version 0.500

SYNOPSIS

# Synchronous chat
my $response = $engine->simple_chat('Hello, how are you?');

# Streaming with callback
$engine->simple_chat_stream(sub {
    my ($chunk) = @_;
    print $chunk->content;
}, 'Tell me a story');

# Streaming with iterator
my $stream = $engine->simple_chat_stream_iterator('Tell me a story');
while (my $chunk = $stream->next) {
    print $chunk->content;
}

# Async with Future (traditional style)
my $future = $engine->simple_chat_f('Hello');
my $response = $future->get;

# Async with Future::AsyncAwait (recommended)
use Future::AsyncAwait;

async sub chat_example {
    my ($engine) = @_;
    my $response = await $engine->simple_chat_f('Hello');
    say $response;
}

# Async streaming with real-time callback
async sub stream_example {
    my ($engine) = @_;
    my ($content, $chunks) = await $engine->simple_chat_stream_realtime_f(
        sub { print shift->content },
        'Tell me a story'
    );
    say "\nTotal chunks: ", scalar @$chunks;
}

DESCRIPTION

This role provides chat functionality for LLM engines. It includes both synchronous and asynchronous (Future-based) methods for chat and streaming.

The Future-based _f methods are implemented using Future::AsyncAwait and Net::Async::HTTP. These modules are loaded lazily only when you call a _f method, so synchronous-only usage does not require them.

chat_model

The model name used for chat requests. Lazily defaults to default_chat_model if the engine provides it, otherwise falls back to the general model attribute from Langertha::Role::Models.

chat

my $request = $engine->chat(@messages);

Builds and returns a chat HTTP request object. Messages may be plain strings (treated as user role) or HashRefs with role and content keys. A system prompt from Langertha::Role::SystemPrompt is prepended automatically.

chat_messages

my $messages = $engine->chat_messages(@messages);

Normalises @messages into the canonical ArrayRef-of-HashRef format expected by chat_request. Plain strings become { role => 'user', content => $string }. If the engine has a system_prompt set it is prepended as a system message.
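As a sketch of the normalisation (the exact system message depends on the system_prompt configured on your engine):

    my $messages = $engine->chat_messages(
        'What is the capital of France?',
        { role => 'assistant', content => 'Paris.' },
        'And of Germany?',
    );

    # Yields (with a system_prompt set on the engine):
    # [
    #   { role => 'system',    content => '...your system prompt...' },
    #   { role => 'user',      content => 'What is the capital of France?' },
    #   { role => 'assistant', content => 'Paris.' },
    #   { role => 'user',      content => 'And of Germany?' },
    # ]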

simple_chat

my $response = $engine->simple_chat(@messages);
my $response = $engine->simple_chat('Hello, how are you?');

Sends a synchronous chat request and returns the response text. Blocks until the request completes.

chat_stream

my $request = $engine->chat_stream(@messages);

Builds and returns a streaming chat HTTP request object. Croaks if the engine does not implement chat_stream_request. Use "simple_chat_stream" or "simple_chat_stream_iterator" to execute the request.

simple_chat_stream

my $content = $engine->simple_chat_stream($callback, @messages);

$engine->simple_chat_stream(sub {
    my ($chunk) = @_;
    print $chunk->content;
}, 'Tell me a story');

Sends a synchronous streaming chat request. Calls $callback with each Langertha::Stream::Chunk as it arrives. Returns the complete concatenated content string when done. Blocks until the stream completes.

simple_chat_stream_iterator

my $stream = $engine->simple_chat_stream_iterator(@messages);
while (my $chunk = $stream->next) {
    print $chunk->content;
}

Returns a Langertha::Stream iterator. The full response is fetched synchronously and buffered; iteration yields each Langertha::Stream::Chunk in order.

simple_chat_f

# Traditional Future style
my $response = $engine->simple_chat_f(@messages)->get;

# With async/await (recommended)
use Future::AsyncAwait;
async sub my_chat {
    my $response = await $engine->simple_chat_f(@messages);
    return $response;
}

Async version of "simple_chat". Returns a Future that resolves to the response text. Uses Net::Async::HTTP internally; loaded lazily on first call.

For requests that need named arguments (tools, tool_choice, response_format, etc.) use "chat_f"; simple_chat_f delegates to it.

chat_f

my $response = await $engine->chat_f(
  messages       => [ ... ],
  tools          => [ $tool, ... ],
  tool_choice    => { type => 'tool', name => 'extract' },
  response_format => { ... },
  # any other engine-specific extras pass straight through
);

Async single-turn chat with named arguments. Returns a Future resolving to a Langertha::Response. The caller is responsible for acting on any tool_calls the engine emits — chat_f does not loop. For the multi-turn MCP tool-calling loop use "chat_with_tools_f" in Langertha::Role::Tools instead.

The tools argument to chat_f can be a mix of provider-shape HashRefs (OpenAI, Anthropic, MCP, Gemini); the engine's chat_request handles the per-provider serialization. The Langertha::Tool value object is the canonical normalizer (from_hash accepts every shape, and the to_PROVIDER methods produce the wire payload).

When the caller asks for a forced named tool on an engine that cannot force a named tool natively but does support json_schema response_format (currently Langertha::Engine::Perplexity), the request is automatically rewritten to use the JSON Schema path and the response is loose-parsed. The resulting Langertha::Response exposes the parsed arguments via "tool_call_args" in Langertha::Response, with synthetic => 1 set on the synthesized tool_call entry.
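For example, a forced named tool call whose arguments are read back afterwards (a sketch — the $extract_tool definition and prompt are hypothetical):

    my $response = $engine->chat_f(
        messages    => [ 'Extract the invoice data from this text: ...' ],
        tools       => [ $extract_tool ],
        tool_choice => { type => 'tool', name => 'extract' },
    )->get;

    # Works the same whether the engine forces the tool natively or via
    # the JSON Schema fallback:
    my $args = $response->tool_call_args;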

simple_chat_stream_f

my ($content, $chunks) = $engine->simple_chat_stream_f(@messages)->get;

Async streaming without a real-time callback. Convenience wrapper around "simple_chat_stream_realtime_f" with undef as the callback. Returns a Future that resolves to ($content, \@chunks).

aggregate_tool_calls

my $tool_calls = $engine->aggregate_tool_calls( $chunks );

Walks an ArrayRef of Langertha::Stream::Chunk objects and returns a flat ArrayRef of the Langertha::ToolCall objects collected from any chunks that carry tool_calls. Returns an empty ArrayRef if none of the chunks emitted tool calls.

This is the streaming counterpart to "tool_calls" in Langertha::Response. Engines that need to assemble fragmented tool-call deltas (OpenAI's delta.tool_calls stream, Anthropic's input_json_delta) are expected to do that assembly inside parse_stream_chunk and attach the finished Langertha::ToolCall to the relevant chunk; this helper just collects them.
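Putting the two together — an async stream followed by tool-call collection (a sketch; dispatching the collected calls is left to the application):

    use Future::AsyncAwait;

    async sub stream_and_collect {
        my ($engine, @messages) = @_;
        my ($content, $chunks) = await $engine->simple_chat_stream_f(@messages);
        my $tool_calls = $engine->aggregate_tool_calls($chunks);
        for my $call (@$tool_calls) {
            # each $call is a finished Langertha::ToolCall assembled
            # inside parse_stream_chunk
            ...
        }
        return $content;
    }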

simple_chat_stream_realtime_f

# With async/await (recommended)
use Future::AsyncAwait;
async sub my_stream {
    my ($content, $chunks) = await $engine->simple_chat_stream_realtime_f(
        sub { print shift->content },
        @messages
    );
    return $content;
}

# Traditional Future style
my $future = $engine->simple_chat_stream_realtime_f($callback, @messages);
my ($content, $chunks) = $future->get;

Async streaming with real-time callback. $callback is called with each Langertha::Stream::Chunk as it arrives from the server. Returns a Future that resolves to ($content, \@chunks) where $content is the full concatenated text.

This is the recommended method for real-time streaming in async applications. Pass undef as the callback (or use "simple_chat_stream_f") if you only need the final result.

content_format

my $fmt = $engine->content_format;  # 'openai' | 'anthropic' | 'gemini'

Wire format for multimodal content blocks. Controls how Langertha::Content objects embedded in a message's content arrayref are serialized during "chat_messages". Defaults to 'openai'; overridden by Langertha::Engine::AnthropicBase and Langertha::Engine::Gemini.

engine_capabilities

my $caps = $engine->engine_capabilities;
if ( $caps->{tool_choice_named} ) { ... }

Returns a HashRef of capability flags so callers can avoid passing parameters the engine cannot honour.

The base implementation reports only what Langertha::Role::Chat itself provides (chat). Every other capability-bearing role (Langertha::Role::Tools, Langertha::Role::ResponseFormat, Langertha::Role::Streaming, Langertha::Role::Embedding, Langertha::Role::Transcription, Langertha::Role::ImageGeneration, Langertha::Role::HermesTools, Langertha::Role::Temperature, Langertha::Role::Seed, Langertha::Role::ContextSize, Langertha::Role::ResponseSize, Langertha::Role::SystemPrompt, Langertha::Role::ParallelToolUse) layers its own flags onto this method via an around engine_capabilities modifier. Engines override (also via around) when the wire reality differs from the role inventory — for example to clear tool_choice_named on providers that only accept string forms.

Common keys produced by the bundled roles:

  • chat — simple_chat/simple_chat_f work

  • streaming — chat_stream_request is wired up

  • tools_native — engine accepts a tools array on the wire

  • tools_hermes — tools are injected via Hermes-style XML prompt rather than (or in addition to) the native API

  • tool_choice_auto / tool_choice_any / tool_choice_none — which string-form tool_choice values are accepted

  • tool_choice_named — {type => 'tool', name => '...'} forcing works (possibly translated internally — Gemini routes named tools through allowed_function_names, for example)

  • response_format_json_object — {type => 'json_object'}

  • response_format_json_schema — JSON Schema structured output

  • embedding, transcription, image_generation — auxiliary capabilities matching the corresponding roles

  • temperature, seed, context_size, response_size, system_prompt, parallel_tool_use — generation-parameter knobs the engine will honour

Callers should treat the hash as advisory — a missing key means "unknown / unsupported", a true value means "the engine claims it will honour this".
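A capability-guarded call might look like this (a sketch — only pass what the engine claims to honour):

    my $caps = $engine->engine_capabilities;

    my %extra;
    $extra{tool_choice} = { type => 'tool', name => 'extract' }
        if $caps->{tool_choice_named};

    my $response = $engine->chat_f(
        messages => \@messages,
        tools    => \@tools,
        %extra,
    )->get;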

SEE ALSO

SUPPORT

Issues

Please report bugs and feature requests on GitHub at https://github.com/Getty/langertha/issues.

IRC

Join #langertha on irc.perl.org or message Getty directly.

CONTRIBUTING

Contributions are welcome! Please fork the repository and submit a pull request.

AUTHOR

Torsten Raudssus <getty@cpan.org>

COPYRIGHT AND LICENSE

This software is copyright (c) 2026 by Torsten Raudssus https://raudssus.de/.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.