NAME

Langertha::Response - LLM response with metadata

VERSION

version 0.202

SYNOPSIS

my $response = $engine->simple_chat('Hello');

# Stringifies to content (backward compatible)
print $response;
print "Response: $response\n";

# Access metadata
say $response->model;
say $response->id;
say $response->finish_reason;

# Token usage
say "Prompt tokens: ", $response->prompt_tokens;
say "Completion tokens: ", $response->completion_tokens;
say "Total tokens: ", $response->total_tokens;

# Full raw response
use Data::Dumper;
print Dumper($response->raw);

DESCRIPTION

Wraps the LLM response text content together with all available metadata from the API response. Uses Perl's overload mechanism in string context so that existing code treating responses as plain strings continues to work.
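The string-context behavior can be illustrated with a minimal, self-contained sketch. This is an illustration of Perl's overload pragma only, not the actual Langertha::Response implementation (the class name and attributes below are invented for the example):

```perl
package My::StringyResponse;
# Minimal illustration of string-context overloading; the real
# Langertha::Response class carries far more metadata than this.
use strict;
use warnings;
use overload
    '""'     => sub { $_[0]->{content} },  # stringify to the content attribute
    fallback => 1;                         # keep other operators well-behaved

sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;
}

package main;

my $response = My::StringyResponse->new(
    content => 'Hello from the model',
    model   => 'example-model',
);

# The object interpolates like a plain string...
print "Response: $response\n";
# ...while metadata remains accessible on the object.
print "Model: $response->{model}\n";
```

Because of `fallback => 1`, comparisons and concatenation on the object also fall through to the stringified content, which is what keeps pre-existing string-oriented code working.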

content

The text content of the response. Required.

raw

The full parsed API response as a HashRef.

id

Provider-specific response ID.

model

The actual model that generated the response.

finish_reason

Why the response ended: stop, end_turn, length, tool_calls, etc. Provider-specific values are preserved as-is.
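Since the values are preserved as-is, callers typically match against the provider variants they care about. A small sketch (the set of "truncated" marker strings below is an assumption; consult your provider's documentation for the exact values it emits):

```perl
use strict;
use warnings;

# Hypothetical helper: treat any of the known "ran out of room"
# finish reasons as truncation. The value list is an example only.
my %truncated_reasons = map { $_ => 1 } qw(length max_tokens);

sub response_truncated {
    my ($finish_reason) = @_;
    return defined $finish_reason && $truncated_reasons{$finish_reason} ? 1 : 0;
}

print response_truncated('length') ? "truncated\n" : "complete\n";   # truncated
print response_truncated('stop')   ? "truncated\n" : "complete\n";   # complete
```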

usage

Token usage counts as a HashRef. Keys vary by provider but are normalized by the convenience methods.

timing

Timing information as a HashRef. Currently only populated by Ollama.

created

Unix timestamp of when the response was created.

thinking

Chain-of-thought reasoning content. Populated automatically from native API fields (DeepSeek reasoning_content, Anthropic thinking blocks, Gemini thought parts) or from <think> tag filtering when "think_tag_filter" in Langertha::Role::ThinkTag is enabled.
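For providers without a native reasoning field, the tag filtering mentioned above can be pictured roughly as follows. This is a simplified sketch, not the actual Langertha::Role::ThinkTag code, and it assumes at most one well-formed <think>...</think> block at the start of the completion:

```perl
use strict;
use warnings;

# Simplified sketch: split a raw completion into (thinking, content).
# The real role may handle more cases than this single-block regex.
sub split_think_tags {
    my ($text) = @_;
    if ($text =~ s{\A\s*<think>(.*?)</think>\s*}{}s) {
        return ($1, $text);   # captured reasoning, remaining content
    }
    return (undef, $text);    # no think block present
}

my ($thinking, $content) =
    split_think_tags("<think>Check the units first.</think>42 apples");
print "thinking: $thinking\n";   # Check the units first.
print "content:  $content\n";    # 42 apples
```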

clone_with

my $new = $response->clone_with(content => $filtered, thinking => $thought);

Returns a new Response with the same attributes as the original, except for the overrides provided. Used by Langertha::Role::ThinkTag to produce a filtered response while preserving metadata.

prompt_tokens

Returns the number of prompt/input tokens. Checks the prompt_tokens and input_tokens keys in usage.

completion_tokens

Returns the number of completion/output tokens. Checks the completion_tokens and output_tokens keys in usage.

total_tokens

Returns the total token count. Uses total_tokens from usage if available, otherwise sums prompt and completion tokens.
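The normalization performed by these three convenience methods can be sketched over a plain usage HashRef. The fallback order shown is an assumption based on the key names documented above (OpenAI-style prompt/completion keys versus Anthropic-style input/output keys):

```perl
use strict;
use warnings;

# Sketch of the described fallback logic: prefer one key naming
# convention, fall back to the other, and derive a total when the
# provider does not report one directly.
sub prompt_tokens     { my ($u) = @_; $u->{prompt_tokens}     // $u->{input_tokens} }
sub completion_tokens { my ($u) = @_; $u->{completion_tokens} // $u->{output_tokens} }

sub total_tokens {
    my ($u) = @_;
    return $u->{total_tokens}
        // (prompt_tokens($u) // 0) + (completion_tokens($u) // 0);
}

my $usage = { input_tokens => 12, output_tokens => 30 };   # Anthropic-style keys
printf "prompt=%d completion=%d total=%d\n",
    prompt_tokens($usage), completion_tokens($usage), total_tokens($usage);
# prompt=12 completion=30 total=42
```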

SEE ALSO

SUPPORT

Issues

Please report bugs and feature requests on GitHub at https://github.com/Getty/langertha/issues.

CONTRIBUTING

Contributions are welcome! Please fork the repository and submit a pull request.

AUTHOR

Torsten Raudssus <torsten@raudssus.de> https://raudss.us/

COPYRIGHT AND LICENSE

This software is copyright (c) 2026 by Torsten Raudssus.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.