NAME

AI::Ollama::GenerateCompletionResponse - a response returned by the Ollama completion generation API

SYNOPSIS

my $obj = AI::Ollama::GenerateCompletionResponse->new();
...
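
A slightly fuller, hypothetical sketch of reading a finished response; the accessor names are assumed to mirror the property names documented under PROPERTIES.

# Sketch only: accessors are assumed to match the documented property names.
if ( $obj->done ) {
    print $obj->response, "\n";
    printf "model %s generated %d tokens\n",
        $obj->model, $obj->eval_count // 0;
}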

PROPERTIES

context

An encoding of the conversation used in this response; it can be sent in the next request to keep a conversational memory.
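
The snippet below is a hypothetical sketch of carrying that memory into a follow-up call; it assumes a client object (here an AI::Ollama::Client) whose generateCompletion method accepts model, prompt, and context parameters.

# Hypothetical follow-up request: $client is assumed to be an AI::Ollama::Client.
# Reusing the returned context lets the model remember the earlier exchange.
my $next = $client->generateCompletion(
    model   => $obj->model,
    prompt  => 'And what about tomorrow?',
    context => $obj->context,
);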

created_at

Timestamp of when the response was created.

done

Whether the response has completed.

eval_count

Number of tokens in the response.

eval_duration

Time in nanoseconds spent generating the response.
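
Dividing eval_count by this duration gives a rough tokens-per-second figure. A minimal sketch, assuming both fields are populated and that the duration is in nanoseconds as documented:

# Rough generation speed in tokens per second.
if ( $obj->eval_count && $obj->eval_duration ) {
    printf "%.1f tokens/s\n",
        $obj->eval_count / ( $obj->eval_duration / 1e9 );
}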

load_duration

Time in nanoseconds spent loading the model.

model

The model name.

Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.
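
As an illustration of the naming convention (not an API provided by this module), the tag can be split off with `latest` as the fallback:

# Illustrative only: separate the model name from its tag,
# falling back to 'latest' when no tag is given.
my ( $name, $tag ) = split /:/, $obj->model, 2;
$tag //= 'latest';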

prompt_eval_count

Number of tokens in the prompt.

prompt_eval_duration

Time in nanoseconds spent evaluating the prompt.

response

The response generated by the provided model for the given prompt.

total_duration

Total time in nanoseconds spent generating the response.
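
As a closing sketch, the reported timings can be printed side by side; this assumes every duration field is in nanoseconds and may be undefined.

# Print whichever timings were reported, converted to milliseconds.
for my $field (qw( load_duration prompt_eval_duration eval_duration total_duration )) {
    my $ns = $obj->$field;
    next unless defined $ns;
    printf "%-22s %10.1f ms\n", $field, $ns / 1e6;
}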