NAME
Langertha::Engine::HuggingFace - HuggingFace Inference Providers API
VERSION
version 0.302
SYNOPSIS
    use Langertha::Engine::HuggingFace;

    my $hf = Langertha::Engine::HuggingFace->new(
      api_key => $ENV{HF_TOKEN},
      model   => 'Qwen/Qwen2.5-7B-Instruct',
    );

    print $hf->simple_chat('Hello from Perl!');

    # Access many models through one API
    my $llama = Langertha::Engine::HuggingFace->new(
      api_key => $ENV{HF_TOKEN},
      model   => 'meta-llama/Llama-3.3-70B-Instruct',
    );
DESCRIPTION
Provides access to HuggingFace Inference Providers, a unified API gateway for open-source models hosted on the HuggingFace Hub. The endpoint at https://router.huggingface.co/v1 is fully OpenAI-compatible, so requests and responses follow the OpenAI chat completion format.
Model names use the org/model format (e.g. Qwen/Qwen2.5-7B-Instruct, meta-llama/Llama-3.3-70B-Instruct). No default model is set; model must be specified explicitly.
Supports chat, streaming, and MCP tool calling. Embeddings and transcription are not supported.
Get your API token at https://huggingface.co/settings/tokens and either pass it as api_key or set LANGERTHA_HUGGINGFACE_API_KEY in your environment.
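Because the router endpoint speaks the OpenAI wire format, a chat request is just an OpenAI-style JSON body POSTed to https://router.huggingface.co/v1/chat/completions with an Authorization: Bearer token header. The engine builds this for you; the sketch below only illustrates the format:

```perl
use strict;
use warnings;
use JSON::PP qw( encode_json );

# Minimal sketch of the request body the OpenAI-compatible router
# expects. In practice the engine assembles and sends this itself.
my $payload = encode_json({
  model    => 'Qwen/Qwen2.5-7B-Instruct',
  messages => [
    { role => 'user', content => 'Hello from Perl!' },
  ],
});
print $payload, "\n";
```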
THIS API IS WORK IN PROGRESS
hub_url
Base URL for the HuggingFace Hub API. Default: https://huggingface.co. Used by list_models to query available inference models.
list_models_request
    my $request = $engine->list_models_request(%opts);
Generates an HTTP GET request for the HuggingFace Hub API models endpoint with inference provider filtering. Accepts options: search, pipeline_tag (default: text-generation), inference_provider (default: all), limit (default: 50).
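The options correspond to query parameters on the Hub's models endpoint. A hypothetical sketch of the kind of URL this produces (assumption: the endpoint path is /api/models and option names map straight onto query parameters; the real method may differ in detail):

```perl
use strict;
use warnings;
use HTTP::Tiny;

# Build the query string from the documented defaults.
my %opts = (
  pipeline_tag       => 'text-generation',
  inference_provider => 'all',
  limit              => 50,
);
my $query = HTTP::Tiny->new->www_form_urlencode(\%opts);
my $url   = "https://huggingface.co/api/models?$query";
print $url, "\n";
```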
list_models_response
    my $models = $engine->list_models_response($http_response);
Parses the Hub API response. Returns an ArrayRef of model objects with id, pipeline_tag, inferenceProviderMapping, etc.
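To illustrate the shape of the returned model objects, here is a hypothetical trimmed sample of the Hub API payload, decoded with JSON::PP (the fields shown are the ones documented above; real responses carry more):

```perl
use strict;
use warnings;
use JSON::PP qw( decode_json );

# Hypothetical sample of the Hub API response body, reduced to the
# fields list_models_response is documented to return.
my $body = <<'JSON';
[
  {
    "id": "Qwen/Qwen2.5-7B-Instruct",
    "pipeline_tag": "text-generation",
    "inferenceProviderMapping": [
      { "provider": "together", "status": "live" }
    ]
  }
]
JSON

my $models = decode_json($body);
for my $model (@$models) {
  printf "%s (%s)\n", $model->{id}, $model->{pipeline_tag};
}
```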
list_models
    # All text-generation models with inference providers
    my $ids = $hf->list_models;

    # Search for specific models
    my $ids = $hf->list_models(search => 'llama');

    # Filter by pipeline tag
    my $ids = $hf->list_models(pipeline_tag => 'text-to-image');

    # Full model objects with provider details
    my $models = $hf->list_models(full => 1);
Queries the HuggingFace Hub API for models available via inference providers. Only returns models that have at least one active inference provider. Results are cached for models_cache_ttl seconds (search results are not cached).
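The caching behaviour described above can be sketched as a simple time-based cache. This illustrates the idea only, not the module's internals; cache_ttl here stands in for models_cache_ttl:

```perl
use strict;
use warnings;

my %cache;           # key => { value => ..., stored_at => epoch seconds }
my $cache_ttl = 300; # seconds before a cached entry goes stale

sub cached {
  my ($key, $fetch) = @_;
  my $entry = $cache{$key};
  if ($entry && time - $entry->{stored_at} < $cache_ttl) {
    return $entry->{value};   # fresh enough: serve from cache
  }
  my $value = $fetch->();     # missing or expired: fetch again
  $cache{$key} = { value => $value, stored_at => time };
  return $value;
}

my $calls = 0;
my $ids = cached('models', sub { $calls++; [ 'Qwen/Qwen2.5-7B-Instruct' ] });
$ids    = cached('models', sub { $calls++; [ 'Qwen/Qwen2.5-7B-Instruct' ] });
print "fetches: $calls\n";  # second lookup is served from the cache
```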
SEE ALSO
https://huggingface.co/docs/inference-providers/index - HuggingFace Inference Providers docs
https://huggingface.co/models - Browse available models
https://status.huggingface.co/ - HuggingFace service status
Langertha::Role::OpenAICompatible - OpenAI API format role
SUPPORT
Issues
Please report bugs and feature requests on GitHub at https://github.com/Getty/langertha/issues.
CONTRIBUTING
Contributions are welcome! Please fork the repository and submit a pull request.
AUTHOR
Torsten Raudssus <torsten@raudssus.de> https://raudss.us/
COPYRIGHT AND LICENSE
This software is copyright (c) 2026 by Torsten Raudssus.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.