Changes for version 1.001 - 2026-04-12

  • Raw passthrough: unconfigured models are now piped 1:1 as raw HTTP bytes to the upstream API. All protocol metadata (tool_use, usage, cache_control, stop_reason, content block indices) is preserved. Client auth headers are forwarded transparently.
  • Langfuse tracing for passthrough requests: every raw passthrough call creates a Langfuse trace with model, protocol, and timing.
  • 'knarr container' is now a deprecated alias for 'knarr start --from-env'. The --from-env flag builds config from environment variables when no config file is found.
  • --port is now repeatable: '-p 8080 -p 11434' binds the server to both ports. --host defaults to 0.0.0.0.
  • KNARR_DEBUG=1 environment variable enables verbose logging (equivalent to --verbose / -v).
  • Logging is always active: Log::Any::Adapter::Stderr is now set unconditionally (warning level by default, trace level with --verbose). Errors during request handling are logged to stderr.
  • Anthropic system prompt: handle array-of-content-blocks format sent by Claude Code (was: only plain string).
  • Removed route_model from Handler role — decorator chain (Tracing, RequestLog) is no longer bypassed.
  • Dockerfile rewritten: cpm instead of cpanm, cpanfile-based installation, non-root user, Docker-cache-friendly layers.
  • Bumped the minimum required Langertha version to 0.401.
  • Added IO::Async::SSL to cpanfile (required for HTTPS passthrough).
  • Test suite expanded to 343 tests. New coverage:
    • Raw passthrough with mock Langfuse tracing
    • Passthrough with and without tracing enabled
  • Documentation updated: CLAUDE.md, README.md, bin/knarr POD reflect new architecture, --from-env, repeatable --port, KNARR_DEBUG, raw passthrough behavior.
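
The new CLI surface described above can be sketched as follows. This is an illustrative invocation based only on the flags named in this changelog (--from-env, repeatable -p/--port, KNARR_DEBUG), not a verbatim transcript:

```shell
# Build config from environment variables (no config file needed)
# and listen on two ports; KNARR_DEBUG=1 is equivalent to --verbose / -v.
KNARR_DEBUG=1 knarr start --from-env -p 8080 -p 11434

# 'knarr container' still works, but is now a deprecated alias for the above:
# knarr container
```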
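
For the Anthropic system-prompt fix above: the Messages API accepts "system" either as a plain string or as an array of content blocks, and Claude Code sends the latter. A minimal illustrative request body (model name and prompt text are placeholders):

```json
{
  "model": "example-model",
  "max_tokens": 1024,
  "system": [
    { "type": "text", "text": "You are a helpful assistant." }
  ],
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
```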

Documentation

Langertha LLM Proxy with Langfuse Tracing

Modules

Universal LLM hub — proxy, server, and translator across OpenAI/Anthropic/Ollama/A2A/ACP/AG-UI
CLI entry point for Knarr LLM Proxy
Validate Knarr configuration file
Alias for 'knarr start --from-env' (Docker mode)
Scan environment and generate Knarr configuration
List configured models and their backends
Start the Knarr proxy server
YAML configuration loader and validator
Role for Knarr backend handlers (Raider, Engine, Code, ...)
Steerboard handler that consumes a remote A2A (Agent2Agent) agent
Steerboard handler that consumes a remote ACP (BeeAI) agent
Coderef-backed Knarr handler for fakes, tests, and custom logic
Knarr handler that proxies directly to a Langertha engine
Knarr handler that forwards requests verbatim to an upstream HTTP API
Knarr handler that backs each session with a Langertha::Raider
Decorator handler that writes per-request JSON logs via Knarr::RequestLog
Knarr handler that resolves model names via Langertha::Knarr::Router and dispatches to engines
Decorator handler that records every request as a Langfuse trace
PSGI adapter for Langertha::Knarr (buffered, no streaming)
Role for Knarr wire protocols (OpenAI, Anthropic, Ollama, A2A, ACP, AG-UI)
Google Agent2Agent (A2A) wire protocol for Knarr
BeeAI/IBM Agent Communication Protocol (ACP) for Knarr
AG-UI (Agent-UI) event protocol for Knarr
Anthropic-compatible wire protocol (/v1/messages) for Knarr
Ollama-compatible wire protocol (/api/chat, /api/tags) for Knarr
OpenAI-compatible wire protocol (chat/completions, models) for Knarr
Normalized chat request shared across all Knarr protocols
Local disk logging of proxy requests
Model name to Langertha engine routing with caching
Per-conversation state for a Knarr server
Async chunk iterator returned by streaming Knarr handlers
Automatic Langfuse tracing per proxy request

Provides

Langertha::Knarr::PSGI in lib/Langertha/Knarr/PSGI.pm