Changes for version 0.12 - 2026-01-23

  • LoRA retraining on an existing model
  • Full model weight training from scratch
  • Autograd - Automatic differentiation for Lugh tensors
  • Optimizers
    • Lugh::Optimizer::AdamW with momentum, adaptive learning rates, and decoupled weight decay
    • Lugh::Optimizer::SGD with momentum support
    • Lugh::Optimizer::LRScheduler with cosine annealing, warmup, and step decay
  • New tests and examples
  • Note: the next release will be a refactor to use XSOne
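As a rough illustration of what Lugh::Optimizer::AdamW computes (this is a generic sketch of the AdamW update rule in Python, not Lugh's Perl API; all names here are illustrative):

```python
import math

def adamw_step(p, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    """One AdamW update for a single scalar parameter p with gradient g.

    m carries momentum (first moment), v the adaptive per-parameter
    scaling (second moment). The weight-decay term wd*p is applied
    directly to the parameter ("decoupled"), not folded into the
    gradient as in classic L2 regularization.
    """
    m = b1 * m + (1 - b1) * g          # momentum
    v = b2 * v + (1 - b2) * g * g      # adaptive learning-rate statistic
    m_hat = m / (1 - b1 ** t)          # bias correction, t = step (1-based)
    v_hat = v / (1 - b2 ** t)
    p = p - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * p)
    return p, m, v
```

The actual optimizer applies this per element across all trainable tensors, with the step size typically supplied by the LR scheduler.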

Modules

Pure C LLM Inference Engine for Perl (built on ggml)
Automatic differentiation for Lugh tensors
Differentiable operations for automatic differentiation
Tensor with automatic differentiation support
Memory Context for Tensor Allocation
Computation Graph for Tensor Operations
Transformer Forward Pass and Token Generation
KV Cache for efficient incremental decoding
Low-Rank Adaptation (LoRA) adapter support for Lugh
Reusable compute resources for efficient inference
GGUF Model Loading and Tensor Access
Tensor Operations for Neural Network Computation
Optimization algorithms for Lugh training
Adam optimizer with decoupled weight decay
Learning rate scheduling for optimizers
Stochastic Gradient Descent optimizer
Chat Template Formatting for LLM Conversations
Quantization utilities for Lugh tensors
RoPE (Rotary Position Embedding) Scaling Configuration
Speculative decoding for faster LLM inference
N-Dimensional Tensor with ggml Backend
BPE Tokenizer for Text Encoding and Decoding
High-level training API for Lugh
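For readers new to the LoRA adapter support listed above, the underlying low-rank update can be sketched as follows (a generic illustration of the math in plain Python; function and parameter names are not Lugh's API):

```python
def lora_merge(W, A, B, alpha=16.0):
    """Merge a LoRA adapter into a frozen weight: W' = W + (alpha/r) * (A @ B).

    W is d_out x d_in; A is d_out x r and B is r x d_in, so the update
    has rank at most r while only (d_out + d_in) * r adapter parameters
    were trained. Matrices are plain nested lists.
    """
    r = len(A[0])                      # adapter rank
    scale = alpha / r
    d_out, d_in = len(W), len(W[0])
    return [[W[i][j] + scale * sum(A[i][k] * B[k][j] for k in range(r))
             for j in range(d_in)] for i in range(d_out)]
```

Retraining with LoRA thus touches only the small A and B matrices; the base model weights stay fixed until (or unless) the adapter is merged.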