NAME
PAGI::Server - PAGI Reference Server Implementation
SYNOPSIS
use IO::Async::Loop;
use PAGI::Server;
my $loop = IO::Async::Loop->new;
my $server = PAGI::Server->new(
app => \&my_pagi_app,
host => '127.0.0.1',
port => 5000,
);
$loop->add($server);
$server->listen->get; # Start accepting connections
DESCRIPTION
PAGI::Server is a reference implementation of a PAGI-compliant HTTP server. It supports HTTP/1.1, WebSocket, and Server-Sent Events (SSE) as defined in the PAGI specification.
This is NOT a production server - it prioritizes spec compliance and code clarity over performance optimization. It serves as the canonical reference for how PAGI servers should behave.
PROTOCOL SUPPORT
Currently supported:
HTTP/1.1 (full support including chunked encoding, trailers, keepalive)
WebSocket (RFC 6455)
Server-Sent Events (SSE)
Not yet implemented:
HTTP/2 - Planned for a future release
HTTP/3 (QUIC) - Under consideration
For HTTP/2 support today, run PAGI::Server behind a reverse proxy like nginx or Caddy that handles HTTP/2 on the frontend and speaks HTTP/1.1 to PAGI.
WINDOWS SUPPORT
PAGI::Server does not support Windows.
The server relies on Unix-specific features that are not available on Windows:
Unix signals - SIGTERM, SIGINT, SIGHUP for graceful shutdown and worker management
fork() - Multi-worker mode requires real process forking, not thread emulation
IO::Async internals - The event loop has Unix-specific optimizations
For Windows development, consider using WSL (Windows Subsystem for Linux) to run PAGI::Server in a Linux environment. The PAGI specification and middleware components can still be developed and unit-tested on Windows, but the reference server implementation requires a Unix-like operating system.
CONSTRUCTOR
new
my $server = PAGI::Server->new(%options);
Creates a new PAGI::Server instance. Options:
- app => \&coderef (required)
-
The PAGI application coderef with signature: async sub ($scope, $receive, $send)
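A minimal sketch of an application with that signature is shown below. It assumes PAGI mirrors ASGI-style event types ('http.response.start' followed by 'http.response.body'; the latter appears under "FILE RESPONSE STREAMING" below) - consult the PAGI specification for the exact event shapes.
use v5.36;
use Future::AsyncAwait;

# Hypothetical hello-world app; event keys follow ASGI conventions
async sub my_pagi_app ($scope, $receive, $send) {
    return unless $scope->{type} eq 'http';

    await $send->({
        type    => 'http.response.start',
        status  => 200,
        headers => [ [ 'content-type' => 'text/plain' ] ],
    });
    await $send->({
        type => 'http.response.body',
        body => "Hello, PAGI!\n",
        more => 0,
    });
}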
- host => $host
-
Bind address (IP address or hostname). Default: '127.0.0.1'
The default binds only to the loopback interface, accepting connections only from localhost. This is intentionally secure by default - development servers won't accidentally be exposed to the network.
Common values:
'127.0.0.1' - Localhost only (default, secure for development)
'0.0.0.0' - All IPv4 interfaces (required for remote access)
'::' - All IPv6 interfaces (may also accept IPv4)
'192.168.1.100' - Specific interface only
For headless servers or production deployments where remote clients need to connect, bind to all interfaces:
my $server = PAGI::Server->new(
    app  => $app,
    host => '0.0.0.0',
    port => 8080,
);
Security note: When binding to 0.0.0.0, ensure appropriate firewall rules are in place. For production, consider a reverse proxy (nginx, etc.).
- port => $port
-
Bind port. Default: 5000
- ssl => \%config
-
Optional TLS/HTTPS configuration. Requires additional modules - see "ENABLING TLS SUPPORT" below.
Configuration keys:
- cert_file => $path
-
Path to the SSL certificate file (PEM format).
- key_file => $path
-
Path to the SSL private key file (PEM format).
- ca_file => $path
-
Optional path to CA certificate for client verification.
- verify_client => $bool
-
If true, require and verify client certificates.
- min_version => $version
-
Minimum TLS version. Default: 'TLSv1_2'. Options: 'TLSv1_2', 'TLSv1_3'.
- cipher_list => $string
-
OpenSSL cipher list. Default uses modern secure ciphers.
Example:
my $server = PAGI::Server->new(
    app => $app,
    ssl => {
        cert_file => '/path/to/server.crt',
        key_file  => '/path/to/server.key',
    },
);
- disable_tls => $bool
-
Force-disable TLS even if ssl config is provided. Useful for testing TLS configuration parsing without actually enabling TLS. Default: false.
- extensions => \%extensions
-
Extensions to advertise (e.g., { fullflush => {} })
- on_error => \&callback
-
Error callback receiving ($error)
- access_log => $filehandle | undef
-
Access log filehandle. Default: STDERR
Set to undef to disable access logging entirely. This eliminates per-request I/O overhead, improving throughput by 5-15% depending on workload. Useful for benchmarking or when access logs are handled externally (e.g., by a reverse proxy).
# Disable access logging
my $server = PAGI::Server->new(
    app        => $app,
    access_log => undef,
);
- log_level => $level
-
Controls the verbosity of server log messages. Default: 'info'
Valid levels (from least to most verbose):
error - Only errors (application errors, fatal conditions)
warn - Warnings and errors (connection issues, timeouts)
info - Informational messages and above (startup, shutdown, worker spawning)
debug - Everything (verbose diagnostics, frame-level details)
my $server = PAGI::Server->new(
    app       => $app,
    log_level => 'debug',   # Very verbose
);
CLI:
--log-level debug
- workers => $count
-
Number of worker processes for multi-worker mode. Default: 0 (single process mode).
When set to a value greater than 0, the server uses a pre-fork model:
A listening socket is created before forking
Worker processes are spawned using $loop->fork(), which properly handles IO::Async's $ONE_TRUE_LOOP singleton
Each worker gets a fresh event loop and runs lifespan startup independently
Workers that exit are automatically respawned via $loop->watch_process()
SIGTERM/SIGINT triggers graceful shutdown of all workers
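A short sketch of a multi-worker configuration using options documented in this section (max_requests is described further below):
my $server = PAGI::Server->new(
    app          => $app,
    workers      => 4,        # pre-fork 4 worker processes
    max_requests => 10_000,   # recycle each worker after 10k requests
);
$loop->add($server);
$server->listen->get;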
- listener_backlog => $number
-
Value for the listener queue size. Default: 2048
In multi-worker mode, each worker's listening socket inherits this backlog value.
- reuseport => $bool
-
Enable SO_REUSEPORT mode for multi-worker servers. Default: 0 (disabled).
When enabled, each worker process creates its own listening socket with SO_REUSEPORT, allowing the kernel to load-balance incoming connections across workers. This can reduce accept() contention and improve p99 latency under high concurrency.
Traditional mode (reuseport=0): Parent creates one socket before forking, all workers inherit and share that socket. Workers compete on a single accept queue (potential thundering herd).
Reuseport mode (reuseport=1): Each worker creates its own socket with SO_REUSEPORT. The kernel distributes connections across sockets, each worker has its own accept queue (reduced contention).
Platform notes:
Linux 3.9+: Full kernel-level load balancing. Recommended for high concurrency workloads.
macOS/BSD: SO_REUSEPORT allows multiple binds but does NOT provide kernel load balancing. May actually decrease performance compared to shared socket mode. Use with caution - benchmark before deploying.
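A sketch of reuseport mode on a Linux host, combining the workers and reuseport options described above:
# Linux 3.9+: the kernel load-balances connections across per-worker sockets
my $server = PAGI::Server->new(
    app       => $app,
    workers   => 8,
    reuseport => 1,
);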
- max_receive_queue => $count
-
Maximum number of messages that can be queued in the WebSocket receive queue before the connection is closed. This is a DoS protection mechanism.
Unit: Message count (not bytes). Each WebSocket text or binary frame counts as one message regardless of size.
Default: 1000 messages
When exceeded: The server sends a WebSocket close frame with code 1008 (Policy Violation) and reason "Message queue overflow", then closes the connection.
Tuning guidelines:
Memory impact: Each queued message holds the full message payload. With default of 1000 messages and average 1KB messages, worst case is ~1MB per slow connection.
Workers: Total memory risk = workers × max_connections × max_receive_queue × avg_message_size. For 4 workers, 100 connections each, 1000 queue, 1KB average = 400MB worst case.
Fast consumers: If your app processes messages quickly, the queue rarely grows. Default of 1000 is generous for most applications.
Slow consumers: If your app does expensive processing per message, consider lowering to 100-500 to limit memory exposure.
High throughput: If you have trusted clients sending rapid bursts, you may increase to 5000-10000, but monitor memory usage.
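As a worked example of the memory math above, a slow-consumer deployment might lower the queue to bound worst-case memory (numbers are illustrative):
# worst case = workers × connections × queue depth × avg message size
#            = 4 × 100 × 200 × 1KB, roughly 80MB
my $server = PAGI::Server->new(
    app               => $app,
    workers           => 4,
    max_receive_queue => 200,
);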
CLI:
--max-receive-queue 500
- max_ws_frame_size => $bytes
-
Maximum size in bytes for a single WebSocket frame payload. When a client sends a frame larger than this limit, the connection is closed with a protocol error.
Unit: Bytes
Default: 65536 (64KB) - matches Protocol::WebSocket default
When exceeded: The server closes the connection. The error is logged as "PAGI connection error: Payload is too big."
Tuning guidelines:
Small messages: For chat apps or control messages, default 64KB is plenty.
File uploads: For binary data transfer via WebSocket, increase to 1MB-16MB depending on expected file sizes.
Memory impact: Each connection can buffer up to max_ws_frame_size bytes during frame parsing. High values increase memory per connection.
DoS protection: Lower values limit memory exhaustion from malicious clients sending oversized frames.
CLI:
--max-ws-frame-size 1048576
- max_connections => $count
-
Maximum number of concurrent connections before returning HTTP 503. Default: 0 (auto-detect from ulimit - 50).
When at capacity, new connections receive a 503 Service Unavailable response with a Retry-After header. This prevents file descriptor exhaustion crashes under heavy load.
The auto-detected limit uses ulimit -n minus 50 for headroom (file operations, logging, database connections, etc.).
Example:
my $server = PAGI::Server->new(
    app             => $app,
    max_connections => 200,   # Explicit limit
);
CLI:
--max-connections 200
Monitoring: Use $server->connection_count and $server->effective_max_connections to monitor usage.
- max_body_size => $bytes
-
Maximum request body size in bytes. Default: 10,000,000 (10MB). Set to 0 for unlimited (not recommended for public-facing servers).
Requests with Content-Length exceeding this limit receive HTTP 413 (Payload Too Large). Chunked requests are also checked as data arrives.
Example:
my $server = PAGI::Server->new(
    app           => $app,
    max_body_size => 50_000_000,   # 50MB for file uploads
);

# Unlimited (use with caution)
my $server = PAGI::Server->new(
    app           => $app,
    max_body_size => 0,
);
CLI:
--max-body-size 50000000
Security note: Without a body size limit, attackers can exhaust server memory with large requests. The 10MB default balances security with common use cases (file uploads, JSON payloads). Increase for specific needs, or use 0 only behind a reverse proxy that enforces its own limit.
- disable_sendfile => $bool
-
Disable the sendfile() syscall for file responses. Default: 0 (use sendfile if available).
When Sys::Sendfile is installed and this option is not set, the server uses the sendfile() syscall for zero-copy file transfers. This is faster and uses less memory than reading files through userspace.
Set this to 1 to force the server to use the worker pool fallback for file I/O, which reads files in chunks through IO::Async::Function workers.
Reasons to disable sendfile:
Testing worker pool behavior
Working around buggy OS sendfile implementations
Debugging file transfer issues
Using file systems that don't support sendfile (some network mounts)
CLI:
--disable-sendfile
Startup banner: Shows sendfile status: on, off (Sys::Sendfile not installed), disabled, or n/a (disabled).
- sync_file_threshold => $bytes
-
Threshold in bytes for synchronous file reads. Files smaller than this value are read synchronously in the event loop; larger files use async I/O via worker pool or sendfile.
Default: 65536 (64KB)
Set to 0 for fully async file reads. This is recommended for:
Network filesystems (NFS, SMB, cloud storage)
High-latency storage (spinning disks under load)
Docker volumes with overlay filesystem
The default (64KB) is optimized for local SSDs where small synchronous reads are faster than the overhead of async I/O.
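For example, a server whose document root lives on a network mount might force fully async reads:
my $server = PAGI::Server->new(
    app                 => $app,
    sync_file_threshold => 0,   # never block the event loop on file reads
);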
CLI:
--sync-file-threshold NUM
- max_requests => $count
-
Maximum number of requests a worker process will handle before restarting. After serving this many requests, the worker gracefully shuts down and the parent spawns a replacement.
Default: 0 (disabled - workers run indefinitely)
When to use:
Long-running deployments where gradual memory growth is a concern
Applications with known memory leaks that can't be easily fixed
Defense against slow memory growth (~6.5 bytes/request observed in PAGI)
Note: Only applies in multi-worker mode (workers > 0). In single-worker mode, this setting is ignored.
CLI:
--max-requests 10000
Example: With 4 workers and max_requests=10000, total capacity before any restart is 40,000 requests. Workers restart individually without downtime.
METHODS
listen
my $future = $server->listen;
Starts listening for connections. Returns a Future that completes when the server is ready to accept connections.
shutdown
my $future = $server->shutdown;
Initiates graceful shutdown. Returns a Future that completes when shutdown is complete.
port
my $port = $server->port;
Returns the bound port number. Useful when port => 0 is used.
is_running
my $bool = $server->is_running;
Returns true if the server is accepting connections.
connection_count
my $count = $server->connection_count;
Returns the current number of active connections.
effective_max_connections
my $max = $server->effective_max_connections;
Returns the effective maximum connections limit. If max_connections was set explicitly, returns that value. Otherwise returns the auto-detected limit (ulimit - 50).
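As an illustration of combining these methods, the sketch below polls connection usage every 10 seconds with IO::Async::Timer::Periodic (a standard IO::Async component):
use IO::Async::Timer::Periodic;

my $monitor = IO::Async::Timer::Periodic->new(
    interval => 10,
    on_tick  => sub {
        warn sprintf "connections: %d of %d\n",
            $server->connection_count,
            $server->effective_max_connections;
    },
);
$loop->add($monitor);
$monitor->start;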
FILE RESPONSE STREAMING
PAGI::Server supports efficient file streaming via the file and fh keys in http.response.body events:
# Stream entire file
await $send->({
type => 'http.response.body',
file => '/path/to/file.mp4',
more => 0,
});
# Stream partial file (for Range requests)
await $send->({
type => 'http.response.body',
file => '/path/to/file.mp4',
offset => 1000,
length => 5000,
more => 0,
});
# Stream from filehandle
open my $fh, '<:raw', $file;
await $send->({
type => 'http.response.body',
fh => $fh,
length => $size,
more => 0,
});
close $fh;
The server streams files in 64KB chunks to avoid memory bloat. When Sys::Sendfile is available and conditions permit (non-TLS, non-chunked), the server uses sendfile() for zero-copy I/O. Otherwise, a worker pool handles file I/O asynchronously to avoid blocking the event loop.
Sendfile Caveats and Production Recommendations
Warning: The sendfile() syscall behaves differently across operating systems (Linux, FreeBSD, macOS, etc.). While we've implemented workarounds for known issues (such as FreeBSD's non-blocking socket behavior), edge cases may exist on untested platforms or kernel versions.
If you plan to serve static files in production, we strongly recommend:
- 1. Delegate static file serving to nginx or a CDN
-
For production deployments, place nginx (or another reverse proxy) in front of PAGI::Server and let it handle static files directly. This provides:
Battle-tested sendfile implementation
Efficient caching and compression
Protection from slow client attacks
HTTP/2 and HTTP/3 support
Example nginx configuration:
location /static/ {
    alias /var/www/static/;
    expires 30d;
}

location / {
    proxy_pass http://127.0.0.1:5000;
}
- 2. Test thoroughly on your target platform
-
If you must serve files directly from PAGI::Server, test extensively under realistic load conditions on your exact production OS and kernel version.
- 3. Consider disabling sendfile
-
If you encounter issues, use disable_sendfile => 1 to fall back to the worker pool method, which is more portable but slightly slower:
my $server = PAGI::Server->new(
    app              => $app,
    disable_sendfile => 1,
);
ENABLING TLS SUPPORT
PAGI::Server supports HTTPS/TLS connections, but requires additional modules that are not installed by default. This keeps the base installation minimal for users who don't need TLS.
When You Need TLS
You need TLS if you want to:
Serve HTTPS traffic directly from PAGI::Server
Test TLS locally during development
Use client certificate authentication
You don't need TLS if you:
Use a reverse proxy (nginx, Apache) that handles TLS termination
Only serve HTTP traffic on localhost for development
Deploy behind a load balancer that provides TLS
Production recommendation: Use a reverse proxy (nginx, HAProxy, etc.) for TLS termination. They offer better performance, easier certificate management, and battle-tested security. PAGI::Server's TLS support is primarily for development and testing.
Installing TLS Modules
To enable TLS support, install the required modules:
Using cpanm:
cpanm IO::Async::SSL IO::Socket::SSL
Using system packages (Debian/Ubuntu):
apt-get install libio-socket-ssl-perl
Using system packages (RHEL/CentOS):
yum install perl-IO-Socket-SSL
Verifying installation:
perl -MIO::Async::SSL -MIO::Socket::SSL -e 'print "TLS modules installed\n"'
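If an application should run with or without the optional TLS modules, one approach (a sketch, not part of the server API) is to probe for them at startup and only pass the ssl option when they load:
# Probe for the optional TLS modules at runtime
my $has_tls = eval { require IO::Async::SSL; require IO::Socket::SSL; 1 };

my $server = PAGI::Server->new(
    app => $app,
    ( $has_tls
        ? ( ssl => { cert_file => 'server.crt', key_file => 'server.key' } )
        : () ),
);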
Basic TLS Configuration
Once the modules are installed, configure TLS with certificate and key files:
my $server = PAGI::Server->new(
app => $app,
host => '0.0.0.0',
port => 5000,
ssl => {
cert_file => '/path/to/server.crt',
key_file => '/path/to/server.key',
},
);
Generating Self-Signed Certificates (Development)
For local development and testing, you can generate a self-signed certificate:
Quick self-signed certificate (1 year validity):
openssl req -x509 -newkey rsa:4096 -nodes \
-keyout server.key -out server.crt -days 365 \
-subj "/CN=localhost"
With Subject Alternative Names (recommended):
# Create config file
cat > ssl.conf <<EOF
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
CN = localhost
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = *.localhost
IP.1 = 127.0.0.1
EOF
# Generate certificate
openssl req -x509 -newkey rsa:4096 -nodes \
-keyout server.key -out server.crt -days 365 \
-config ssl.conf -extensions v3_req
Testing your TLS configuration:
# Start server
pagi-server --app myapp.pl --ssl-cert server.crt --ssl-key server.key
# Test with curl (ignore self-signed cert warning)
curl -k https://localhost:5000/
Production certificates:
For production, use certificates from a trusted CA (Let's Encrypt, etc.):
# Let's Encrypt with certbot
certbot certonly --standalone -d yourdomain.com
# Then configure PAGI::Server
my $server = PAGI::Server->new(
app => $app,
ssl => {
cert_file => '/etc/letsencrypt/live/yourdomain.com/fullchain.pem',
key_file => '/etc/letsencrypt/live/yourdomain.com/privkey.pem',
},
);
Advanced TLS Configuration
See the ssl option in "CONSTRUCTOR" for details on:
Client certificate verification (verify_client, ca_file)
TLS version requirements (min_version)
Custom cipher suites (cipher_list)
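Putting those keys together, a mutual-TLS configuration might look like the following sketch:
my $server = PAGI::Server->new(
    app => $app,
    ssl => {
        cert_file     => '/path/to/server.crt',
        key_file      => '/path/to/server.key',
        ca_file       => '/path/to/ca.crt',   # CA used to verify clients
        verify_client => 1,                   # require client certificates
        min_version   => 'TLSv1_3',
    },
);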
PERFORMANCE
PAGI::Server is designed as a reference implementation prioritizing spec compliance and code clarity, yet delivers competitive performance suitable for production workloads.
Benchmark Results
Tested on a 2.4 GHz 8-Core Intel Core i9 Mac with 8 workers, using hey against a PAGI hello world application:
Peak Performance (100 concurrent, 10 seconds):
Endpoint Req/sec p50 p99 Response
----------------------------------------------------------------
/ (text) 12,455 7.7ms 13.2ms 13 bytes
/html 10,932 8.4ms 19.3ms 143 bytes
/json 9,806 8.8ms 28.2ms 50 bytes
/greet/:name 10,722 8.9ms 15.4ms 17 bytes (path params)
Concurrency Scaling:
Concurrent Req/sec p50 p99
-----------------------------------------
10 9,757 0.9ms 2.1ms
100 12,100 7.8ms 14.1ms
500 11,299 43.3ms 63.7ms
Sustained Load (30 seconds, 200 concurrent):
Requests/sec: 9,934
Total requests: 298,171
Latency p99: 39.5ms
Errors: 0
Comparison
Server Req/sec p99 Latency Notes
---------------------------------------------------------------
PAGI (8 workers) 10-12k 13-40ms Async, zero errors
Uvicorn (Python) 10-15k varies ASGI reference
Hypercorn (Python) 8-12k varies ASGI
Starman (Perl) 8-10k 2-3ms* Sync prefork
* Starman shows lower latency at low concurrency but experiences
request timeouts under high concurrent load (500+ connections)
due to its synchronous prefork model.
Key Findings
Keep-alive is essential - Without it, throughput drops 6x and port exhaustion errors occur under load.
Zero errors under sustained load - 298k requests over 30 seconds with no failures when using keep-alive connections.
Consistent tail latency - p99 is typically only 2x p50, indicating predictable performance without major outliers.
JSON overhead - JSON serialization adds ~20% overhead vs plain text.
PAGI's async architecture handles high concurrency gracefully without queueing or timeouts, making it well-suited for WebSocket, SSE, and bursty traffic patterns that would overwhelm traditional prefork servers.
Worker Tuning
For optimal performance, set workers equal to your CPU core count:
# Recommended production configuration
my $server = PAGI::Server->new(
app => $app,
workers => 16, # Set to number of CPU cores
);
Guidelines:
CPU-bound workloads: workers = CPU cores
I/O-bound workloads: workers = 2 × CPU cores
Development: workers = 0 (single process)
Exceeding 2× CPU cores typically degrades performance due to context switching overhead.
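One way to apply the "workers = CPU cores" guideline automatically on Linux is to count processors from /proc/cpuinfo (a sketch; adjust for your platform):
my $cores = 4;   # fallback when /proc/cpuinfo is unavailable
if ( open my $fh, '<', '/proc/cpuinfo' ) {
    my $count = grep { /^processor\s*:/ } <$fh>;
    $cores = $count if $count;
}

my $server = PAGI::Server->new(
    app     => $app,
    workers => $cores,   # use 2 * $cores for I/O-bound workloads
);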
System Tuning
For high-concurrency production deployments, ensure adequate system limits:
# File descriptors (run before starting server)
ulimit -n 65536
# Listen backlog (Linux)
sudo sysctl -w net.core.somaxconn=2048
# Listen backlog (macOS)
sudo sysctl -w kern.ipc.somaxconn=2048
PAGI::Server defaults to a listen backlog of 2048, matching Uvicorn's default. This can be adjusted via the listener_backlog option.
Event Loop Selection
PAGI::Server works with any IO::Async-compatible event loop. On Linux, installing IO::Async::Loop::Epoll is recommended; it is the best-performing backend for Linux and, when installed, is used automatically.
For other systems, benchmark the available loop backends and use whichever performs best for your workload. Notes and updates are appreciated.
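To pin the backend explicitly rather than relying on auto-detection, construct the loop class directly (assumes IO::Async::Loop::Epoll is installed):
use IO::Async::Loop::Epoll;
use PAGI::Server;

my $loop   = IO::Async::Loop::Epoll->new;
my $server = PAGI::Server->new( app => \&my_pagi_app );

$loop->add($server);
$server->listen->get;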
RECOMMENDED MIDDLEWARE
For production deployments, consider enabling these middleware components:
SecurityHeaders
Adds important security headers to all responses. Addresses common security scanner findings (e.g., nikto, OWASP ZAP).
use PAGI::Middleware::Builder;
my $app = builder {
enable 'SecurityHeaders',
x_frame_options => 'DENY', # Clickjacking protection
x_content_type_options => 'nosniff', # MIME sniffing protection
content_security_policy => "default-src 'self'", # XSS protection
strict_transport_security => 'max-age=31536000'; # HSTS (HTTPS only)
$my_app;
};
Default headers (enabled automatically):
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Referrer-Policy: strict-origin-when-cross-origin
See PAGI::Middleware::SecurityHeaders for full documentation.
Other Recommended Middleware
PAGI::Middleware::ContentLength - Ensures Content-Length header
PAGI::Middleware::AccessLog - Request logging (if not using server's built-in)
PAGI::Middleware::RateLimit - Protection against abuse
PAGI::Middleware::CORS - Cross-origin resource sharing
PAGI::Middleware::GZip - Response compression
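A sketch of composing several of these components with the builder syntax shown above (each middleware is assumed to accept its default configuration):
use PAGI::Middleware::Builder;

my $app = builder {
    enable 'SecurityHeaders';
    enable 'ContentLength';
    enable 'GZip';
    enable 'RateLimit';
    $my_app;
};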
SEE ALSO
PAGI::Server::Connection, PAGI::Server::Protocol::HTTP1
AUTHOR
John Napiorkowski <jjnapiork@cpan.org>
LICENSE
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.