Developer Dashboard
A local home for development work.
Introduction
Developer Dashboard gives a developer one place to organize the moving parts of day-to-day work.
Without it, local development usually ends up spread across shell history, ad-hoc scripts, browser bookmarks, half-remembered file paths, one-off health checks, and project-specific Docker commands. With it, those pieces can live behind one entrypoint: a browser home, a prompt status layer, and a CLI toolchain that all read from the same runtime.
It brings together browser pages, saved notes, helper actions, collectors, prompt indicators, path aliases, open-file shortcuts, data query tools, and Docker Compose helpers so local development can stay centered around one consistent home instead of a pile of disconnected scripts and tabs.
When the current project contains ./.developer-dashboard, that tree becomes the first runtime lookup root for dashboard-managed files. The home runtime under ~/.developer-dashboard stays as the fallback base, so project-local bookmarks, config, CLI hooks, helper users, sessions, and isolated docker service folders can override home defaults without losing shared fallback data that is not redefined locally.
Release tarballs contain installable runtime artifacts only; local Dist::Zilla release-builder configuration is kept out of the shipped archive.
Frequently used built-in commands such as of, open-file, pjq, pyq, ptomq, and pjp are also installed as standalone executables so they can run directly without loading the full dashboard runtime.
Before publishing a release, the built tarball should be smoke-tested with cpanm from the artifact itself so the shipped archive matches the fixed source tree.
Repository metadata should also keep explicit repository links, shipped module
provides, and root SECURITY.md / CONTRIBUTING.md policy files aligned for
CPAN and Kwalitee consumers.
It provides a small ecosystem for:
- saved and transient dashboard pages built from the original bookmark-file shape
- legacy bookmark syntax compatibility using the original `:--------------------------------------------------------------------------------:` separator plus directives such as `TITLE:`, `STASH:`, `HTML:`, `FORM.TT:`, `FORM:`, and `CODE1:`
- Template Toolkit rendering for `HTML:` and `FORM.TT:`, with access to `stash`, `ENV`, and `SYSTEM`
- legacy `CODE*` execution with captured `STDOUT` rendered into the page and captured `STDERR` rendered as visible errors
- legacy-style per-page sandpit isolation so one bookmark run can share runtime variables across `CODE*` blocks without leaking them into later page runs
- old-style root editor behavior with a free-form bookmark textarea when no path is provided
- file-backed collectors and indicators
- prompt rendering for `PS1`
- project and path discovery helpers
- a lightweight local web interface
- action execution with trusted and safer page boundaries
- config-backed providers, path aliases, and compose overlays
- update scripts and release packaging for CPAN distribution
Developer Dashboard is meant to become the developer's working home:
- a local dashboard page that can hold links, notes, forms, actions, and rendered output
- a prompt layer that shows live status for the things you care about
- a command surface for opening files, jumping to known paths, querying data, and running repeatable local tasks
- a configurable runtime that can adapt to each codebase without losing one familiar entrypoint
What You Get
- a browser interface on port `7890` for pages, status, editing, and helper access
- a shell entrypoint for file navigation, page operations, collectors, indicators, auth, and Docker Compose
- saved runtime state that lets the browser, prompt, and CLI all see the same prepared information
- a place to collect project-specific shortcuts without rebuilding your daily workflow for every repo
Web Interface And Access Model
Run the web interface with:
dashboard serve
By default it listens on 0.0.0.0:7890, so you can open it in a browser at:
http://127.0.0.1:7890/
Run dashboard serve --ssl to enable HTTPS with a generated self-signed
certificate under ~/.developer-dashboard/certs/, then open:
https://127.0.0.1:7890/
The access model is deliberate:
- exact numeric loopback admin access on `127.0.0.1` does not require a password
- helper access is for everyone else, including `localhost`, other hosts, and other machines on the network
- helper logins let you share the dashboard safely without turning every browser request into full local-admin access
In practice that means the developer at the machine gets friction-free local admin access, while shared or forwarded access is forced through explicit helper accounts.
When a saved index bookmark exists, opening / now redirects straight to
/app/index so the saved home page becomes the default browser entrypoint.
When no saved index bookmark exists yet, / still opens the free-form
bookmark editor.
If a user opens an unknown saved route such as /app/foobar, the browser now
opens the bookmark editor with a prefilled blank bookmark for that requested
path instead of showing a 404 error page.
When helper access is sent to /login, the login form now keeps the original
requested path and query string in a hidden redirect target. After a successful
helper login, the browser is sent back to that saved route, such as
/app/index, instead of being dropped at /.
Collectors, Indicators, And PS1
Collectors are background or on-demand jobs that prepare state for the rest of the dashboard. A collector can run a shell command or a Perl snippet, then store stdout, stderr, exit code, and timestamps as file-backed runtime data.
That prepared state drives indicators. Indicators are the short status records used by:
- the shell prompt rendered by `dashboard ps1`
- the top-right status strip in the web interface
- CLI inspection commands such as `dashboard indicator list`
This matters because prompt and browser status should be cheap to render. Instead of re-running a Docker check, VPN probe, or project health command every time the prompt draws, a collector prepares the answer once and the rest of the system reads the cached result.
Why It Works As A Developer Home
The pieces are designed to reinforce each other:
- pages give you a browser home for links, notes, forms, and actions
- collectors prepare state for indicators and prompt rendering
- indicators summarize that state in both the browser and the shell
- path aliases, open-file helpers, and data query commands shorten the jump from “I know what I need” to “I am at the file or value now”
- Docker Compose helpers keep recurring container workflows behind the same `dashboard` entrypoint
That combination makes the dashboard useful as a real daily base instead of just another utility script.
Not Just For Perl
Developer Dashboard is implemented in Perl, but it is not only for Perl developers.
It is useful anywhere a developer needs:
- a local browser home
- repeatable health checks and status indicators
- path shortcuts and file-opening helpers
- JSON, YAML, TOML, or properties inspection from the CLI
- a consistent Docker Compose wrapper
The toolchain already understands Perl module names, Java class names, direct files, structured-data formats, and project-local compose flows, so it suits mixed-language teams and polyglot repositories as well as Perl-heavy work.
Project-specific behavior is added through configuration, saved pages, and user CLI extensions.
Documentation
Main Concepts
- `Developer::Dashboard::PathRegistry` resolves the runtime roots that everything else depends on, such as dashboards, config, collectors, indicators, CLI hooks, logs, and cache.
- `Developer::Dashboard::FileRegistry` resolves stable file locations on top of the path registry so the rest of the system can read and write well-known runtime files without duplicating path logic.
- `Developer::Dashboard::PageDocument` and `Developer::Dashboard::PageStore` implement the saved and transient page model, including bookmark-style source documents, encoded transient pages, and persistent bookmark storage.
- `Developer::Dashboard::PageResolver` resolves saved pages and provider pages so browser pages and actions can come from both built-in and config-backed sources.
- `Developer::Dashboard::ActionRunner` executes built-in actions and trusted local command actions with cwd, env, timeout, background support, and encoded action transport, letting pages act as operational dashboards instead of static documents.
- `Developer::Dashboard::Collector` and `Developer::Dashboard::CollectorRunner` implement file-backed prepared-data jobs with managed loop metadata, timeout/env handling, interval and cron-style scheduling, process-title validation, duplicate prevention, and collector inspection data. This is the prepared-state layer that feeds indicators, prompt status, and operational pages.
- `Developer::Dashboard::IndicatorStore` and `Developer::Dashboard::Prompt` expose cached state to shell prompts and dashboards, including compact versus extended prompt rendering, stale-state marking, generic built-in indicator refresh, and page-header status payloads for the web UI.
- `Developer::Dashboard::Web::DancerApp`, `Developer::Dashboard::Web::App`, and `Developer::Dashboard::Web::Server` provide the browser interface on port `7890`, with Dancer2 owning the HTTP route table while the web-app service handles page rendering, login/logout, helper sessions, and the exact-loopback admin trust model.
- `dashboard of` and `dashboard open-file` resolve direct files, `file:line` references, Perl module names, Java class names, and recursive file-pattern matches under a resolved scope so the dashboard can shorten navigation work across different stacks.
- `dashboard pjq`, `dashboard pyq`, `dashboard ptomq`, and `dashboard pjp` parse JSON, YAML, TOML, and Java properties input, then optionally extract a dotted path and print a scalar or canonical JSON, giving the CLI a small data-inspection toolkit that fits naturally into shell workflows.
- standalone `of`, `open-file`, `pjq`, `pyq`, `ptomq`, and `pjp` provide the same behavior directly, without proxying through the main `dashboard` command, for lighter-weight shell usage.
- `Developer::Dashboard::RuntimeManager` manages the background web service and collector lifecycle with process-title validation, `pkill`-style fallback shutdown, and restart orchestration, tying the browser and prepared-state loops together as one runtime.
- `Developer::Dashboard::UpdateManager` runs ordered update scripts and restarts validated collector loops when needed, giving the runtime a controlled bootstrap and upgrade path.
- `Developer::Dashboard::DockerCompose` resolves project-aware compose files, explicit overlay layers, services, addons, modes, env injection, and the final `docker compose` command so container workflows can live inside the same dashboard ecosystem instead of in separate wrapper scripts.
Environment Variables
The distribution supports these compatibility-style customization variables:
- `DEVELOPER_DASHBOARD_BOOKMARKS` overrides the saved page root.
- `DEVELOPER_DASHBOARD_CHECKERS` filters enabled collector or checker names.
- `DEVELOPER_DASHBOARD_CONFIGS` overrides the config root.
- `DEVELOPER_DASHBOARD_ALLOW_TRANSIENT_URLS` allows browser execution of transient `/?token=...`, `/action?atoken=...`, and legacy `/ajax?token=...` payloads. The default is off, so the web UI only executes saved bookmark files unless this is set to a truthy value such as `1`, `true`, `yes`, or `on`.
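For example, a shell session could point the runtime at alternate roots and opt in to transient URLs before starting the dashboard. The two directory paths below are purely illustrative:

```shell
# Illustrative overrides; replace the paths with your own layout.
export DEVELOPER_DASHBOARD_BOOKMARKS="$HOME/work/dashboards"
export DEVELOPER_DASHBOARD_CONFIGS="$HOME/work/dashboard-config"
export DEVELOPER_DASHBOARD_ALLOW_TRANSIENT_URLS=1   # opt in to transient token URLs
```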
Transient Web Token Policy
Transient page tokens still exist for CLI workflows such as dashboard page encode
and dashboard page decode, but browser routes that execute a transient payload
from token= or atoken= are disabled by default.
That means links such as:
- `http://127.0.0.1:7890/?token=...`
- `http://127.0.0.1:7890/action?atoken=...`
- `http://127.0.0.1:7890/ajax?token=...`
return a 403 unless DEVELOPER_DASHBOARD_ALLOW_TRANSIENT_URLS is enabled.
Saved bookmark-file routes such as /app/index and
/app/index/action/... continue to work without that flag.
Saved bookmark editor pages also stay on their named /app/<id>/edit and
/app/<id> routes when you save from the browser, so editing an existing
bookmark file does not fall back to transient token= URLs under the default
deny policy.
Legacy Ajax helper calls inside saved bookmark CODE* blocks should use an
explicit file => 'name.json' argument. When a saved page supplies that name,
the helper stores the Ajax Perl code under the saved dashboard ajax tree and emits a stable
saved-bookmark endpoint such as /ajax/name.json?type=text.
Those saved Ajax handlers run the stored file as a real process, defaulting to
Perl unless the file starts with a shebang, and stream both stdout and
stderr back to the browser as they happen. That keeps bookmark Ajax
workflows usable even while transient token URLs stay disabled by default, and
it means bookmark Ajax code can rely on normal print, warn, die,
system, and exec process behaviour instead of a buffered JSON wrapper.
Saved bookmark Ajax handlers also default to text/plain when no explicit
type => ... argument is supplied, and the generated Perl wrapper now enables
autoflush on both STDOUT and STDERR so long-running handlers show
incremental output in the browser instead of stalling behind process buffers.
If a saved handler also needs refresh-safe process reuse, pass
singleton => 'NAME' in the Ajax helper. The generated url then carries
that singleton name, the Perl worker runs as dashboard ajax: NAME, and the
runtime terminates any older matching Perl Ajax worker before starting the
replacement stream for the refreshed browser request. Singleton-managed Ajax
workers are also terminated by dashboard stop and dashboard restart, and
the bookmark page now registers a pagehide cleanup beacon against
/ajax/singleton/stop?singleton=NAME so closing the browser tab also tears
down the matching worker instead of leaving it behind.
If code => ... is omitted, Ajax(file => 'name') targets the existing
executable at dashboards/ajax/name instead of rewriting it.
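Putting those pieces together, a saved bookmark `CODE*` block might register a streaming singleton handler along these lines. This is a sketch assembled from the arguments described above (`file`, `type`, `singleton`, `code`); the exact call layout and handler body are illustrative, not canonical:

```
CODE1:
Ajax(
    file      => 'tail-clock.json',
    type      => 'text',
    singleton => 'tail-clock',
    code      => q{
        $| = 1;                    # matches the autoflushed generated wrapper
        for my $i (1 .. 5) {
            print "tick $i\n";     # streamed to the browser as it happens
            sleep 1;
        }
    },
);
```

Refreshing the page would then replace the running `dashboard ajax: tail-clock` worker instead of stacking a second one.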
Static files referenced by saved bookmarks are resolved from the effective
runtime public tree first and then from the saved bookmark root. The web layer
also provides a built-in /js/jquery.js compatibility shim, so bookmark pages
that expect a local jQuery-style helper still have $, $(document).ready,
$.ajax, and selector .text(...) support even when no runtime file has been
copied into dashboard/public/js yet.
Saved bookmark editor and view-source routes also protect literal inline script
content from breaking the browser bootstrap. If a bookmark body contains HTML
such as </script>, the editor now escapes the inline JSON assignment used to
reload the source text, so the browser keeps the full bookmark source inside
the editor instead of spilling raw text below the page. Legacy bookmark
rendering also loads set_chain_value() before bookmark body HTML, so
Ajax jvar => ... helpers can bind saved /ajax/... endpoints without
throwing a play-route JavaScript ReferenceError.
User CLI Extensions
Unknown top-level subcommands can be provided by executable files under
./.developer-dashboard/cli first and then ~/.developer-dashboard/cli.
For example, dashboard foobar a b will exec the first matching
cli/foobar with a b as argv, while preserving stdin, stdout, and stderr.
Shared Nav Fragments
If nav/*.tt files exist under the saved bookmark root, every non-nav page
render includes them between the top chrome and the main page body.
For the default runtime that means files such as:
- `~/.developer-dashboard/dashboards/nav/foo.tt`
- `~/.developer-dashboard/dashboards/nav/bar.tt`
And with route access such as:
- `/app/nav/foo.tt`
- `/app/nav/foo.tt/edit`
- `/app/nav/foo.tt/source`
The bookmark editor can save those nested ids directly, for example
BOOKMARK: nav/foo.tt. On a page like /app/index, the direct nav/*.tt
files are loaded in sorted filename order, rendered through the normal page
runtime, and inserted above the page body. Non-.tt files and subdirectories
under nav/ are ignored by that shared-nav renderer.
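As an illustration, a minimal `nav/foo.tt` fragment could render a small link strip and echo the request path. Only `env.current_page` comes from the documented contract; the markup and class name here are hypothetical:

```html
<nav class="shared-nav">
  <a href="/app/index">home</a>
  <a href="/app/api-dashboard">api</a>
  <span>current page: [% env.current_page %]</span>
</nav>
```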
Shared nav fragments and normal bookmark pages both render through Template
Toolkit with env.current_page set to the active request path, such as
/app/index. The same path is also available as
env.runtime_context.current_page, alongside the rest of the request-time
runtime context. Token play renders for named bookmarks also reuse that saved
/app/<id> path for nav context, so shared nav/*.tt fragments do not
disappear just because the browser reached the page through a transient
/?mode=render&token=... URL.
Shared nav markup now wraps horizontally by default and inherits the page
theme through CSS variables such as --panel, --line, --text, and
--accent, so dark bookmark themes no longer force a pale nav box or hide nav
link text against the background.
Open File Commands
dashboard of is the shorthand name for dashboard open-file.
These commands support:
- direct file paths
- `file:line` references
- Perl module names such as `My::Module`
- Java class names such as `com.example.App`
- recursive pattern searches inside a resolved directory alias or path
If VISUAL or EDITOR is set, dashboard of and dashboard open-file will exec that editor unless --print is used.
Data Query Commands
These built-in commands parse structured text and optionally extract a dotted path:
- `dashboard pjq [path] [file]` for JSON
- `dashboard pyq [path] [file]` for YAML
- `dashboard ptomq [path] [file]` for TOML
- `dashboard pjp [path] [file]` for Java properties
If the selected value is a hash or array, the command prints canonical JSON. If the selected value is a scalar, it prints the scalar plus a trailing newline.
The file path and query path are order-independent, and $d selects the whole parsed document. For example, cat file.json | dashboard pjq '$d' and dashboard pjq file.json '$d' return the same result. The same contract applies to pyq, ptomq, and pjp.
Manual
Installation
Install from CPAN with:
cpanm Developer::Dashboard
Or install from a checkout with:
perl Makefile.PL
make
make test
make install
Local Development
Build the distribution:
rm -rf Developer-Dashboard-* Developer-Dashboard-*.tar.gz
dzil build
Run the CLI directly from the repository:
perl -Ilib bin/dashboard init
perl -Ilib bin/dashboard auth add-user <username> <password>
perl -Ilib bin/dashboard version
perl -Ilib bin/dashboard of --print My::Module
perl -Ilib bin/dashboard open-file --print com.example.App
printf '{"alpha":{"beta":2}}' | perl -Ilib bin/dashboard pjq alpha.beta
printf 'alpha:\n beta: 3\n' | perl -Ilib bin/dashboard pyq alpha.beta
mkdir -p ~/.developer-dashboard/cli/update
printf '#!/bin/sh\necho runtime-update\n' > ~/.developer-dashboard/cli/update/01-runtime
chmod +x ~/.developer-dashboard/cli/update/01-runtime
perl -Ilib bin/dashboard update
perl -Ilib bin/dashboard serve
perl -Ilib bin/dashboard stop
perl -Ilib bin/dashboard restart
User CLI extensions can be tested from the repository too:
mkdir -p ~/.developer-dashboard/cli
printf '#!/bin/sh\ncat\n' > ~/.developer-dashboard/cli/foobar
chmod +x ~/.developer-dashboard/cli/foobar
printf 'hello\n' | perl -Ilib bin/dashboard foobar
mkdir -p ~/.developer-dashboard/cli/pjq
printf '#!/usr/bin/env perl\nprint "seed\\n";\n' > ~/.developer-dashboard/cli/pjq/00-seed.pl
chmod +x ~/.developer-dashboard/cli/pjq/00-seed.pl
printf '{"alpha":{"beta":2}}' | perl -Ilib bin/dashboard pjq alpha.beta
Per-command hook files can live under either
./.developer-dashboard/cli/<command>/ or
./.developer-dashboard/cli/<command>.d/ first, then the same paths under
~/.developer-dashboard/cli/. Executable files in the selected directory
are run in sorted filename order before the real command runs,
non-executable files are skipped, and each hook now streams its own stdout
and stderr live to the terminal while still accumulating those channels into
RESULT as JSON. Built-in
commands such as dashboard pjq use the same hook directory. A directory-backed
custom command can provide its real executable as
~/.developer-dashboard/cli/<command>/run, and that runner receives the final
RESULT environment variable. After each hook finishes, dashboard rewrites
RESULT before the next sorted hook starts, so later hook scripts can react to
earlier hook output. Perl hook scripts can read that JSON through
Runtime::Result.
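The `RESULT` hand-off can be simulated outside the dashboard to see the contract in isolation. The sketch below fakes two sorted hook files and the rewrite between them; the real accumulated JSON is produced by `dashboard` itself, so the `{"stdout":...}` shape here is only an assumption for illustration:

```shell
# Simulate two sorted hook files and the RESULT rewrite between them.
hookdir=$(mktemp -d)

cat > "$hookdir/00-first" <<'EOF'
#!/bin/sh
echo "first-hook"
EOF

cat > "$hookdir/10-second" <<'EOF'
#!/bin/sh
# A later hook can react to output accumulated from earlier hooks.
echo "saw: $RESULT"
EOF

chmod +x "$hookdir"/*

first_out=$("$hookdir/00-first")                 # hook 1 runs, stdout captured
RESULT=$(printf '{"stdout":"%s"}' "$first_out")  # assumed JSON shape
export RESULT                                    # rewritten before hook 2 starts
"$hookdir/10-second"                             # prints: saw: {"stdout":"first-hook"}
```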
If you want dashboard update, provide it as a normal user command at
./.developer-dashboard/cli/update or ./.developer-dashboard/cli/update/run
first, with the home runtime as fallback. Its hook files can live under
update/ or update.d/, and the real command receives the final RESULT
JSON through the environment after those hook files run.
Use dashboard version to print the installed Developer Dashboard version.
The blank-container integration harness now installs the tarball first and then
builds a fake-project ./.developer-dashboard tree so the shipped test suite
still starts from a clean runtime before exercising project-local overrides.
That same blank-container path now also verifies web stop/restart behavior in a
minimal image where listener ownership may need to be discovered from /proc
instead of ss, including a late listener re-probe before dashboard restart
brings the web service back up.
First Run
Initialize the runtime:
dashboard init
Inspect resolved paths:
dashboard paths
dashboard path resolve bookmarks_root
dashboard path add foobar /tmp/foobar
dashboard path del foobar
Custom path aliases are stored in the effective dashboard config root so shell helpers such as cdr foobar and which_dir foobar keep working across sessions. When a project-local ./.developer-dashboard tree exists, alias writes go there first; otherwise they go to the home runtime. When a saved alias points inside your home directory, the stored config uses $HOME/... instead of a hard-coded absolute home path so a shared fallback runtime remains portable across different developer accounts. Re-adding an existing alias updates it without error, and deleting a missing alias is also safe.
Legacy Folder compatibility also accepts the modern root-style names through AUTOLOAD, so older code can use either Folder->dd or Folder->runtime_root, and likewise bookmarks_root and config_root. Before Folder->configure(...) runs, those runtime-backed names lazily bootstrap a default dashboard path registry from HOME instead of dying. Plain Folder calls also lazy-load the same config-backed path aliases shown by dashboard paths, so a direct perl -MFolder -e 'print Folder->docker' from the active project resolves the configured alias instead of failing with Unknown folder.
Render shell bootstrap:
dashboard shell bash
Resolve or open files from the CLI:
dashboard of --print My::Module
dashboard open-file --print com.example.App
dashboard open-file --print path/to/file.txt
dashboard open-file --print bookmarks welcome
Query structured files from the CLI:
printf '{"alpha":{"beta":2}}' | dashboard pjq alpha.beta
printf 'alpha:\n beta: 3\n' | dashboard pyq alpha.beta
printf '[alpha]\nbeta = 4\n' | dashboard ptomq alpha.beta
printf 'alpha.beta=5\n' | dashboard pjp alpha.beta
dashboard pjq file.json '$d'
Start the local app:
dashboard serve
Open the root path with no bookmark path to get the free-form bookmark editor directly.
Stop the local app and collector loops:
dashboard stop
Restart the local app and configured collector loops:
dashboard restart
Create a helper login user:
dashboard auth add-user <username> <password>
Remove a helper login user:
dashboard auth remove-user helper
Helper sessions show a Logout link in the page chrome. Logging out removes both the helper session and that helper account. Helper page views also show the helper username in the top-right chrome instead of the local system account. Exact-loopback admin requests do not show a Logout link.
Working With Pages
Create a starter page document:
dashboard page new sample "Sample Page"
Save a page:
dashboard page new sample "Sample Page" | dashboard page save sample
List saved pages:
dashboard page list
Render a saved page:
dashboard page render sample
Encode and decode transient pages:
dashboard page show sample | dashboard page encode
dashboard page show sample | dashboard page encode | dashboard page decode
Run a page action:
dashboard action run system-status paths
Bookmark documents use the original separator-line format with directive headers such as TITLE:, STASH:, HTML:, FORM.TT:, FORM:, and CODE1:.
Posting a bookmark document with BOOKMARK: some-id back through the root editor now saves it to the bookmark store so /app/some-id resolves it immediately.
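For reference, a minimal bookmark document in that shape might look like the sketch below. The directive names, the 80-dash separator line, `BOOKMARK:` saving, `CODE1: { a => 1 }` feeding `stash`, and `[% title %]` all come from the behavior described in this document; the section ordering and separator placement are illustrative assumptions:

```
BOOKMARK: sample
TITLE: Sample Dashboard
:--------------------------------------------------------------------------------:
CODE1: { a => 1 }
:--------------------------------------------------------------------------------:
HTML:
<h1>[% title %]</h1>
<p>stash value: [% stash.a %]</p>
```

Saved through the root editor, this document would resolve at `/app/sample`.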
The browser editor now renders syntax-highlight markup again, but keeps that highlight layer inside a clipped overlay viewport that follows the real textarea scroll position by transform instead of via a second scrollbox. That restores the visible highlighting while keeping long bookmark lines, full-text selection, and caret placement aligned with the real textarea.
Edit and source views preserve raw Template Toolkit placeholders inside HTML: and FORM.TT: sections, so values such as [% title %] are kept in the bookmark source instead of being rewritten to rendered HTML after a browser save.
Template Toolkit rendering exposes the page title as title, so a bookmark
with TITLE: Sample Dashboard can reference it directly inside HTML: or
FORM.TT: with [% title %]. Transient play and view-source links are also
encoded from the raw bookmark instruction text when it is available, so
[% stash.foo %] stays in source views instead of being baked into the
rendered scalar value after a render pass.
Legacy CODE* blocks now run before Template Toolkit rendering during
prepare_page, so a block such as CODE1: { a => 1 } can feed
[% stash.a %] in the page body. Returned hash and array values are also
dumped into the runtime output area, so CODE1: { a => 1 } both populates
stash and shows the legacy-style dumped value below the rendered page body.
The hide helper no longer discards already-printed STDOUT, so
CODE2: hide print $a keeps the printed value while suppressing the Perl
return value from affecting later merge logic.
Page TITLE: values only populate the HTML <title> element. If a bookmark should show its title in the page body, add it explicitly inside HTML:, for example with [% title %].
/apps redirects to /app/index, and /app/<name> can load either a saved bookmark document or a saved ajax/url bookmark file.
Working With Collectors
Initialize example collector config:
dashboard config init
Run a collector once:
dashboard collector run example.collector
List collector status:
dashboard collector list
Collector jobs support two execution fields:
- `command` runs a shell command string through `sh -c`
- `code` runs Perl code directly inside the collector runtime
Example collector definitions:
{
"collectors": [
{
"name": "shell.example",
"command": "printf 'shell collector\\n'",
"cwd": "home",
"interval": 60
},
{
"name": "perl.example",
"code": "print qq(perl collector\\n); return 0;",
"cwd": "home",
"interval": 60,
"indicator": {
"icon": "P"
}
}
]
}
Collector indicators follow the collector exit code automatically: 0 stores
an ok indicator state and any non-zero exit code stores error.
When indicator.name is omitted, the collector name is reused automatically.
When indicator.label is omitted, it defaults to that same name.
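Spelled out, that means a collector definition only needs an `indicator` block for fields it wants to override. In this fragment (the name, target address, and icon are illustrative), `indicator.name` and `indicator.label` both default to `vpn.check`, and the stored state follows the ping exit code:

```json
{
  "collectors": [
    {
      "name": "vpn.check",
      "command": "ping -c 1 -W 1 10.8.0.1 >/dev/null",
      "interval": 30,
      "indicator": { "icon": "V" }
    }
  ]
}
```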
Configured collector indicators are now seeded immediately, so prompt and page
status strips show them before the first collector run. Before a collector has
produced real output it appears as missing. Prompt output renders an explicit
status glyph in front of the collector icon, so successful checks show ✅🔑
style fragments and failing or not-yet-run checks show 🚨🔑 style fragments.
The blank-environment integration flow also keeps a regression for mixed
collector health isolation: one intentionally broken Perl collector must stay
red without stopping a second healthy collector from staying green in
dashboard indicator list, dashboard ps1, and /system/status.
Docker Compose
Inspect the resolved compose stack without running Docker:
dashboard docker compose --dry-run config
Include addons or modes:
dashboard docker compose --addon mailhog --mode dev up -d
dashboard docker compose config green
dashboard docker compose config
The resolver also supports old-style isolated service folders without adding entries to dashboard JSON config. If ./.developer-dashboard/docker/green/compose.yml exists in the current project it wins; otherwise the resolver falls back to ~/.developer-dashboard/config/docker/green/compose.yml. dashboard docker compose config green or dashboard docker compose up green will pick it up automatically by inferring service names from the passthrough compose args before the real docker compose command is assembled. If no service name is passed, the resolver scans isolated service folders and preloads every non-disabled folder. If a folder contains disabled.yml it is skipped. Each isolated folder contributes development.compose.yml when present, otherwise compose.yml.
During compose execution the dashboard exports DDDC as the effective config-root docker directory for the current runtime, so compose YAML can keep using ${DDDC} paths inside the YAML itself.
Wrapper flags such as --service, --addon, --mode, --project, and --dry-run are consumed first, and all remaining docker compose flags such as -d and --build pass straight through to the real docker compose command.
Without --dry-run, the dashboard hands off with exec, so you see the normal streaming output from docker compose itself instead of a dashboard JSON wrapper.
Prompt Integration
Render prompt text directly:
dashboard ps1 --jobs 2
Generate bash bootstrap:
dashboard shell bash
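One way to wire that into a login shell is to evaluate the bootstrap output from `~/.bashrc`. Treat the exact `eval` line as an assumption about how the generated bootstrap is meant to be consumed, guarded so shells without the CLI stay unaffected:

```shell
# In ~/.bashrc: enable the dashboard prompt layer when the CLI is present.
if command -v dashboard >/dev/null 2>&1; then
    eval "$(dashboard shell bash)"
fi
```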
Browser Access Model
The browser security model follows the legacy local-first trust concept:
- requests from exact `127.0.0.1` with a numeric `Host` of `127.0.0.1` are treated as local admin
- requests from other IPs or from hostnames such as `localhost` are treated as helper access
- helper sessions are file-backed, bound to the originating remote address, and expire automatically
- helper passwords must be at least 8 characters long
The editor and rendered pages also include a shared top chrome with share/source links on the left and the original status-plus-alias indicator strip on the right, refreshed from /system/status. That top-right area also includes the local username, the current host or IP link, and the current date/time in the same spirit as the old local dashboard chrome.
The displayed address is discovered from the machine interfaces, preferring a VPN-style address when one is active, and the date/time is refreshed in the browser with JavaScript.
The bookmark editor also follows the old auto-submit flow, so the form submits when the textarea changes and loses focus instead of showing a manual update button.
For saved bookmark files, that browser save posts back to the named
/app/<id>/edit route and keeps the Play link on /app/<id> instead of a
transient token= URL, so updates still work while transient URLs are
disabled.
Legacy bookmark parsing also treats a standalone --- line as a section
break, preventing pasted prose after a code block from being compiled into the
saved CODE* body.
Saved bookmark loads now also normalize malformed legacy icon bytes before the
browser sees them. Broken section glyphs fall back to ◈, broken item-icon
glyphs fall back to 🏷️, and common damaged joined emoji sequences such as
🧑💻 are repaired so edit and play routes stop showing Unicode replacement
boxes from older bookmark files.
Helper access requires a login backed by local file-based user and session records.
This keeps the fast path for exact loopback access while making non-canonical or remote access explicit.
The default web bind is 0.0.0.0:7890. Trust is still decided from the request origin and host header, not from the listen address.
Runtime Lifecycle
- dashboard serve starts the web service in the background by default
- dashboard serve --foreground keeps the web service attached to the terminal
- dashboard serve --ssl enables HTTPS in Starman with the generated local certificate and key, and the saved SSL setting is reused by later dashboard restart runs unless you override it
- dashboard serve logs prints the combined Dancer2 and Starman runtime log captured in the dashboard log file, dashboard serve logs -n 100 starts from the last 100 lines, and dashboard serve logs -f follows appended output live
- dashboard serve workers N saves the default Starman worker count and starts the web service immediately when it is currently stopped; --host HOST and --port PORT can steer that auto-start path, and dashboard serve --workers N or dashboard restart --workers N can still override it for one run
- dashboard stop stops both the web service and managed collector loops
- dashboard restart stops both, starts configured collector loops again, then starts the web service
- web shutdown and duplicate detection do not trust pid files alone; they validate managed processes by environment marker or process title and use a pkill-style scan fallback when needed
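The pid-validation rule above can be sketched as a small shell check. This is an illustrative sketch, not the dashboard's actual code: the DD_MANAGED marker variable and the developer-dashboard title substring are assumptions, and reading /proc is a Linux-only dependency.

```shell
# Accept a pid as dashboard-managed only if its environment carries a
# marker variable or its process title mentions the dashboard; never
# trust the pid file alone. (Marker name and title substring assumed.)
is_managed_pid() {
  pid="$1"
  [ -d "/proc/$pid" ] || return 1          # process must still exist
  # /proc/<pid>/environ is NUL-separated key=value pairs
  if tr '\0' '\n' < "/proc/$pid/environ" 2>/dev/null \
      | grep -q '^DD_MANAGED=1$'; then
    return 0
  fi
  # fall back to the process title recorded in cmdline
  tr '\0' ' ' < "/proc/$pid/cmdline" 2>/dev/null \
    | grep -q 'developer-dashboard'
}
```

A stale pid file pointing at a reused pid fails both probes, which is why a scan fallback can still be needed for processes the pid file never recorded.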
Environment Customization
After installing with cpanm, the runtime can be customized with these environment variables:
- DEVELOPER_DASHBOARD_BOOKMARKS overrides the saved page or bookmark directory.
- DEVELOPER_DASHBOARD_CHECKERS limits enabled collector or checker jobs to a colon-separated list of names.
- DEVELOPER_DASHBOARD_CONFIGS overrides the config directory.
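A typical shell profile snippet might set these before launching the dashboard. The checker names below are hypothetical examples; real names come from your own collector configuration.

```shell
# Point the runtime at project-local directories and restrict checkers.
# "git-status" and "disk-free" are hypothetical checker names.
export DEVELOPER_DASHBOARD_BOOKMARKS="$PWD/.developer-dashboard/dashboards"
export DEVELOPER_DASHBOARD_CONFIGS="$PWD/.developer-dashboard/config"
export DEVELOPER_DASHBOARD_CHECKERS="git-status:disk-free"
```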
Collector definitions now come only from dashboard configuration JSON, so config remains the single source of truth for saved path aliases, providers, collectors, and Docker compose overlays.
Updating Runtime State
Run your user-provided update command:
dashboard update
If ~/.developer-dashboard/cli/update or ~/.developer-dashboard/cli/update/run
exists, dashboard update runs that command after any sorted hook files from
~/.developer-dashboard/cli/update/ or ~/.developer-dashboard/cli/update.d/.
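The ordering described above, sorted hook files first and the main update command last, can be sketched as a tiny runner. The directory layout is taken from the text; the runner itself is an illustration, not the dashboard's implementation.

```shell
# Run every executable hook in lexical order, then the main command.
run_hooks_then_command() {
  hookdir="$1" cmd="$2"
  if [ -d "$hookdir" ]; then
    for hook in "$hookdir"/*; do      # the glob expands in sorted order
      [ -x "$hook" ] && "$hook"
    done
  fi
  [ -x "$cmd" ] && "$cmd"
}
```

Naming hooks with numeric prefixes such as 10-fetch and 20-migrate keeps the sorted order explicit.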
dashboard init seeds three editable starter bookmarks when they are missing:
welcome, api-dashboard, and db-dashboard.
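The "seed only when missing" behavior can be sketched as below. The bookmark names come from the text; the placeholder body and helper name are illustrative, not the shipped starter content.

```shell
# Create a starter bookmark only if no file with that name exists yet,
# so user edits to welcome / api-dashboard / db-dashboard survive a
# later re-run of init. (Illustrative sketch.)
seed_if_missing() {
  dir="$1" name="$2"
  [ -e "$dir/$name" ] && return 0       # never overwrite an edited file
  printf '# starter bookmark: %s\n' "$name" > "$dir/$name"
}
```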
Blank Environment Integration
Run the host-built tarball integration flow with:
integration/blank-env/run-host-integration.sh
This integration path builds the distribution tarball on the host with
dzil build, runs the prebuilt dd-int-test:latest container with only that
tarball mounted into it, installs the tarball with cpanm, and then
exercises the installed dashboard command inside the clean Perl container.
The runtime-manager lifecycle checks also fall back to /proc socket ownership
scans when that minimal image does not provide ss, and they re-probe the
managed port for late listener pids before restart, so dashboard stop and
dashboard restart keep working inside the same blank-container environment
used for release verification.
Those checks also cover the Starman master-worker split, where the recorded
managed pid can be the master while the bound listener pid is a separate
worker process on the same managed port.
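The /proc fallback described above can be sketched by reading /proc/net/tcp directly when ss is not installed. This is a Linux-only illustration of the probing idea, not the runtime-manager's actual code, and it only answers whether something listens on a port, not which pid owns the socket.

```shell
# Fallback listener probe when ss is unavailable: scan /proc/net/tcp*.
# Socket state 0A means LISTEN; the local-address column ends in the
# port as four uppercase hex digits. (Linux-only; illustrative sketch.)
port_has_listener() {
  hexport=$(printf ':%04X' "$1")
  cat /proc/net/tcp /proc/net/tcp6 2>/dev/null \
    | awk -v p="$hexport" '$4 == "0A" && index($2, p) { found = 1 }
                           END { exit !found }'
}
```

Mapping a listening socket back to a pid additionally requires matching the socket inode against /proc/<pid>/fd entries, which is the more expensive ownership scan.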
Before uploading a release artifact, remove older build directories and tarballs first so only the current release artifact remains, then validate the exact tarball that will ship:
rm -rf Developer-Dashboard-* Developer-Dashboard-*.tar.gz
dzil build
tar -tzf Developer-Dashboard-1.32.tar.gz | grep run-host-integration.sh
cpanm ./Developer-Dashboard-1.32.tar.gz -v
The harness also:
- creates a fake project with its own ./.developer-dashboard runtime tree
- verifies the installed CLI works against that fake project through the mounted tarball install
- seeds a user-provided fake-project ./.developer-dashboard/cli/update command plus update.d hooks inside the container so dashboard update exercises the same top-level command-hook path as every other subcommand, including later-hook reads through Runtime::Result
- verifies collector failure isolation with one intentionally broken Perl config collector and one healthy config collector, and confirms the healthy indicator still stays green after dashboard restart
- starts the installed web service
- uses headless Chromium to verify the root editor, a saved fake-project bookmark page from the fake project bookmark directory, and the helper login page
- verifies helper logout cleanup and runtime restart and stop behavior
FAQ
Is this tied to a specific company or codebase?
No. It is meant to give an individual developer one familiar working home that can travel across the projects they touch.
Where should project-specific behavior live?
In configuration, saved pages, and user CLI extensions. That keeps the main dashboard experience stable while still letting each project add the local pages, checks, paths, and helpers it needs.
Is the software spec implemented?
The current distribution implements the core runtime, page engine, action runner, provider loader, prompt and collector system, web lifecycle manager, and Docker Compose resolver described by the software spec.
What remains intentionally lightweight is breadth, not architecture:
- provider pages and action handlers are implemented in a compact v1 form
- legacy bookmarks are supported, with Template Toolkit rendering and one clean sandpit package per page run so CODE* blocks can share state within a bookmark render without leaking runtime globals into later requests
Does it require a web framework?
No. The current distribution includes a minimal HTTP layer implemented with core Perl-oriented modules.
Why does localhost still require login?
This is intentional. The trust rule is exact and conservative: only numeric loopback on 127.0.0.1 receives local-admin treatment.
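Because the rule is an exact string match, a sketch of it is short. The helper name is illustrative; the point is that "localhost", ::1, and any remote host all fall through to the normal login path.

```shell
# Exact, conservative trust check: only the literal numeric loopback
# origin gets local-admin treatment. (Illustrative sketch.)
is_local_admin_origin() {
  [ "$1" = "127.0.0.1" ]
}
```

This avoids trusting anything that depends on name resolution: /etc/hosts can point localhost anywhere, but the literal 127.0.0.1 cannot be redirected.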
Why is the runtime file-backed?
Because prompt rendering, dashboards, and wrappers should consume prepared state quickly instead of re-running expensive checks inline.
How are CPAN releases built?
The repository is set up to build release artifacts with Dist::Zilla and upload them to PAUSE from GitHub Actions.
The Dist::Zilla runtime prerequisite list now pins JSON::XS explicitly so
the built tarball always declares the JSON backend dependency for PAUSE test
installs.
The GitHub Actions workflows pin actions/checkout@v5 and set
FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true so hosted runners stay ahead of the
Node 20 JavaScript-action deprecation window.
What JSON implementation does the project use?
The project uses JSON::XS for JSON encoding and decoding, including shell helper decoding paths.
What does the project use for command capture and HTTP clients?
The project uses Capture::Tiny for command-output capture via capture, with exit codes returned from the capture block rather than read separately. There is currently no outbound HTTP client in the core runtime, so LWP::UserAgent is not yet required by an active code path.
GitHub Release To PAUSE
The repository includes a GitHub Actions workflow at:
.github/workflows/release-cpan.yml
It expects these GitHub Actions secrets:
- PAUSE_USER
- PAUSE_PASS
The workflow:
- checks out the repo
- installs Perl, release dependencies, the explicit App::Cmd prerequisite chain, Dist::Zilla, and Dist::Zilla::Plugin::MetaProvides::Package
- builds the CPAN distribution tarball with dzil build
- uploads the tarball to PAUSE
It can be triggered by:
- pushing a tag like v0.01
- manual workflow_dispatch
Testing And Coverage
Run the test suite:
prove -lr t
Measure library coverage with Devel::Cover:
cpanm --local-lib-contained ./.perl5 Devel::Cover
export PERL5LIB="$PWD/.perl5/lib/perl5${PERL5LIB:+:$PERL5LIB}"
export PATH="$PWD/.perl5/bin:$PATH"
cover -delete
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t
cover -report text -select_re '^lib/' -coverage statement -coverage subroutine
The repository target is 100% statement and subroutine coverage for lib/.
The coverage-closure suite includes managed collector loop start/stop paths
under Devel::Cover, including wrapped fork coverage in
t/14-coverage-closure-extra.t, so the covered run stays green without
breaking TAP from daemon-style child processes.
For fast saved-bookmark browser regressions, run the dedicated smoke script:
integration/browser/run-bookmark-browser-smoke.pl
That host-side smoke runner creates an isolated temporary runtime, starts the
checkout-local dashboard, loads one saved bookmark page through headless
Chromium, and can assert page-source fragments, saved /ajax/... output, and
the final browser DOM. With no arguments it runs the built-in legacy Ajax
foo.bar bookmark case. For a real bookmark file, point it at the saved file
and add explicit expectations:
integration/browser/run-bookmark-browser-smoke.pl \
--bookmark-file ~/.developer-dashboard/dashboards/test \
--expect-page-fragment "set_chain_value(foo,'bar','/ajax/foobar?type=text')" \
--expect-ajax-path /ajax/foobar?type=text \
--expect-ajax-body 123 \
--expect-dom-fragment '<span class="display">123</span>'