Developer Dashboard

A local home for development work.

Introduction

Developer Dashboard gives a developer one place to organize the moving parts of day-to-day work.

Without it, local development usually ends up spread across shell history, ad-hoc scripts, browser bookmarks, half-remembered file paths, one-off health checks, and project-specific Docker commands. With it, those pieces can live behind one entrypoint: a browser home, a prompt status layer, and a CLI toolchain that all read from the same runtime.

It brings together browser pages, saved notes, helper actions, collectors, prompt indicators, path aliases, open-file shortcuts, data query tools, and Docker Compose helpers so local development can stay centered around one consistent home instead of a pile of disconnected scripts and tabs.

When the current project contains ./.developer-dashboard, that tree becomes the first runtime lookup root for dashboard-managed files. The home runtime under ~/.developer-dashboard stays as the fallback base, so project-local bookmarks, config, CLI hooks, helper users, sessions, and isolated docker service folders can override home defaults without losing shared fallback data that is not redefined locally.

Release tarballs contain installable runtime artifacts only; local Dist::Zilla release-builder configuration is kept out of the shipped archive. Frequently used built-in commands such as of, open-file, pjq, pyq, ptomq, and pjp are also installed as standalone executables so they can run directly without loading the full dashboard runtime. Before publishing a release, the built tarball should be smoke-tested with cpanm from the artifact itself so the shipped archive matches the fixed source tree. Repository metadata should also keep explicit repository links, shipped module provides, and root SECURITY.md / CONTRIBUTING.md policy files aligned for CPAN and Kwalitee consumers.

It provides a small ecosystem for:

Developer Dashboard is meant to become the developer's working home:

What You Get

Web Interface And Access Model

Run the web interface with:

dashboard serve

By default it listens on 0.0.0.0:7890, so you can open it in a browser at:

http://127.0.0.1:7890/

Run dashboard serve --ssl to enable HTTPS with a generated self-signed certificate under ~/.developer-dashboard/certs/, then open:

https://127.0.0.1:7890/

The access model is deliberate:

In practice that means the developer at the machine gets friction-free local admin access, while shared or forwarded access is forced through explicit helper accounts. When helper access is sent to /login, the login form now keeps the original requested path and query string in a hidden redirect target. After a successful helper login, the browser is sent back to that saved route, such as /app/index, instead of being dropped at /.

Collectors, Indicators, And PS1

Collectors are background or on-demand jobs that prepare state for the rest of the dashboard. A collector can run a shell command or a Perl snippet, then store stdout, stderr, exit code, and timestamps as file-backed runtime data.

That prepared state drives indicators. Indicators are the short status records used by:

This matters because prompt and browser status should be cheap to render. Instead of re-running a Docker check, VPN probe, or project health command every time the prompt draws, a collector prepares the answer once and the rest of the system reads the cached result.
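The caching idea can be sketched in plain shell. This is an illustrative sketch only: the file layout, field names, and `run_collector` helper below are assumptions for the example, not the dashboard's actual on-disk format.

```shell
# Sketch: run a check once, cache its result to a file, and let later
# readers (prompt, pages) read the cached answer cheaply.
state_dir=$(mktemp -d)

run_collector() {
  name=$1; shift
  out=$("$@" 2>&1); exit_code=$?
  printf 'exit=%s ts=%s out=%s\n' "$exit_code" "$(date +%s)" "$out" \
    > "$state_dir/$name"
}

run_collector docker.check true   # stand-in for a real health check
cat "$state_dir/docker.check"     # fast readers only touch this file
```

The point of the shape is that the expensive command runs once, while every prompt draw is just a file read.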

Why It Works As A Developer Home

The pieces are designed to reinforce each other:

That combination makes the dashboard useful as a real daily base instead of just another utility script.

Not Just For Perl

Developer Dashboard is implemented in Perl, but it is not only for Perl developers.

It is useful anywhere a developer needs:

The toolchain already understands Perl module names, Java class names, direct files, structured-data formats, and project-local compose flows, so it suits mixed-language teams and polyglot repositories as well as Perl-heavy work.

Project-specific behavior is added through configuration, saved pages, and user CLI extensions.

Documentation

Main Concepts

Environment Variables

The distribution supports these compatibility-style customization variables:

Transient Web Token Policy

Transient page tokens still exist for CLI workflows such as dashboard page encode and dashboard page decode, but browser routes that execute a transient payload from token= or atoken= are disabled by default.

That means links such as:

return a 403 unless DEVELOPER_DASHBOARD_ALLOW_TRANSIENT_URLS is enabled. Saved bookmark-file routes such as /app/index and /app/index/action/... continue to work without that flag. Saved bookmark editor pages also stay on their named /app/<id>/edit and /app/<id> routes when you save from the browser, so editing an existing bookmark file does not fall back to transient token= URLs under the default deny policy.

Legacy Ajax helper calls inside saved bookmark CODE* blocks should use an explicit file => 'name.json' argument. When a saved page supplies that name, the helper stores the Ajax Perl code under the saved dashboard ajax tree and emits a stable saved-bookmark endpoint such as /ajax/name.json?type=text. Those saved Ajax handlers run the stored file as a real process, defaulting to Perl unless the file starts with a shebang, and stream both stdout and stderr back to the browser as they happen. That keeps bookmark Ajax workflows usable even while transient token URLs stay disabled by default, and it means bookmark Ajax code can rely on normal print, warn, die, system, and exec process behaviour instead of a buffered JSON wrapper.

Saved bookmark Ajax handlers also default to text/plain when no explicit type => ... argument is supplied, and the generated Perl wrapper now enables autoflush on both STDOUT and STDERR so long-running handlers show incremental output in the browser instead of stalling behind process buffers.

If a saved handler also needs refresh-safe process reuse, pass singleton => 'NAME' in the Ajax helper. The generated URL then carries that singleton name, the Perl worker runs as dashboard ajax: NAME, and the runtime terminates any older matching Perl Ajax worker before starting the replacement stream for the refreshed browser request. Singleton-managed Ajax workers are also terminated by dashboard stop and dashboard restart, and the bookmark page now registers a pagehide cleanup beacon against /ajax/singleton/stop?singleton=NAME so closing the browser tab also tears down the matching worker instead of leaving it behind.

If code => ... is omitted, Ajax(file => 'name') targets the existing executable at dashboards/ajax/name instead of rewriting it. Static files referenced by saved bookmarks are resolved from the effective runtime public tree first and then from the saved bookmark root.

The web layer also provides a built-in /js/jquery.js compatibility shim, so bookmark pages that expect a local jQuery-style helper still have $, $(document).ready, $.ajax, and selector .text(...) support even when no runtime file has been copied into dashboard/public/js yet.
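Pieced together from the behaviour described above, a CODE* block in a saved bookmark might call the helper roughly like this. Only the file, type, code, and singleton argument names are taken from this documentation; the exact call shape and values are illustrative assumptions:

```
CODE1: Ajax(
  file      => 'status.json',     # stored under the saved dashboard ajax tree
  type      => 'text',            # text/plain is also the default when omitted
  singleton => 'status-stream',   # refresh-safe process reuse (hypothetical name)
  code      => 'print "ok\n";',   # omit code to target an existing dashboards/ajax/status.json
)
```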

Saved bookmark editor and view-source routes also protect literal inline script content from breaking the browser bootstrap. If a bookmark body contains HTML such as </script>, the editor now escapes the inline JSON assignment used to reload the source text, so the browser keeps the full bookmark source inside the editor instead of spilling raw text below the page. Legacy bookmark rendering also loads set_chain_value() before bookmark body HTML, so Ajax jvar => ... helpers can bind saved /ajax/... endpoints without throwing a play-route JavaScript ReferenceError.

User CLI Extensions

Unknown top-level subcommands can be provided by executable files under ./.developer-dashboard/cli first and then ~/.developer-dashboard/cli. For example, dashboard foobar a b will exec the first matching cli/foobar with a b as argv, while preserving stdin, stdout, and stderr.

Shared Nav Fragments

If nav/*.tt files exist under the saved bookmark root, every non-nav page render includes them between the top chrome and the main page body.

For the default runtime that means files such as:

And with route access such as:

The bookmark editor can save those nested ids directly, for example BOOKMARK: nav/foo.tt. On a page like /app/index, the direct nav/*.tt files are loaded in sorted filename order, rendered through the normal page runtime, and inserted above the page body. Non-.tt files and subdirectories under nav/ are ignored by that shared-nav renderer.

Shared nav fragments and normal bookmark pages both render through Template Toolkit with env.current_page set to the active request path, such as /app/index. The same path is also available as env.runtime_context.current_page, alongside the rest of the request-time runtime context. Token play renders for named bookmarks also reuse that saved /app/<id> path for nav context, so shared nav/*.tt fragments do not disappear just because the browser reached the page through a transient /?mode=render&token=... URL. Shared nav markup now wraps horizontally by default and inherits the page theme through CSS variables such as --panel, --line, --text, and --accent, so dark bookmark themes no longer force a pale nav box or hide nav link text against the background.
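As a sketch, a shared nav fragment can use env.current_page to mark the active page. The file name and link targets here are examples only (db-dashboard is one of the seeded starter bookmarks); the conditional style is ordinary Template Toolkit:

```
[%# nav/00-links.tt — illustrative shared nav fragment %]
[% IF env.current_page == '/app/index' %]<b>Home</b>
[% ELSE %]<a href="/app/index">Home</a>[% END %]
<a href="/app/db-dashboard">DB</a>
```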

Open File Commands

dashboard of is the shorthand name for dashboard open-file.

These commands support:

If VISUAL or EDITOR is set, dashboard of and dashboard open-file will exec that editor unless --print is used.
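The editor-selection rule can be approximated in shell. This is a sketch of the described behaviour, not the dashboard's own code; the variable names are invented for the example:

```shell
# Sketch: resolve the target, then either print it or hand off to an editor.
resolved=/tmp/example.txt
print_only=1                      # simulates passing --print

editor="${VISUAL:-${EDITOR:-}}"
if [ -n "$editor" ] && [ "$print_only" != 1 ]; then
  exec "$editor" "$resolved"      # replaces the process, like an editor handoff
else
  printf '%s\n' "$resolved"       # --print short-circuits the editor
fi
```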

Data Query Commands

These built-in commands parse structured text and optionally extract a dotted path:

If the selected value is a hash or array, the command prints canonical JSON. If the selected value is a scalar, it prints the scalar plus a trailing newline.

The file path and query path are order-independent, and $d selects the whole parsed document. For example, cat file.json | dashboard pjq '$d' and dashboard pjq file.json '$d' return the same result. The same contract applies to pyq, ptomq, and pjp.

Manual

Installation

Install from CPAN with:

cpanm Developer::Dashboard

Or install from a checkout with:

perl Makefile.PL
make
make test
make install

Local Development

Build the distribution:

rm -rf Developer-Dashboard-* Developer-Dashboard-*.tar.gz
dzil build

Run the CLI directly from the repository:

perl -Ilib bin/dashboard init
perl -Ilib bin/dashboard auth add-user <username> <password>
perl -Ilib bin/dashboard version
perl -Ilib bin/dashboard of --print My::Module
perl -Ilib bin/dashboard open-file --print com.example.App
printf '{"alpha":{"beta":2}}' | perl -Ilib bin/dashboard pjq alpha.beta
printf 'alpha:\n  beta: 3\n' | perl -Ilib bin/dashboard pyq alpha.beta
mkdir -p ~/.developer-dashboard/cli/update
printf '#!/bin/sh\necho runtime-update\n' > ~/.developer-dashboard/cli/update/01-runtime
chmod +x ~/.developer-dashboard/cli/update/01-runtime
perl -Ilib bin/dashboard update
perl -Ilib bin/dashboard serve
perl -Ilib bin/dashboard stop
perl -Ilib bin/dashboard restart

User CLI extensions can be tested from the repository too:

mkdir -p ~/.developer-dashboard/cli
printf '#!/bin/sh\ncat\n' > ~/.developer-dashboard/cli/foobar
chmod +x ~/.developer-dashboard/cli/foobar
printf 'hello\n' | perl -Ilib bin/dashboard foobar

mkdir -p ~/.developer-dashboard/cli/pjq
printf '#!/usr/bin/env perl\nprint "seed\\n";\n' > ~/.developer-dashboard/cli/pjq/00-seed.pl
chmod +x ~/.developer-dashboard/cli/pjq/00-seed.pl
printf '{"alpha":{"beta":2}}' | perl -Ilib bin/dashboard pjq alpha.beta

Per-command hook files can live under either ./.developer-dashboard/cli/<command>/ or ./.developer-dashboard/cli/<command>.d/ first, then the same paths under ~/.developer-dashboard/cli/. Executable files in the selected directory are run in sorted filename order before the real command runs, non-executable files are skipped, and each hook now streams its own stdout and stderr live to the terminal while still accumulating those channels into RESULT as JSON. Built-in commands such as dashboard pjq use the same hook directory. A directory-backed custom command can provide its real executable as ~/.developer-dashboard/cli/<command>/run, and that runner receives the final RESULT environment variable. After each hook finishes, dashboard rewrites RESULT before the next sorted hook starts, so later hook scripts can react to earlier hook output. Perl hook scripts can read that JSON through Runtime::Result.
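The hook chain can be sketched like this. The RESULT JSON shape is simplified to a single field for illustration; the real accumulated structure is richer:

```shell
# Sketch: run hooks in sorted filename order, rewriting RESULT between hooks
# so each later hook can react to the previous hook's output.
hookdir=$(mktemp -d)
printf '#!/bin/sh\necho first\n'  > "$hookdir/00-first";  chmod +x "$hookdir/00-first"
printf '#!/bin/sh\necho second\n' > "$hookdir/10-second"; chmod +x "$hookdir/10-second"

RESULT='{}'
for hook in "$hookdir"/*; do
  [ -x "$hook" ] || continue                  # non-executable files are skipped
  out=$(RESULT="$RESULT" "$hook")             # hook sees the previous RESULT
  RESULT=$(printf '{"stdout":"%s"}' "$out")   # rewritten before the next hook
done
echo "$RESULT"
```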

If you want dashboard update, provide it as a normal user command at ./.developer-dashboard/cli/update or ./.developer-dashboard/cli/update/run first, with the home runtime as fallback. Its hook files can live under update/ or update.d/, and the real command receives the final RESULT JSON through the environment after those hook files run.

Use dashboard version to print the installed Developer Dashboard version.

The blank-container integration harness now installs the tarball first and then builds a fake-project ./.developer-dashboard tree so the shipped test suite still starts from a clean runtime before exercising project-local overrides. That same blank-container path now also verifies web stop/restart behavior in a minimal image where listener ownership may need to be discovered from /proc instead of ss, including a late listener re-probe before dashboard restart brings the web service back up.

First Run

Initialize the runtime:

dashboard init

Inspect resolved paths:

dashboard paths
dashboard path resolve bookmarks_root
dashboard path add foobar /tmp/foobar
dashboard path del foobar

Custom path aliases are stored in the effective dashboard config root so shell helpers such as cdr foobar and which_dir foobar keep working across sessions. When a project-local ./.developer-dashboard tree exists, alias writes go there first; otherwise they go to the home runtime. When a saved alias points inside your home directory, the stored config uses $HOME/... instead of a hard-coded absolute home path so a shared fallback runtime remains portable across different developer accounts. Re-adding an existing alias updates it without error, and deleting a missing alias is also safe.
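The $HOME substitution can be sketched with sed. This is illustrative only; the real config writer may differ:

```shell
# Sketch: store home-relative alias targets portably as $HOME/...
# instead of a hard-coded /home/<user>/... prefix.
target="$HOME/projects/foobar"
stored=$(printf '%s\n' "$target" | sed "s|^$HOME|\$HOME|")
echo "$stored"
```

A stored value of $HOME/projects/foobar resolves correctly for whichever developer account loads the shared fallback runtime.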

Legacy Folder compatibility also accepts the modern root-style names through AUTOLOAD, so older code can use either Folder->dd or Folder->runtime_root, and likewise bookmarks_root and config_root. Before Folder->configure(...) runs, those runtime-backed names lazily bootstrap a default dashboard path registry from HOME instead of dying. Plain Folder calls also lazy-load the same config-backed path aliases shown by dashboard paths, so a direct perl -MFolder -e 'print Folder->docker' from the active project resolves the configured alias instead of failing with Unknown folder.

Render shell bootstrap:

dashboard shell bash

Resolve or open files from the CLI:

dashboard of --print My::Module
dashboard open-file --print com.example.App
dashboard open-file --print path/to/file.txt
dashboard open-file --print bookmarks welcome

Query structured files from the CLI:

printf '{"alpha":{"beta":2}}' | dashboard pjq alpha.beta
printf 'alpha:\n  beta: 3\n' | dashboard pyq alpha.beta
printf '[alpha]\nbeta = 4\n' | dashboard ptomq alpha.beta
printf 'alpha.beta=5\n' | dashboard pjp alpha.beta
dashboard pjq file.json '$d'

Start the local app:

dashboard serve

Open the root path without a bookmark id to get the free-form bookmark editor directly.

Stop the local app and collector loops:

dashboard stop

Restart the local app and configured collector loops:

dashboard restart

Create a helper login user:

dashboard auth add-user <username> <password>

Remove a helper login user:

dashboard auth remove-user helper

Helper sessions show a Logout link in the page chrome. Logging out removes both the helper session and that helper account. Helper page views also show the helper username in the top-right chrome instead of the local system account. Exact-loopback admin requests do not show a Logout link.

Working With Pages

Create a starter page document:

dashboard page new sample "Sample Page"

Save a page:

dashboard page new sample "Sample Page" | dashboard page save sample

List saved pages:

dashboard page list

Render a saved page:

dashboard page render sample

Encode and decode transient pages:

dashboard page show sample | dashboard page encode
dashboard page show sample | dashboard page encode | dashboard page decode

Run a page action:

dashboard action run system-status paths

Bookmark documents use the original separator-line format with directive headers such as TITLE:, STASH:, HTML:, FORM.TT:, FORM:, and CODE1:. Posting a bookmark document with BOOKMARK: some-id back through the root editor now saves it to the bookmark store so /app/some-id resolves it immediately.

The browser editor now renders syntax-highlight markup again, but keeps that highlight layer inside a clipped overlay viewport that follows the real textarea scroll position by transform instead of via a second scrollbox. That restores the visible highlighting while keeping long bookmark lines, full-text selection, and caret placement aligned with the real textarea. Edit and source views preserve raw Template Toolkit placeholders inside HTML: and FORM.TT: sections, so values such as [% title %] are kept in the bookmark source instead of being rewritten to rendered HTML after a browser save.

Template Toolkit rendering exposes the page title as title, so a bookmark with TITLE: Sample Dashboard can reference it directly inside HTML: or FORM.TT: with [% title %]. Transient play and view-source links are also encoded from the raw bookmark instruction text when it is available, so [% stash.foo %] stays in source views instead of being baked into the rendered scalar value after a render pass.

Legacy CODE* blocks now run before Template Toolkit rendering during prepare_page, so a block such as CODE1: { a => 1 } can feed [% stash.a %] in the page body. Returned hash and array values are also dumped into the runtime output area, so CODE1: { a => 1 } both populates stash and shows the legacy-style dumped value below the rendered page body. The hide helper no longer discards already-printed STDOUT, so CODE2: hide print $a keeps the printed value while suppressing the Perl return value from affecting later merge logic.

Page TITLE: values only populate the HTML <title> element. If a bookmark should show its title in the page body, add it explicitly inside HTML:, for example with [% title %].
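Putting those directives together, a minimal bookmark document might look like the sketch below. Only the directive names are taken from the format description above; separator-line details are simplified:

```
BOOKMARK: sample
TITLE: Sample Dashboard
CODE1: { a => 1 }
HTML:
<h1>[% title %]</h1>
<p>stash.a is [% stash.a %]</p>
```

Because CODE* blocks run before Template Toolkit rendering, the CODE1: hash feeds [% stash.a %] in the HTML: body, and [% title %] makes the TITLE: value visible in the page body itself.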

/apps redirects to /app/index, and /app/<name> can load either a saved bookmark document or a saved ajax/url bookmark file.

Working With Collectors

Initialize example collector config:

dashboard config init

Run a collector once:

dashboard collector run example.collector

List collector status:

dashboard collector list

Collector jobs support two execution fields:

Example collector definitions:

{
  "collectors": [
    {
      "name": "shell.example",
      "command": "printf 'shell collector\\n'",
      "cwd": "home",
      "interval": 60
    },
    {
      "name": "perl.example",
      "code": "print qq(perl collector\\n); return 0;",
      "cwd": "home",
      "interval": 60,
      "indicator": {
        "icon": "P"
      }
    }
  ]
}

Collector indicators follow the collector exit code automatically: 0 stores an ok indicator state and any non-zero exit code stores error. When indicator.name is omitted, the collector name is reused automatically. When indicator.label is omitted, it defaults to that same name. Configured collector indicators are now seeded immediately, so prompt and page status strips show them before the first collector run. Before a collector has produced real output it appears as missing. Prompt output renders an explicit status glyph in front of the collector icon, so successful checks show ✅🔑 style fragments and failing or not-yet-run checks show 🚨🔑 style fragments. The blank-environment integration flow also keeps a regression for mixed collector health isolation: one intentionally broken Perl collector must stay red without stopping a second healthy collector from staying green in dashboard indicator list, dashboard ps1, and /system/status.
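The exit-code-to-state rule reduces to a simple mapping, sketched here with the glyphs shown above (the helper function names are invented for the example):

```shell
# Sketch: map a collector exit code to an indicator state and prompt glyph.
state_for_exit() {
  if [ "$1" -eq 0 ]; then echo ok; else echo error; fi
}
glyph_for_state() {
  case "$1" in ok) printf '✅';; *) printf '🚨';; esac
}

state=$(state_for_exit 0)
printf '%s🔑\n' "$(glyph_for_state "$state")"   # successful check renders ✅🔑
```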

Docker Compose

Inspect the resolved compose stack without running Docker:

dashboard docker compose --dry-run config

Include addons or modes:

dashboard docker compose --addon mailhog --mode dev up -d
dashboard docker compose config green
dashboard docker compose config

The resolver also supports old-style isolated service folders without adding entries to dashboard JSON config. If ./.developer-dashboard/docker/green/compose.yml exists in the current project it wins; otherwise the resolver falls back to ~/.developer-dashboard/config/docker/green/compose.yml. dashboard docker compose config green or dashboard docker compose up green will pick it up automatically by inferring service names from the passthrough compose args before the real docker compose command is assembled. If no service name is passed, the resolver scans isolated service folders and preloads every non-disabled folder. If a folder contains disabled.yml it is skipped. Each isolated folder contributes development.compose.yml when present, otherwise compose.yml.
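The folder-scan rules can be sketched as below. The scan logic is an illustrative approximation of the described behaviour, run against a temporary tree rather than a real runtime:

```shell
# Sketch: per-service folder resolution with disabled.yml skipping and the
# development.compose.yml-over-compose.yml preference.
root=$(mktemp -d)
mkdir -p "$root/green" "$root/blue" "$root/old"
: > "$root/green/compose.yml"
: > "$root/blue/compose.yml"
: > "$root/blue/disabled.yml"             # blue is skipped entirely
: > "$root/old/compose.yml"
: > "$root/old/development.compose.yml"   # preferred over compose.yml

files=""
for dir in "$root"/*/; do
  [ -e "$dir/disabled.yml" ] && continue
  if [ -e "$dir/development.compose.yml" ]; then
    files="$files $dir/development.compose.yml"
  elif [ -e "$dir/compose.yml" ]; then
    files="$files $dir/compose.yml"
  fi
done
echo "$files"
```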

During compose execution the dashboard exports DDDC as the effective config-root docker directory for the current runtime, so compose YAML can keep using ${DDDC} paths inside the YAML itself. Wrapper flags such as --service, --addon, --mode, --project, and --dry-run are consumed first, and all remaining docker compose flags such as -d and --build pass straight through to the real docker compose command. Without --dry-run, the dashboard hands off with exec, so you see the normal streaming output from docker compose itself instead of a dashboard JSON wrapper.

Prompt Integration

Render prompt text directly:

dashboard ps1 --jobs 2

Generate bash bootstrap:

dashboard shell bash

Browser Access Model

The browser security model follows the legacy local-first trust concept:

The editor and rendered pages also include a shared top chrome with share/source links on the left and the original status-plus-alias indicator strip on the right, refreshed from /system/status. That top-right area also includes the local username, the current host or IP link, and the current date/time in the same spirit as the old local dashboard chrome. The displayed address is discovered from the machine interfaces, preferring a VPN-style address when one is active, and the date/time is refreshed in the browser with JavaScript.

The bookmark editor also follows the old auto-submit flow, so the form submits when the textarea changes and loses focus instead of showing a manual update button. For saved bookmark files, that browser save posts back to the named /app/<id>/edit route and keeps the Play link on /app/<id> instead of a transient token= URL, so updates still work while transient URLs are disabled.

Legacy bookmark parsing also treats a standalone --- line as a section break, preventing pasted prose after a code block from being compiled into the saved CODE* body. Saved bookmark loads now also normalize malformed legacy icon bytes before the browser sees them. Broken section glyphs fall back to , broken item-icon glyphs fall back to 🏷️, and common damaged joined emoji sequences such as 🧑‍💻 are repaired so edit and play routes stop showing Unicode replacement boxes from older bookmark files.

This keeps the fast path for exact loopback access while making non-canonical or remote access explicit.

The default web bind is 0.0.0.0:7890. Trust is still decided from the request origin and host header, not from the listen address.

Runtime Lifecycle

Environment Customization

After installing with cpanm, the runtime can be customized with these environment variables:

Collector definitions now come only from dashboard configuration JSON, so config remains the single source of truth for saved path aliases, providers, collectors, and Docker compose overlays.

Updating Runtime State

Run your user-provided update command:

dashboard update

If ~/.developer-dashboard/cli/update or ~/.developer-dashboard/cli/update/run exists, dashboard update runs that command after any sorted hook files from ~/.developer-dashboard/cli/update/ or ~/.developer-dashboard/cli/update.d/.

dashboard init seeds three editable starter bookmarks when they are missing: welcome, api-dashboard, and db-dashboard.

Blank Environment Integration

Run the host-built tarball integration flow with:

integration/blank-env/run-host-integration.sh

This integration path builds the distribution tarball on the host with dzil build, runs the prebuilt dd-int-test:latest container with only that tarball mounted into it, installs the tarball with cpanm, and then exercises the installed dashboard command inside the clean Perl container. The runtime-manager lifecycle checks also fall back to /proc socket ownership scans when that minimal image does not provide ss, and they re-probe the managed port for late listener pids before restart, so dashboard stop and dashboard restart keep working inside the same blank-container environment used for release verification. Those checks also cover the Starman master-worker split, where the recorded managed pid can be the master while the bound listener pid is a separate worker process on the same managed port.

Before uploading a release artifact, remove older build directories and tarballs first so only the current release artifact remains, then validate the exact tarball that will ship:

rm -rf Developer-Dashboard-* Developer-Dashboard-*.tar.gz
dzil build
tar -tzf Developer-Dashboard-1.30.tar.gz | grep run-host-integration.sh
cpanm ./Developer-Dashboard-1.30.tar.gz -v

The harness also:

FAQ

Is this tied to a specific company or codebase?

No. It is meant to give an individual developer one familiar working home that can travel across the projects they touch.

Where should project-specific behavior live?

In configuration, saved pages, and user CLI extensions. That keeps the main dashboard experience stable while still letting each project add the local pages, checks, paths, and helpers it needs.

Is the software spec implemented?

The current distribution implements the core runtime, page engine, action runner, provider loader, prompt and collector system, web lifecycle manager, and Docker Compose resolver described by the software spec.

What remains intentionally lightweight is breadth, not architecture:

Does it require a web framework?

No. The current distribution includes a minimal HTTP layer implemented with core Perl-oriented modules.

Why does localhost still require login?

This is intentional. The trust rule is exact and conservative: only numeric loopback on 127.0.0.1 receives local-admin treatment.

Why is the runtime file-backed?

Because prompt rendering, dashboards, and wrappers should consume prepared state quickly instead of re-running expensive checks inline.

How are CPAN releases built?

The repository is set up to build release artifacts with Dist::Zilla and upload them to PAUSE from GitHub Actions.

The Dist::Zilla runtime prerequisite list now pins JSON::XS explicitly so the built tarball always declares the JSON backend dependency for PAUSE test installs.

The GitHub Actions workflows pin actions/checkout@v5 and set FORCE_JAVASCRIPT_ACTIONS_TO_NODE24=true so hosted runners stay ahead of the Node 20 JavaScript-action deprecation window.

What JSON implementation does the project use?

The project uses JSON::XS for JSON encoding and decoding, including shell helper decoding paths.

What does the project use for command capture and HTTP clients?

The project uses Capture::Tiny for command-output capture via capture, with exit codes returned from the capture block rather than read separately. There is currently no outbound HTTP client in the core runtime, so LWP::UserAgent is not yet required by an active code path.

GitHub Release To PAUSE

The repository includes a GitHub Actions workflow at:

It expects these GitHub Actions secrets:

The workflow:

  1. checks out the repo
  2. installs Perl, release dependencies, the explicit App::Cmd prerequisite chain, Dist::Zilla, and Dist::Zilla::Plugin::MetaProvides::Package
  3. builds the CPAN distribution tarball with dzil build
  4. uploads the tarball to PAUSE

It can be triggered by:

Testing And Coverage

Run the test suite:

prove -lr t

Measure library coverage with Devel::Cover:

cpanm --local-lib-contained ./.perl5 Devel::Cover
export PERL5LIB="$PWD/.perl5/lib/perl5${PERL5LIB:+:$PERL5LIB}"
export PATH="$PWD/.perl5/bin:$PATH"
cover -delete
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t
cover -report text -select_re '^lib/' -coverage statement -coverage subroutine

The repository target is 100% statement and subroutine coverage for lib/.

The coverage-closure suite includes managed collector loop start/stop paths under Devel::Cover, including wrapped fork coverage in t/14-coverage-closure-extra.t, so the covered run stays green without breaking TAP from daemon-style child processes.

For fast saved-bookmark browser regressions, run the dedicated smoke script:

integration/browser/run-bookmark-browser-smoke.pl

That host-side smoke runner creates an isolated temporary runtime, starts the checkout-local dashboard, loads one saved bookmark page through headless Chromium, and can assert page-source fragments, saved /ajax/... output, and the final browser DOM. With no arguments it runs the built-in legacy Ajax foo.bar bookmark case. For a real bookmark file, point it at the saved file and add explicit expectations:

integration/browser/run-bookmark-browser-smoke.pl \
  --bookmark-file ~/.developer-dashboard/dashboards/test \
  --expect-page-fragment "set_chain_value(foo,'bar','/ajax/foobar?type=text')" \
  --expect-ajax-path /ajax/foobar?type=text \
  --expect-ajax-body 123 \
  --expect-dom-fragment '<span class="display">123</span>'