Cadmus

Cadmus is a document reader for Kobo’s e-readers.

Documentation

This site is the primary source of documentation for Cadmus. Use the sidebar to navigate, or start at the overview for installation, usage, and workflows.

Supported firmwares

Any 4.X.Y firmware, with X ≥ 6, will do.

Supported devices

  • Libra Colour.
  • Clara Colour.
  • Clara BW.
  • Elipsa 2E.
  • Clara 2E.
  • Libra 2.
  • Sage.
  • Elipsa.
  • Nia.
  • Libra H₂O.
  • Forma.
  • Clara HD.
  • Aura H₂O Edition 2.
  • Aura Edition 2.
  • Aura ONE.
  • Glo HD.
  • Aura H₂O.
  • Aura.
  • Glo.
  • Touch C.
  • Touch B.

Supported formats

  • PDF, CBZ, FB2, MOBI, XPS and TXT via MuPDF.
  • ePUB through a built-in renderer.
  • DJVU via DjVuLibre.

Features

  • Crop the margins.
  • Continuous fit-to-width zoom mode with line-preserving cuts.
  • Rotate the screen (portrait ↔ landscape).
  • Adjust the contrast.
  • Define words using dictd dictionaries.
  • Annotations, highlights and bookmarks.
  • Retrieve articles from online sources through hooks.

Screenshots

(Four screenshot thumbnails.)

Acknowledgments

Cadmus is a fork of Plato, a document reader created by Bastien Dejean.

Installation

Cadmus comes in different packages. Pick the one that matches your needs.

Available packages

Package              | What’s included         | Installs to
KoboRoot.tgz         | Cadmus only             | /mnt/onboard/.adds/cadmus
KoboRoot-nm.tgz      | Cadmus + NickelMenu     | /mnt/onboard/.adds/cadmus
KoboRoot-test.tgz    | Test build only         | /mnt/onboard/.adds/cadmus-tst
KoboRoot-nm-test.tgz | Test build + NickelMenu | /mnt/onboard/.adds/cadmus-tst

Which one should I pick?

  • Normal installs: Use KoboRoot.tgz or KoboRoot-nm.tgz
  • If you use NickelMenu: Pick a package that includes it (-nm versions)
  • Testing a new feature: Use test packages (-test versions) for trying out changes that haven’t been released yet

First-time setup

  1. Go to the latest release.
  2. Download the package you want from the table above.
  3. Connect your Kobo to your computer via USB.
  4. Copy the downloaded file to /mnt/onboard/.kobo/KoboRoot.tgz on the device.
  5. Eject the device and reboot.

Updating

There are two ways to update Cadmus once it’s installed.

Wirelessly (OTA)

The easiest way — no computer needed, just WiFi. Open Main Menu → Check for Updates and follow the prompts. See OTA updates for details.

Via USB

You can also update by copying a new package over USB, the same way you did the first-time install.

  1. Connect your Kobo to your computer via USB.
  2. When Cadmus asks “Share storage via USB?”, tap Share.
  3. Download the package you want from the latest release.
  4. Copy it to /mnt/onboard/.kobo/KoboRoot.tgz on your Kobo.
  5. Eject and disconnect the USB cable.

Note

Always name the file KoboRoot.tgz on the device, regardless of which package you downloaded (e.g. KoboRoot-nm.tgz must be renamed).

Cadmus detects the file automatically and reboots your Kobo to install the update. You don’t need to do anything else.
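
For example, when copying from a computer where the Kobo is mounted over USB (the mount point is illustrative and varies by OS):

cp KoboRoot-nm.tgz /path/to/KOBOeReader/.kobo/KoboRoot.tgz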

Test builds

First-time install

  1. Open the Cadmus GitHub Actions page.
  2. Select the run for the change you want to test.
  3. Download the cadmus-kobo-test-<suffix> file.
  4. Extract it and pick the package that matches your setup.
  5. Copy the selected KoboRoot file to: /mnt/onboard/.kobo/KoboRoot.tgz
  6. Eject the device and reboot.

Updating an existing test build

Use the OTA feature to download updates from a PR number directly on your device. This lets you test changes without connecting to a computer.

OTA updates

Once Cadmus is installed, you can update it wirelessly without connecting to a computer. The OTA (Over-The-Air) feature downloads updates directly from GitHub.

What you need

  • A WiFi connection

Authentication

Main branch and PR builds require a GitHub account. Stable releases are public and need no authentication.

The first time you request a main branch or PR build, Cadmus will show a screen with a URL and a short code:

  1. Go to the URL shown on screen
  2. Enter the code shown on your device
  3. Sign in to GitHub and approve the request

Cadmus detects the approval automatically and starts the download. The token is saved to disk so you won’t need to sign in again.

(Screenshot: the OTA authentication screen.)

How to update

Open Main Menu → Check for Updates. You’ll see options for where to get the update from:

Source         | Description
Stable Release | Latest official release from GitHub
Main Branch    | Latest development build (most recent changes)
PR Build       | Test a specific pull request

Note

The Stable Release option is not shown in test builds.

Updating from the main branch

Select Main Branch to get the most recent development build. This includes changes that have been merged but not yet released officially.

If you haven’t authenticated before, Cadmus will guide you through the GitHub sign-in process. See Authentication for details.

The update downloads from GitHub, installs automatically, and reboots the device to finish.

Testing a pull request

Select PR Build to try out a specific change before it’s released. Enter the PR number when prompted. If you haven’t authenticated before, Cadmus will guide you through the GitHub sign-in process. See Authentication for details.

Tip

Find the PR number in the GitHub URL. For example, in github.com/OGKevin/cadmus/pull/42 the PR number is 42.

Normal vs test builds

OTA works for both types of builds. The type you’re currently using determines what gets downloaded:

  • Normal builds download KoboRoot.tgz and install to /mnt/onboard/.adds/cadmus
  • Test builds download KoboRoot-test.tgz and install to /mnt/onboard/.adds/cadmus-tst

See the available packages table for all options.

First-time setup

OTA only works for updating an existing installation. To install Cadmus for the first time, follow the installation guide or the test builds guide to copy a KoboRoot file via USB.

Migrating from Plato

Cadmus is a fork of Plato and uses the same Settings.toml format, so migrating is mostly a matter of copying your settings file across.

Copy your settings

Build  | Plato settings                         | Cadmus settings
Stable | /mnt/onboard/.adds/plato/Settings.toml | /mnt/onboard/.adds/cadmus/Settings.toml
Test   | /mnt/onboard/.adds/plato/Settings.toml | /mnt/onboard/.adds/cadmus-tst/Settings.toml

Copy the file as-is into the Cadmus folder so it is named Settings.toml (for example, /mnt/onboard/.adds/cadmus/Settings.toml or /mnt/onboard/.adds/cadmus-tst/Settings.toml). The [[libraries]] section is the most important part: it tells Cadmus where your books live and drives the reading-progress import on first launch. At that point, Cadmus will also move this file into its Settings/ folder automatically.

[[libraries]]
name = "On Board"
path = "/mnt/onboard"
mode = "database"

Important

Make sure each [[libraries]] entry has the correct path and name. If a path doesn’t match what’s on disk, Cadmus skips that library’s import.

What happens on first launch

When Cadmus starts for the first time it automatically imports your data from each library listed in settings:

Source                    | What’s imported
.metadata.json            | Book metadata (title, author, …) and reading progress
.reading-states/<fp>.json | Reading progress for books not already covered by the above

Both database mode and filesystem mode libraries are handled. Cadmus reads .reading-states/ in all cases, so current page, bookmarks, and annotations carry over regardless of which mode you used in Plato.

Note

After a successful import the original files are renamed:

  • .metadata.json → .metadata.json.imported
  • .reading-states/ → .reading-states.imported

These renamed files are just a safety backup. Once you’ve confirmed everything looks right you can delete them.

Note

Cadmus also removes the .thumbnail-previews/ folder and regenerates thumbnails itself.

Re-running the import

If the import went wrong (for example, the library path was incorrect in settings), you can start it fresh:

  1. Rename .metadata.json.imported back to .metadata.json and .reading-states.imported back to .reading-states/ in each library directory.

  2. Delete the Cadmus SQLite database:

    Build  | Database path
    Stable | /mnt/onboard/.adds/cadmus/cadmus.sqlite
    Test   | /mnt/onboard/.adds/cadmus-tst/cadmus.sqlite
  3. Restart Cadmus — the import will run again from scratch.

If something still looks wrong after re-running, check the logs for details. See Troubleshooting for where to find them.

Settings

Cadmus reads settings from Settings/Settings-*.toml. Settings can be changed via Main Menu → Settings, which opens the built-in settings editor.

Legend:

  • ✏️ Editable in the settings editor
  • 🔑 Required for feature to work
  • 🧪 Only available in test builds
  • 📱 Only applies on Kobo devices

General Settings

keyboard-layout

✏️

Keyboard layout to use for text input.

  • Possible values: "English", "Russian".
keyboard-layout = "English"

sleep-cover

✏️

Handle the magnetic sleep cover event.

sleep-cover = true

auto-share

✏️

Automatically enter shared mode when connected to a computer, skipping the “Share storage via USB?” prompt.

Tip

Turn this on if you update Cadmus via USB often — you won’t have to confirm the sharing dialog each time you plug in.

auto-share = false

auto-suspend

✏️

Number of minutes of inactivity after which the device will automatically go to sleep.

  • Zero means never.
auto-suspend = 30.0

auto-power-off

✏️

Delay in days after which a suspended device will power off.

  • Zero means never.
auto-power-off = 3.0

button-scheme

✏️

Defines how the back and forward buttons are mapped to page forward and page backward actions.

  • Possible values: "natural", "inverted".
button-scheme = "natural"

locale

✏️

The preferred language for the user interface, using BCP 47 format (e.g., "en-US", "de-DE").

This setting is optional. When not set, en-GB is used.

locale = "en-GB"

Reader

Settings that control the reading experience.

reader.finished

✏️

What to do when you finish reading a book.

Possible values:

  • "notify" (show a notification)
  • "close" (close the book and go back)
  • "go-to-next" (open the next book in the library).
[reader]
finished = "close"

Libraries

✏️

Document library configuration. Each library has a name, path, and mode.

[[libraries]]
name = "On Board"
path = "/mnt/onboard"
mode = "database"

libraries.name

✏️

Display name for the library.

libraries.path

✏️

Directory path containing documents.

libraries.mode

✏️

Library indexing mode.

  • Possible values: "database", "filesystem".

libraries.finished

✏️

Override the reader.finished setting for this specific library. When set, this takes precedence over the global reader setting.

Possible values:

  • "notify"
  • "close"
  • "go-to-next".
  • Leave unset to inherit the global reader.finished setting.
[[libraries]]
name = "KePub"
path = "/mnt/onboard/.kobo/kepub"
finished = "go-to-next"

Intermissions

✏️

Defines the images displayed when entering an intermission state.

[intermissions]
suspend = "logo:"
power-off = "logo:"
share = "logo:"

intermissions.suspend

✏️

Image displayed when the device enters sleep mode.

  • Possible values: "logo:" (built-in logo), "cover:" (current book cover), or a path to a custom image file.

intermissions.power-off

✏️

Image displayed when the device powers off.

  • Possible values: "logo:" (built-in logo), "cover:" (current book cover), or a path to a custom image file.

intermissions.share

✏️

Image displayed when entering USB sharing mode.

  • Possible values: "logo:" (built-in logo), "cover:" (current book cover), or a path to a custom image file.
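
For example, a hedged snippet that shows the current book cover while asleep and a custom image at power-off (the image path is illustrative):

[intermissions]
suspend = "cover:"
power-off = "/mnt/onboard/images/goodnight.png"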

Import

These settings control how Cadmus imports documents from your device. They are available in the Settings → Import menu.

import.startup-trigger

✏️

Automatically import new books when Cadmus starts.

[import]
startup-trigger = true

Tip

If this is turned off, you can still trigger an import manually from the home screen: tap the database icon (bottom-left corner) and choose Import.

import.sync-metadata

✏️

Re-extract metadata (title, author, etc.) whenever a document changes.

[import]
sync-metadata = true

import.metadata-kinds

File extensions of documents whose metadata is extracted during import.

[import]
metadata-kinds = ["epub", "pdf", "djvu"]

import.allowed-kinds

File extensions of documents considered during the import process.

[import]
allowed-kinds = ["djvu", "xps", "fb2", "txt", "pdf", "oxps", "cbz", "epub"]

OTA

The OTA feature downloads builds from GitHub.

Authentication for main branch and PR builds uses GitHub device auth flow. When you select a build that requires authentication, Cadmus will display a short code and a URL. Visit github.com/login/device on any device, enter the code, and Cadmus will automatically continue the download once you authorize.

The token is saved to disk after the first authorization so you will not be prompted again on subsequent downloads.

For step-by-step instructions with screenshots, see the OTA updates guide.

Telemetry

Cadmus writes JSON logs to disk. When the build enables the otel feature, it can also export logs to an OpenTelemetry endpoint.

These settings are available in the Settings → Telemetry menu.

Important

Changes to these settings only take effect after restarting Cadmus. The application initializes telemetry on startup.

logging

[logging]
enabled = true
level = "info"
max-files = 3
directory = "logs"
# otlp-endpoint = "https://otel.example.com:4318"

logging.enabled

✏️

Enable or disable structured JSON logging.

[logging]
enabled = true

logging.level

✏️

Minimum log level to record.

  • Possible values: "trace", "debug", "info", "warn", "error".
[logging]
level = "info"

logging.max-files

Number of log files to keep. Only the most recent N files are kept — older ones are deleted automatically when Cadmus starts.

  • Default: 3
  • Set to 0 to keep all log files.
[logging]
max-files = 3

logging.otlp-endpoint

✏️ (only when the otel feature is enabled)

Optional OTLP endpoint for exporting logs to an OpenTelemetry collector.

[logging]
otlp-endpoint = "https://otel.example.com:4318"

Environment override:

  • OTEL_EXPORTER_OTLP_ENDPOINT takes precedence over logging.otlp-endpoint.

logging.enable-kern-log

🧪 📱 ✏️

Captures kernel logs via logread -F and forwards them to structured logging with the target cadmus_core::logging:kern.

[logging]
enable-kern-log = false

logging.enable-dbus-log

🧪 📱 ✏️

Captures D-Bus signals via the built-in zbus-based DbusMonitorTask and forwards them to structured logging.

[logging]
enable-dbus-log = false

Settings Retention

Cadmus stores each version’s settings in a separate file in the Settings/ directory (for example, Settings-v1.2.3.toml). This ensures backward and forward compatibility when you upgrade.

settings-retention

Number of recent version settings files to keep. Only the most recent N version files are kept. When a new version is saved, older versions beyond this limit are deleted automatically.

  • Default: 3
  • Set to 0 to keep all version files
settings-retention = 3

User Interface

This section explains the different parts of Cadmus you interact with while reading and managing your books.

File Chooser

The file chooser helps you pick files or folders. It appears whenever Cadmus needs you to select one, e.g. to pick a screen saver.

(Screenshot: the file chooser.)

How It Works

The file chooser has three main parts:

  1. Top Bar - Shows what you’re selecting (“Select File”, “Select Folder”, etc.) and a close button
  2. Navigation Bar - Shows folders you can tap to browse deeper
  3. File List - Shows files in the current folder (or a special “Select this folder” option)

Tap any folder name in the navigation bar to open it. The bar expands automatically if there are many folders. Tap and drag the separator line below the folders to resize the navigation area.

Selection Modes

The file chooser adapts based on what you need:

  • File selection mode
  • Folder selection mode
  • File or folder selection mode

The title will indicate the mode.

Troubleshooting

Logs

When something isn’t working right, logs help you figure out what went wrong. If you’re reporting an issue, sharing your logs makes it much easier to debug.

Where to find Cadmus logs

Cadmus saves logs in a logs folder. Here’s where to find it on each platform:

Platform | Stable build                   | Test build
Kobo     | /mnt/onboard/.adds/cadmus/logs | /mnt/onboard/.adds/cadmus-tst/logs

Each time you start Cadmus, it creates a new log file with a unique ID. By default, only the 3 most recent log files are kept — older ones are deleted automatically. You can change this with the logging.max-files setting.

The log files look like this:

cadmus-019cf7e3-ef3a-7752-846f-83b92ac90634.json

Finding your run ID

Every time Cadmus starts, it prints a run ID to help you identify which log file belongs to that session. You can find this in:

  1. info.log - The startup log in the Cadmus folder. Look for the line that says Cadmus run started with ID: followed by a string of letters and numbers.

    For example:

    Cadmus run started with ID: 019cf7e3-ef3a-7752-846f-83b92ac90634 (version 0.10.0)
    

    Copy only the UUID part — the string of letters and numbers between ID: and the (version text.

  2. Console output - If you’re running Cadmus from a terminal, the same run ID is printed when it starts.
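
If the device is mounted over USB, you can also list run IDs from the startup log with a standard grep:

grep "Cadmus run started with ID:" info.log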

Kernel logs

Kernel logs can be useful for debugging lower-level system issues, for example a kernel panic, which triggers a device reboot.

Kernel logs are only available in test builds. If you’re using a test build and want to include kernel logs:

  1. Open Main Menu → Settings
  2. Go to Telemetry
  3. Enable kernel logs
  4. Restart Cadmus

Kernel logs will then be saved in the same log file as your Cadmus logs.

Note

Kernel logs use more disk space, so don’t forget to turn the setting back off.

Crashloop recovery

If Cadmus crashes 3 times in a row, it will exit back to Nickel instead of restarting. This prevents the device from getting stuck in an infinite loop of crashes.

When this happens:

  1. Check info.log in the Cadmus folder for the panic error
  2. The crash counter resets when you start Cadmus manually (using the restart option in the menu or rebooting)

Development Environment Setup

Cadmus uses devenv with Nix to provide a reproducible development environment. This guide covers setup on both Linux and macOS.

Prerequisites

  1. Install Nix with flakes enabled. The easiest way is using the Determinate Nix Installer.
  2. Install devenv.

Quick Start

  1. Clone the repository and enter the devenv shell:

    git clone https://github.com/OGKevin/cadmus.git
    cd cadmus
    devenv shell
    
  2. Run the one-time setup to build native dependencies:

    cargo xtask setup-native
    
  3. Run the emulator:

    cargo xtask run-emulator
    

Available Commands

Once inside the devenv shell, these commands are available:

Command                  | Description
cargo xtask setup-native | Build MuPDF for native development (run once)
cargo xtask test         | Run the test suite across the feature matrix
cargo xtask run-emulator | Run the emulator
cargo xtask build-kobo   | Cross-compile for Kobo device (Linux only)
cargo xtask dist         | Assemble the Kobo distribution directory
cargo xtask bundle       | Package KoboRoot.tgz for installation
cadmus-dev-otel          | Run emulator with OpenTelemetry instrumentation
devenv up                | Start observability stack (Grafana, Tempo, Loki)
cargo xtask docs         | Build documentation portal (mdBook + Cargo docs)
cadmus-docs-serve        | Serve documentation portal locally on port 1111
cadmus-translate         | Generate the template .pot file for the docs

Run cargo xtask --help to see all available subcommands, or cargo xtask <cmd> --help for options on a specific command.

Or have a look at the rustdocs for xtask.

Tasks

The devenv environment uses tasks to manage build dependencies. Tasks are defined in devenv.nix and can be run with devenv tasks run <task>.

Available Tasks

Task        | Description                                               | Dependencies
docs:build  | Build documentation EPUB (only rebuilds if files changed) | None
deps:native | Build MuPDF and wrapper for native development            | None
build:kobo  | Build for Kobo device (Linux only)                        | docs:build

All tasks delegate to cargo xtask under the hood.

How Tasks Work

Tasks with dependencies automatically run their dependencies first. For example:

# This will first run docs:build (if needed), then build for Kobo
devenv tasks run build:kobo

The docs:build task uses execIfModified to only rebuild when documentation files have actually changed.
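
As a hedged sketch (attribute names follow devenv’s task API; the exact definitions in devenv.nix may differ), a task with change detection and a dependency could look like:

{ ... }:

{
  tasks."docs:build" = {
    exec = "cargo xtask docs";
    # Only re-run when files under docs/ have changed.
    execIfModified = [ "docs" ];
  };
  tasks."build:kobo" = {
    exec = "cargo xtask build-kobo";
    # The dependency runs first.
    after = [ "docs:build" ];
  };
}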

Documentation Portal

Cadmus provides a unified documentation portal that combines user guides, API reference, and contribution guides in one place.

Building and Serving Locally

To build the documentation portal:

cargo xtask docs

This runs the full build pipeline:

  1. Builds the mdBook user guide (docs/book/html/)
  2. Generates Rust API documentation (target/doc/)
  3. Builds the Zola landing page and integrates all documentation

To serve the portal locally with live reload:

cadmus-docs-serve

The portal will be available at http://localhost:1111 with automatic rebuilds when you change documentation files or Rust code.

Documentation Structure

The portal provides three integrated sections:

  • Landing Page (/) - Overview and feature highlights
  • User Guide (/guide/) - User-facing documentation from mdBook
  • API Reference (/api/) - Auto-generated Rust API documentation

All three sections are deployed as a single artifact to GitHub Pages at https://ogkevin.github.io/cadmus/.

Continuous Integration

Documentation is automatically built and validated on every pull request and deployed on push to main or master. The CI pipeline checks:

  • mdBook documentation compiles
  • Rust code documentation is valid
  • Zola landing page builds successfully

Running Tests

Tests require the TEST_ROOT_DIR environment variable to be set. The easiest way to run the full test matrix is:

cargo xtask test

This sets TEST_ROOT_DIR automatically and runs tests across all feature combinations. To run a single feature combination:

cargo xtask test --features "emulator + test"

Or to run tests manually without xtask:

TEST_ROOT_DIR=$(pwd) cargo test

TEST_ROOT_DIR is automatically configured in CI but must be set manually when running cargo test directly.

Platform Support

Linux (Full Support)

Linux provides full development capabilities including:

  • Native development (emulator, tests)
  • Cross-compilation for Kobo devices using the Linaro ARM toolchain
  • Git hooks (actionlint, shellcheck, shfmt, markdownlint, prettier)

The Linaro toolchain is automatically added to PATH and provides arm-linux-gnueabihf-* commands.

macOS (Native Development Only)

macOS supports native development but has some limitations:

Feature           | Status        | Notes
Native builds     | Supported     | Emulator and tests work
Cross-compilation | Not supported | Linaro toolchain is Linux-only

macOS-Specific Notes

Cross-compilation for Kobo: The Linaro ARM cross-compilation toolchain consists of x86_64 Linux ELF binaries that cannot run on macOS. To build for Kobo devices on macOS, use Docker with a Linux container or a Linux VM.

MuPDF build: On macOS, the native setup script manually gathers pkg-config CFLAGS for system libraries because MuPDF’s build system doesn’t properly detect them on Darwin.

Observability Stack

The devenv includes a full observability stack for development:

# Start all services
devenv up

# In another terminal, run the instrumented emulator
cadmus-dev-otel

Services available after devenv up:

Service        | URL                   | Purpose
Grafana        | http://localhost:3000 | Dashboards and exploration
Tempo          | http://localhost:3200 | Distributed tracing
Loki           | http://localhost:3100 | Log aggregation
Prometheus     | http://localhost:9090 | Metrics
OTLP Collector | http://localhost:4318 | Telemetry ingestion

For more details on telemetry, see OpenTelemetry Integration.

Troubleshooting

Shell takes a long time to start

The first devenv shell invocation downloads and builds dependencies, which can take several minutes. Subsequent invocations are cached and should be fast.

Tests fail with “TEST_ROOT_DIR must be set”

Set the environment variable before running tests:

TEST_ROOT_DIR=$(pwd) cargo test

Local Configuration

Create devenv.local.nix to override settings without modifying the tracked configuration:

{ pkgs, ... }:

{
  env = {
    # Example: Set TEST_ROOT_DIR automatically
    TEST_ROOT_DIR = builtins.getEnv "PWD";
  };
}

This file is gitignored and won’t affect other contributors.

Event System

Cadmus uses a tree-based event system where views are organized hierarchically and events flow through two distinct channels: the Hub and the Bus. Understanding the difference between these channels is essential for implementing correct event handling in views.

Tip

CSS is hard; this page might render better at 80% zoom on smaller screens. I tried to make the mermaid diagrams as big as possible to make them easier to read, which made them overflow a bit.

You might also want to hide the sidebar.

Overview

The UI is a tree of View objects. Each view can have children, forming a hierarchy like:

flowchart TD
    Home["Home (root view)"]
    TopBar["TopBar"]
    Shelf["Shelf"]
    Book1["Book"]
    Book2["Book"]
    BottomBar["BottomBar"]
    Dialog["Dialog ← overlay child"]
    Label["Label"]
    Button["Button"]

    Home --> TopBar
    Home --> Shelf
    Home --> BottomBar
    Home --> Dialog
    Shelf --> Book1
    Shelf --> Book2
    Dialog --> Label
    Dialog --> Button

Events enter the tree from the main loop and travel top-down (root to leaves), with the highest z-level children checked first. Views can communicate back up the tree via the bus, or globally via the hub.

Hub vs Bus

The two channels serve fundamentally different purposes:

flowchart TB
    subgraph MainLoop["Main Loop"]
        rx["rx.recv()"]
        match["match evt { ... }"]
        handle["handle_event(root, &evt, &tx, &bus, ...)"]
        Hub["Hub (tx)"]
        Bus["Bus (VecDeque)"]

        rx --> match
        match -->|dispatches to view tree| handle
        handle --> Hub
        handle --> Bus
        Bus -->|unhandled bus events forwarded to hub| Hub
    end

Hub (Sender<Event>)

The hub is an mpsc::Sender<Event> — a global channel that sends events to the main loop. Events sent to the hub are processed in the next iteration of the main loop, not immediately.

Use the hub when:

  • The event needs to be handled by the main loop directly (e.g., Event::Close, Event::Open, Event::Notification)
  • The event should reach all views in a future dispatch cycle (e.g., Event::Focus)
  • You need to communicate across unrelated parts of the view tree
// Close a view — handled by the main loop's match statement
hub.send(Event::Close(self.view_id)).ok();

// Show a notification — main loop creates the Notification view
hub.send(Event::Notification(NotificationEvent::Show(msg))).ok();

// Set focus — dispatched to all views in the next loop iteration
hub.send(Event::Focus(Some(ViewId::SearchInput))).ok();

Bus (VecDeque<Event>)

The bus is a local VecDeque<Event> that passes events from a child to its parent. Events placed on the bus are handled synchronously during the current dispatch cycle.

Use the bus when:

  • A child needs to communicate with its direct parent
  • The parent is expected to handle the event (e.g., a button telling its parent dialog it was pressed)
  • The event should bubble up through the view hierarchy
// Child tells parent about a submission
bus.push_back(Event::Submit(self.view_id, self.text.clone()));

// Child requests parent to close it
bus.push_back(Event::Close(ViewId::MarginCropper));

Bus Bubbling

When a bus event is not handled by any ancestor view, it reaches the root and gets forwarded to the hub for processing in the next main loop iteration:

// End of main loop iteration — unhandled bus events become hub events
while let Some(ce) = bus.pop_front() {
    tx.send(ce).ok();
}

Event Dispatch

The core dispatch function in view/mod.rs controls how events flow through the tree:

flowchart TD
    Start["handle_event(view, event)"] --> CheckLeaf{"Is view a leaf<br/>(no children)?"}

    CheckLeaf -->|YES| HandleDirect["call view.handle_event()<br/>return result"]
    CheckLeaf -->|NO| ReverseIter["Step 1 - Iterate children in REVERSE order<br/>(highest z-level first)"]

    ReverseIter --> CheckChildren["For each child:<br/>call handle_event(child, event)"]
    CheckChildren --> Captured{"Child returns true?"}
    Captured -->|YES| SetCaptured["captured = true<br/>BREAK"]
    Captured -->|NO| NextChild["Next child"]
    NextChild --> CheckChildren

    SetCaptured --> ProcessBus["Step 2 - Process child_bus events"]
    CheckChildren -->|No more children| ProcessBus

    ProcessBus --> BusEvent{"For each event<br/>on child_bus"}
    BusEvent --> HandleBus["call view.handle_event(child_evt)"]
    HandleBus --> ViewHandles{"View handles it?"}
    ViewHandles -->|YES| RemoveBus["remove from bus"]
    ViewHandles -->|NO| KeepBus["keep on bus<br/>(bubbles up)"]
    RemoveBus --> BusEvent
    KeepBus --> BusEvent

    BusEvent -->|No more bus events| CheckCaptured{"NOT captured<br/>by any child?"}
    CheckCaptured -->|YES| HandleParent["Step 3 - call view.handle_event(event)<br/>return captured || view's result"]
    CheckCaptured -->|NO| ReturnCaptured["return captured"]

Key Rules

  1. Reverse iteration: Children are checked from last to first (highest z-level first). This ensures overlays like dialogs and menus receive events before the views beneath them.

  2. Short-circuit on capture: Once a child returns true, no other children or the parent view receive the event. This is why the Dialog’s outside-tap handler works — it returns true to prevent the event from reaching views behind it.

  3. Parent only runs if uncaptured: The parent’s handle_event is only called if no child captured the event (captured || view.handle_event(...)). If captured is true, the parent is short-circuited.
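
Putting the three rules together, here is a simplified sketch of the dispatch flow (illustrative only, not the actual view/mod.rs source; View, Event, and Hub are the types described on this page, and Bus is a VecDeque<Event>):

fn handle_event(view: &mut dyn View, evt: &Event, hub: &Hub, parent_bus: &mut Bus) -> bool {
    let mut captured = false;
    let mut child_bus: Bus = VecDeque::new();

    // Rule 1: iterate children in reverse, highest z-level first.
    for child in view.children_mut().iter_mut().rev() {
        if handle_event(child.as_mut(), evt, hub, &mut child_bus) {
            captured = true; // Rule 2: short-circuit, nobody else sees the event.
            break;
        }
    }

    // Child bus events the view does not handle keep bubbling up to its parent.
    child_bus.retain(|ce| !view.handle_event(ce, hub, parent_bus));
    parent_bus.append(&mut child_bus);

    // Rule 3: the view only sees the original event if no child captured it.
    captured || view.handle_event(evt, hub, parent_bus)
}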

Main Loop Event Handling

The main loop (app.rs) receives events from the hub and handles them in a large match statement. Some events are dispatched into the view tree, while others are handled directly:

flowchart TB
    subgraph MainLoop["Main Loop"]
        direction TB

        Gesture["Event::Gesture(Tap/Swipe/...)"]
        GestureAction["Dispatched into view tree via handle_event()"]

        Close["Event::Close(id)"]
        CloseAction["Handled directly:<br/>locate_by_id() + remove"]

        Notification["Event::Notification(...)"]
        NotificationAction["Handled directly:<br/>create/update Notification"]

        Open["Event::Open(info)"]
        OpenAction["Handled directly:<br/>push Reader onto history"]

        Focus["Event::Focus(...)"]
        FocusAction["Dispatched into view tree via handle_event()"]

        Select["Event::Select(...)"]
        SelectAction["Some handled directly,<br/>some dispatched"]

        Gesture --> GestureAction
        Close --> CloseAction
        Notification --> NotificationAction
        Open --> OpenAction
        Focus --> FocusAction
        Select --> SelectAction
    end

Event::Close

Event::Close(ViewId) is handled directly by the main loop using locate_by_id(), which searches only the top-level children of the root view:

Event::Close(id) => {
    if let Some(index) = locate_by_id(view.as_ref(), id) {
        let rect = overlapping_rectangle(view.child(index));
        rq.add(RenderData::expose(rect, UpdateMode::Gui));
        view.children_mut().remove(index);
    }
}

Key limitation: The main loop can only remove direct children of the root. To close nested views, either:

  1. Use the parent’s ViewId (via hub): Removes the entire parent container
  2. Use the bus: Parent handles the close and removes just the specific child

See Why ViewId Matters for Close and Closing Nested Views via the Bus for details.

Practical Example: Dialog Outside-Tap

When a Dialog is tapped outside its bounds, the following sequence occurs:

sequenceDiagram
    actor User
    participant MainLoop as Main Loop
    participant ViewTree as View Tree
    participant Dialog as Dialog
    participant Parent as Parent View

    Note over User,Parent: 1. User taps at (x, y) outside the dialog

    User->>MainLoop: Tap gesture
    Note right of MainLoop: 2. Receives Event::Gesture(Tap(point))
    MainLoop->>ViewTree: Dispatches via handle_event()

    Note over ViewTree: 3. View tree dispatch (reverse child order)<br/>Dialog checked first (highest z-level)

    ViewTree->>Dialog: handle_event()
    Note right of Dialog: - Tap outside self.rect<br/>- Matches outside-tap arm<br/>- Sends Event::Close(self.view_id) to hub<br/>- Returns true (captured)
    Dialog-->>ViewTree: return true (captured)

    Note over Parent: 4. Parent SHORT-CIRCUITED<br/>(captured = true)<br/>Parent's handle_event() never called

    Note over MainLoop: 5. Next main loop iteration
    MainLoop->>MainLoop: Receives Event::Close(dialog_view_id)
    Note right of MainLoop: locate_by_id() finds dialog<br/>in top-level children<br/>Removes dialog from view tree

Why ViewId Matters for Close

When using the hub to close views, locate_by_id() only finds top-level children. If a dialog is nested inside a parent, you must use the parent’s ViewId — this removes the entire parent:

flowchart TD
    Root["Root View"]
    Home["Home"]
    OtaView["OtaView (top-level)"]
    Dialog["Dialog (nested)"]

    Root --> Home
    Root --> OtaView
    OtaView --> Dialog

    Note["hub.send(Event::Close(ViewId::Ota(Main)))<br/>→ Removes OtaView + Dialog"]

To close just the nested view without affecting siblings, use the bus instead (see below).

Closing Nested Views via the Bus

Send Event::Close via the bus to close nested views. Parents handle bus events synchronously and can remove any child directly:

// Child sends close via bus
bus.push_back(Event::Close(ViewId::Dialog));
sequenceDiagram
    participant Child as Button
    participant Dialog as Dialog (parent)
    participant Grandparent as OtaView

    Child->>Child: handle_event()
    Child-->>Dialog: bus: Event::Close(ViewId::Dialog)
    Dialog->>Dialog: Returns false (not handled)
    Dialog-->>Grandparent: Event bubbles up
    Grandparent->>Grandparent: Removes Dialog from children

Comparison:

Method    | Code                            | Handler     | Scope
Hub close | hub.send(Event::Close(id))      | Main loop   | Top-level children only
Bus close | bus.push_back(Event::Close(id)) | Parent view | Any child

Example implementation:

impl View for Dialog {
    fn handle_event(&mut self, evt: &Event, hub: &Sender<Event>, bus: &mut Bus, ...) -> bool {
        match *evt {
            // Return false to bubble up so grandparent removes us
            Event::Close(ViewId::Dialog) => false,
            _ => false,
        }
    }
}

Summary

Aspect           | Hub                          | Bus
Type             | mpsc::Sender<Event>          | VecDeque<Event>
Scope            | Global (main loop)           | Local (parent-child)
Timing           | Next loop iteration          | Current dispatch cycle
Direction        | View → Main loop             | Child → Parent
Unhandled events | Processed by main loop match | Forwarded to hub
Use for          | Close, Focus, Notifications  | Submit, child-to-parent signals

OpenTelemetry Integration

Cadmus supports exporting logs and traces to OpenTelemetry-compatible backends when built with the otel feature flag.

Overview

The OpenTelemetry (OTEL) integration allows Cadmus to export both structured logs and distributed traces to observability platforms like Grafana Loki/Tempo, Jaeger, or any OTLP-compatible service. Both logs and traces are first-class features that work together to provide comprehensive observability for monitoring application behavior, debugging issues, and analyzing performance.

Architecture

The telemetry system consists of three main components:

  • Logging: JSON-structured logs written to disk via tracing_subscriber
  • Tracing: Distributed traces capturing execution flow and timing
  • OTLP Export: Optional export of both logs and traces to a remote OTLP endpoint

When the otel feature is enabled, Cadmus initializes:

  • Tracer Provider: Exports distributed traces to <endpoint>/v1/traces using batch span processors for async delivery
  • Logger Provider: Exports structured logs to <endpoint>/v1/logs using batch log processors

Each Cadmus run is assigned a unique Run ID (UUID v7) that ties together all logs and traces for that session, enabling correlation between trace spans and log events.

Building with OTEL Support

To enable OpenTelemetry, build Cadmus with the otel feature:

cargo build --features otel

Configuration

Settings File

Configure OpenTelemetry in your Settings.toml:

[logging]
enabled = true
level = "info"
max-files = 3
directory = "logs"
otlp-endpoint = "https://otel.example.com:4318"

Configuration Options

  • enabled: Enable or disable logging entirely
  • level: Minimum log level (trace, debug, info, warn, error)
  • max-files: Number of log files to retain (0 = keep all)
  • directory: Path to log directory (relative to installation directory)
  • otlp-endpoint: OTLP HTTP endpoint URL (optional)

Environment Variables

You can override the OTLP endpoint using an environment variable:

export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.example.com:4318"
./cadmus

Environment variables take precedence over Settings.toml configuration.

Log Level Control

The log level can be controlled via the RUST_LOG environment variable, which overrides the level setting:

# Enable debug logs for all modules
export RUST_LOG=debug
./cadmus

# Enable trace logs only for specific modules
export RUST_LOG=cadmus::view=trace,info
./cadmus

Distributed Tracing

Distributed tracing captures the execution flow of operations through the application, providing timing information and context about how different components interact.

How Tracing Works in Cadmus

When the otel feature is enabled, Cadmus automatically instruments key operations using the tracing crate. Each instrumented function creates a span that records:

  • Function name and module path
  • Input parameters (selectively captured)
  • Execution duration
  • Return values (at TRACE level)
  • Hierarchical relationships between spans

Spans are organized hierarchically, showing which operations triggered which other operations, making it easier to understand execution flow and identify performance bottlenecks.

Instrumentation

View components in Cadmus are instrumented at critical chokepoints:

  • handle_event methods: Capture event flow through the UI hierarchy with event type and return value
  • render methods: Capture rendering operations with rectangle dimensions for layout debugging

All instrumentation uses conditional compilation (#[cfg_attr(feature = "otel", ...)]) to ensure zero runtime overhead when the feature is disabled.
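
As a minimal sketch of the pattern (the method and its signature are illustrative, not the actual Cadmus source):

#[cfg_attr(feature = "otel", tracing::instrument(level = "trace", skip_all))]
fn render(&mut self, fb: &mut Framebuffer, rect: Rectangle) {
    // When the otel feature is off, the attribute compiles away entirely,
    // so there is no runtime overhead.
}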

For detailed instrumentation guidelines and examples, see .github/instructions/rust-instrumentation.instructions.md.

Resource Attributes

Each telemetry export (both logs and traces) includes the following resource attributes:

  • service.name: Always cadmus
  • service.version: Git version from build metadata
  • cadmus.run_id: Unique identifier for the application run
  • hostname: System hostname

Log File Format

Logs are written as newline-delimited JSON to files named:

cadmus-<run_id>.json

Each log entry includes:

  • timestamp: ISO 8601 formatted timestamp
  • level: Log level (TRACE, DEBUG, INFO, WARN, ERROR)
  • target: Module path where the log originated
  • fields: Structured log data
  • spans: Active tracing spans providing context
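
An illustrative entry, with invented values, might look like:

{"timestamp":"2025-01-01T12:00:00.000Z","level":"INFO","target":"cadmus::app","fields":{"message":"book opened"},"spans":[{"name":"handle_event"}]}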

Documentation Deployment

Cadmus documentation is deployed to Cloudflare Pages.

URLs

Reviewing Documentation Changes

When you open a pull request that modifies documentation files, a preview deployment is automatically created. The PR will show a deployment status with a link to the preview URL.

Preview URLs follow the pattern: https://pr-{NUMBER}.cadmus-dt6.pages.dev/

Local Development

Building and Serving

Build and serve documentation locally:

devenv shell
cargo xtask docs        # Build all documentation
cadmus-docs-serve       # Serve at http://localhost:1111

cargo xtask docs handles the full pipeline: installing Mermaid assets, building mdBook, generating Rust API docs, and assembling the Zola portal. Pass --mdbook-only to skip the Zola step when you only need to check the mdBook output.

To serve with live reload after building:

cd docs-portal && zola serve --base-url http://localhost

Build Process

Documentation is built from three sources:

  1. mdBook (docs/) - User and contributor guides
  2. Cargo doc (crates/) - Rust API documentation
  3. Zola (docs-portal/) - Documentation portal that combines everything

The build is orchestrated by cargo xtask docs (see xtask/src/tasks/docs.rs). The GitHub Actions workflow (.github/workflows/cadmus-docs.yml) runs this command automatically on every push to main and for every pull request.

Translations

Cadmus has two separate translation systems, each covering a different part of the project.

What          | System           | Files
Documentation | GNU gettext / PO | docs/po/*.po
UI strings    | Fluent (FTL)     | crates/core/i18n/**/*.ftl

Pick the guide that matches what you want to translate.

Translating Source Strings

The Cadmus UI uses Fluent for all user-visible strings. Translations are embedded directly into the binary at compile time — no external files are needed on the device.

How it works

  • FTL files live under crates/core/i18n/<lang-tag>/cadmus_core.ftl.
  • The fallback language is en-GB; any string missing from a translation falls back to the English text automatically.
  • crates/core/src/i18n.rs loads the correct language at startup based on settings.locale.
  • The fl!("message-id") macro resolves message IDs at compile time.

Adding a new language

1. Create the FTL file

Create a new file at the path matching the BCP 47 tag for your language:

crates/core/i18n/<lang-tag>/cadmus_core.ftl

For example, for French:

crates/core/i18n/fr/cadmus_core.ftl

2. Copy and translate the English strings

Use the English fallback file as your starting point:

crates/core/i18n/en-GB/cadmus_core.ftl

Translate each message value. The message ID (left of =) must stay unchanged — only the value (right of =) changes:

# en-GB
startup-loading = Cadmus starting up…

# fr
startup-loading = Chargement de Cadmus…

3. Set the locale in Settings

To activate the new language locally during development, add a locale key to your Settings.toml:

locale = "fr"

4. Build and verify

cargo check -p cadmus-core

The fl!() macro validates all message IDs at compile time. A successful build confirms the FTL file is well-formed and all IDs referenced in code are present.

Adding a string to the source code

When you add a new UI string:

  1. Add the message to en-GB, and any other languages you know how to translate it into.

  2. Use the fl!() macro in the Rust source:

    let label = crate::fl!("my-new-message");
  3. For messages with variables, use named arguments:

    # In the FTL file
    books-loaded = Loaded { $count } books
    
    let label = crate::fl!("books-loaded", count = book_count);

FTL file format

Fluent uses a straightforward syntax. A few rules to keep in mind:

  • Message IDs use kebab-case.
  • Values can span multiple lines by indenting continuation lines.
  • Use Unicode characters directly — no escaping needed.
  • Comments start with #.
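
For example, a short sketch exercising these rules (the message ID is hypothetical):

# Shown while the library is being scanned
scan-progress =
    Scanning your library…
    This can take a while.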

For the full syntax reference see the Fluent syntax guide.

Translating Cadmus Documentation

This guide explains how to translate the Cadmus documentation into other languages. Translations live in docs/po/ as standard GNU gettext PO files.

Prerequisites

If you are using the devenv environment, all required tools are already available:

  • mdbook-xgettext / mdbook-gettext — string extraction and preprocessing
  • msginit / msgmerge / msgfmt — gettext utilities (from gettext)
  • poedit — graphical PO editor

Install them outside devenv with cargo install mdbook-i18n-helpers and your system’s gettext package.

Adding a new language

1. Extract the POT template

Run the cadmus-translate script (devenv) or the equivalent command:

cadmus-translate

This writes docs/po/messages.pot — the source template every translation derives from. Commit this file whenever the English source changes so translators have an up-to-date starting point.

2. Create a PO file for your locale

# Replace 'fr' with the BCP 47 language tag you are adding.
msginit --input=docs/po/messages.pot \
        --output-file=docs/po/fr.po \
        --locale=fr

Open docs/po/fr.po and set the Language-Name header so the language picker displays a readable label:

"Language-Name: Français\n"

The xtask reads this header when generating locales.json; without it the locale code (e.g. fr) is shown instead.

3. Translate the strings

Open the PO file in Poedit or any text editor and fill in each msgstr:

msgid "Welcome to Cadmus!"
msgstr "Bienvenue dans Cadmus !"

Preserve Markdown formatting — bold, code spans, links — exactly as in the msgid. Untranslated or fuzzy entries fall back to the English source.

4. Build and preview

# Build everything including translated books
cargo xtask docs --base-url http://localhost

# Serve
cd docs-portal
zola serve

Navigate to http://localhost:1111/guide/ and use the language picker in the sidebar to switch to your locale.

Keeping translations up to date

When English source files change, regenerate the template and merge new strings into existing PO files:

cadmus-translate                    # regenerate docs/po/messages.pot
msgmerge --update docs/po/fr.po docs/po/messages.pot

Excluding content from extraction

Wrap any block you want to keep in English with <!-- i18n:skip --> comments:

<!-- i18n:skip -->

This paragraph will not appear in the POT file.

How the build works

  1. cargo xtask docs calls mdbook build -d book/<lang> for each .po file found in docs/po/, passing MDBOOK_BOOK__LANGUAGE=<lang>.
  2. The [preprocessor.gettext] in docs/book.toml substitutes translated strings at build time.
  3. locales.json is written to docs/book/html/ with the available locales; lang-picker.js fetches it at runtime to populate the language dropdown.
  4. Symlinks under docs-portal/static/guide/<lang>/ expose each locale build to Zola so it is served at /guide/<lang>/.
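
Step 1 corresponds roughly to this hedged sketch of what xtask runs for a French build:

MDBOOK_BOOK__LANGUAGE=fr mdbook build docs -d book/fr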

SQLite & SQLx

Cadmus uses SQLite as its embedded database and SQLx as the Rust database library. SQLx provides compile-time SQL verification — every query is checked against the real schema before the code ships.

The .sqlx directory

The .sqlx/ directory at the repository root contains one JSON metadata file per SQL query. Each file stores the resolved column names, types, and parameter types that SQLx inferred from the live database schema at the time cargo sqlx prepare was last run.

.sqlx/
├── query-10c2db2a….json   ← compile-time metadata for one query
├── query-13c26d81….json
└── …

Regenerating query metadata

After adding or changing any SQL query, regenerate the metadata:

cargo sqlx prepare --all --workspace

This connects to the database, re-introspects every query macro in the workspace, and rewrites the .sqlx/ JSON files. Commit the updated files alongside your code change.

Important

If you forget to run cargo sqlx prepare, the CI check job will fail because the cached metadata will be out of date with your query changes.

Compile-time SQL checking

SQLx’s typed query macros (query!, query_as!, query_scalar!) verify SQL at compile time using the metadata in .sqlx/. This means:

  • Typos in column names are compiler errors, not runtime panics.
  • Binding the wrong type to a ? placeholder is a type error.
  • Adding or removing a column in a migration without updating queries is caught before deployment.

The macros require the DATABASE_URL environment variable to point at a live database when running cargo sqlx prepare, but not during regular cargo build or cargo check — those use the pre-generated .sqlx/ files.

Important

.sqlx/ is only used when the SQLX_OFFLINE=true environment variable is set, which is the default if you’re using devenv.nix.
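
To reproduce the offline behaviour outside devenv, set the variable yourself:

SQLX_OFFLINE=true cargo check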

Review rules

The following rules are enforced during code review for all SQLx queries.

Use typed macros only

Always use the typed macros. Never call the untyped query(), query_as(), or query_scalar() functions:

Goal                               | Use
INSERT, UPDATE, DELETE, raw SELECT | sqlx::query!
SELECT mapped into a named struct  | sqlx::query_as!
Single-column SELECT               | sqlx::query_scalar!

When the column is nullable, call .flatten() on the result to collapse Option<Option<T>> into Option<T>:

let id: Option<i64> =
    sqlx::query_scalar!("SELECT id FROM libraries WHERE path = ?", path)
        .fetch_optional(pool)
        .await?
        .flatten();

List explicit column names

Never use SELECT *. Always name every column you need:

-- ✅ Good
SELECT id, path, name FROM libraries WHERE id = ?

-- ❌ Bad
SELECT * FROM libraries WHERE id = ?

Store timestamps as Unix epoch integers

All date/time values must be stored as INTEGER NOT NULL (Unix epoch seconds). Do not use TEXT columns for timestamps:

-- ✅ Good
created_at INTEGER NOT NULL

-- ❌ Bad
created_at TEXT NOT NULL DEFAULT (datetime('now'))

Add only indexes that are actively used

Every index must be used by at least one query in the codebase. Unused indexes waste write performance and storage without any read benefit. Before adding an index, verify a query filters, sorts, or joins on the indexed column(s).
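
For example, an index on libraries.path would be justified by the lookup behind get_library_by_path (a sketch; the real schema may already cover this column with a UNIQUE constraint):

-- Supports: SELECT id FROM libraries WHERE path = ?
CREATE INDEX idx_libraries_path ON libraries(path);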

API reference

The primary database types live in the cadmus_core::db module.

See Library Database for how the library subsystem uses the database, and Runtime Migrations for how to write one-time data migrations.

Library Database

The library subsystem stores all book metadata, reading progress, and table-of-contents data in SQLite. This page explains the schema, the key database types, and how data flows from disk into the database.

Schema overview

The database is created and versioned by the SQL migration files in crates/core/migrations/. The initial schema defines eleven tables plus one aggregating view:

erDiagram
    books {
        TEXT fingerprint PK
        TEXT title
        TEXT file_path
        TEXT file_kind
        INTEGER file_size
        INTEGER added_at
    }

    authors {
        INTEGER id PK
        TEXT name
    }

    book_authors {
        TEXT book_fingerprint FK
        INTEGER author_id FK
        INTEGER position
    }

    categories {
        INTEGER id PK
        TEXT name
    }

    book_categories {
        TEXT book_fingerprint FK
        INTEGER category_id FK
    }

    reading_states {
        TEXT fingerprint PK
        INTEGER opened
        INTEGER current_page
        INTEGER pages_count
        INTEGER finished
    }

    thumbnails {
        TEXT fingerprint PK
        BLOB thumbnail_data
    }

    toc_entries {
        TEXT id PK
        TEXT book_fingerprint FK
        TEXT parent_id FK
        INTEGER position
        TEXT title
        TEXT location_kind
    }

    libraries {
        INTEGER id PK
        TEXT path
        TEXT name
        INTEGER created_at
    }

    library_books {
        INTEGER library_id FK
        TEXT book_fingerprint FK
        INTEGER added_to_library_at
    }

    _cadmus_migrations {
        TEXT id PK
        INTEGER executed_at
        TEXT status
    }

    books ||--o{ book_authors : ""
    authors ||--o{ book_authors : ""
    books ||--o{ book_categories : ""
    categories ||--o{ book_categories : ""
    books ||--o| reading_states : ""
    books ||--o| thumbnails : ""
    books ||--o{ toc_entries : ""
    toc_entries ||--o{ toc_entries : "parent_id"
    libraries ||--o{ library_books : ""
    books ||--o{ library_books : ""

Key design choices

  • books is the main table. Every other per-book table references books.fingerprint with ON DELETE CASCADE, so deleting a book row removes all associated data automatically.
  • Authors are normalised. authors holds unique author names; book_authors is the join table and carries a position column that preserves display order.
  • All tables use STRICT mode. SQLite’s STRICT pragma enforces column type constraints at the storage layer, catching type mismatches early.
  • Timestamps are Unix epoch integers. added_at, created_at, and similar columns are INTEGER NOT NULL; never TEXT.
  • TOC tree via adjacency list. toc_entries.parent_id is a self-reference; position preserves sibling order. The id is a UUID7 (generated in Rust) so ORDER BY id ASC gives stable insertion order without a growing rowid.
  • library_books_full_info view. An aggregating view joins books, reading_states, book_authors, authors, book_categories, and categories in one query. The library_id column from library_books is exposed so callers can filter with a plain WHERE library_id = ?.
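
A simplified DDL sketch illustrating the STRICT and ON DELETE CASCADE choices above (not the actual migration; columns are abbreviated):

CREATE TABLE books (
    fingerprint TEXT PRIMARY KEY,
    title       TEXT NOT NULL,
    added_at    INTEGER NOT NULL      -- Unix epoch seconds, never TEXT
) STRICT;

CREATE TABLE book_authors (
    -- Deleting a book cascades into this join table automatically.
    book_fingerprint TEXT NOT NULL REFERENCES books(fingerprint) ON DELETE CASCADE,
    author_id        INTEGER NOT NULL,
    position         INTEGER NOT NULL  -- preserves display order
) STRICT;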

Data access layer

The cadmus_core::library::db::Db struct is the entry point for all library database operations. It wraps the shared SqlitePool and exposes a synchronous API by bridging every async SQLx call through the global Tokio runtime:

flowchart LR
    caller["Caller (sync event loop)"]
    Db["library::db::Db"]
    RUNTIME["RUNTIME.block_on()"]
    SQLx["SQLx async query"]
    SQLite[("SQLite")]

    caller -->|sync call| Db
    Db -->|wraps in| RUNTIME
    RUNTIME -->|awaits| SQLx
    SQLx -->|reads/writes| SQLite
    SQLx -->|result| RUNTIME
    RUNTIME -->|returns| caller

The sync bridge exists because Cadmus’s UI event loop is single-threaded and synchronous. The global RUNTIME (a tokio::runtime::Runtime singleton) lets the rest of the codebase call database methods without needing to be async.
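
As a hypothetical sketch of the bridge (the method name matches the table below, but the body is illustrative, not the actual source):

impl Db {
    pub fn get_library_by_path(&self, path: &str) -> sqlx::Result<Option<i64>> {
        // block_on bridges the async SQLx call into the sync event loop.
        RUNTIME.block_on(async {
            sqlx::query_scalar!("SELECT id FROM libraries WHERE path = ?", path)
                .fetch_optional(&self.pool)
                .await
        })
    }
}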

Key methods on Db:

Method              | Purpose
register_library    | Insert a new library row and return its id
get_library_by_path | Look up a library id by filesystem path
get_all_books       | Fetch every book in a library via the full-info view
insert_book         | Write a new book and its authors/categories
save_reading_state  | Save or update reading progress for a book
save_toc            | Bulk-write a book’s table of contents
get_thumbnail       | Retrieve the stored cover thumbnail BLOB
save_thumbnail      | Save or replace a cover thumbnail

How a book scan flows into the database

When a library directory is scanned, Cadmus follows this sequence:

sequenceDiagram
    participant Scanner as Library Scanner
    participant Db as library::db::Db
    participant SQLite

    Scanner->>Db: register_library(path, name)
    Db->>SQLite: INSERT INTO libraries
    SQLite-->>Db: library_id

    loop for each book file
        Scanner->>Db: insert_book(library_id, fp, info)
        Db->>SQLite: INSERT INTO books
        Db->>SQLite: INSERT INTO authors / book_authors
        Db->>SQLite: INSERT INTO book_categories
        Db->>SQLite: INSERT INTO library_books
    end

    loop for each book with reading progress
        Scanner->>Db: save_reading_state(fp, reader_info)
        Db->>SQLite: INSERT OR REPLACE INTO reading_states
    end

    loop for each book with a TOC
        Scanner->>Db: save_toc(fp, entries)
        Db->>SQLite: INSERT INTO toc_entries
    end

Runtime Migrations

Cadmus has two distinct migration pipelines:

  1. Schema migrations — plain .sql files in crates/core/migrations/, applied by SQLx’s migrate! macro at startup. Use these for CREATE TABLE, ALTER TABLE, and similar DDL changes.
  2. Runtime migrations — Rust async fn blocks declared with the migration! macro. Use these for one-time data operations: backfilling columns, importing legacy files, cleaning up obsolete rows, or any procedural work that goes beyond SQL DDL.

How runtime migrations work

flowchart TD
    ctor["#[ctor] runs at process start"]
    registry["Global REGISTRY HashMap<br>(migration id → async fn)"]
    startup["Database::migrate() called on startup"]
    schema["sqlx::migrate!() — applies .sql files"]
    runner["MigrationRunner::run_all()"]
    table["_cadmus_migrations table<br>(id, executed_at, status)"]
    pending["Filter: id NOT IN already-succeeded rows"]
    exec["Execute each pending migration in id order"]
    record["Record success or failure in _cadmus_migrations"]

    ctor --> registry
    startup --> schema
    schema --> runner
    runner --> table
    table --> pending
    pending --> exec
    exec --> record

At process start the #[ctor] attribute runs for every migration! call and inserts the migration function into a global HashMap. When Database::migrate is called during application startup, it first applies all pending SQL schema migrations, then calls MigrationRunner::run_all(), which:

  1. Reads _cadmus_migrations and collects IDs that already succeeded.
  2. Skips those; runs the remaining ones sorted by ID.
  3. Records each result (success or failure) before moving on.
  4. Continues past failures so one broken migration does not block others.
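
Reduced to a sketch, that loop might look like this. The registry shape, status strings, and exact SQL are assumptions based on the description above, not the actual implementation:

```rust
use std::collections::{BTreeMap, HashSet};
use std::future::Future;
use std::pin::Pin;

use sqlx::SqlitePool;

// Registered migrations: stable id → async fn (boxed so it can be stored).
type MigrationFn =
    for<'a> fn(&'a SqlitePool) -> Pin<Box<dyn Future<Output = sqlx::Result<()>> + 'a>>;

async fn run_all(
    pool: &SqlitePool,
    registry: &BTreeMap<String, MigrationFn>,
) -> sqlx::Result<()> {
    // 1. Collect the IDs that already succeeded.
    let done: HashSet<String> =
        sqlx::query_scalar("SELECT id FROM _cadmus_migrations WHERE status = 'success'")
            .fetch_all(pool)
            .await?
            .into_iter()
            .collect();

    // 2. BTreeMap iteration yields IDs in sorted order.
    for (id, migration) in registry {
        if done.contains(id) {
            continue;
        }
        // 3. Record the outcome either way; 4. keep going past failures.
        let status = if migration(pool).await.is_ok() { "success" } else { "failed" };
        sqlx::query(
            "INSERT OR REPLACE INTO _cadmus_migrations (id, executed_at, status) \
             VALUES (?, datetime('now'), ?)",
        )
        .bind(id.as_str())
        .bind(status)
        .execute(pool)
        .await?;
    }
    Ok(())
}
```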

A failed migration can be retried by deleting its tracking row (see Re-running a migration).

The migration! macro

cadmus_core::migration! takes a stable string ID and an async fn definition:

```rust
use sqlx::SqlitePool;

cadmus_core::migration!(
    /// One-line doc comment forwarded to rustdoc.
    "v1_my_migration",
    async fn my_migration(pool: &SqlitePool) {
        sqlx::query!("UPDATE books SET title = TRIM(title)")
            .execute(pool)
            .await?;
        Ok(())
    }
);
```

The macro:

  • Generates the async fn with the provided body.
  • Creates a public submodule named after the function that exposes a MIGRATION_ID constant — useful for tests and cross-references (see the sketch after this list).
  • Registers the function in the global REGISTRY via #[ctor] so it runs automatically without any manual wiring.
  • Forwards doc comments onto the generated items so rustdoc picks them up.
  • Appends the migration ID and a re-run SQL snippet to the generated docs.
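
For instance, a test can use the generated constant instead of repeating the string. A sketch, assuming the v1_my_migration example above lives in the enclosing module:

```rust
#[cfg(test)]
mod tests {
    // The macro generated a `my_migration` submodule exposing MIGRATION_ID.
    use super::my_migration::MIGRATION_ID;

    #[test]
    fn migration_id_is_stable() {
        // Guards against accidentally renaming a deployed migration.
        assert_eq!(MIGRATION_ID, "v1_my_migration");
    }
}
```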

Where to put migration code

Co-locate migrations with the feature they belong to. The library subsystem’s migrations live in crates/core/src/library/migrations.rs; a hypothetical reader subsystem would put its migrations in crates/core/src/reader/migrations.rs.

The module only needs to be declared once so the #[ctor] registration runs. There is no central registry file to update.
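
For example, a new feature's migrations need nothing beyond the module declaration (the path here mirrors the hypothetical example above):

```rust
// crates/core/src/my_feature/mod.rs
pub mod migrations; // declaring the module once is enough; #[ctor] registers everything
```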

Writing a migration step by step

1. Choose a stable ID

The ID is the primary key in _cadmus_migrations. Once a migration has been deployed it must never be renamed, because existing installations track it by this string.

Convention: v<N>_<short_description>, for example v1_backfill_book_language.

2. Create the migration file (or add to an existing one)

```rust
// crates/core/src/my_feature/migrations.rs

use sqlx::SqlitePool;

cadmus_core::migration!(
    /// Backfills the `language` column for books that were imported before
    /// language detection was added.
    "v1_backfill_book_language",
    async fn backfill_book_language(pool: &SqlitePool) {
        sqlx::query!(
            "UPDATE books SET language = 'en' WHERE language = '' OR language IS NULL"
        )
        .execute(pool)
        .await?;

        Ok(())
    }
);
```

3. Use SQLx typed macros

All SQL inside a migration must use the typed macros for compile-time verification (see SQLite & SQLx):

```rust
// ✅ Good — compile-time checked
sqlx::query!("DELETE FROM books WHERE file_path = ?", path)
    .execute(pool)
    .await?;

// ❌ Bad — untyped, bypasses verification
sqlx::query("DELETE FROM books WHERE file_path = ?")
    .bind(path)
    .execute(pool)
    .await?;
```

4. Make it idempotent

Use INSERT OR IGNORE, ON CONFLICT DO NOTHING, or guard with a WHERE clause so the migration is safe to re-run without corrupting data.
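
Both patterns, sketched against the books table. The WHERE guard makes the UPDATE a no-op on a second run, and INSERT OR IGNORE tolerates rows created by an earlier attempt (the `name` column on `authors` is assumed for illustration):

```rust
use sqlx::SqlitePool;

async fn backfill_example(pool: &SqlitePool) -> sqlx::Result<()> {
    // Guarded UPDATE: rows that were already backfilled are not touched again.
    sqlx::query!("UPDATE books SET language = 'en' WHERE language IS NULL")
        .execute(pool)
        .await?;

    // INSERT OR IGNORE: safe to re-run if the row already exists.
    sqlx::query!("INSERT OR IGNORE INTO authors (name) VALUES ('Unknown')")
        .execute(pool)
        .await?;

    Ok(())
}
```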

5. Regenerate .sqlx metadata

After adding or changing any query in the migration, regenerate the compile-time metadata:

```sh
cargo sqlx prepare --all --workspace
```

Commit the updated .sqlx/ files alongside your migration code.

Re-running a migration

Delete the tracking row, then restart the application:

```sql
DELETE FROM _cadmus_migrations WHERE id = 'v1_my_migration';
```

The next startup will treat the migration as pending and run it again.

API reference

Investigations

When something needs investigating, the investigation and its results are documented here, so the work is not lost and can be referred to later.

| Date | Platform | Title |
| --- | --- | --- |
| 2026-03-24 | Kobo | DHCP IP Address Changes on WiFi Toggle |

DHCP IP Address Changes on WiFi Toggle

After identifying that killing the original dhcpcd and replacing it with udhcpc caused the issue reported in #51, I confirmed the fix works in testing but wanted to understand the root cause before finalising PR #299.


Summary

Investigated the DHCP behaviour by inspecting the running device alongside the KOReader source tree and the original Plato shell scripts.

What is actually running on the device

```
1074  /libexec/dhcpcd-dbus
1110  wpa_supplicant -D nl80211 -s -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf ...
1120  dhcpcd -d -z wlan0
```

Nickel uses dhcpcd, not udhcpc. There is no udhcpc running under normal operation.

Why the old script caused a new IP every toggle

The original scripts/wifi-enable.sh (inherited from Plato) always spawned a fresh udhcpc:

```sh
udhcpc -S -i "$INTERFACE" -s /etc/udhcpc.d/default.script -t15 -T10 -A3 -b -q > /dev/null &
```

For reference, the relevant flags as documented by the device's BusyBox build:

```
[root@monza root]# udhcpc --help
BusyBox v1.35.99.139-g15f7d618e (2021-11-14 22:54:11 CET) multi-call binary.

Usage: udhcpc [-fbqvRB] [-a[MSEC]] [-t N] [-T SEC] [-A SEC|-n]
 [-i IFACE] [-P PORT] [-s PROG] [-p PIDFILE]
 [-oC] [-r IP] [-V VENDOR] [-F NAME] [-x OPT:VAL]... [-O OPT]...

 -i IFACE Interface to use (default eth0)
 -P PORT  Use PORT (default 68)
 -s PROG  Run PROG at DHCP events (default /usr/share/udhcpc/default.script)
 -p FILE  Create pidfile
 -B  Request broadcast replies
 -t N  Send up to N discover packets (default 3)
 -T SEC  Pause between packets (default 3)
 -A SEC  Wait if lease is not obtained (default 20)
 -b  Background if lease is not obtained
 -n  Exit if lease is not obtained
 -q  Exit after obtaining lease
 -R  Release IP on exit
 -f  Run in foreground
 -S  Log to syslog too
 -a[MSEC] Validate offered address with ARP ping
 -r IP  Request this IP address
 -o  Don't request any options (unless -O is given)
 -O OPT  Request option OPT from server (cumulative)
 -x OPT:VAL Include option OPT in sent packets (cumulative)
   Examples of string, numeric, and hex byte opts:
   -x hostname:bbox - option 12
   -x lease:3600 - option 51 (lease time)
   -x 0x3d:0100BEEFC0FFEE - option 61 (client id)
   -x 14:'"dumpfile"' - option 14 (shell-quoted)
 -F NAME  Ask server to update DNS mapping for NAME
 -V VENDOR Vendor identifier (default 'udhcp VERSION')
 -C  Don't send MAC as client identifier
 -v  Verbose
Signals:
 USR1 Renew lease
 USR2 Release lease
```

The -q flag is the core issue. It tells udhcpc to quit immediately after obtaining a lease. The full lifecycle on every WiFi toggle was:

  1. udhcpc spawned → sends DISCOVER → gets OFFER → sends REQUEST → gets ACK → runs default.script bound (sets IP, rewrites resolv.conf) → process exits
  2. WiFi disabled → killall udhcpc default.script → nothing to kill; the process already exited in step 1
  3. WiFi enabled again → repeat from step 1 with zero memory of the previous lease

Because the process exits after getting the lease, there is no daemon to renew it and no lease file written anywhere. Busybox’s udhcpc has no lease persistence mechanism. On the next cycle it sends a bare DISCOVER with no preferred-IP hint (no DHCP Option 50), so the DHCP server is free to hand out any address from its pool.

Additionally, cadmus.sh (previously plato.sh) was killing Nickel’s already-running dhcpcd at startup:

```sh
killall -TERM nickel hindenburg sickel fickel adobehost foxitpdf iink dhcpcd-dbus dhcpcd fmon
```

So even the stateful daemon that Nickel had set up (which would have requested the same IP again) was torn down before Cadmus replaced it with a stateless udhcpc -q.

What default.script does

/etc/udhcpc.d/default.script is a minimal Busybox hook called by udhcpc on lease events. The relevant part:

case "$1" in
    renew|bound|probe)
        /sbin/ifconfig $interface $ip $BROADCAST $NETMASK
        # ... deletes all default routes, adds new ones from $router ...
        echo -n > $RESOLV_CONF          # ← truncates resolv.conf to zero
        for i in $dns ; do
            echo nameserver $i >> $RESOLV_CONF
        done
        ;;
esac
echo network $1 ip="$ip" ... > /tmp/nickel-hardware-status &

Notable behaviours:

  • Wipes resolv.conf on every bound event, which is why KOReader’s disable-wifi.sh saves and restores resolv.conf with an md5 check: a safety net against udhcpc’s script wiping DNS on lease release.
  • Writes to /tmp/nickel-hardware-status, which is a FIFO Nickel listens on for network events. KOReader explicitly removes this FIFO (rm -f /tmp/nickel-hardware-status) to prevent scripts hanging when Nickel is not running.

Why dhcpcd produces stable IPs

dhcpcd writes a per-SSID lease file to /var/db/:

```
/var/db/dhcpcd-wlan0-1.lease
/var/db/dhcpcd-wlan0-2.lease
/var/db/dhcpcd-wlan0-3.lease
/var/db/dhcpcd-wlan0-4.lease
```

/var/db lives on the root eMMC partition (/dev/mmcblk0p10), not a tmpfs. It survives reboots.

When reconnecting to a known SSID, dhcpcd reads the matching .lease file, parses the previously-held IP, and sends a DHCP REQUEST directly for that IP (DHCP Option 50: Requested IP Address), skipping DISCOVER entirely if the lease has not expired. The DHCP server sees a familiar MAC + familiar IP request and simply ACKs it.

The lease file for the current network encodes 192.x.x.x in the yiaddr field of the saved BOOTREPLY, which is exactly the IP the device is using right now.

Filesystem layout: what is ephemeral vs persistent

| Path | Type | Persistent? |
| --- | --- | --- |
| /tmp | tmpfs | No |
| /var/lib | tmpfs (16 KiB) | No |
| /var/run | tmpfs (128 KiB) | No |
| /var/log | tmpfs (16 KiB) | No |
| /var/db | eMMC (mmcblk0p10) | Yes |
| /etc | eMMC | Yes |

dhcpcd’s runtime state (/var/run/dhcpcd.pid, /var/run/dhcpcd.sock) is ephemeral as expected, but the lease database in /var/db/ persists across reboots. That persistence is the architectural key to stable addresses.

Why KOReader kills dhcpcd in its startup script but still gets stable IPs

From koreader.sh:

```sh
# NOTE: We kill Nickel's master dhcpcd daemon on purpose,
#       as we want to be able to use our own per-if processes w/ custom args later on.
#       A SIGTERM does not break anything, it'll just prevent automatic lease renewal
#       until the time KOReader actually sets the if up itself (i.e., it'll do)...
killall -q -TERM nickel ... dhcpcd-dbus dhcpcd ...
```

KOReader kills Nickel’s dhcpcd because it wants to start its own instance with custom arguments later via obtain-ip.sh. Crucially, obtain-ip.sh prefers dhcpcd over udhcpc:

```sh
# NOTE: Prefer dhcpcd over udhcpc if available. That's what Nickel uses,
#       and udhcpc appears to trip some insanely wonky corner cases on current FW (#6421)
if [ -x "/sbin/dhcpcd" ]; then
    dhcpcd -d -t 30 -w "${INTERFACE}"
else
    udhcpc -S -i "${INTERFACE}" -s /etc/udhcpc.d/default.script -b -q
fi
```

The new dhcpcd instance KOReader starts reads the same /var/db/*.lease files and requests the same IP, giving a stable address despite the kill/restart cycle.

Why the fix in PR #299 works

The fix is twofold:

  1. Remove dhcpcd from the kill list in cadmus.sh. Nickel’s running dhcpcd -d -z wlan0 instance survives into the Cadmus session, continuously managing the lease.

  2. Do not start udhcpc at all in the native Rust WiFi implementation. No new DHCP client is spawned on toggle, so the already-running dhcpcd is never displaced.

The result: one long-lived dhcpcd daemon manages the lease for the entire session, renews it in the background, and requests the same IP on every reconnect using the persisted /var/db/*.lease file.