fruix/docs/system-deployment-workflow.md

Fruix system deployment workflow

Date: 2026-04-04

Purpose

This document defines the current canonical Fruix workflow for:

  • building a declarative system closure
  • materializing deployable artifacts
  • installing a declarative system onto an image or disk
  • booting through installer media
  • rolling forward to a candidate system
  • switching an installed system to a staged candidate generation
  • rolling an installed system back to an earlier recorded generation

This is the Phase 19 operator-facing view of the system model, validated through explicit installed-system generation switching and rollback.

Core model

A Fruix system workflow starts from a Scheme file that binds an operating-system object.

Today, the canonical frontend is:

  • ./bin/fruix system ...

The important output objects are:

  • system closure
    • a content-addressed store item under /frx/store/*-fruix-system-<host-name>
    • includes boot assets, activation logic, profile tree, metadata, and references
  • rootfs tree
    • a materialized runtime tree for inspection or image staging
  • disk image
    • a bootable GPT/UEFI raw disk image
  • installer image
    • a bootable Fruix installer disk image that installs a selected target system from inside the guest
  • installer ISO
    • a bootable UEFI ISO with an embedded installer mdroot payload
  • install metadata
    • /var/lib/fruix/install.scm on installed targets
    • records the selected closure path, install spec, and referenced store items including source provenance

The current deployment story is therefore already declaration-driven and content-addressed, even before first-class installed-system generations are modeled more explicitly.

Canonical command surface

Build a system closure

sudo env HOME="$HOME" \
  GUILE_AUTO_COMPILE=0 \
  GUIX_SOURCE_DIR="$HOME/repos/guix" \
  GUILE_BIN="/tmp/guile-freebsd-validate-install/bin/guile" \
  GUILE_EXTRA_PREFIX="/tmp/guile-gnutls-freebsd-validate-install" \
  SHEPHERD_PREFIX="/tmp/shepherd-freebsd-validate-install" \
  ./bin/fruix system build path/to/system.scm --system my-operating-system

Primary result:

  • closure_path=/frx/store/...-fruix-system-...

Use this when you want to:

  • validate the declarative system composition itself
  • inspect provenance/layout metadata
  • compare candidate and current closure paths
  • drive later rootfs/image/install steps from the same declaration
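
Comparing a candidate closure against the current one reduces to comparing the two content-addressed store paths. A minimal sketch, using made-up placeholder paths rather than real store items:

```shell
# Compare a current and a candidate closure path.
# Both paths below are fabricated placeholders for illustration.
current=/frx/store/aaaa1111-fruix-system-myhost
candidate=/frx/store/bbbb2222-fruix-system-myhost

if [ "$current" = "$candidate" ]; then
  result=unchanged   # same content-addressed closure: nothing to deploy
else
  result=changed     # the declaration produced a different closure
fi
echo "$result"
```

Because closures are content-addressed, an identical path means an identical system; any difference in the declaration that matters shows up as a different path.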

Materialize a rootfs tree

sudo env HOME="$HOME" ... \
  ./bin/fruix system rootfs path/to/system.scm ./rootfs --system my-operating-system

Primary result:

  • rootfs=...
  • closure_path=/frx/store/...

Use this when you want to:

  • inspect the runtime filesystem layout directly
  • stage a tree for debugging
  • validate /run/current-system-style symlink layout without booting a full image
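
The symlink-layout check can be sanity-tested in plain shell. The tree below is a fabricated stand-in for a real materialized ./rootfs, not output from the tool:

```shell
# Simulate a materialized rootfs tree and verify that its
# run/current-system symlink resolves to a store-like path.
# All paths are placeholders for illustration.
rootfs=$(mktemp -d)
mkdir -p "$rootfs/frx/store/cccc3333-fruix-system-demo" "$rootfs/run"
ln -s /frx/store/cccc3333-fruix-system-demo "$rootfs/run/current-system"

link_target=$(readlink "$rootfs/run/current-system")
echo "run/current-system -> $link_target"
```

The same readlink check against a tree produced by fruix system rootfs confirms the layout without booting anything.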

Materialize a bootable disk image

sudo env HOME="$HOME" ... \
  ./bin/fruix system image path/to/system.scm \
  --system my-operating-system \
  --root-size 6g

Primary result:

  • disk_image=/frx/store/.../disk.img

Use this when you want to:

  • boot the system directly as a VM image
  • test a candidate deployment under QEMU or XCP-ng
  • validate a roll-forward or rollback candidate by image boot
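
A local boot might look like the sketch below, assuming QEMU with edk2 UEFI firmware. The firmware path, machine flags, and image path are all host-specific assumptions, so the command is printed rather than executed:

```shell
# Assemble (but do not run) a QEMU/UEFI invocation for a Fruix
# disk image. DISK and FIRMWARE are placeholder paths that vary
# by host OS and package.
DISK=./disk.img
FIRMWARE=/usr/local/share/edk2/QEMU_UEFI-x86_64.fd

set -- qemu-system-x86_64 \
  -machine q35 -m 2048 \
  -bios "$FIRMWARE" \
  -drive "file=$DISK,format=raw,if=virtio" \
  -nographic
echo "would run: $*"
# To actually boot once the paths are real:
# "$@"
```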

Install directly to an image file or block device

sudo env HOME="$HOME" ... \
  ./bin/fruix system install path/to/system.scm \
  --system my-operating-system \
  --target ./installed.img \
  --disk-capacity 12g \
  --root-size 10g

Primary result:

  • target=...
  • target_kind=raw-file or block-device
  • install_metadata_path=/var/lib/fruix/install.scm

Use this when you want to:

  • produce an installed target image without booting an installer guest
  • validate installation mechanics directly
  • populate a raw image or a real /dev/... target

Materialize a bootable installer disk image

sudo env HOME="$HOME" ... \
  ./bin/fruix system installer path/to/system.scm \
  --system my-operating-system \
  --install-target-device /dev/vtbd1 \
  --root-size 10g

Primary result:

  • installer_disk_image=/frx/store/.../disk.img

Use this when you want to:

  • boot a Fruix installer environment as a disk image
  • let the in-guest installer partition and install onto a second disk
  • validate non-interactive installation from inside a booted Fruix guest

Materialize a bootable installer ISO

sudo env HOME="$HOME" ... \
  ./bin/fruix system installer-iso path/to/system.scm \
  --system my-operating-system \
  --install-target-device /dev/vtbd0

Primary result:

  • iso_image=/frx/store/.../installer.iso
  • boot_efi_image=/frx/store/.../efiboot.img
  • root_image=/frx/store/.../root.img

Use this when you want to:

  • boot through UEFI ISO media instead of a writable installer disk image
  • install from an ISO-attached Fruix environment
  • test the same install model on more realistic VM paths

Installed-system generation commands

Installed Fruix systems now also ship a small in-guest deployment helper at:

  • /usr/local/bin/fruix

Current validated in-guest commands are:

fruix system status
fruix system switch /frx/store/...-fruix-system-...
fruix system rollback

Current intended usage:

  1. build a candidate closure on the operator side with ./bin/fruix system build
  2. ensure that candidate closure is present on the installed target's /frx/store
  3. run fruix system switch /frx/store/... on the installed system
  4. reboot into the staged candidate generation
  5. if needed, run fruix system rollback
  6. reboot back into the recorded rollback generation

Important current limitation:

  • fruix system switch does not yet fetch or copy the candidate closure onto the target for you
  • it assumes the selected closure is already present in the installed system's /frx/store
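
Until closure transfer is first-class, that staging step happens out of band. One hypothetical shape for it, using throwaway directories in place of real operator and target stores (cp stands in for whatever transport is actually used, and the reference list must come from the build metadata, whose format this document does not specify):

```shell
# Hand-stage a candidate closure into a target store.
# Everything below is illustrative: temp dirs stand in for the
# operator's and target's /frx/store trees.
set -eu
work=$(mktemp -d)
src="$work/operator-store"   # stand-in for the operator's /frx/store
dst="$work/target-store"     # stand-in for the target's /frx/store
mkdir -p "$src/abc-fruix-system-demo" "$dst"
echo demo > "$src/abc-fruix-system-demo/metadata.scm"

# The closure path plus every store item in its reference list
# (here just one placeholder entry) must be copied.
refs="$src/abc-fruix-system-demo"
for p in $refs; do
  cp -R "$p" "$dst/"         # a real transfer would use rsync/ssh
done
ls "$dst"
```

Copying only the top-level closure path without its references would leave a broken system, which is why the reference list from the build metadata matters here.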

Deployment patterns

1. Build-first workflow

The default Fruix operator workflow starts by building the closure first:

  1. edit the system declaration
  2. run fruix system build
  3. inspect emitted metadata
  4. if needed, produce one of:
    • rootfs
    • image
    • install
    • installer
    • installer-iso

This keeps the declaration-to-closure boundary explicit.

2. VM image deployment workflow

Use this when you want to boot a system directly rather than through an installer.

  1. run fruix system image
  2. boot the image in QEMU or convert/import it for XCP-ng
  3. validate:
    • /run/current-system
    • shepherd/sshd state
    • activation log
  4. keep the closure path from the build metadata as the deployment identity

This is the current canonical direct deployment path for already-built images.

3. Direct installation workflow

Use this when you want an installed target image or disk without a booted installer guest.

  1. run fruix system install --target ...
  2. let Fruix partition, format, populate, and install the target
  3. boot the installed result
  4. validate /var/lib/fruix/install.scm and target services

This is the most direct install path.

4. Installer-environment workflow

Use this when the install itself should happen from inside a booted Fruix environment.

  1. run fruix system installer
  2. boot the installer disk image
  3. let the in-guest installer run onto the selected target device
  4. boot the installed target

This is useful when the installer environment itself is part of what needs validation.

5. Installer-ISO workflow

Use this when the desired operator artifact is a bootable UEFI ISO.

  1. run fruix system installer-iso
  2. boot the ISO under the target virtualization path
  3. let the in-guest installer run onto the selected target device
  4. eject the ISO and reboot the installed target

This is now validated on both:

  • local QEMU/UEFI/TCG
  • the approved real XCP-ng VM path

Install-target device conventions

The install target device is not identical across all boot styles.

Current validated defaults are:

  • direct installer disk-image path under QEMU:
    • /dev/vtbd1
  • installer ISO path under QEMU:
    • /dev/vtbd0
  • installer ISO path under XCP-ng:
    • /dev/ada0

Therefore the canonical workflow is:

  • always treat --install-target-device as an explicit deployment parameter when moving between virtualization environments

Do not assume that a device name validated in one harness is portable to another.

Installed-system generation layout

Installed Fruix systems now record an explicit first-generation deployment layout under:

  • /var/lib/fruix/system

Initial installed shape:

/var/lib/fruix/system/
  current -> generations/1
  current-generation
  generations/
    1/
      closure -> /frx/store/...-fruix-system-...
      metadata.scm
      provenance.scm
      install.scm   # present on installed targets

After a validated in-place switch, the layout extends to:

/var/lib/fruix/system/
  current -> generations/2
  current-generation
  rollback -> generations/1
  rollback-generation
  generations/
    1/
      ...
    2/
      closure -> /frx/store/...-fruix-system-...
      metadata.scm
      provenance.scm
      install.scm   # deployment metadata for the switch operation
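
The pointer structure above can be reproduced in plain shell to see how the links relate. This is purely a simulation of the on-disk shape; the real layout is written by the Fruix tooling, not by hand:

```shell
# Simulate /var/lib/fruix/system after a switch from
# generation 1 to generation 2. Paths are placeholders.
state=$(mktemp -d)
mkdir -p "$state/generations/1" "$state/generations/2"
ln -s generations/2 "$state/current"
ln -s generations/1 "$state/rollback"
echo 2 > "$state/current-generation"
echo 1 > "$state/rollback-generation"

echo "current  -> $(readlink "$state/current")"
echo "rollback -> $(readlink "$state/rollback")"
```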

Installed systems also now create explicit GC-root-style deployment links under:

  • /frx/var/fruix/gcroots

Current validated shape:

/frx/var/fruix/gcroots/
  current-system -> /frx/store/...-fruix-system-...
  rollback-system -> /frx/store/...-fruix-system-...
  system-1 -> /frx/store/...-fruix-system-...
  system-2 -> /frx/store/...-fruix-system-...

Important detail:

  • /run/current-system still points directly at the active closure path in /frx/store
  • the explicit generation layout therefore adds deployment metadata and retention roots without changing the already-validated runtime contract used by activation, rc.d wiring, and tests

Roll-forward workflow

The current Fruix roll-forward model now has two validated layers.

Declaration/deployment roll-forward

Canonical process:

  1. keep the current known-good system declaration
  2. prepare a candidate declaration, which may differ by:
    • FreeBSD base identity
    • source revision
    • services
    • users/groups
    • other operating-system fields
  3. run fruix system build for the candidate
  4. materialize either:
    • fruix system image
    • fruix system install
    • fruix system installer
    • fruix system installer-iso
  5. boot or install the candidate
  6. validate the candidate closure in the booted system

Installed-system generation roll-forward

When the candidate closure is already present on an installed target:

  1. run fruix system switch /frx/store/...candidate...
  2. confirm the staged state with fruix system status
  3. reboot into the candidate generation
  4. validate the new active closure after reboot

The important property is still that the candidate closure appears beside the earlier one in /frx/store rather than mutating it in place.

Rollback workflow

The canonical rollback workflow likewise has two validated layers.

Declaration/deployment rollback

You can still roll back by redeploying the earlier declaration:

  1. retain the earlier declaration that produced the known-good closure
  2. rebuild or rematerialize that earlier declaration
  3. redeploy or reboot that earlier artifact again

Concretely, the usual declaration-level rollback choices are:

  • rebuild the earlier declaration with fruix system build and confirm the old closure path reappears
  • boot the earlier declaration again through fruix system image
  • reinstall the earlier declaration through fruix system install, installer, or installer-iso if the deployment medium itself must change

Installed-system generation rollback

When an installed target already has both the current and rollback generations recorded:

  1. run fruix system rollback
  2. confirm the staged state with fruix system status
  3. reboot into the rollback generation
  4. validate the restored active closure after reboot
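
The observable effect on the generation pointers can be sketched as below, assuming, for illustration only, that rollback exchanges the current and rollback pointers; the real tool's mechanics are not specified in this document:

```shell
# Illustrate the effect of a rollback on the generation
# pointers, assuming it swaps current and rollback. This is a
# simulation of the observable result, not the tool itself.
state=$(mktemp -d)
mkdir -p "$state/generations/1" "$state/generations/2"
ln -s generations/2 "$state/current"
ln -s generations/1 "$state/rollback"

# Swap the two symlinks.
cur=$(readlink "$state/current")
rb=$(readlink "$state/rollback")
rm "$state/current" "$state/rollback"
ln -s "$rb" "$state/current"
ln -s "$cur" "$state/rollback"

echo "current  -> $(readlink "$state/current")"
echo "rollback -> $(readlink "$state/rollback")"
```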

This installed-system rollback path is now validated on local QEMU/UEFI/TCG.

Important scope note

This is not yet the same thing as Guix's full reconfigure/generation UX.

Current installed-system rollback is intentionally modest:

  • it switches between already-recorded generations on the target
  • it does not yet fetch candidate closures onto the machine for you
  • it does not yet expose a richer history-management or generation-pruning policy

Still pending:

  • operator-facing closure transfer or fetch onto installed systems
  • multi-generation lifecycle policy beyond the validated current and rollback pointers
  • a fuller reconfigure-style installed-system UX

Provenance and deployment identity

For any serious deployment or rollback decision, the canonical identity is not merely the host name. It is the emitted metadata:

  • closure_path
  • declared FreeBSD base/source metadata
  • materialized source store paths
  • install metadata at /var/lib/fruix/install.scm
  • store item counts and reference lists

Operators should retain metadata from successful candidate and current deployments because Fruix already emits enough data to answer:

  • which declaration was built
  • which closure booted
  • which source snapshot was materialized
  • which target device or image was installed
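
One lightweight way to retain that identity is an append-only deployment log kept alongside the declarations. The file name and record format below are arbitrary illustrative choices, not a Fruix convention:

```shell
# Append a deployment record: UTC timestamp plus closure path.
# The log location, format, and closure path are placeholders.
log=$(mktemp)
closure=/frx/store/dddd4444-fruix-system-myhost
printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$closure" >> "$log"
tail -n 1 "$log"
```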

Current limitations

The deployment workflow is coherent, and Fruix now has a validated installed-system switch/rollback path, but this is not yet the final generation-management story.

Not yet first-class:

  • host-side closure transfer/fetch onto installed systems as part of fruix system switch
  • a fuller reconfigure workflow that builds and stages the new closure from inside the target environment
  • multi-generation lifecycle policy beyond the validated current and rollback pointers
  • generation pruning and retention policy independent of full redeploy

Those are the next logical steps after the current explicit-generation switch/rollback model.

Summary

The current canonical Fruix deployment model is:

  • declare a system in Scheme
  • build the closure with fruix system build
  • materialize the artifact appropriate to the deployment target
  • boot or install that artifact
  • identify deployments by closure path and provenance metadata
  • on installed systems, switch to a staged candidate with fruix system switch
  • on installed systems, roll back to the recorded rollback generation with fruix system rollback
  • still use declaration/redeploy rollback when the target does not already have the desired closure staged locally

That is the operator-facing workflow Fruix should document and use while its installed-system generation UX remains simpler than Guix's mature in-place system-generation workflow.