fruix/docs/system-deployment-workflow.md

Fruix system deployment workflow

Date: 2026-04-06

Purpose

This document defines the current canonical Fruix workflow for:

  • building a declarative system closure
  • materializing deployable artifacts
  • installing a declarative system onto an image or disk
  • booting through installer media
  • rolling forward to a candidate system
  • switching an installed system to a staged candidate generation
  • rolling an installed system back to an earlier recorded generation

This is the Phase 19 operator-facing view of the system model, as validated through explicit installed-system generation switching and rollback.

Core model

A Fruix system workflow starts from a Scheme file that binds an operating-system object.

Today, the canonical frontend is:

  • ./bin/fruix system ...

The important output objects are:

  • system closure
    • a content-addressed store item under /frx/store/*-fruix-system-<host-name>
    • includes boot assets, activation logic, profile tree, metadata, and references
  • rootfs tree
    • a materialized runtime tree for inspection or image staging
  • disk image
    • a bootable GPT/UEFI raw disk image
  • installer image
    • a bootable Fruix installer disk image that installs a selected target system from inside the guest
  • installer ISO
    • a bootable UEFI ISO with an embedded installer mdroot payload
  • install metadata
    • /var/lib/fruix/install.scm on installed targets
    • records the selected closure path, install spec, and referenced store items including source provenance

The current deployment story is therefore already declaration-driven and content-addressed, even before first-class installed-system generations are modeled more explicitly.

Canonical command surface

Build a system closure

sudo env HOME="$HOME" \
  GUILE_AUTO_COMPILE=0 \
  GUIX_SOURCE_DIR="$HOME/repos/guix" \
  GUILE_BIN="/tmp/guile-freebsd-validate-install/bin/guile" \
  GUILE_EXTRA_PREFIX="/tmp/guile-gnutls-freebsd-validate-install" \
  SHEPHERD_PREFIX="/tmp/shepherd-freebsd-validate-install" \
  ./bin/fruix system build path/to/system.scm --system my-operating-system

Primary result:

  • closure_path=/frx/store/...-fruix-system-...

Use this when you want to:

  • validate the declarative system composition itself
  • inspect provenance/layout metadata
  • compare candidate and current closure paths
  • drive later rootfs/image/install steps from the same declaration
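
Because later steps are driven by the emitted `closure_path=` line, a small parsing helper can capture it. This is an illustrative sketch (the helper name and the simulated output line are assumptions; only the `closure_path=` key is taken from this document):

```shell
# Illustrative helper, not a fruix command: extract the closure path from the
# key=value lines that `fruix system build` prints, as documented above.
extract_closure_path() {
  # Reads build output on stdin; prints the first closure_path value,
  # failing (non-zero exit) if none is present.
  sed -n 's/^closure_path=//p' | head -n 1 | grep .
}

# Simulated build output for demonstration:
printf 'closure_path=/frx/store/abc123-fruix-system-demo\n' | extract_closure_path
```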

Materialize a rootfs tree

sudo env HOME="$HOME" ... \
  ./bin/fruix system rootfs path/to/system.scm ./rootfs --system my-operating-system

Primary result:

  • rootfs=...
  • closure_path=/frx/store/...

Use this when you want to:

  • inspect the runtime filesystem layout directly
  • stage a tree for debugging
  • validate /run/current-system-style symlink layout without booting a full image

Materialize a bootable disk image

sudo env HOME="$HOME" ... \
  ./bin/fruix system image path/to/system.scm \
  --system my-operating-system \
  --root-size 6g

Primary result:

  • disk_image=/frx/store/.../disk.img

Use this when you want to:

  • boot the system directly as a VM image
  • test a candidate deployment under QEMU or XCP-ng
  • validate a roll-forward or rollback candidate by image boot

Install directly to an image file or block device

sudo env HOME="$HOME" ... \
  ./bin/fruix system install path/to/system.scm \
  --system my-operating-system \
  --target ./installed.img \
  --disk-capacity 12g \
  --root-size 10g

Primary result:

  • target=...
  • target_kind=raw-file or block-device
  • install_metadata_path=/var/lib/fruix/install.scm

Use this when you want to:

  • produce an installed target image without booting an installer guest
  • validate installation mechanics directly
  • populate a raw image or a real /dev/... target

Materialize a bootable installer disk image

sudo env HOME="$HOME" ... \
  ./bin/fruix system installer path/to/system.scm \
  --system my-operating-system \
  --install-target-device /dev/vtbd1 \
  --root-size 10g

Primary result:

  • installer_disk_image=/frx/store/.../disk.img

Use this when you want to:

  • boot a Fruix installer environment as a disk image
  • let the in-guest installer partition and install onto a second disk
  • validate non-interactive installation from inside a booted Fruix guest

Materialize a bootable installer ISO

sudo env HOME="$HOME" ... \
  ./bin/fruix system installer-iso path/to/system.scm \
  --system my-operating-system \
  --install-target-device /dev/vtbd0

Primary result:

  • iso_image=/frx/store/.../installer.iso
  • boot_efi_image=/frx/store/.../efiboot.img
  • root_image=/frx/store/.../root.img

Use this when you want to:

  • boot through UEFI ISO media instead of a writable installer disk image
  • install from an ISO-attached Fruix environment
  • test the same install model on more realistic VM paths

Installed-system generation commands

Installed Fruix systems now also ship a small in-guest deployment helper at:

  • /usr/local/bin/fruix

Current validated in-guest commands are:

fruix system build
fruix system reconfigure
fruix system status
fruix system switch /frx/store/...-fruix-system-...
fruix system rollback

Installed systems now carry canonical declaration state in:

  • /run/current-system/metadata/system-declaration.scm
  • /run/current-system/metadata/system-declaration-info.scm
  • /run/current-system/metadata/system-declaration-system

So the in-guest helper can now build from the node's own embedded declaration inputs.

Current validated build/reconfigure behavior is:

  • fruix system build
    • with no extra arguments, builds from the embedded current declaration
  • fruix system reconfigure
    • with no extra arguments, builds from the embedded current declaration and stages a switch to the resulting closure
  • both commands can also take an explicit declaration file plus --system NAME

Current intended usage now has two validated patterns.

Pattern A: build elsewhere, then switch/rollback locally

  1. build a candidate closure on the operator side with ./bin/fruix system build
  2. ensure that candidate closure is present on the installed target's /frx/store
  3. run fruix system switch /frx/store/... on the installed system
  4. reboot into the staged candidate generation
  5. if needed, run fruix system rollback
  6. reboot back into the recorded rollback generation

Important current limitation of this lower-level pattern:

  • fruix system switch does not yet fetch or copy the candidate closure onto the target for you
  • it assumes the selected closure is already present in the installed system's /frx/store
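
That presence assumption can be turned into a small pre-switch guard. The helper name and the parameterized store root are illustrative (the real store root is /frx/store); only the `*-fruix-system-*` naming pattern comes from this document:

```shell
# Illustrative pre-switch guard, not a fruix command: check that a candidate
# closure path lies under the store root and actually exists on disk.
closure_is_staged() {
  # $1: store root, $2: candidate closure path
  store_root="$1"; candidate="$2"
  case "$candidate" in
    "$store_root"/*-fruix-system-*) [ -d "$candidate" ] ;;
    *) return 1 ;;
  esac
}

# Demonstration against a temporary stand-in for /frx/store:
store="$(mktemp -d)"
mkdir -p "$store/abc-fruix-system-demo"
closure_is_staged "$store" "$store/abc-fruix-system-demo" && echo staged
closure_is_staged "$store" "$store/missing-fruix-system-x" || echo missing
```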

Pattern B: build and reconfigure from the node itself

  1. inspect or edit the node declaration inputs
    • embedded current declaration, or
    • an explicit replacement declaration file
  2. run:

fruix system build

or:

fruix system build /path/to/candidate.scm --system my-operating-system

  3. stage a local generation update with:

fruix system reconfigure

or:

fruix system reconfigure /path/to/candidate.scm --system my-operating-system

  4. reboot into the staged generation
  5. if needed, run fruix system rollback
  6. reboot back into the recorded prior generation

In-guest development and build environments

Opt-in systems can now expose two separate overlays above the main runtime profile:

  • development:
    • /run/current-system/development-profile
    • /run/current-development
    • /usr/local/bin/fruix-development-environment
  • build:
    • /run/current-system/build-profile
    • /run/current-build
    • /usr/local/bin/fruix-build-environment

Intended use:

eval "$(/usr/local/bin/fruix-development-environment)"

for interactive development work, and:

eval "$(/usr/local/bin/fruix-build-environment)"

for a narrower native base-build contract.

The current split is:

  • runtime profile
  • development profile
  • build profile

The development helper remains intentionally interactive and currently exposes at least:

  • native headers under usr/include
  • FreeBSD share/mk files for bsd.*.mk
  • Clang toolchain commands such as cc, c++, ar, ranlib, and nm
  • MAKEFLAGS pointing at the development profile's usr/share/mk

The build helper is intentionally more sanitized and less interactive. It clears development-shell variables such as:

  • MAKEFLAGS
  • CPPFLAGS
  • CFLAGS
  • CXXFLAGS
  • LDFLAGS

and then exposes build-oriented paths such as:

  • FRUIX_BUILD_PROFILE
  • FRUIX_BUILD_INCLUDE
  • FRUIX_BUILD_SHARE_MK
  • FRUIX_BUILD_CC
  • FRUIX_BUILD_CXX
  • FRUIX_BUILD_AR
  • FRUIX_BUILD_RANLIB
  • FRUIX_BUILD_NM
  • FRUIX_BMAKE

For native base-build compatibility, build-enabled systems now expose canonical links at:

  • /usr/include -> /run/current-system/build-profile/usr/include
  • /usr/share/mk -> /run/current-system/build-profile/usr/share/mk

So Fruix now separates interactive development support from the stricter environment used for buildworld / buildkernel style work, instead of treating them as one overlay.
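
The sanitization contract described above can be sketched as follows. This is not the actual fruix-build-environment script; the function name is an assumption, and only the cleared variables, the exported names, and the build-profile root are taken from this document:

```shell
# Sketch of the build-environment contract: clear development-shell
# variables, then export build-oriented paths rooted at the build profile.
sanitize_build_env() {
  unset MAKEFLAGS CPPFLAGS CFLAGS CXXFLAGS LDFLAGS
  profile=/run/current-system/build-profile   # root documented above
  export FRUIX_BUILD_PROFILE="$profile"
  export FRUIX_BUILD_INCLUDE="$profile/usr/include"
  export FRUIX_BUILD_SHARE_MK="$profile/usr/share/mk"
}

# Demonstration: a development-shell MAKEFLAGS value gets cleared.
MAKEFLAGS="-j8"; export MAKEFLAGS
sanitize_build_env
[ -z "${MAKEFLAGS:-}" ] && echo "MAKEFLAGS cleared"
echo "$FRUIX_BUILD_SHARE_MK"
```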

Host-initiated native base builds inside a Fruix-managed guest

The currently validated intermediate path toward self-hosting is still host-orchestrated.

The host:

  1. boots a development-enabled Fruix guest
  2. connects over SSH
  3. recovers the materialized FreeBSD source store from system metadata
  4. runs native FreeBSD build commands inside the guest
  5. collects and records the staged outputs

The validated build sequence inside the guest is:

  • make -jN buildworld
  • make -jN buildkernel
  • make DESTDIR=... installworld
  • make DESTDIR=... distribution
  • make DESTDIR=... installkernel

For staged install steps, the validated path uses:

  • DB_FROM_SRC=yes

so the staged install is driven by the declared source tree's account database rather than by accidental guest-local /etc/master.passwd contents.
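
The validated sequence above can be captured as a dry-run emitter that prints the commands without executing them. `JOBS` and `DESTDIR` here are illustrative placeholders, not Fruix-mandated values:

```shell
# Dry-run sketch: emit the validated native base-build sequence.
JOBS=8                        # illustrative parallelism
DESTDIR=/var/tmp/fruix-stage  # illustrative staging root
build_sequence() {
  for target in buildworld buildkernel; do
    echo "make -j${JOBS} ${target}"
  done
  # Staged install phases use DB_FROM_SRC=yes as documented above.
  for target in installworld distribution installkernel; do
    echo "make DESTDIR=${DESTDIR} DB_FROM_SRC=yes ${target}"
  done
}
build_sequence
```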

This is the current Phase 20.2 answer to “where should native base builds run?”

  • inside a Fruix-managed FreeBSD environment
  • but still with the host driving the outer orchestration loop

Controlled guest self-hosted native-build prototype

Fruix now also has a narrower in-guest prototype helper at:

  • /usr/local/bin/fruix-self-hosted-native-build

Intended use:

FRUIX_SELF_HOSTED_NATIVE_BUILD_JOBS=8 \
  /usr/local/bin/fruix-self-hosted-native-build

That helper:

  1. evaluates the build helper and verifies the build overlay plus canonical compatibility links
  2. recovers the materialized FreeBSD source store from:
    • /run/current-system/metadata/store-layout.scm
  3. runs the native FreeBSD build/install phases inside the guest
  4. records staged results under:
    • /var/lib/fruix/native-builds/<run-id>
    • /var/lib/fruix/native-builds/latest
  5. emits promotion metadata for first-class artifact identities covering:
    • world
    • kernel
    • headers
    • bootloader
  6. keeps the heavier object/stage work under:
    • /var/tmp/fruix-self-hosted-native-builds/<run-id>
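
The result layout in steps 4 and 5 can be sketched as symlink bookkeeping, here simulated in a temporary directory (the real root is /var/lib/fruix/native-builds; the run-id format and the metadata file name below are assumptions):

```shell
# Sketch of the staged-result layout: each run gets its own directory,
# and `latest` always tracks the newest run via a symlink.
root="$(mktemp -d)"                    # stand-in for /var/lib/fruix/native-builds
run_id="run-$(date +%Y%m%d)-demo"      # hypothetical run-id format
mkdir -p "$root/$run_id"
: > "$root/$run_id/promotion.scm"      # hypothetical metadata file name
ln -sfn "$run_id" "$root/latest"       # retarget latest to the new run
readlink "$root/latest"
```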

Important current detail:

  • the self-hosted helper now uses the separate fruix-build-environment contract instead of reusing the interactive development helper wholesale
  • that build helper intentionally clears development-shell exports such as MAKEFLAGS, CPPFLAGS, CFLAGS, CXXFLAGS, and LDFLAGS before buildworld
  • this keeps the base-build path closer to the exact contract needed for real world/kernel bootstrap work

So the validated Phase 20.3 answer is:

  • a controlled guest self-hosted base-build prototype now works
  • but the simpler default operator flow should still be the Phase 20.2 host-initiated in-guest path unless there is a specific reason to push the build loop farther into the guest

Promoting native-build results into first-class Fruix store objects

The guest-side result root is now explicitly a staging/result area, not the final immutable identity.

Current validated flow:

  1. run the in-guest helper so the guest records a result under:
    • /var/lib/fruix/native-builds/<run-id>
  2. copy that result root back to the host
  3. run:
fruix native-build promote RESULT_ROOT

The promotion step creates immutable /frx/store identities for:

  • world
  • kernel
  • headers
  • bootloader

and also creates a result-bundle store object that references those promoted artifact stores.

Current metadata split:

  • mutable staging/result root:
    • /var/lib/fruix/native-builds/<run-id>
  • immutable artifact stores:
    • /frx/store/...-fruix-native-world-...
    • /frx/store/...-fruix-native-kernel-...
    • /frx/store/...-fruix-native-headers-...
    • /frx/store/...-fruix-native-bootloader-...
  • immutable result bundle:
    • /frx/store/...-fruix-native-build-result-...
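
The immutable names above are content-addressed. A minimal sketch of deriving such a name is shown below; the hash-prefix length and exact naming layout are assumptions, not the real Fruix store scheme:

```shell
# Sketch: derive a content-addressed store name of the general shape
# /frx/store/<hash>-fruix-native-<kind>. Illustrative only.
store_name() {
  kind="$1"; payload="$2"
  # Hash the payload and keep a short prefix as the identity component.
  hash="$(printf '%s' "$payload" | sha256sum | cut -c1-12)"
  echo "/frx/store/${hash}-fruix-native-${kind}"
}
store_name world "demo-content"
```

The same payload always yields the same name, which is what makes the promoted identity stable across runs.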

The promoted store objects record explicit Fruix-native metadata including at least:

  • executor kind / name / version
  • run-id / guest-host-name
  • closure path
  • source store provenance
  • build policy
  • artifact kind
  • required-file expectations
  • recorded content signatures and hashes

This is the current Fruix-native answer to the question:

  • where should mutable native-build state live?
    • /var/lib/fruix/native-builds/...
  • where should immutable native-build identity live?
    • /frx/store/...

Using promoted native-build results in system declarations

Fruix system declarations can now refer directly to a promoted native-build result bundle.

Current declaration-level helpers are:

  • promoted-native-build-result
  • operating-system-from-promoted-native-build-result

Representative pattern:

(define promoted
  (promoted-native-build-result
   #:store-path "/frx/store/...-fruix-native-build-result-..."))

(define os
  (operating-system-from-promoted-native-build-result
   promoted
   #:host-name "fruix-freebsd"
   ...))

That now gives Fruix a more product-like story:

  1. a build runs under some executor policy
  2. Fruix records the staged mutable result
  3. Fruix promotes it into immutable store identities
  4. a later system declaration can point at that promoted result identity
  5. Fruix materializes and boots a normal system from that promoted identity

The resulting closure now records that provenance explicitly through:

  • metadata/promoted-native-build-result.scm
  • metadata/store-layout.scm
  • closure references that retain the selected result-bundle store path

So the operator-facing statement is now:

  • “this Fruix system is based on promoted native-base result X”

not only:

  • “some earlier build happened and its files were copied somewhere.”

Native-build executor model

Fruix now has an explicit executor model for native base builds.

Current executor kinds are:

  • host
  • ssh-guest
  • self-hosted

and the intended future extension points are:

  • jail
  • remote-builder

The important change is architectural:

  • declared source identity stays the same
  • expected artifact kinds stay the same
  • result/promotion metadata shape stays the same
  • only the executor policy changes

So “where the build runs” is now treated as executor policy rather than as a separate native-build architecture each time.

Current end-to-end validated executors for the staged-result-plus-promotion model are:

  • ssh-guest
  • self-hosted

Both now converge on the same Fruix-native flow:

  1. run the build under a selected executor
  2. stage a result root under /var/lib/fruix/native-builds/...
  3. emit the same promotion/provenance shape
  4. promote the result into immutable /frx/store/... objects

Deployment patterns

1. Build-first workflow

The default Fruix operator workflow starts by building the closure first:

  1. edit the system declaration
  2. run fruix system build
  3. inspect emitted metadata
  4. if needed, produce one of:
    • rootfs
    • image
    • install
    • installer
    • installer-iso

This keeps the declaration-to-closure boundary explicit.

2. VM image deployment workflow

Use this when you want to boot a system directly rather than through an installer.

  1. run fruix system image
  2. boot the image in QEMU or convert/import it for XCP-ng
  3. validate:
    • /run/current-system
    • shepherd/sshd state
    • activation log
  4. keep the closure path from the build metadata as the deployment identity

This is the current canonical direct deployment path for already-built images.

3. Direct installation workflow

Use this when you want an installed target image or disk without a booted installer guest.

  1. run fruix system install --target ...
  2. let Fruix partition, format, populate, and install the target
  3. boot the installed result
  4. validate /var/lib/fruix/install.scm and target services

This is the most direct install path.

4. Installer-environment workflow

Use this when the install itself should happen from inside a booted Fruix environment.

  1. run fruix system installer
  2. boot the installer disk image
  3. let the in-guest installer run onto the selected target device
  4. boot the installed target

This is useful when the installer environment itself is part of what needs validation.

5. Installer-ISO workflow

Use this when the desired operator artifact is a bootable UEFI ISO.

  1. run fruix system installer-iso
  2. boot the ISO under the target virtualization path
  3. let the in-guest installer run onto the selected target device
  4. eject the ISO and reboot the installed target

This is now validated on both:

  • local QEMU/UEFI/TCG
  • the approved real XCP-ng VM path

Install-target device conventions

The install target device is not identical across all boot styles.

Current validated defaults are:

  • direct installer disk-image path under QEMU:
    • /dev/vtbd1
  • installer ISO path under QEMU:
    • /dev/vtbd0
  • installer ISO path under XCP-ng:
    • /dev/ada0

Therefore the canonical workflow is:

  • always treat --install-target-device as an explicit deployment parameter when moving between virtualization environments

Do not assume that a device name validated in one harness is portable to another.
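
One way to make the device an explicit deployment parameter is a per-harness mapping. The harness labels below are hypothetical; the device defaults restate the validated values from this section:

```shell
# Sketch: resolve the validated default --install-target-device per harness.
default_target_device() {
  case "$1" in
    qemu-installer-image) echo /dev/vtbd1 ;;   # installer disk image under QEMU
    qemu-installer-iso)   echo /dev/vtbd0 ;;   # installer ISO under QEMU
    xcp-ng-installer-iso) echo /dev/ada0  ;;   # installer ISO under XCP-ng
    *) echo "unknown harness: $1" >&2; return 1 ;;
  esac
}
default_target_device qemu-installer-iso
```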

Installed-system generation layout

Installed Fruix systems now record an explicit first-generation deployment layout under:

  • /var/lib/fruix/system

Initial installed shape:

/var/lib/fruix/system/
  current -> generations/1
  current-generation
  generations/
    1/
      closure -> /frx/store/...-fruix-system-...
      metadata.scm
      provenance.scm
      install.scm   # present on installed targets

After a validated in-place switch, the layout extends to:

/var/lib/fruix/system/
  current -> generations/2
  current-generation
  rollback -> generations/1
  rollback-generation
  generations/
    1/
      ...
    2/
      closure -> /frx/store/...-fruix-system-...
      metadata.scm
      provenance.scm
      install.scm   # deployment metadata for the switch operation
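
The pointer bookkeeping behind that extension can be sketched as symlink flips, simulated here in a temporary directory standing in for /var/lib/fruix/system (this is an illustration of the layout semantics, not the actual switch implementation):

```shell
# Sketch: a switch records the old current generation as rollback, then
# retargets current at the new generation.
sys="$(mktemp -d)"
mkdir -p "$sys/generations/1" "$sys/generations/2"
ln -sfn generations/1 "$sys/current"

old="$(readlink "$sys/current")"
ln -sfn "$old" "$sys/rollback"
ln -sfn generations/2 "$sys/current"

readlink "$sys/current"    # generations/2
readlink "$sys/rollback"   # generations/1
```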

Installed systems also now create explicit GC-root-style deployment links under:

  • /frx/var/fruix/gcroots

Current validated shape:

/frx/var/fruix/gcroots/
  current-system -> /frx/store/...-fruix-system-...
  rollback-system -> /frx/store/...-fruix-system-...
  system-1 -> /frx/store/...-fruix-system-...
  system-2 -> /frx/store/...-fruix-system-...

Important detail:

  • /run/current-system still points directly at the active closure path in /frx/store
  • the explicit generation layout therefore adds deployment metadata and retention roots without changing the already-validated runtime contract used by activation, rc.d wiring, and tests

Roll-forward workflow

The current Fruix roll-forward model now has two validated layers.

Declaration/deployment roll-forward

Canonical process:

  1. keep the current known-good system declaration
  2. prepare a candidate declaration
    • this may differ by FreeBSD base identity
    • source revision
    • services
    • users/groups
    • or other operating-system fields
  3. run fruix system build for the candidate
  4. materialize either:
    • fruix system image
    • fruix system install
    • fruix system installer
    • fruix system installer-iso
  5. boot or install the candidate
  6. validate the candidate closure in the booted system

Installed-system generation roll-forward

When the candidate closure is already present on an installed target:

  1. run fruix system switch /frx/store/...candidate...
  2. confirm the staged state with fruix system status
  3. reboot into the candidate generation
  4. validate the new active closure after reboot

The important property is still that the candidate closure appears beside the earlier one in /frx/store rather than mutating it in place.

Rollback workflow

The current canonical rollback workflow also now has two validated layers.

Declaration/deployment rollback

You can still roll back by redeploying the earlier declaration:

  1. retain the earlier declaration that produced the known-good closure
  2. rebuild or rematerialize that earlier declaration
  3. redeploy or reboot that earlier artifact again

Concretely, the usual declaration-level rollback choices are:

  • rebuild the earlier declaration with fruix system build and confirm the old closure path reappears
  • boot the earlier declaration again through fruix system image
  • reinstall the earlier declaration through fruix system install, installer, or installer-iso if the deployment medium itself must change

Installed-system generation rollback

When an installed target already has both the current and rollback generations recorded:

  1. run fruix system rollback
  2. confirm the staged state with fruix system status
  3. reboot into the rollback generation
  4. validate the restored active closure after reboot

This installed-system rollback path is now validated on local QEMU/UEFI/TCG.
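
The rollback flip is the inverse bookkeeping, again sketched in a temporary stand-in for /var/lib/fruix/system rather than the real implementation:

```shell
# Sketch: rollback swaps the current and rollback generation pointers.
sys="$(mktemp -d)"
mkdir -p "$sys/generations/1" "$sys/generations/2"
ln -sfn generations/2 "$sys/current"
ln -sfn generations/1 "$sys/rollback"

cur="$(readlink "$sys/current")"
back="$(readlink "$sys/rollback")"
ln -sfn "$back" "$sys/current"
ln -sfn "$cur" "$sys/rollback"

readlink "$sys/current"    # generations/1
```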

Important scope note

This is not yet the same as Guix's full reconfigure/generation UX.

Current installed-system rollback is intentionally modest:

  • it switches between already-recorded generations on the target
  • it does not yet fetch candidate closures onto the machine for you
  • it does not yet expose a richer history-management or generation-pruning policy

Still pending:

  • operator-facing closure transfer or fetch onto installed systems
  • multi-generation lifecycle policy beyond the validated current and rollback pointers
  • a fuller reconfigure-style installed-system UX

Provenance and deployment identity

For any serious deployment or rollback decision, the canonical identity is not merely the host name. It is the emitted metadata:

  • closure_path
  • declared FreeBSD base/source metadata
  • materialized source store paths
  • install metadata at /var/lib/fruix/install.scm
  • store item counts and reference lists

Operators should retain metadata from successful candidate and current deployments because Fruix already emits enough data to answer:

  • which declaration was built
  • which closure booted
  • which source snapshot was materialized
  • which target device or image was installed

Current limitations

The deployment workflow is now coherent, and Fruix has a validated installed-system switch/rollback path, but this is not yet the final generation-management story.

Not yet first-class:

  • host-side closure transfer/fetch onto installed systems as part of fruix system switch
  • a fuller reconfigure workflow that builds and stages the new closure from inside the target environment
  • multi-generation lifecycle policy beyond the validated current and rollback pointers
  • generation pruning and retention policy independent of full redeploy

Those are the next logical steps after the current explicit-generation switch/rollback model.

Summary

The current canonical Fruix deployment model is:

  • declare a system in Scheme
  • build the closure with fruix system build
  • materialize the artifact appropriate to the deployment target
  • boot or install that artifact
  • identify deployments by closure path and provenance metadata
  • on installed systems, switch to a staged candidate with fruix system switch
  • on installed systems, roll back to the recorded rollback generation with fruix system rollback
  • still use declaration/redeploy rollback when the target does not already have the desired closure staged locally

That is the operator-facing workflow Fruix should document and use while its installed-system generation UX remains simpler than Guix's mature in-place system-generation workflow.