Fruix system deployment workflow
Date: 2026-04-06
Purpose
This document defines the current canonical Fruix workflow for:
- building a declarative system closure
- materializing deployable artifacts
- installing a declarative system onto an image or disk
- booting through installer media
- rolling forward to a candidate system
- switching an installed system to a staged candidate generation
- rolling an installed system back to an earlier recorded generation
This document is the Phase 19 operator-facing view of the system model, now validated through explicit installed-system generation switching and rollback.
Core model
A Fruix system workflow starts from a Scheme file that binds an operating-system object.
Today, the canonical frontend is:
./bin/fruix system ...
The important output objects are:
- system closure
  - a content-addressed store item under /frx/store/*-fruix-system-<host-name>
  - includes boot assets, activation logic, profile tree, metadata, and references
- rootfs tree
  - a materialized runtime tree for inspection or image staging
- disk image
  - a bootable GPT/UEFI raw disk image
- installer image
  - a bootable Fruix installer disk image that installs a selected target system from inside the guest
- installer ISO
  - a bootable UEFI ISO with an embedded installer mdroot payload
- install metadata
  - /var/lib/fruix/install.scm on installed targets
  - records the selected closure path, install spec, and referenced store items including source provenance
The current deployment story is therefore already declaration-driven and content-addressed, even before first-class installed-system generations are modeled more explicitly.
Canonical command surface
Build a system closure
sudo env HOME="$HOME" \
GUILE_AUTO_COMPILE=0 \
GUIX_SOURCE_DIR="$HOME/repos/guix" \
GUILE_BIN="/tmp/guile-freebsd-validate-install/bin/guile" \
GUILE_EXTRA_PREFIX="/tmp/guile-gnutls-freebsd-validate-install" \
SHEPHERD_PREFIX="/tmp/shepherd-freebsd-validate-install" \
./bin/fruix system build path/to/system.scm --system my-operating-system
Primary result:
closure_path=/frx/store/...-fruix-system-...
Use this when you want to:
- validate the declarative system composition itself
- inspect provenance/layout metadata
- compare candidate and current closure paths
- drive later rootfs/image/install steps from the same declaration
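Because the build emits key=value metadata, candidate-versus-current comparison can be done mechanically. A minimal sketch, assuming only the `closure_path=` line format shown above; the sample output strings are illustrative stand-ins, not real build runs:

```shell
# Sketch: extract and compare closure paths from key=value build output.
# The two *_out strings below are illustrative stand-ins for captured runs.
current_out="closure_path=/frx/store/aaa111-fruix-system-demo"
candidate_out="closure_path=/frx/store/bbb222-fruix-system-demo"

# Strip the key prefix to recover the bare store paths.
current=${current_out#closure_path=}
candidate=${candidate_out#closure_path=}

if [ "$current" = "$candidate" ]; then
  echo "no change: candidate closure is identical to current"
else
  echo "candidate differs: $candidate"
fi
```

Since closures are content-addressed, identical paths mean an identical system; any declaration change that matters surfaces as a different store path.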
Materialize a rootfs tree
sudo env HOME="$HOME" ... \
./bin/fruix system rootfs path/to/system.scm ./rootfs --system my-operating-system
Primary result:
rootfs=...
closure_path=/frx/store/...
Use this when you want to:
- inspect the runtime filesystem layout directly
- stage a tree for debugging
- validate the /run/current-system-style symlink layout without booting a full image
Materialize a bootable disk image
sudo env HOME="$HOME" ... \
./bin/fruix system image path/to/system.scm \
--system my-operating-system \
--root-size 6g
Primary result:
disk_image=/frx/store/.../disk.img
Use this when you want to:
- boot the system directly as a VM image
- test a candidate deployment under QEMU or XCP-ng
- validate a roll-forward or rollback candidate by image boot
Install directly to an image file or block device
sudo env HOME="$HOME" ... \
./bin/fruix system install path/to/system.scm \
--system my-operating-system \
--target ./installed.img \
--disk-capacity 12g \
--root-size 10g
Primary result:
target=...
target_kind=raw-file or block-device
install_metadata_path=/var/lib/fruix/install.scm
Use this when you want to:
- produce an installed target image without booting an installer guest
- validate installation mechanics directly
- populate a raw image or a real /dev/... target
Materialize a bootable installer disk image
sudo env HOME="$HOME" ... \
./bin/fruix system installer path/to/system.scm \
--system my-operating-system \
--install-target-device /dev/vtbd1 \
--root-size 10g
Primary result:
installer_disk_image=/frx/store/.../disk.img
Use this when you want to:
- boot a Fruix installer environment as a disk image
- let the in-guest installer partition and install onto a second disk
- validate non-interactive installation from inside a booted Fruix guest
Materialize a bootable installer ISO
sudo env HOME="$HOME" ... \
./bin/fruix system installer-iso path/to/system.scm \
--system my-operating-system \
--install-target-device /dev/vtbd0
Primary result:
iso_image=/frx/store/.../installer.iso
boot_efi_image=/frx/store/.../efiboot.img
root_image=/frx/store/.../root.img
Use this when you want to:
- boot through UEFI ISO media instead of a writable installer disk image
- install from an ISO-attached Fruix environment
- test the same install model on more realistic VM paths
Installed-system generation commands
Installed Fruix systems now also ship a small in-guest deployment helper at:
/usr/local/bin/fruix
Current validated in-guest commands are:
fruix system build
fruix system reconfigure
fruix system status
fruix system switch /frx/store/...-fruix-system-...
fruix system rollback
Installed systems now carry canonical declaration state in:
/run/current-system/metadata/system-declaration.scm
/run/current-system/metadata/system-declaration-info.scm
/run/current-system/metadata/system-declaration-system
So the in-guest helper can now build from the node's own embedded declaration inputs.
Current validated build/reconfigure behavior is:
- fruix system build - with no extra arguments, builds from the embedded current declaration
- fruix system reconfigure - with no extra arguments, builds from the embedded current declaration and stages a switch to the resulting closure
- both commands can also take an explicit declaration file plus --system NAME
Current intended usage now has two validated patterns.
Pattern A: build elsewhere, then switch/rollback locally
- build a candidate closure on the operator side with ./bin/fruix system build
- ensure that candidate closure is present on the installed target's /frx/store
- run fruix system switch /frx/store/... on the installed system
- reboot into the staged candidate generation
- if needed, run fruix system rollback
- reboot back into the recorded rollback generation
Important current limitation of this lower-level pattern:
- fruix system switch does not yet fetch or copy the candidate closure onto the target for you
- it assumes the selected closure is already present in the installed system's /frx/store
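Given that limitation, a Pattern A switch should be preceded by an explicit presence check on the target. A hedged sketch of that precondition, using a temporary directory in place of the installed target's real /frx/store; the closure name is a placeholder:

```shell
# Sketch: verify a candidate closure directory exists before attempting a switch.
# A temp dir stands in for the installed target's /frx/store.
store=$(mktemp -d)
candidate="$store/abc123-fruix-system-demo"
mkdir -p "$candidate"   # simulates the closure having been copied over

if [ -d "$candidate" ]; then
  echo "closure present: safe to switch"
else
  echo "closure missing: copy it onto the target first"
fi
```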
Pattern B: build and reconfigure from the node itself
- inspect or edit the node declaration inputs
- embedded current declaration, or
- an explicit replacement declaration file
- run:
fruix system build
or:
fruix system build /path/to/candidate.scm --system my-operating-system
- stage a local generation update with:
fruix system reconfigure
or:
fruix system reconfigure /path/to/candidate.scm --system my-operating-system
- reboot into the staged generation
- if needed, run fruix system rollback
- reboot back into the recorded prior generation
In-guest development environment
Opt-in systems can also expose a separate development overlay under:
/run/current-system/development-profile
/run/current-development
Those systems now ship a helper at:
/usr/local/bin/fruix-development-environment
Intended use:
eval "$(/usr/local/bin/fruix-development-environment)"
That helper exports a development-oriented environment while keeping the main runtime profile separate. The validated Phase 20 path currently uses this to expose at least:
- native headers under usr/include
- FreeBSD share/mk files for bsd.*.mk
- Clang toolchain commands such as cc, c++, ar, ranlib, and nm
- MAKEFLAGS pointing at the development profile's usr/share/mk
For native base-build compatibility, development-enabled systems also now expose canonical links at:
/usr/include -> /run/current-system/development-profile/usr/include
/usr/share/mk -> /run/current-system/development-profile/usr/share/mk
This is the current Fruix-native way to make a running system suitable for controlled native base-development work without merging development content back into the main runtime profile.
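The two compatibility links above can be reproduced and verified in a sandbox before trusting them on a real system. A sketch, assuming only the two link shapes listed; the sandbox root is illustrative:

```shell
# Sketch: create and verify the canonical development compatibility links
# inside a sandbox root instead of the real filesystem.
root=$(mktemp -d)
profile="$root/run/current-system/development-profile"
mkdir -p "$profile/usr/include" "$profile/usr/share/mk" "$root/usr/share"

# The two canonical links described above, rooted in the sandbox.
ln -s "$profile/usr/include" "$root/usr/include"
ln -s "$profile/usr/share/mk" "$root/usr/share/mk"

# Verification: each link must resolve into the development profile.
readlink "$root/usr/include"
readlink "$root/usr/share/mk"
```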
Host-initiated native base builds inside a Fruix-managed guest
The currently validated intermediate path toward self-hosting is still host-orchestrated.
The host:
- boots a development-enabled Fruix guest
- connects over SSH
- recovers the materialized FreeBSD source store from system metadata
- runs native FreeBSD build commands inside the guest
- collects and records the staged outputs
The validated build sequence inside the guest is:
make -jN buildworld
make -jN buildkernel
make DESTDIR=... installworld
make DESTDIR=... distribution
make DESTDIR=... installkernel
For staged install steps, the validated path uses:
DB_FROM_SRC=yes
so the staged install is driven by the declared source tree's account database rather than by accidental guest-local /etc/master.passwd contents.
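The staged install phases can be expressed as a dry-run loop that only prints the commands rather than executing them against a real source tree. In this sketch, SRC and DESTDIR are placeholder paths, not Fruix defaults:

```shell
# Dry-run sketch: print the staged install phases with DB_FROM_SRC=yes,
# instead of running make against a real FreeBSD source tree.
SRC=/path/to/freebsd-src   # placeholder source tree
DESTDIR=/var/tmp/stage     # placeholder staging root

for phase in installworld distribution installkernel; do
  echo "make -C $SRC DESTDIR=$DESTDIR DB_FROM_SRC=yes $phase"
done
```

Passing DB_FROM_SRC=yes on every staged phase keeps the account database sourced from the declared tree, as described above.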
This is the current Phase 20.2 answer to “where should native base builds run?”
- inside a Fruix-managed FreeBSD environment
- but still with the host driving the outer orchestration loop
Controlled guest self-hosted native-build prototype
Fruix now also has a narrower in-guest prototype helper at:
/usr/local/bin/fruix-self-hosted-native-build
Intended use:
FRUIX_SELF_HOSTED_NATIVE_BUILD_JOBS=8 \
/usr/local/bin/fruix-self-hosted-native-build
That helper:
- verifies the development overlay and canonical compatibility links
- recovers the materialized FreeBSD source store from:
/run/current-system/metadata/store-layout.scm
- runs the native FreeBSD build/install phases inside the guest
- records staged results under:
/var/lib/fruix/native-builds/<run-id>
/var/lib/fruix/native-builds/latest
- emits promotion metadata for first-class artifact identities covering:
- world
- kernel
- headers
- bootloader
- keeps the heavier object/stage work under:
/var/tmp/fruix-self-hosted-native-builds/<run-id>
Important current detail:
- the self-hosted helper intentionally sanitizes development-shell exports such as MAKEFLAGS, CPPFLAGS, CFLAGS, CXXFLAGS, and LDFLAGS before buildworld
- directly reusing the full development-shell environment polluted FreeBSD's bootstrap path and was not reliable enough for real world/kernel builds
So the validated Phase 20.3 answer is:
- a controlled guest self-hosted base-build prototype now works
- but the simpler default operator flow should still be the Phase 20.2 host-initiated in-guest path unless there is a specific reason to push the build loop farther into the guest
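The sanitization step described above can be illustrated with a plain subshell that drops exactly the named variables before the build invocation would run. A sketch; the flag values are illustrative, and only the variables listed in the text are stripped:

```shell
# Sketch: strip development-shell build flags before a clean build step,
# mirroring the self-hosted helper's sanitization described above.
out=$(MAKEFLAGS='-j8' CFLAGS='-O2 -pipe' sh -c '
  unset MAKEFLAGS CPPFLAGS CFLAGS CXXFLAGS LDFLAGS
  # A real helper would exec buildworld here; we just show the clean state.
  echo "MAKEFLAGS=${MAKEFLAGS:-unset} CFLAGS=${CFLAGS:-unset}"
')
echo "$out"
```

Running the build in a subshell with the flags unset keeps the operator's development environment intact while the bootstrap path sees none of it.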
Promoting native-build results into first-class Fruix store objects
The guest-side result root is now explicitly a staging/result area, not the final immutable identity.
Current validated flow:
- run the in-guest helper so the guest records a result under:
/var/lib/fruix/native-builds/<run-id>
- copy that result root back to the host
- run:
fruix native-build promote RESULT_ROOT
The promotion step creates immutable /frx/store identities for:
world, kernel, headers, and bootloader
and also creates a result-bundle store object that references those promoted artifact stores.
Current metadata split:
- mutable staging/result root:
/var/lib/fruix/native-builds/<run-id>
- immutable artifact stores:
/frx/store/...-fruix-native-world-...
/frx/store/...-fruix-native-kernel-...
/frx/store/...-fruix-native-headers-...
/frx/store/...-fruix-native-bootloader-...
- immutable result bundle:
/frx/store/...-fruix-native-build-result-...
The promoted store objects record explicit Fruix-native metadata including at least:
- executor kind / name / version
- run-id / guest-host-name
- closure path
- source store provenance
- build policy
- artifact kind
- required-file expectations
- recorded content signatures and hashes
This is the current Fruix-native answer to the question:
- where should mutable native-build state live?
/var/lib/fruix/native-builds/...
- where should immutable native-build identity live?
/frx/store/...
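The mutable-staging versus immutable-identity split can be sketched as a tiny promotion step. Assumptions, loudly: sha256 over a single file stands in for the content-addressing scheme, and the directory naming mimics the patterns above; neither is the real Fruix hashing or promotion implementation.

```shell
# Sketch: promote a staged result file into a content-addressed store dir.
root=$(mktemp -d)
staging="$root/var/lib/fruix/native-builds/run-1"   # mutable staging root
storedir="$root/frx/store"                          # immutable store root
mkdir -p "$staging" "$storedir"

printf 'demo kernel payload' > "$staging/kernel"

# Content address: first 12 hex chars of the payload's sha256 (an assumption,
# not the real Fruix scheme).
hash=$(sha256sum "$staging/kernel" | cut -c1-12)
dest="$storedir/${hash}-fruix-native-kernel-demo"
mkdir -p "$dest"
cp "$staging/kernel" "$dest/"

echo "promoted to: $dest"
```

The staging root stays writable and disposable; the promoted directory's name is derived from its content, so re-promoting identical bits lands on the identical identity.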
Using promoted native-build results in system declarations
Fruix system declarations can now refer directly to a promoted native-build result bundle.
Current declaration-level helpers are:
promoted-native-build-result
operating-system-from-promoted-native-build-result
Representative pattern:
(define promoted
(promoted-native-build-result
#:store-path "/frx/store/...-fruix-native-build-result-..."))
(define os
(operating-system-from-promoted-native-build-result
promoted
#:host-name "fruix-freebsd"
...))
That now gives Fruix a more product-like story:
- a build runs under some executor policy
- Fruix records the staged mutable result
- Fruix promotes it into immutable store identities
- a later system declaration can point at that promoted result identity
- Fruix materializes and boots a normal system from that promoted identity
The resulting closure now records that provenance explicitly through:
- metadata/promoted-native-build-result.scm
- metadata/store-layout.scm
- closure references that retain the selected result-bundle store path
So the operator-facing statement is now:
- “this Fruix system is based on promoted native-base result X”
not only:
- “some earlier build happened and its files were copied somewhere.”
Native-build executor model
Fruix now has an explicit executor model for native base builds.
Current executor kinds are:
host, ssh-guest, and self-hosted
and the intended future extension points are:
jail and remote-builder
The important change is architectural:
- declared source identity stays the same
- expected artifact kinds stay the same
- result/promotion metadata shape stays the same
- only the executor policy changes
So “where the build runs” is now treated as executor policy rather than as a separate native-build architecture each time.
Current end-to-end validated executors for the staged-result-plus-promotion model are:
ssh-guest and self-hosted
Both now converge on the same Fruix-native flow:
- run the build under a selected executor
- stage a result root under /var/lib/fruix/native-builds/...
- emit the same promotion/provenance shape
- promote the result into immutable /frx/store/... objects
Deployment patterns
1. Build-first workflow
The default Fruix operator workflow starts by building the closure first:
- edit the system declaration
- run fruix system build
- inspect emitted metadata
- if needed, produce one of:
rootfs, image, install, installer, or installer-iso
This keeps the declaration-to-closure boundary explicit.
2. VM image deployment workflow
Use this when you want to boot a system directly rather than through an installer.
- run fruix system image
- boot the image in QEMU or convert/import it for XCP-ng
- validate:
- /run/current-system
- shepherd/sshd state
- activation log
- keep the closure path from the build metadata as the deployment identity
This is the current canonical direct deployment path for already-built images.
3. Direct installation workflow
Use this when you want an installed target image or disk without a booted installer guest.
- run fruix system install --target ...
- let Fruix partition, format, populate, and install the target
- boot the installed result
- validate /var/lib/fruix/install.scm and target services
This is the most direct install path.
4. Installer-environment workflow
Use this when the install itself should happen from inside a booted Fruix environment.
- run fruix system installer
- boot the installer disk image
- let the in-guest installer run onto the selected target device
- boot the installed target
This is useful when the installer environment itself is part of what needs validation.
5. Installer-ISO workflow
Use this when the desired operator artifact is a bootable UEFI ISO.
- run fruix system installer-iso
- boot the ISO under the target virtualization path
- let the in-guest installer run onto the selected target device
- eject the ISO and reboot the installed target
This is now validated on both:
- local QEMU/UEFI/TCG
- the approved real XCP-ng VM path
Install-target device conventions
The install target device is not identical across all boot styles.
Current validated defaults are:
- direct installer disk-image path under QEMU:
/dev/vtbd1
- installer ISO path under QEMU:
/dev/vtbd0
- installer ISO path under XCP-ng:
/dev/ada0
Therefore the canonical workflow is:
- always treat --install-target-device as an explicit deployment parameter when moving between virtualization environments
Do not assume that a device name validated in one harness is portable to another.
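Since the device name is harness-specific, it helps to make the mapping an explicit, named parameter rather than an inline literal. A sketch using only the validated defaults listed above; the harness labels and function name are illustrative, not Fruix conventions:

```shell
# Sketch: select --install-target-device per virtualization harness,
# using the validated defaults documented above.
pick_target_device() {
  case "$1" in
    qemu-installer-disk) echo /dev/vtbd1 ;;  # direct installer disk image, QEMU
    qemu-installer-iso)  echo /dev/vtbd0 ;;  # installer ISO, QEMU
    xcpng-installer-iso) echo /dev/ada0 ;;   # installer ISO, XCP-ng
    *) echo "unknown harness: $1" >&2; return 1 ;;
  esac
}

pick_target_device qemu-installer-iso
```

Centralizing the mapping makes the portability hazard explicit: moving a workflow to a new harness forces a new entry rather than silently reusing a stale device name.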
Installed-system generation layout
Installed Fruix systems now record an explicit first-generation deployment layout under:
/var/lib/fruix/system
Initial installed shape:
/var/lib/fruix/system/
current -> generations/1
current-generation
generations/
1/
closure -> /frx/store/...-fruix-system-...
metadata.scm
provenance.scm
install.scm # present on installed targets
After a validated in-place switch, the layout extends to:
/var/lib/fruix/system/
current -> generations/2
current-generation
rollback -> generations/1
rollback-generation
generations/
1/
...
2/
closure -> /frx/store/...-fruix-system-...
metadata.scm
provenance.scm
install.scm # deployment metadata for the switch operation
Installed systems also now create explicit GC-root-style deployment links under:
/frx/var/fruix/gcroots
Current validated shape:
/frx/var/fruix/gcroots/
current-system -> /frx/store/...-fruix-system-...
rollback-system -> /frx/store/...-fruix-system-...
system-1 -> /frx/store/...-fruix-system-...
system-2 -> /frx/store/...-fruix-system-...
Important detail:
- /run/current-system still points directly at the active closure path in /frx/store
- the explicit generation layout therefore adds deployment metadata and retention roots without changing the already-validated runtime contract used by activation, rc.d wiring, and tests
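The generation pointer mechanics above can be simulated end to end in a sandbox. This sketch walks an install, a switch, and a rollback using only the symlink shapes shown in the layout listings; the sandbox root and the exact pointer-flip ordering are illustrative, not the real switch implementation:

```shell
# Sketch: simulate /var/lib/fruix/system generation pointers across
# install -> switch -> rollback, inside a temporary sandbox.
sysroot=$(mktemp -d)/var/lib/fruix/system
mkdir -p "$sysroot/generations/1" "$sysroot/generations/2"

# Initial installed shape: generation 1 is current.
ln -s generations/1 "$sysroot/current"
echo 1 > "$sysroot/current-generation"

# Switch: generation 2 becomes current, generation 1 becomes rollback.
ln -sfn generations/2 "$sysroot/current"
echo 2 > "$sysroot/current-generation"
ln -sfn generations/1 "$sysroot/rollback"
echo 1 > "$sysroot/rollback-generation"

# Rollback: flip current back to the recorded rollback generation.
ln -sfn "$(readlink "$sysroot/rollback")" "$sysroot/current"
cp "$sysroot/rollback-generation" "$sysroot/current-generation"

readlink "$sysroot/current"
```

Note what the simulation deliberately shares with the real layout: generations are never mutated, only the current/rollback pointers move, and the reboot is what actually activates the pointed-at closure.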
Roll-forward workflow
The current Fruix roll-forward model now has two validated layers.
Declaration/deployment roll-forward
Canonical process:
- keep the current known-good system declaration
- prepare a candidate declaration, which may differ by:
- FreeBSD base identity
- source revision
- services
- users/groups
- other operating-system fields
- run fruix system build for the candidate
- materialize one of:
- fruix system image
- fruix system install
- fruix system installer
- fruix system installer-iso
- boot or install the candidate
- validate the candidate closure in the booted system
Installed-system generation roll-forward
When the candidate closure is already present on an installed target:
- run fruix system switch /frx/store/...candidate...
- confirm the staged state with fruix system status
fruix system status - reboot into the candidate generation
- validate the new active closure after reboot
The important property is still that the candidate closure appears beside the earlier one in /frx/store rather than mutating it in place.
Rollback workflow
The current canonical rollback workflow also now has two validated layers.
Declaration/deployment rollback
You can still roll back by redeploying the earlier declaration:
- retain the earlier declaration that produced the known-good closure
- rebuild or rematerialize that earlier declaration
- redeploy or reboot that earlier artifact again
Concretely, the usual declaration-level rollback choices are:
- rebuild the earlier declaration with fruix system build and confirm the old closure path reappears
- boot the earlier declaration again through fruix system image
- reinstall the earlier declaration through fruix system install, installer, or installer-iso if the deployment medium itself must change
Installed-system generation rollback
When an installed target already has both the current and rollback generations recorded:
- run fruix system rollback
- confirm the staged state with fruix system status
fruix system status - reboot into the rollback generation
- validate the restored active closure after reboot
This installed-system rollback path is now validated on local QEMU/UEFI/TCG.
Important scope note
This is still not yet the same thing as Guix's full reconfigure/generation UX.
Current installed-system rollback is intentionally modest:
- it switches between already-recorded generations on the target
- it does not yet fetch candidate closures onto the machine for you
- it does not yet expose a richer history-management or generation-pruning policy
Still pending:
- operator-facing closure transfer or fetch onto installed systems
- multi-generation lifecycle policy beyond the validated current and rollback pointers
- a fuller reconfigure-style installed-system UX
Provenance and deployment identity
For any serious deployment or rollback decision, the canonical identity is not merely the host name. It is the emitted metadata:
- closure_path
- declared FreeBSD base/source metadata
- materialized source store paths
- install metadata at /var/lib/fruix/install.scm
- store item counts and reference lists
Operators should retain metadata from successful candidate and current deployments because Fruix already emits enough data to answer:
- which declaration was built
- which closure booted
- which source snapshot was materialized
- which target device or image was installed
Current limitations
The deployment workflow is now coherent, and Fruix now has a validated installed-system switch/rollback path, but it is still not the final generation-management story.
Not yet first-class:
- host-side closure transfer/fetch onto installed systems as part of fruix system switch
- a fuller reconfigure workflow that builds and stages the new closure from inside the target environment
- multi-generation lifecycle policy beyond the validated current and rollback pointers
- generation pruning and retention policy independent of full redeploy
Those are the next logical steps after the current explicit-generation switch/rollback model.
Summary
The current canonical Fruix deployment model is:
- declare a system in Scheme
- build the closure with fruix system build
- materialize the artifact appropriate to the deployment target
- boot or install that artifact
- identify deployments by closure path and provenance metadata
- on installed systems, switch to a staged candidate with fruix system switch
- on installed systems, roll back to the recorded rollback generation with fruix system rollback
- still use declaration/redeploy rollback when the target does not already have the desired closure staged locally
That is the operator-facing workflow Fruix should document and use while its installed-system generation UX remains simpler than Guix's mature in-place system-generation workflow.