# Fruix system deployment workflow

Date: 2026-04-05

## Purpose

This document defines the current canonical Fruix workflow for:

- building a declarative system closure
- materializing deployable artifacts
- installing a declarative system onto an image or disk
- booting through installer media
- rolling forward to a candidate system
- switching an installed system to a staged candidate generation
- rolling an installed system back to an earlier recorded generation

This is the Phase 19 operator-facing view of the system model, validated through explicit installed-system generation switching and rollback.

## Core model

A Fruix system workflow starts from a Scheme file that binds an `operating-system` object.

Today, the canonical frontend is:

- `./bin/fruix system ...`

The important output objects are:

- **system closure**
  - a content-addressed store item under `/frx/store/*-fruix-system-<host-name>`
  - includes boot assets, activation logic, profile tree, metadata, and references
- **rootfs tree**
  - a materialized runtime tree for inspection or image staging
- **disk image**
  - a bootable GPT/UEFI raw disk image
- **installer image**
  - a bootable Fruix installer disk image that installs a selected target system from inside the guest
- **installer ISO**
  - a bootable UEFI ISO with an embedded installer mdroot payload
- **install metadata**
  - `/var/lib/fruix/install.scm` on installed targets
  - records the selected closure path, install spec, and referenced store items, including source provenance

The current deployment story is therefore already declaration-driven and content-addressed, even before first-class installed-system generations are modeled more explicitly.
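
Because closure paths end in `-fruix-system-<host-name>`, the host name can be recovered from a path with plain POSIX parameter expansion. A minimal sketch; the store hash below is fabricated, not a real store item:

```sh
#!/bin/sh
# Recover <host-name> from a fruix-system store path by stripping the
# longest prefix ending in "-fruix-system-". The hash is a made-up example.
closure_path="/frx/store/0abc123-fruix-system-demo-host"
host_name="${closure_path##*-fruix-system-}"
echo "$host_name"   # demo-host
```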

## Canonical command surface

### Build a system closure

```sh
sudo env HOME="$HOME" \
  GUILE_AUTO_COMPILE=0 \
  GUIX_SOURCE_DIR="$HOME/repos/guix" \
  GUILE_BIN="/tmp/guile-freebsd-validate-install/bin/guile" \
  GUILE_EXTRA_PREFIX="/tmp/guile-gnutls-freebsd-validate-install" \
  SHEPHERD_PREFIX="/tmp/shepherd-freebsd-validate-install" \
  ./bin/fruix system build path/to/system.scm --system my-operating-system
```

Primary result:

- `closure_path=/frx/store/...-fruix-system-...`

Use this when you want to:

- validate the declarative system composition itself
- inspect provenance/layout metadata
- compare candidate and current closure paths
- drive later rootfs/image/install steps from the same declaration
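
Because closure paths are content-addressed, "compare candidate and current closure paths" is a plain string comparison. A hedged sketch with fabricated paths; on a real system the active path would come from `/run/current-system` or previously recorded build metadata:

```sh
#!/bin/sh
# Content-addressed closure paths make "is a roll-forward needed?" a plain
# string comparison. Both paths below are fabricated examples.
current_closure="/frx/store/1111aaaa-fruix-system-demo-host"
candidate_closure="/frx/store/2222bbbb-fruix-system-demo-host"

if [ "$candidate_closure" = "$current_closure" ]; then
  decision="no-op"
else
  decision="switch-needed"
fi
echo "$decision"   # switch-needed
```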

### Materialize a rootfs tree

```sh
sudo env HOME="$HOME" ... \
  ./bin/fruix system rootfs path/to/system.scm ./rootfs --system my-operating-system
```

Primary result:

- `rootfs=...`
- `closure_path=/frx/store/...`

Use this when you want to:

- inspect the runtime filesystem layout directly
- stage a tree for debugging
- validate `/run/current-system`-style symlink layout without booting a full image

### Materialize a bootable disk image

```sh
sudo env HOME="$HOME" ... \
  ./bin/fruix system image path/to/system.scm \
  --system my-operating-system \
  --root-size 6g
```

Primary result:

- `disk_image=/frx/store/.../disk.img`

Use this when you want to:

- boot the system directly as a VM image
- test a candidate deployment under QEMU or XCP-ng
- validate a roll-forward or rollback candidate by image boot

### Install directly to an image file or block device

```sh
sudo env HOME="$HOME" ... \
  ./bin/fruix system install path/to/system.scm \
  --system my-operating-system \
  --target ./installed.img \
  --disk-capacity 12g \
  --root-size 10g
```

Primary result:

- `target=...`
- `target_kind=raw-file` or `block-device`
- `install_metadata_path=/var/lib/fruix/install.scm`

Use this when you want to:

- produce an installed target image without booting an installer guest
- validate installation mechanics directly
- populate a raw image or a real `/dev/...` target

### Materialize a bootable installer disk image

```sh
sudo env HOME="$HOME" ... \
  ./bin/fruix system installer path/to/system.scm \
  --system my-operating-system \
  --install-target-device /dev/vtbd1 \
  --root-size 10g
```

Primary result:

- `installer_disk_image=/frx/store/.../disk.img`

Use this when you want to:

- boot a Fruix installer environment as a disk image
- let the in-guest installer partition and install onto a second disk
- validate non-interactive installation from inside a booted Fruix guest

### Materialize a bootable installer ISO

```sh
sudo env HOME="$HOME" ... \
  ./bin/fruix system installer-iso path/to/system.scm \
  --system my-operating-system \
  --install-target-device /dev/vtbd0
```

Primary result:

- `iso_image=/frx/store/.../installer.iso`
- `boot_efi_image=/frx/store/.../efiboot.img`
- `root_image=/frx/store/.../root.img`

Use this when you want to:

- boot through UEFI ISO media instead of a writable installer disk image
- install from an ISO-attached Fruix environment
- test the same install model on more realistic VM paths

### Installed-system generation commands

Installed Fruix systems now also ship a small in-guest deployment helper at:

- `/usr/local/bin/fruix`

Current validated in-guest commands are:

```sh
fruix system status
fruix system switch /frx/store/...-fruix-system-...
fruix system rollback
```

Current intended usage:

1. build a candidate closure on the operator side with `./bin/fruix system build`
2. ensure that candidate closure is present on the installed target's `/frx/store`
3. run `fruix system switch /frx/store/...` on the installed system
4. reboot into the staged candidate generation
5. if needed, run `fruix system rollback`
6. reboot back into the recorded rollback generation

Important current limitation:

- `fruix system switch` does **not** yet fetch or copy the candidate closure onto the target for you
- it assumes the selected closure is already present in the installed system's `/frx/store`
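
Because `switch` assumes the closure already exists locally, a small guard avoids staging a switch to a missing path. This is an illustrative wrapper under stated assumptions, not part of the shipped helper; `guarded_switch` and the throwaway store layout are made up for the demonstration:

```sh
#!/bin/sh
# Refuse to stage a switch unless the candidate closure directory exists.
# `guarded_switch` is a hypothetical wrapper, not a shipped Fruix command;
# it only prints the command it would run.
guarded_switch() {
  candidate="$1"
  if [ -d "$candidate" ]; then
    echo "would run: fruix system switch $candidate"
  else
    echo "missing closure: $candidate" >&2
    return 1
  fi
}

# Demonstrate against a throwaway store layout.
store="$(mktemp -d)/frx/store"
mkdir -p "$store/abc-fruix-system-demo"
guarded_switch "$store/abc-fruix-system-demo"
```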

### In-guest development environment

Opt-in systems can also expose a separate development overlay under:

- `/run/current-system/development-profile`
- `/run/current-development`

Those systems now ship a helper at:

- `/usr/local/bin/fruix-development-environment`

Intended use:

```sh
eval "$(/usr/local/bin/fruix-development-environment)"
```

That helper exports a development-oriented environment while keeping the main runtime profile separate. The validated Phase 20 path currently uses this to expose at least:

- native headers under `usr/include`
- FreeBSD `share/mk` files for `bsd.*.mk`
- Clang toolchain commands such as `cc`, `c++`, `ar`, `ranlib`, and `nm`
- `MAKEFLAGS` pointing at the development profile's `usr/share/mk`

For native base-build compatibility, development-enabled systems also now expose canonical links at:

- `/usr/include -> /run/current-system/development-profile/usr/include`
- `/usr/share/mk -> /run/current-system/development-profile/usr/share/mk`

This is the current Fruix-native way to make a running system suitable for controlled native base-development work without merging development content back into the main runtime profile.

### Host-initiated native base builds inside a Fruix-managed guest

The currently validated intermediate path toward self-hosting is still host-orchestrated.

The host:

1. boots a development-enabled Fruix guest
2. connects over SSH
3. recovers the materialized FreeBSD source store from system metadata
4. runs native FreeBSD build commands inside the guest
5. collects and records the staged outputs

The validated build sequence inside the guest is:

- `make -jN buildworld`
- `make -jN buildkernel`
- `make DESTDIR=... installworld`
- `make DESTDIR=... distribution`
- `make DESTDIR=... installkernel`

For staged install steps, the validated path uses:

- `DB_FROM_SRC=yes`

so the staged install is driven by the declared source tree's account database rather than by accidental guest-local `/etc/master.passwd` contents.

This is the current Phase 20.2 answer to “where should native base builds run?”

- **inside** a Fruix-managed FreeBSD environment
- but still with the **host** driving the outer orchestration loop

### Controlled guest self-hosted native-build prototype

Fruix now also has a narrower in-guest prototype helper at:

- `/usr/local/bin/fruix-self-hosted-native-build`

Intended use:

```sh
FRUIX_SELF_HOSTED_NATIVE_BUILD_JOBS=8 \
  /usr/local/bin/fruix-self-hosted-native-build
```

That helper:

1. verifies the development overlay and canonical compatibility links
2. recovers the materialized FreeBSD source store from:
   - `/run/current-system/metadata/store-layout.scm`
3. runs the native FreeBSD build/install phases inside the guest
4. records staged results under:
   - `/var/lib/fruix/native-builds/<run-id>`
   - `/var/lib/fruix/native-builds/latest`
5. emits promotion metadata for first-class artifact identities covering:
   - `world`
   - `kernel`
   - `headers`
   - `bootloader`
6. keeps the heavier object/stage work under:
   - `/var/tmp/fruix-self-hosted-native-builds/<run-id>`

Important current detail:

- the self-hosted helper intentionally **sanitizes** development-shell exports such as `MAKEFLAGS`, `CPPFLAGS`, `CFLAGS`, `CXXFLAGS`, and `LDFLAGS` before `buildworld`
- directly reusing the full development-shell environment polluted FreeBSD's bootstrap path and was not reliable enough for real world/kernel builds

So the validated Phase 20.3 answer is:

- a controlled guest self-hosted base-build prototype now works
- but the simpler default operator flow should still be the Phase 20.2 host-initiated in-guest path unless there is a specific reason to push the build loop farther into the guest

### Promoting native-build results into first-class Fruix store objects

The guest-side result root is now explicitly a **staging/result area**, not the final immutable identity.

Current validated flow:

1. run the in-guest helper so the guest records a result under:
   - `/var/lib/fruix/native-builds/<run-id>`
2. copy that result root back to the host
3. run:

```sh
fruix native-build promote RESULT_ROOT
```
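
Before copying a result root back and promoting it, it can be worth cheaply checking that all four expected artifact kinds are present. This sketch assumes one `<kind>/` subdirectory per artifact, which is an illustrative layout guess rather than the documented result-root schema, and `check_result_root` is a hypothetical helper:

```sh
#!/bin/sh
# Sanity-check a staged native-build result root for the four artifact
# kinds the promote step covers (world, kernel, headers, bootloader).
# The per-kind subdirectory layout is an assumption for illustration.
check_result_root() {
  root="$1"
  for kind in world kernel headers bootloader; do
    if [ ! -d "$root/$kind" ]; then
      echo "missing artifact: $kind" >&2
      return 1
    fi
  done
  echo "result root ok: $root"
}

# Demonstrate against a throwaway result root.
result_root="$(mktemp -d)"
mkdir -p "$result_root/world" "$result_root/kernel" \
  "$result_root/headers" "$result_root/bootloader"
check_result_root "$result_root"
```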

The promotion step creates immutable `/frx/store` identities for:

- `world`
- `kernel`
- `headers`
- `bootloader`

and also creates a result-bundle store object that references those promoted artifact stores.

Current metadata split:

- mutable staging/result root:
  - `/var/lib/fruix/native-builds/<run-id>`
- immutable artifact stores:
  - `/frx/store/...-fruix-native-world-...`
  - `/frx/store/...-fruix-native-kernel-...`
  - `/frx/store/...-fruix-native-headers-...`
  - `/frx/store/...-fruix-native-bootloader-...`
- immutable result bundle:
  - `/frx/store/...-fruix-native-build-result-...`

The promoted store objects record explicit Fruix-native metadata including at least:

- executor / executor-version
- run-id / guest-host-name
- closure path
- source store provenance
- build policy
- artifact kind
- required-file expectations
- recorded content signatures and hashes

This is the current Fruix-native answer to the question:

- where should mutable native-build state live?
  - `/var/lib/fruix/native-builds/...`
- where should immutable native-build identity live?
  - `/frx/store/...`

## Deployment patterns

### 1. Build-first workflow

The default Fruix operator workflow starts by building the closure first:

1. edit the system declaration
2. run `fruix system build`
3. inspect emitted metadata
4. if needed, produce one of:
   - `rootfs`
   - `image`
   - `install`
   - `installer`
   - `installer-iso`

This keeps the declaration-to-closure boundary explicit.
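
The "produce one of" step can be pinned down as a tiny dispatcher over the documented artifact kinds. `materialize_command` is a hypothetical helper name introduced only for this sketch:

```sh
#!/bin/sh
# Map an artifact kind to its documented `fruix system` subcommand.
# `materialize_command` is an illustrative helper, not a shipped tool.
materialize_command() {
  case "$1" in
    rootfs|image|install|installer|installer-iso)
      echo "fruix system $1" ;;
    *)
      echo "unknown artifact kind: $1" >&2
      return 1 ;;
  esac
}

materialize_command installer-iso   # fruix system installer-iso
```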

### 2. VM image deployment workflow

Use this when you want to boot a system directly rather than through an installer.

1. run `fruix system image`
2. boot the image in QEMU or convert/import it for XCP-ng
3. validate:
   - `/run/current-system`
   - shepherd/sshd state
   - activation log
4. keep the closure path from the build metadata as the deployment identity

This is the current canonical direct deployment path for already-built images.

### 3. Direct installation workflow

Use this when you want an installed target image or disk without a booted installer guest.

1. run `fruix system install --target ...`
2. let Fruix partition, format, populate, and install the target
3. boot the installed result
4. validate `/var/lib/fruix/install.scm` and target services

This is the most direct install path.

### 4. Installer-environment workflow

Use this when the install itself should happen from inside a booted Fruix environment.

1. run `fruix system installer`
2. boot the installer disk image
3. let the in-guest installer run onto the selected target device
4. boot the installed target

This is useful when the installer environment itself is part of what needs validation.

### 5. Installer-ISO workflow

Use this when the desired operator artifact is a bootable UEFI ISO.

1. run `fruix system installer-iso`
2. boot the ISO under the target virtualization path
3. let the in-guest installer run onto the selected target device
4. eject the ISO and reboot the installed target

This is now validated on both:

- local `QEMU/UEFI/TCG`
- the approved real `XCP-ng` VM path

## Install-target device conventions

The install target device is not identical across all boot styles.

Current validated defaults are:

- direct installer disk-image path under QEMU:
  - `/dev/vtbd1`
- installer ISO path under QEMU:
  - `/dev/vtbd0`
- installer ISO path under XCP-ng:
  - `/dev/ada0`

Therefore the canonical rule is:

- always treat `--install-target-device` as an explicit deployment parameter when moving between virtualization environments

Do not assume that a device name validated in one harness is portable to another.
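
The validated defaults above can be recorded in one place rather than remembered per harness. A sketch; the harness labels are illustrative keys invented here, not Fruix-defined names:

```sh
#!/bin/sh
# Map the documented harness cases to their validated default install
# target devices. The harness label strings are illustrative keys only.
default_install_target_device() {
  case "$1" in
    qemu-installer-image) echo /dev/vtbd1 ;;
    qemu-installer-iso)   echo /dev/vtbd0 ;;
    xcp-ng-installer-iso) echo /dev/ada0 ;;
    *) echo "unknown harness: $1" >&2; return 1 ;;
  esac
}

default_install_target_device xcp-ng-installer-iso   # /dev/ada0
```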

## Installed-system generation layout

Installed Fruix systems now record an explicit first-generation deployment layout under:

- `/var/lib/fruix/system`

Initial installed shape:

```text
/var/lib/fruix/system/
  current -> generations/1
  current-generation
  generations/
    1/
      closure -> /frx/store/...-fruix-system-...
      metadata.scm
      provenance.scm
      install.scm   # present on installed targets
```

After a validated in-place switch, the layout extends to:

```text
/var/lib/fruix/system/
  current -> generations/2
  current-generation
  rollback -> generations/1
  rollback-generation
  generations/
    1/
      ...
    2/
      closure -> /frx/store/...-fruix-system-...
      metadata.scm
      provenance.scm
      install.scm   # deployment metadata for the switch operation
```

Installed systems also now create explicit GC-root-style deployment links under:

- `/frx/var/fruix/gcroots`

Current validated shape:

```text
/frx/var/fruix/gcroots/
  current-system -> /frx/store/...-fruix-system-...
  rollback-system -> /frx/store/...-fruix-system-...
  system-1 -> /frx/store/...-fruix-system-...
  system-2 -> /frx/store/...-fruix-system-...
```

Important detail:

- `/run/current-system` still points directly at the active closure path in `/frx/store`
- the explicit generation layout therefore adds deployment metadata and retention roots without changing the already-validated runtime contract used by activation, rc.d wiring, and tests
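
Because `current` and `rollback` are plain symlinks under `/var/lib/fruix/system`, the staged generation state can be inspected with `readlink`. A sketch against a throwaway copy of the documented post-switch layout (only the link shape mirrors the real layout; everything else here is scratch):

```sh
#!/bin/sh
# Recreate the documented post-switch generation layout in a scratch
# directory and resolve the current/rollback pointers with readlink.
sysroot="$(mktemp -d)/var/lib/fruix/system"
mkdir -p "$sysroot/generations/1" "$sysroot/generations/2"
ln -s generations/2 "$sysroot/current"
ln -s generations/1 "$sysroot/rollback"

current_target="$(readlink "$sysroot/current")"
rollback_target="$(readlink "$sysroot/rollback")"
echo "current -> $current_target"     # current -> generations/2
echo "rollback -> $rollback_target"   # rollback -> generations/1
```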

## Roll-forward workflow

The current Fruix roll-forward model now has two validated layers.

### Declaration/deployment roll-forward

Canonical process:

1. keep the current known-good system declaration
2. prepare a candidate declaration; it may differ by:
   - FreeBSD base identity
   - source revision
   - services
   - users/groups
   - other operating-system fields
3. run `fruix system build` for the candidate
4. materialize one of:
   - `fruix system image`
   - `fruix system install`
   - `fruix system installer`
   - `fruix system installer-iso`
5. boot or install the candidate
6. validate the candidate closure in the booted system

### Installed-system generation roll-forward

When the candidate closure is already present on an installed target:

1. run `fruix system switch /frx/store/...candidate...`
2. confirm the staged state with `fruix system status`
3. reboot into the candidate generation
4. validate the new active closure after reboot

The important property is still that the candidate closure appears beside the earlier one in `/frx/store` rather than mutating it in place.

## Rollback workflow

The current canonical rollback workflow also now has two validated layers.

### Declaration/deployment rollback

You can still roll back by redeploying the earlier declaration:

1. retain the earlier declaration that produced the known-good closure
2. rebuild or rematerialize that earlier declaration
3. redeploy or reboot that earlier artifact again

Concretely, the usual declaration-level rollback choices are:

- rebuild the earlier declaration with `fruix system build` and confirm the old closure path reappears
- boot the earlier declaration again through `fruix system image`
- reinstall the earlier declaration through `fruix system install`, `installer`, or `installer-iso` if the deployment medium itself must change

### Installed-system generation rollback

When an installed target already has both the current and rollback generations recorded:

1. run `fruix system rollback`
2. confirm the staged state with `fruix system status`
3. reboot into the rollback generation
4. validate the restored active closure after reboot

This installed-system rollback path is now validated on local `QEMU/UEFI/TCG`.

### Important scope note

This is still not the same thing as Guix's full `reconfigure`/generation UX.

Current installed-system rollback is intentionally modest:

- it switches between already-recorded generations on the target
- it does not yet fetch candidate closures onto the machine for you
- it does not yet expose a richer history-management or generation-pruning policy

Still pending:

- operator-facing closure transfer or fetch onto installed systems
- multi-generation lifecycle policy beyond the validated `current` and `rollback` pointers
- a fuller `reconfigure`-style installed-system UX

## Provenance and deployment identity

For any serious deployment or rollback decision, the canonical identity is not merely the host name. It is the emitted metadata:

- `closure_path`
- declared FreeBSD base/source metadata
- materialized source store paths
- install metadata at `/var/lib/fruix/install.scm`
- store item counts and reference lists

Operators should retain metadata from successful candidate and current deployments, because Fruix already emits enough data to answer:

- which declaration was built
- which closure booted
- which source snapshot was materialized
- which target device or image was installed

## Current limitations

The deployment workflow is now coherent, and Fruix has a validated installed-system switch/rollback path, but this is still not the final generation-management story.

Not yet first-class:

- host-side closure transfer/fetch onto installed systems as part of `fruix system switch`
- a fuller `reconfigure` workflow that builds and stages the new closure from inside the target environment
- multi-generation lifecycle policy beyond the validated `current` and `rollback` pointers
- generation pruning and retention policy independent of full redeploy

Those are the next logical steps after the current explicit-generation switch/rollback model.

## Summary

The current canonical Fruix deployment model is:

- **declare** a system in Scheme
- **build** the closure with `fruix system build`
- **materialize** the artifact appropriate to the deployment target
- **boot or install** that artifact
- **identify** deployments by closure path and provenance metadata
- on installed systems, **switch** to a staged candidate with `fruix system switch`
- on installed systems, **roll back** to the recorded rollback generation with `fruix system rollback`
- fall back to declaration/redeploy rollback when the target does not already have the desired closure staged locally

That is the operator-facing workflow Fruix should document and use while its installed-system generation UX remains simpler than Guix's mature in-place system-generation workflow.