#bootc

2025-06-19

I got a new laptop purely for better Rust compilation performance. Now, I'm almost done setting up my new bootable container-based Linux system. But I'm starting to worry about its performance. After all, I'm adding an extra layer (a container layer) on top of my bare-metal machine, which could introduce even a 0.001% performance degradation. I've already witnessed some performance degradation with Podman/Docker containers in my server-side work. #linux #bootc #AtomicDesktop #fedora

FrostyX (FrostyX@fosstodon.org)
2025-06-17

@cgwalters we started using bootc images for Copr builders and so far everything works great.

frostyx.cz/posts/copr-builders

#fedora #copr #bootc

2025-06-13

Join the second keynote of #DevConf_CZ 2025.
@kubealex and Luca will tell us what changed in the past year in the area of #Bootc!
See you at 9:30 in the D105 room.

Adam Williamson (adamw@fosstodon.org)
2025-06-13

yesterday in #fedora qa:
* #devconf day one, saw several talks, some AI ones which were interesting but had a tendency to end "...and then it didn't work" and/or involve way too much setup work. Some really interesting #bootc stuff, though
* sent github.com/rpm-software-manage to work around false failures in dnf repoclosure, and filed pagure.io/releng/issue/12777 about reconciling it and spam-o-matic (an old script that does repoclosure-y stuff)
* sent github.com/rhinstaller/anacond to fix issues.redhat.com/browse/RHEL-

Joel Wirāmu Pauling (jwp@cloudisland.nz)
2025-06-13

Work shill. I will be running a 2 hour interactive workshop next week on #redhat #linux #imagemode AKA #bootc upstream and what forms the basis of a few really cool community projects including #bazzite / #universalblue

This is an #APAC/#ANZ friendly time-slot next Tuesday - register here : red.ht/3ZrFJrK

2025-06-09

Made some good progress today building a Fedora bootc based OS image for my home server. Mostly reading docs and example code, and setting up a skeleton GitHub repo with actions to build and publish the thing.

Next I want to practice installing it. I hear #anaconda can be used to install bootc images in an ad-hoc way. That seems like the simplest for me since I want to poke at disk layouts, and have no need to automate the install.
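
For the record, the non-interactive way to point Anaconda at a bootc image is a kickstart with the ostreecontainer command; a minimal, hypothetical sketch (the image URL is a placeholder, and disk layout would still be handled interactively or with further kickstart directives):

# Hypothetical kickstart fragment: ostreecontainer tells Anaconda to deploy
# a bootable container image instead of installing individual RPMs
ostreecontainer --url=ghcr.io/example/my-bootc-image:latest --no-signature-verification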

#Fedora #bootc

2025-06-09

My Aurora desktop 😊

Ptyxis terminal is great for toolbox container integration. I'm used to a Quake-style dropdown terminal, so I'd like to find a way to hide/show the terminal window with a keyboard shortcut (alt+z). There is possibly a way to do it by writing a kwin script.

I'm eventually going to build my own custom image with the ublue-os/image-template repository.

#ublueaurora #Universalblue #Fedora #Kinoite #Plasma6 #cloudnative #bootc

Screenshot of Aurora Plasma 6 desktop. The Ptyxis terminal emulator is in the middle of the screen showing the output of Aurora's custom fastfetch program.

Manage your Linux systems like a container!

I’ve got to tell you, I have not been so excited about a technology… probably since Containers. At Summit this year Red Hat announced the General Availability of Image Mode for RHEL. So I got to spend a week in Boston, explaining, over and over again, why that’s important.

See, Image Mode is kind of a big deal. It takes container workflows and applies them to your data center servers using a technology called bootc. The concept isn't exactly new; this sort of technology has been applied to edge devices, phones, and other appliances for years. But what we have now is a general-purpose Linux that you can update using a bootable container image. This changes things.

So think about a Linux system as you know it today. We’re calling that Package Mode now in order to avoid confusion. RHEL Package Mode is a Linux base, with a package manager, where you install and configure things, and then fight to keep those things from drifting pretty much from then until eternity. There’s a whole facet of the IT industry around mitigating that drift. Package and config management is a huge business! For good reason! Drift is what makes your routine 2AM maintenance into a panic attack when the database server doesn’t come back up.

So I talked a lot about Image Mode at Summit, but I have to admit, I hadn't touched it yet! So now that I'm back home, and my time is a little less consumed by prep for the RHEL 10 release and Summit deadlines, I decided to take some time and get hands-on with this revolutionary thing.

Building a pipeline

So, I use GitLab Community Edition as a repository for a few container builds I maintain. Some time back I managed to get the CI/CD pipelines working for my container builds. These were nothing fancy, but they work. I commit a change to the repository, and a job kicks off to rebuild the container and push it into a registry. In some cases that's just the internal GitLab registry, in others it's Docker Hub. I, of course, do it all with Podman. So when I decided to tackle Image Mode, I thought it would be best to just rip that band-aid right off, do it in GitLab, and have the builds happen there. How hard could it be? I already had container builds running there!

So I made a repo, and copied my CI config from one of the container builds that just used podman and the local registry, and threw in a basic Containerfile that just sourced FROM the RHEL bootc base image, and then did a package install. Commit, sit back in my arrogance and wait for my image.

It failed. For reasons I still don't fully understand, the container build uses fuse-overlayfs, and it couldn't do so inside my runner's Podman-in-Podman build container. I did some research, and luckily I have access to internal Red Hat knowledge, so I was able to bounce some ideas around and come up with a solution. Two things, actually: my runner needed some config changes. Here, I'll share them with you.

Here is my Runner config

[[runners]]
  name = "dind-container"
  url = "https://git.undrground.org"
  id = 3
  token = "NoTokenForYou"
  token_obtained_at = somedatestamp
  token_expires_at = someotherdatestamp
  executor = "docker"
  environment = ["FF_NETWORK_PER_BUILD=1"]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:git"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    network_mtu = 0

The things I had to add were, first, privileged = true, which gives the container the access it needs to do its fuse-overlayfs work, and second, the environment variable "FF_NETWORK_PER_BUILD=1", which I believe tweaks the networking in a way that fixed a DNS resolution problem I was having in my builds.

With that fixed, I was able to get builds working! I have two things to share that may help you if you are trying to do the same. First, another Red Hatter built a public example repo that will apparently "just work" if you use it as a base for your Image Mode CI/CD. It didn't work for me, but I suspect that was more about my GitLab setup and less about the functionality of the example. You can find that example here. What I ended up doing was modifying my existing Podman CI file. That looks like this:

---
image: registry.undrground.org/gangrif/podman-builder:latest

#services:
#    - docker:dind

before_script:
    - dnf -y install podman git subscription-manager buildah skopeo
    - subscription-manager register --org=${RHT_ORGID} --activationkey=${RHT_ACT_KEY}
    - subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms --enable rhel-9-for-x86_64-baseos-rpms
    - export REVISION=$(git rev-parse --short HEAD)
    - podman login --username gitlab-ci-token --password $CI_JOB_TOKEN $CI_REGISTRY
    - podman login --username $RHLOGIN --password "$RHPASS" registry.redhat.io

after_script:
    - podman logout $CI_REGISTRY
    - subscription-manager unregister

stages:
    - build

containerize:
    stage: build
    script:
        - podman build --secret id=creds,src=/run/containers/0/auth.json --build-arg GIT_HASH=$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -t $CI_REGISTRY_IMAGE:latest .
        - podman push $CI_REGISTRY_IMAGE

Now, this example contains no verification or validation, so I suggest you maybe look into the proper example linked externally. That one has a lot of testing included. Mine will improve with time. 😉

Registry Authentication for your build

Now, there are a few things to note here. First, notice that I am not just logging into my own registry, but also registry.redhat.io. You register with your Red Hat login for the Red Hat private registry, and that's where the bootc base images come from. I also use subscription-manager to register the build container to Red Hat's CDN. That's because the RHEL Image Mode build is building RHEL, and must be done on an entitled host in order to receive any updates or packages during the container build. This was something I had gotten stuck on for some time; it's a little tough to wrap your head around. Once you do, though, it makes sense.

Authenticating your bootc system with your registry, automatically

I am also passing the podman authentication token file into a podman secret at build time. This is important later. If your bootc images are stored in a registry that is not public, you will need to authenticate to that registry in order to pull your updated images after deployment. The easiest way to bake in that authentication is to simply take the authentication from the build host, and place it into the built image. There is some trickery that happens in your Containerfile to make this work. You can read more about this here.
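
The trickery boils down to copying the auth file into /usr during the build and then linking it to the paths where bootc and Podman look for credentials at runtime. As a rough sketch of what a tmpfiles.d drop-in like link-podman-credentials.conf might contain (the exact link targets are my assumption; check the bootc documentation for the authoritative version):

# Hypothetical tmpfiles.d entries: recreate symlinks at boot so Podman and
# bootc find the registry credentials baked into /usr/lib/container-auth.json
L /etc/containers/auth.json - - - - /usr/lib/container-auth.json
L /run/containers/0/auth.json - - - - /usr/lib/container-auth.json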

Containerfile

So, I told you we build Image Mode like a container. I meant it. We literally write a Containerfile and source it FROM these special bootc images that are published by Red Hat. There are a few things you'll want to think about when building a bootc Containerfile vs. a standard application container; things you wouldn't normally consider when building a normal container.

Content

First, RHEL is entitled software, and that doesn't change for RHEL Image Mode. This is pretty seamless if you are doing your build directly on an entitled RHEL system. But if you're in a UBI container like I am, you'll need to subscribe the UBI container, because the bootc build depends on that entitlement to enable its own repositories. That is not true, however, for third-party public repositories. Those just get enabled right inside of the Containerfile. This sounds confusing, but it boils down to this: RHEL repository? Entitled by the build host. Other repository? Add it via the Containerfile. I add EPEL in my example below.

Users

Something else I don't usually see done in a standard container is the addition of users. Remember, this is going to be a full RHEL host at the other end, so you might need to add users. In my case I am adding a local "breakglass" user, because I am leveraging IdM for my identities. But if something goes wrong during provisioning, I want a user I can log in to the system with to troubleshoot. You can also come in later with other tools to add users. You can enable cloud-init and add them there, or if you are using the image builder tool I'll talk about in a bit, you can give it a config.toml file to add users at that point (a rough sketch of that follows).
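
For reference, a bootc-image-builder config.toml that adds a user at disk-image-conversion time looks roughly like this (the name, password, key, and groups here are placeholders):

# Hypothetical bootc-image-builder config.toml sketch: add a user when
# converting the bootable container image into a disk image
[[customizations.user]]
name = "breakglass"
password = "changeme"
key = "ssh-ed25519 AAAA... user@example.com"
groups = ["wheel"]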

Other Considerations

Other things that you’ll need to think about might be firewall rules, container registry authentication, and even the lack of an ENTRYPOINT or CMD. Because this system is expected to boot into a full OS, it is not going to run a single dedicated workload. Instead you’ll be enabling services like you would on a standard RHEL system, with systemctl.
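
In Containerfile terms, that usually means enabling units and opening ports at build time rather than defining a CMD. A rough sketch, using Cockpit purely as an example of my own choosing:

# Hypothetical Containerfile fragment: enable a service and open its firewall
# port at build time; note there is no ENTRYPOINT or CMD in a bootc image
RUN dnf -y install cockpit firewalld && dnf clean all
RUN firewall-offline-cmd --add-service=cockpit
RUN systemctl enable cockpit.socket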

My Containerfile

Now that we’re through all of that, let me show you what I ended up with as a Containerfile.

FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Enable EPEL, install updates, and install some packages
RUN dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
RUN dnf -y update
RUN dnf -y install ipa-hcc-client rhc rhc-worker-playbook cloud-init && dnf clean all

# This sets up automatic registration with Red Hat Insights
COPY --chmod=0644 rhc-connect.service /usr/lib/systemd/system/rhc-connect.service
COPY .rhc_connect_credentials /etc/rhc/.rhc_connect_credentials
RUN systemctl enable rhc-connect && touch /etc/rhc/.run_rhc_connect_next_boot

# This is my backdoor user, in case of IdM join failure
RUN useradd breakglass
RUN usermod -p '$6$s0m3pAssw0rDHasH' breakglass
RUN groupmems -g wheel -a breakglass

# This picks up that podman pull secret, and adds it to the build image
COPY link-podman-credentials.conf /usr/lib/tmpfiles.d/link-podman-credentials.conf
RUN --mount=type=secret,id=creds,required=true cp /run/secrets/creds /usr/lib/container-auth.json && \
    chmod 0600 /usr/lib/container-auth.json && \
    ln -sr /usr/lib/container-auth.json /etc/ostree/auth.json

# This configures the bootc update timer to run at a time that I consider acceptable
RUN mkdir -p /etc/systemd/system/bootc-fetch-apply-updates.timer.d/
COPY weekly-timer.conf /etc/systemd/system/bootc-fetch-apply-updates.timer.d/weekly.conf

You can see from my comments what's going on in the various blocks in that Containerfile. My intention is to use this as a base RHEL system, and then make more derivative images based on this one. For instance, if I wanted a web server, I would base a new Containerfile on this image and add a RUN dnf install httpd (a sketch of what that could look like is below). It's important to note that you shouldn't be installing packages on these deployed systems after they are up and running. Those installations should happen in the image. If you install a package on a running Image Mode system, that change will not be carried into the next image update unless you then incorporate it into your bootable container image. This means that you will need to plan ahead, but it also means that tracking package drift in the future is a thing of the past!
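
A derivative web server image might look something like this rough sketch (the FROM line points at my base image above; the rest is just an illustration):

# Hypothetical derivative Containerfile: layer a web server on top of the base image
FROM registry.undrground.org/gangrif/rhel9-imagemode:latest
RUN dnf -y install httpd && dnf clean all
RUN systemctl enable httpd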

In my case, the above-mentioned CI automation and this Containerfile worked in my GitLab instance, with the above Runner modifications. The build job will take some time; a bootc image is much larger than the lightweight container images you're used to if you've been building application containers.

But what about turning that into a VM?

So I am covering but ONE method of getting this image deployed to an actual system. You can use any of a myriad of methods, including Kickstart, writing an ISO, or PXE boot, but what I am doing (because it suits my needs) is turning my image into a qcow2 file, which is a virtual disk image for use with libvirt. If you're familiar with Image Builder, the tool used to churn out tailored RHEL disk images, then this won't be a surprise. There's a container you can grab that just runs Image Builder; you give it a bootable container image, and it turns it into a qcow2! I've cooked up a script that pulls my bootable container right from my registry, writes it to a qcow2, then immediately passes that to virt-install and builds a VM out of it!

In my case, it also uses cloud-init to set its hostname, auto-registers and connects to Insights, and then uses a slick new tech preview feature that auto-joins my lab's IdM domain through Insights! Here is my script:

#!/bin/bash
VMNAME=$1

podman login --username my-gitlab-username -p 'gitlab-token' registry.undrground.org
podman login --username my-redhat-login -p 'redhatpassword' registry.redhat.io
podman pull registry.undrground.org/gangrif/rhel9-imagemode:latest

sudo podman run \
    --rm \
    -it \
    --privileged \
    --pull=newer \
    --security-opt label=type:unconfined_t \
    -v $(pwd)/config.toml:/config.toml \
    -v $(pwd)/output:/output \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    registry.redhat.io/rhel9/bootc-image-builder:latest \
    --type qcow2 \
    registry.undrground.org/gangrif/rhel9-imagemode:latest

cat << EOF > $VMNAME.init
#cloud-config
fqdn: $VMNAME.idm.undrground.org
EOF

mv $(pwd)/output/qcow2/disk.qcow2 /var/lib/libvirt/images/$VMNAME-disk0.qcow2

virt-install \
    --name $VMNAME \
    --memory 4096 \
    --vcpus 2 \
    --os-variant rhel9-unknown \
    --import \
    --clock offset=localtime \
    --disk=/var/lib/libvirt/images/$VMNAME-disk0.qcow2 \
    -w bridge=bridge20-lab \
    --autoconsole none \
    --cloud-init user-data=$VMNAME.init

This, of course, can be improved, but as a proof of concept it works great! I've built a few test systems and so far it's working flawlessly! Now, when I want to update my systems, I update the GitLab repository with the changes and let the CI run. Then once it completes, all I do is run this script to make a new VM! The running VMs should (I have not tested this yet) get the updated bootable container image from the registry on Saturday at 3AM, and reboot if new changes are applied.
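
That Saturday 3AM schedule comes from the weekly-timer.conf drop-in copied into the image earlier. I haven't shown its contents, but a drop-in along these lines should do it (the exact schedule line is my assumption):

# Hypothetical weekly.conf drop-in for bootc-fetch-apply-updates.timer:
# reset the default triggers and fetch/apply updates early Saturday morning
[Timer]
OnCalendar=
OnCalendar=Sat *-*-* 03:00:00
Persistent=true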

Wrapping it up

This is, I think, the thing we've been promised for years. Ever since the advent of the cloud, we've been told that we should stop treating our servers like pets, but never really given a clear picture of how. Image Mode makes that promise a reality. I'm certain I'll be sharing more as my Image Mode journey progresses. Thanks for reading!


#bootc #cloud #image #imageMode #linux #redHat #redHatEnterpriseLinux #rhel #services

2025-06-03

Day 2 of #hackdays2025: #EU_OS now has a new Proof-of-Concept section on the website (work in progress). The team enrolled some #bootc machines in #Foreman. The results so far were pitched to a selection board. Thanks to the great team who made this all possible! 🙏

Unfortunately, #EU_OS has not been selected to proceed to the next stage.

Robert Riemann 🇪🇺 (rriemann@chaos.social)
2025-06-02

@escapetofreedom @eu_os @opensuse

Please check out eu-os.eu/faq#fedora

#opensuse does not support #bootc so far. Otherwise EU OS proof of concept is agnostic to the distro.

🌈 Kerblambuli 🦄 (4392@GPN23) (ChrisUplus@chaos.social)
2025-06-02

Finally! It seems that both bootc + rpm-ostree in Fedora 42 know how to handle zstd:chunked OCI bundles.

So both the improved compression of the Zstandard algorithm AND its inherent capability to produce a segmented output stream that is a series of compressed files with embedded metadata, which can be deduplicated over the network (!), are supported.

Perfect addition to OSTree chunks, which are small, grouped blobs (e.g. pairs of metadata and one or more packages).
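
For context, producing zstd:chunked layers is mostly a matter of the compression format chosen at push time; a minimal sketch with Podman (the image name is a placeholder):

# Hypothetical example: push an image with zstd:chunked layer compression
podman push --compression-format zstd:chunked ghcr.io/example/my-bootc-image:latest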

#bootc #ostree #fedora42

Freya (Venefilyn@snug.moe)
2025-05-29

Anyone in bootc or container land who can lend a hand?

You can see the labels at the bottom here, for example:
github.com/Venefilyn/veneos/pkgs/container/veneos/426508416?tag=stable.2025-05-29

#bootc #containers #ostree #linux

2025-05-27

6 days until #EU_OS at Paris #HackDays: EU OS is a so-called atomic operating system that offers interruption-free software updates thanks to #bootc (bootable container) technology. While people work, bootc downloads and installs the update in the background, unnoticed by the user. After the next computer restart (triggered by the user), EU OS switches to the new version. If something breaks, the user can roll back. 1/2

#Linux #DigitalSovereignty #Microsoft #Windows #Trump #Khan #Tariffs #sysadmin #endof10

Atomic Linux software update process: download update in background + switch to update during boot. User files stay untouched.
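
On the machine itself, that staged-update-and-rollback flow corresponds roughly to a handful of bootc commands (shown here only as an illustration):

# Illustration: the staged update / rollback cycle on a bootc system
bootc upgrade     # fetch and stage the new image in the background
bootc status      # show the booted, staged, and rollback deployments
bootc rollback    # queue a switch back to the previous image for the next boot
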
Robert Riemann 🇪🇺rriemann@chaos.social
2025-05-26

Dear @debby ,

can you please first check out our FAQ at eu-os.eu/faq#sovereignty

So far, #LinuxMint does not support #bootc, which EU OS relies on to enable collaboration through sharing of container layers.

@eu_os @opensuse @linuxmint

2025-05-26

Excited to share that I'll be presenting at Flock this year in Prague! I'll be sharing my experience implementing Bootable Containers with real workloads, showing what is really possible today with this amazing technology! If you'll be there or have any suggestions about Prague, please let me know!

cfp.fedoraproject.org/flock-to

#Fedora #Flock #bootc

Robert Riemann 🇪🇺rriemann@chaos.social
2025-05-12

@axiomatisch @eu_os

I recommend you read blues.win/posts/joy-of-linux-t, which poses the very relevant question: "what is a Linux distribution anyway?"

So whether EU OS is a distribution in itself pretty much depends on the definition of distribution. I think the definition should be narrow, and then #EU_OS would not be a distribution.

#winblues #bootc

2025-05-09

Who wants #gentoo portage to be using an atomic model, with #ostree and/or #bootc underneath?

2025-05-05

Podman Desktop is just so awesome. I'm going to create a bootable container disk image for AlmaLinux 9.5, which I will upload to netcup and install on a VPS.

#Podman #bootc #FOSS #AlmaLinux

Screenshot of the Podman Desktop application, showing the bootable containers disk image builder.
2025-05-05

What changed in the past year in the area of #Bootc?
Join @kubealex and Luca on the second #Keynote of #DevConf_CZ 2025 to learn about all the latest developments!

More information 👉 pretalx.devconf.info/devconf-c
