#Ceph

2025-06-14

Ok probably another week to go until I get these drives running

I should probably use that time to figure out backups for this current pool...

If the bulk storage pool in Ceph works well enough I'll ship the current file server off to my parents', but for now I'll be able to use the LAN to do the initial syncs

S3-compatible backup via Velero is one option, with MinIO or Garage running in a container backed by ZFS (I am not building a remote Ceph cluster 😅)

Anyone have thoughts/suggestions on backup strategy here? Probably backing up 2-3TB of data total (lots of photos)

I'll end up with local snapshots and remote backups; huge bonus if the backups can be recovered without needing a Ceph cluster to restore to in case of something catastrophic.
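
Not speaking for the author, but a minimal sketch of the Velero + S3-compatible route could look roughly like this; the bucket, endpoint, namespace and schedule are made-up placeholders, and it assumes the velero-plugin-for-aws plus Velero's file-system backup (kopia/restic), which keeps the backup data in plain object storage rather than tied to Ceph snapshots:

# velero-s3.yaml (illustrative sketch, not the author's actual config)
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws                      # velero-plugin-for-aws speaks S3 to MinIO/Garage
  objectStorage:
    bucket: homelab-backups          # hypothetical bucket
  config:
    region: minio                    # arbitrary value for non-AWS S3
    s3ForcePathStyle: "true"
    s3Url: http://backup-box:9000    # hypothetical MinIO/Garage endpoint
---
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-media
  namespace: velero
spec:
  schedule: "0 3 * * *"              # 03:00 every night
  template:
    includedNamespaces:
      - media                        # hypothetical namespace holding the photo PVCs
    snapshotVolumes: false           # skip CSI/Ceph snapshots
    defaultVolumesToFsBackup: true   # file-level copy via kopia/restic into the bucket
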
#HomeLab #Ceph

2025-06-11

#proxmox #ceph #raid #hp On my test setup for a Proxmox HCI Ceph cluster I now have each Ceph disk configured as a single-disk RAID0 logical drive so I can add them to Ceph. Even though there is a warning that disks behind RAID controllers are not supported, it has worked without errors so far.

The HP Gen8 Smart Array does not support a mixed RAID/HBA mode, so I am forced to do it this way.

Michael DiLeo on GoToSocial (mdileo@michaeldileo.org)
2025-06-11

More progress in setting up #talos and #kubernetes!

Because my provider, Netcup, doesn't have a firewall in front of the #vps, I want to set up a #wireguard server to secure things, but that requires storage. Last time I finally got talos to split the SSD into volumes, one part for ephemeral talos, and the rest for #ceph and #ceph-rook.

But for that to work, I also had to do something with #fluxcd (at least as part of the guide I'm following). I think it's working! There's still more to do as far as cleanup and continuing, but I should be able to get #kustomize working soon!
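
For what it's worth, the Flux side of such a setup usually boils down to a GitRepository plus a Kustomization along these lines; the repo URL, path and names below are invented, not taken from the guide being followed:

# flux-kustomization.yaml (illustrative sketch)
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab                      # hypothetical repo name
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/homelab   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: rook-ceph
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/n1/rook-ceph      # hypothetical path in the repo
  prune: true
  sourceRef:
    kind: GitRepository
    name: homelab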

Then, I'll follow more setup steps so that I can finally do what I could have done with #docker on regular #linux lol.

#overcomplicatingThings #putItOnTheResume

Screenshot of k9s showing various Kubernetes services running: Flux (for CI/CD) and Ceph (for volume management).
Michael DiLeo on GoToSocial (mdileo@michaeldileo.org)
2025-06-11

I just discovered something about how to handle #talos in my #selfhosting single-node #kubernetes cluster, I think. I'm still in the process of trying to get things installed and running.

Since I currently have a single 550GB disk, the way to handle data looks to be setting up a volume configuration targeting the EPHEMERAL volume, which is what Talos uses for the disk install, and putting a size limit on it, then making a user volume configuration to claim the rest of the disk space that you want.

I'm still trying to see if this will actually work, but what I have so far is below. I'm planning to use #ceph and #rook-ceph to manage volume storage.

# nodes/n1.yaml
machine:
  install:
    disk: /dev/vda
  network:
    hostname: n1
    interfaces:
    - interface: eth0
      dhcp: true

---
# goal: limit the size of talos ephemeral volume to 100GB and use the rest for ceph
apiVersion: v1alpha1
kind: VolumeConfig
name: EPHEMERAL
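# EPHEMERAL is the Talos-managed partition mounted at /var; capping it leaves the rest of the disk free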
provisioning:
  diskSelector:
    match: system_disk
  minSize: 2GB
  maxSize: 100GB
  grow: false
---
apiVersion: v1alpha1
kind: UserVolumeConfig
name: ceph-data
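# Talos mounts user volumes under /var/mnt/<name>, i.e. /var/mnt/ceph-data here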
provisioning:
  diskSelector:
    match: system_disk # my vps has one volume
  minSize: 100GB

2025-06-10

A #Question to all #Ceph guys. I have some older HP servers with Smart Array controllers here and want to build a Ceph test setup by adding a disk to each server. They run the latest Proxmox 8.

I tried to make the disk available to Linux/Proxmox directly, but it looks like this controller does not support HBA mode. What is the problem for Ceph if I create a RAID0 logical disk for each additional hard disk, compared to Ceph managing the devices directly?

2025-06-10

More Techposting:

Second 1G copper SFP didn't work; oddly picky switch, given that the SFP+ ports don't seem to care. Ordered another cheap eBay SFP, third time's the charm?

Got the downloader running, but not in Kata, dang. I put pretty comprehensive network policies around it, so it's good enough for now. This lets me shut down the last Proxmox VM.
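
(For anyone curious, "comprehensive network policies" here presumably means something in the spirit of the sketch below; the namespace, labels and CIDRs are assumptions, not the actual policy.)

# downloader-netpol.yaml (illustrative sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: downloader-egress
  namespace: downloads               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: downloader                # hypothetical pod label
  policyTypes:
    - Ingress
    - Egress
  ingress: []                        # deny all inbound traffic
  egress:
    - to:                            # allow DNS to the cluster resolver
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    - to:                            # internet only, never the LAN (assumed ranges)
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16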

I got the drives in but have not installed them yet, waiting on a few more parts

Contemplating if I want to use all 8 or keep 2 as spares
#Homelab #Kubernetes #Ceph #Mikrotik

2025-06-09

Ok so I should get the drives today which means hopefully this week the bulk pool will move in-cluster

For each of CephFS/RGW/RBD the Ceph cluster spec lists an array, so I could add a second CephFS? But I also think I can add a second pool, though then I'd have to create a storage class manually to use it

Not sure just yet the pros/cons of each
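
If it helps, with Rook the "second CephFS" route would look roughly like the sketch below, and either way a StorageClass has to point at the new filesystem/pool; the names are assumptions, and the CSI secret parameters follow Rook's stock examples (expansion secrets omitted):

# bulk-cephfs.yaml (illustrative sketch)
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: bulk-fs                      # hypothetical second filesystem
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: data0
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bulk-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com   # <operator namespace>.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: bulk-fs
  pool: bulk-fs-data0                # Ceph pool name is <fsName>-<dataPool name>
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
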
#Homelab #Kubernetes #Ceph

OpenInfra Foundation (OpenInfra@openinfra.dev)
2025-06-09

⏳ Final days to submit! ⏳

The CFP for the #OpenInfraSummit Europe closes this Friday! If you’re building with #OpenStack, #Kubernetes, #Ceph, #KataContainers, or other #OpenSource infrastructure, this is your moment to shine on stage in Paris! summit2025.openinfra.org/cfp/

📅 Deadline: Friday, June 13
📍 17–19 October 2025, Paris-Saclay, France!

2025-06-08

What is the proper way to set Ceph ms_type when using MicroCeph? #juju #ceph

askubuntu.com/q/1550202/612

2025-06-06

Lifting the veil: how disk storage works in VK Cloud

The infrastructure layer of most cloud platforms is the part of the iceberg that stays deep under water and is never seen by ordinary users. At the same time, it is IaaS services in general, and disk storage in particular, that form the foundation on which users build their own infrastructure in the cloud. Hi, Habr. My name is Vasily Stepanov. I lead the Storage development team at VK Cloud. In this article I will describe how our disk storage is organized: which disks are used in VK Cloud and how we work with them.

habr.com/ru/companies/vk/artic

#vk_cloud #SDS #дисковые_хранилища #vk_tech #HighIOPS #softwaredefined_storage #виртуальная_машина #виртуализация #ceph

2025-05-31

Which of the following creative pronunciations of Ceph have you already heard in the wild?

#Ceph

Elimática (elimatica)
2025-05-29

🚀 If you manage systems on Proxmox VE, take advantage of distributed storage with Ceph. You gain scalability, availability and operational peace of mind.
Do you have experience with Ceph in your virtual infrastructure?

Thoralf Will 🇺🇦🇮🇱🇹🇼 (thoralf@soc.umrath.net)
2025-05-28

Finished testing #ceph on my #Proxmox cluster.

It worked, but the performance overhead is enormous and bears no sensible relation to the benefit, which would definitely be there.

So everything has been rolled back to zfs.
Maybe I'll keep a small cluster for backups, etc.

Thoralf Will 🇺🇦🇮🇱🇹🇼 (thoralf@soc.umrath.net)
2025-05-27

One idea in my head for the "how?" would be:

1. Detach one disk from the #zfs mirror on each node.
2. Build #ceph with the 3 disks.
3. Migrate the volumes from zfs to ceph.
4. Check everything.
5. Tear down zfs.
6. Make the remaining disks available to ceph.

Would that work?

Alternatively, I also have 3 disks lying around (1, 2 and 4 TB) with which I could build an initial ceph environment, move one node over completely, and then work my way through bit by bit ...

Thoralf Will 🇺🇦🇮🇱🇹🇼 (thoralf@soc.umrath.net)
2025-05-27

I currently run a #Proxmox cluster with 3 nodes.
Each node has a #zfs volume with 2 mirrored disks, with 10, 12 and 4 TB of space respectively.

Can I somehow migrate that to #ceph live?

If yes:
1. What do I gain or lose by doing so?

I'm asking because the time-delayed replication definitely creates IO bottlenecks, and I'm hoping that continuous replication could spread that out while also giving me more stability.

2. What is the best way to do it?

Finished the Ceph upgrade from 18.2.4 to 19.2.2. It took a mere 10 minutes. No issues at all, and the Homelab hummed along fine during the update.

#HomeLab #Ceph

The Ceph project really knows how to do consent correctly. I've just upgraded from v18.2.4 to v19.2.2, and they added some things to their opt-in telemetry. And instead of just adding the additional information, they disable telemetry sending and require you to opt in again.

#HomeLab #Ceph #GoodThings

A screenshot of the Ceph dashboard, showing the cluster in "Warning" state. The message for the warning reads like this:

TELEMETRY_CHANGED: Telemetry requires re-opt-in
telemetry module includes new collections; please re-opt-in to new collections with `ceph telemetry on`
2025-05-22

I tried to benchmark our network storage, and this happened.

TLDR: Many factors influence benchmarks for network-attached storage. Latency and throughput limitations, as well as protocol overhead, network congestion, and caching effects, may create much better or worse performance results than found in a real-world workload. Benchmarking a storage solution, especially for network-attached storage, is supposed to provide us with a baseline for performance evaluation. You run the performance test, slap them into a dashboard, build a presentation, and […]

simplyblock.io/blog/network-at

With network-attached storage, there is a whole stack underneath.
IOPS performance comparison of Simplyblock vs. Ceph at 16K block size.
2025-05-17

That struggle: I have to shut down my #Proxmox cluster (with #Ceph and #Corosync) to update the SwOS firmware on one @mikrotik switch …
Or … _the only_ switch for the entire cluster.

#HomeLab
