#sysops

jj@1$:~ thejtoken@hachyderm.io
2026-02-02

I have been working with Docker for years and I didn't know about the lazy loading feature engineering.grab.com/docker-la #SRE #SysOps #Containers #Docker #Performance

2026-01-30

📢 Calling all Ch'ti penguin enthusiasts! 🐧🍟

Are you an #OpenSource devotee looking for a new challenge in the Lille metropolitan area? The Capen'Team is waiting for you!

Terraform artisan, observability ace, K8s tamer or Ops MacGyver: if you enjoy sharing your knowledge and working in a friendly atmosphere, this #Linux systems engineer position is made for you!

🔗 capensis.fr/ingenieur-systeme-
#Lille #RecrutementLille #IngénieurSystème #EmploiIT #AdminSys #SysOps

Sysops note:
Learning happens in the gaps between design and cock-ups.
Root disk filled, Postgres did the right thing and refused to start, backups proved their worth, DB moved onto its own volume, system calmer than before.
Architecture improves fastest when reality gets a vote.
#sysops #selfhosting #postgresql #learningbydoing #calmtech

Grzegorz Cichocki cichy1173
2026-01-20

Over the past 3 months I passed the  trio of Cloud Associate certificates, namely ,  and . I started with the SAA. The hardest was ; its questions were the longest and required the most thought, while the simplest was  - that one is at a completely lower level of difficulty (apart from certain exceptions).  is basically a bit like the SAA, but with shorter questions and more focused on using AWS than on migrations.

ves (what/why) ves93
2026-01-20

How active are you in your planning? Do you go after what's needed at your company, or do you follow your own ideas? Or maybe you do some data mining to get the best grasp?

ves (what/why) ves93
2026-01-20

I've got a varied background both in IT and outside it (PM, Scrum Master, App Admin). Over the years I decided to dive into the technical side, which over time landed me at senior staff level, but then came the realization that I'm missing some critical parts. I did the AWS DevOps cert; my company wasn't looking to utilize it, and potential new employers were looking for experience, not just training. Dead loop. Recently I was working on a deployment and this kicked me to get
What are the old-school folks learning?

2026-01-10

TIL: if the permissions of a file in pam.d don't allow the program to read it, the file is silently ignored and any Unix account is accepted. No logging about it whatsoever. Insane!
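A quick sanity check for this failure mode, sketched against a throwaway directory rather than the real /etc/pam.d (the directory and file names here are made up):

```shell
# Demo: list files in a pam.d-style directory that lack read permission
# for group/other -- the kind of file PAM would silently ignore.
demo=$(mktemp -d)
touch "$demo/login" "$demo/sshd"
chmod 644 "$demo/login"
chmod 600 "$demo/sshd"   # unreadable to an unprivileged PAM consumer
find "$demo" -type f ! -perm -044   # prints only $demo/sshd
```

The same `find /etc/pam.d -type f ! -perm -044` one-liner can flag suspect files on a real host.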

#linux #ops #sysops

Roni Rolle Laukkarinen rolle@mementomori.social
2026-01-07

As our company hosts servers, we have a public Security Policy and a security.txt file for ethical hackers to disclose vulnerabilities responsibly: handbook.dude.fi/security-poli
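For reference, security.txt is specified in RFC 9116 (Contact and Expires are the required fields, served at /.well-known/security.txt). A minimal illustrative file; the address and date below are placeholders, not ours:

```
Contact: mailto:security@example.com
Expires: 2027-01-01T00:00:00Z
Preferred-Languages: en, fi
Policy: https://example.com/security-policy
```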

Because of this, I receive quite a few reports, most of them ineligible. I've also run into some "security experts" getting upset about not receiving a bounty for a non-issue or putting heavy pressure on payments for valid ones. It often feels unfair, like I'm being held hostage.

That's why replies like the one I just received warm my heart so much:

"Thank you very much for the clarification and for taking quick action to remove the DNS record. I appreciate the transparency and the kind offer as well.

I'd prefer to donate the amount to a child support charity instead. You’re very welcome to donate it on my behalf to any such organization of your choice."

Donation made. Thank you, stranger. Kindness costs nothing.

#Security #SysOps #SysAdmin #SecOps

2026-01-06

@wild1145

The #FediVerse difference in a nutshell, I think: relatable interruptions and problems in life.

We don't have service interruptions because some nutter decided to spend millions of quid on some snake oil or playing at politics in a foreign country.

We have service interruptions because the sysop happened to be in the middle of cooking dinner. Or was in IKEA. (-:

And the rest of the world does not fall apart.

It's the BBS days all over again, where we actually know our #sysops as people, which isn't a bad thing.

Szymon Nowicki hey@nowicki.io
2025-12-29

Why is rsync from HDD to SSD slower than from SSD to HDD?

To be exact, ~60-100MB/s HDD -> SSD, pretty stable 120MB/s SSD->HDD

I'm talking about 460 GB of small files (photos and videos), so I don't believe cache/buffering is significant here, no?

#sysops #devops #selfhosting

Patryk Krawaczyński agresor@infosec.exchange
2025-12-04
Roni Rolle Laukkarinen rolle@mementomori.social
2025-11-19

What a great and transparent analysis of the outage. No excuses, an honest admission of mistakes, and they even shared an internal chat. Many large corporations could learn from this.

blog.cloudflare.com/18-novembe

#Cloudflare #CloudflareDown #CloudflareOutage #Outage #SysOps #Servers

Jiji, the cat jiji@ohai.social
2025-11-04

if i want to test out in-wall ethernet wiring, not just for correct wiring but also shielding/performance, which tester do people recommend these days? :boost_ok: #it #admin #sysops

2025-10-17

A grumpy ItSec guy walks through the office when he overhears an exchange of words.

devops0: I'll push the new image - just pull "latest"

ItSec (walking by): Careful. "latest" doesn't work the way you think.

devops1: How so?

ItSec: It's just a tag. Whoever pushes the image decides what "latest" points to. Sometimes it's the newest.

First, assume you have a local registry running on localhost:5000 and two Ubuntu images already present: ubuntu:23.04 and ubuntu:22.04. Tag and push both by their actual versions so the registry has explicit versioned tags. Then, on purpose, point latest to 22.04.

# start quick&dirty&unsecure local registry
docker run -d --name registry -p 5000:5000 --restart=always registry:2


# push explicit versions
docker tag ubuntu:23.04 localhost:5000/ubuntu:23.04
docker push localhost:5000/ubuntu:23.04

docker tag ubuntu:22.04 localhost:5000/ubuntu:22.04
docker push localhost:5000/ubuntu:22.04

# intentionally make "latest" refer to 22.04
docker tag ubuntu:22.04 localhost:5000/ubuntu:latest
docker push localhost:5000/ubuntu:latest

Now pull without a tag and see what you actually get. Omitting the tag defaults the client to requesting “:latest”. Because you explicitly set latest to 22.04, that’s exactly what will be pulled and run.

# pull without a tag -> defaults to :latest
docker pull localhost:5000/ubuntu

# verify the version by inspecting inside a container
docker run --rm localhost:5000/ubuntu cat /etc/os-release | grep VERSION=

VERSION="22.04.5 LTS (Jammy Jellyfish)"

If you now retag latest to 23.04 and push again, the same pull with no tag will start returning 23.04. Nothing "automatic" updated it; you changed it yourself by moving the tag.

That's the entire point: latest is a conventional, movable label, not a magical link to the newest software. It can be older than other tags in the same repository if someone set it that way. It can also be missing entirely.
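To see the tag move, continuing the same hypothetical localhost:5000 registry from the earlier steps, retagging is just another tag-and-push:

```shell
# retag "latest" onto 23.04 and push the moved tag
docker tag ubuntu:23.04 localhost:5000/ubuntu:latest
docker push localhost:5000/ubuntu:latest

# a tagless pull now returns 23.04 -- nothing updated "automatically"
docker pull localhost:5000/ubuntu
docker run --rm localhost:5000/ubuntu cat /etc/os-release | grep VERSION=
```

This requires the registry and images set up in the steps above.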

For more grumpy stories visit:
1) infosec.exchange/@reynardsec/1
2) infosec.exchange/@reynardsec/1
3) infosec.exchange/@reynardsec/1
4) infosec.exchange/@reynardsec/1
5) infosec.exchange/@reynardsec/1
6) infosec.exchange/@reynardsec/1
7) infosec.exchange/@reynardsec/1

#appsec #devops #programming #webdev #docker #containers #cybersecurity #infosec #cloud #sysadmin #sysops #java #php #javascript #node

Grumpy Cat
2025-10-14

Hi everyone.
I'm a new Mastodon user and decided to take the opportunity to introduce myself.

People call me Kazoo, and I'm passionate about Linux and everything related to it.

I run a mini blog, blog.howfaristovalhalla.com, about Linux administration, automation, and the cloud.

I write mainly for people who already know Linux a bit, but also for those who want to chart their own path in IT.

A few years ago I walked that path myself: after 17 years of running my own businesses I retrained, and today I work as a Cloud Linux Engineer.

Before doing it, I spent many hours looking for confirmation that it was possible at the age of 40.

Today I am that example.
I held my own on the job market, got stronger financially, and caught a new passion.

Nice to meet you.
Kazoo

#linux #Cloud #sysops

Roni Rolle Laukkarinen rolle@mementomori.social
2025-10-10

How does a typical DDoS on a WordPress installation happen?

- A search-based DDoS works by bypassing the cache
- The attacker sends a large volume of unique search queries so responses never hit the cache, e.g. ?s=something-xyz
- Each request becomes a cache miss and is forwarded from the network edge
- WordPress runs PHP + WP_Query for every request, often triggering expensive database work
- Repeated heavy queries exhaust CPU, memory and DB capacity, so the website slows and eventually crashes
- This is an application-layer (Layer 7) HTTP flood that mimics normal user traffic
- Key signals to look out for: huge spikes of /?s= requests in the logs, very high query entropy, a collapsing cache-hit rate

Cache-busting search queries force every request through the database, turning cheap HTTP calls into expensive backend load.
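Those signals can be roughed out from access logs. A hypothetical sketch in Python; the log format, regex, and interpretation are my assumptions, not from the talk:

```python
import re

def search_flood_stats(log_lines):
    """Count /?s= requests and the share of unique query strings.
    A high count with a distinct ratio near 1.0 matches the
    cache-busting pattern described above."""
    queries = [m.group(1) for line in log_lines
               if (m := re.search(r'GET /\?s=([^ "]+)', line))]
    if not queries:
        return 0, 0.0
    return len(queries), len(set(queries)) / len(queries)

logs = [
    '1.2.3.4 - - "GET /?s=abc-123 HTTP/1.1" 200',
    '1.2.3.4 - - "GET /?s=def-456 HTTP/1.1" 200',
    '5.6.7.8 - - "GET /about HTTP/1.1" 200',
]
count, distinct_ratio = search_flood_stats(logs)
print(count, distinct_ratio)  # 2 1.0
```

In practice you would feed it a tail of the web server access log and alert when both numbers spike together.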

Great Sysops lightning talk by Tiia Ohtokallio!

#WPSuomi #wpfi #WordPress #Sysops

Image: a slide from the presentation at the WP Suomi seminar, showing the same list as above.
2025-09-23

A grumpy ItSec guy walks through the office when he overhears an exchange of words.

devops0: Two containers went rogue last night and starved the whole host.
devops1: What are we supposed to do?

ItSec (walking by): Set limits. It's not rocket science. Docker exposes cgroup controls for CPU, memory, I/O and PIDs. Use them.

The point is: availability is part of security too. Linux control groups let you cap, isolate and observe resource usage, which is exactly how Docker enforces container limits for CPU, memory, block I/O and process counts [1]. Let's make it tangible with a small lab. We'll spin up a container, install stress-ng, and watch the limits in action.

# On the Docker host
docker run -itd --name ubuntu-limits ubuntu:22.04
docker exec -it ubuntu-limits bash

# Inside the container
apt update && apt install -y stress-ng
stress-ng --version

Check how many cores you see, then drive them.

# Inside the container
nproc

# For my host nproc returns 4
stress-ng --cpu 4 --cpu-load 100 --timeout 30s

In another terminal, watch usage from the host.

docker stats

Now clamp CPU for the running container and see the throttle take effect.

docker update ubuntu-limits --cpus=1
docker stats

The --cpus flag is a wrapper over the Linux CFS period/quota; --cpus=1 caps the container at roughly one core's worth of time on a multi-core host.

Memory limits are similar. First tighten RAM and swap, then try to over‑allocate in the container.

# On the host
docker update ubuntu-limits --memory=128m --memory-swap=256m
docker stats
# Inside the container: stays under the cap
stress-ng --vm 1 --vm-bytes 100M --timeout 30s --vm-keep

# Inside the container: tries to exceed; you may see reclaim/pressure instead of success
stress-ng --vm 1 --vm-bytes 300M --timeout 30s --vm-keep

A few memory details matter. --memory is the hard ceiling; --memory-swap controls total RAM+swap available. Setting swap equal to memory disables swap for that container; leaving it unset often allows swap equal to the memory limit; setting -1 allows unlimited swap up to what the host provides.
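Those rules can be encoded as a tiny helper, as a sketch of the documented semantics (not Docker's actual code; the function name is made up):

```python
def effective_swap_mb(memory_mb, memory_swap_mb=None):
    """Swap available to a container under --memory / --memory-swap,
    following the three cases described above."""
    if memory_swap_mb is None:       # unset: swap often defaults to the memory limit
        return memory_mb
    if memory_swap_mb == -1:         # unlimited swap, bounded only by the host
        return float("inf")
    return memory_swap_mb - memory_mb   # total (RAM+swap) minus RAM

print(effective_swap_mb(128, 256))   # 128
print(effective_swap_mb(256, 256))   # 0 -> swap disabled
print(effective_swap_mb(128))        # 128
```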

docker run -it --rm \
--name demo \
--cpus=1 \
--memory=256m \
--memory-swap=256m \
--pids-limit=25 \
ubuntu:22.04 bash

For plain docker compose (non‑Swarm), set service‑level attributes. The Compose Services reference explicitly supports cpus, mem_limit, memswap_limit and pids_limit on services [2].

services:
  api:
    image: ubuntu:22.04
    command: ["sleep","infinity"]
    cpus: "1"              # one full CPU's worth of time
    mem_limit: "256m"      # hard RAM limit
    memswap_limit: "256m"  # RAM+swap; equal to mem_limit disables swap
    pids_limit: 50         # max processes inside the container

[1] docs.docker.com/engine/contain
[2] docs.docker.com/reference/comp

For more grumpy stories visit:
1) infosec.exchange/@reynardsec/1
2) infosec.exchange/@reynardsec/1
3) infosec.exchange/@reynardsec/1
4) infosec.exchange/@reynardsec/1
5) infosec.exchange/@reynardsec/1
6) infosec.exchange/@reynardsec/1

#appsec #devops #programming #webdev #docker #containers #cybersecurity #infosec #cloud #sysadmin #sysops #java #php #javascript #node

Grumpy Cat
2025-09-18

PSA at least one of us is going to be around datenspuren.de/2025/ #datenspuren in Dresden this weekend.

If you are experienced in #sysops, #email tech or have good #rust experience and a general interest in #chatmail (chatmail.at), maybe drop us a DM and let's meet and chat. Depending on circumstances there might also be spontaneous sessions to look out for.

2025-09-10

devops0: Our audit report says we must "enable Docker rootless mode". I have no clue what that even is...
devops1: Sounds like more security BS. What's "rootless" supposed to do?

ItSec: Relax. Rootless mode runs the Docker daemon and containers as a regular, unprivileged user [1]. It uses a user namespace, so both the daemon and your containers live in "user space", not as root. That shrinks the blast radius if the daemon or an app in a container is compromised, because a breakout wouldn't hand out root on the host.

devops1: Fine. If it's "not hard" to implement, we can consider this.

ItSec: Deal.

Note: this mode does have some limitations. You can review them in the docs [2].

First, let's check which user the Docker daemon is currently running as.

ps -C dockerd -o pid,user,group,cmd --no-headers

You should see something like:

9250 root     root     /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Here's a clean, minimal path that matches the current docs. First, stop the rootful daemon.

sudo systemctl disable --now docker.service docker.socket

Then install the uid/gid mapping tools. On Ubuntu it's uidmap.

sudo apt update && sudo apt install -y uidmap

Docker provides a setup tool. If you installed official DEB/RPM packages, it's already in /usr/bin. Run it as your normal user.

dockerd-rootless-setuptool.sh install

If that command doesn't exist, install the extras package or use the official rootless script.

sudo apt-get install -y docker-ce-rootless-extras
# or, without package manager access:
curl -fsSL https://get.docker.com/rootless | sh

The tool creates a per-user systemd service, a "rootless" CLI context, and prints environment hints. You usually want your client to talk to the user-scoped socket permanently, so export DOCKER_HOST and persist it in your shell profile.

export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
echo 'export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock' >> ~/.bashrc

Enable auto-start for your user session and let services run even after logout ("linger").

systemctl --user enable docker
sudo loginctl enable-linger $(whoami)

Point the CLI at the new context and sanity-check.

docker context use rootless

Once more, check which privileges the Docker daemon is running with:

ps -C dockerd -o pid,user,group,cmd --no-headers

Now you will see something like:

10728 ubuntu   ubuntu   dockerd

And pssst! Podman runs containers in "rootless" mode by default [3].

[1] docs.docker.com/engine/securit
[2] docs.docker.com/engine/securit
[3] documentation.suse.com/en-us/s

For more grumpy stories visit:
1) infosec.exchange/@reynardsec/1
2) infosec.exchange/@reynardsec/1
3) infosec.exchange/@reynardsec/1
4) infosec.exchange/@reynardsec/1
5) infosec.exchange/@reynardsec/1

#appsec #devops #programming #webdev #java #javascript #python #php #docker #containers #k8s #cybersecurity #infosec #cloud #hacking #sysadmin #sysops

grumpy cat
2025-09-02

A grumpy ItSec guy walks through the office when he overhears an exchange of words.

devops0: These k8s security SaaS prices are wild.
devops1: Image scanning, policy engines, "enterprise tiers"... why are we paying so much?

ItSec (walking by): You pay for updates & support, probably, but you can do some of this yourselves with a bit of k8s hacking.

devops0: How, exactly?

Disclaimer: this is a PoC for learning, not a production-ready solution.

Kubernetes can ask an external webhook whether a given image should be allowed via an admission controller, in this case ImagePolicyWebhook [1]. The webhook receives an ImageReview payload [2], initiates a scan, and returns "allowed: true/false".

We will write a Flask endpoint that invokes Trivy [3] for each image and denies pod creation if HIGH or CRITICAL vulnerabilities appear.

Below is a minimal Flask service.

from flask import Flask, request, jsonify
import subprocess, json, re

app = Flask(__name__)

def is_valid_image_format(image: str) -> bool:
    if not re.fullmatch(r"[A-Za-z0-9/_:.@+-]{1,300}", image):
        return False
    if image.startswith("-"):
        return False
    return True


def scan_with_trivy(image: str):
    cmd = [
        "trivy", "--quiet",
        "--severity", "HIGH,CRITICAL",
        "image", "--format", "json",
        image
    ]
    # list-form subprocess (no shell) plus the regex check above keeps
    # this safe; no extra quoting of the image reference is needed
    r = subprocess.run(cmd, capture_output=True, text=True)
    try:
        data = json.loads(r.stdout or "{}")
        results = data.get("Results", [])
        vulns = []
        for res in results:
            for v in res.get("Vulnerabilities", []) or []:
                if v.get("Severity") in ("HIGH", "CRITICAL"):
                    vulns.append(v)
        return vulns
    except json.JSONDecodeError:
        return None

@app.route("/scan", methods=["POST"])
def scan():
    body = request.get_json(force=True, silent=True) or {}
    containers = body.get("spec", {}).get("containers", [])
    if not containers:
        return jsonify({
            "apiVersion": "imagepolicy.k8s.io/v1alpha1",
            "kind": "ImageReview",
            "status": {"allowed": False, "reason": "No containers provided"}
        })

    results = []
    decision = True
    for c in containers:
        image = c.get("image", "")
        if not is_valid_image_format(image):
            results.append({"image": image, "allowed": False, "reason": "Invalid image format"})
            decision = False
            continue
        vulns = scan_with_trivy(image)
        if vulns is None:
            results.append({"image": image, "allowed": False, "reason": "Scanner error"})
            decision = False
            continue
        if vulns:
            results.append({"image": image, "allowed": False, "reason": "HIGH/CRITICAL vulnerabilities detected"})
            decision = False
        else:
            results.append({"image": image, "allowed": True})

    return jsonify({
        "apiVersion": "imagepolicy.k8s.io/v1alpha1",
        "kind": "ImageReview",
        "status": {"allowed": decision, "results": results}
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Run the service wherever Trivy is available. Tip: warm up the Trivy vulnerability DB once so the first request won't time out.

trivy image alpine:3.22 #warm up 
gunicorn -w 4 -b 0.0.0.0:5000 app:app

Test it with an ImageReview-like request. Replace the URL and images as you wish/need.

curl -s -X POST http://127.0.0.1:5000/scan -H "Content-Type: application/json" -d '{
  "apiVersion": "imagepolicy.k8s.io/v1alpha1",
  "kind": "ImageReview",
  "spec": {
    "containers": [
      {"image": "alpine:3.22"},
      {"image": "nginx:latest"}
    ]
  }
}' | jq .

Tell the API server to use ImagePolicyWebhook. The AdmissionConfiguration points at a kubeconfig for the webhook endpoint (/etc/kubernetes/admission-control-config.yaml).

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/webhook-kubeconfig.yaml
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false

The webhook kubeconfig targets your scanner's HTTP endpoint (/etc/kubernetes/webhook-kubeconfig.yaml). Edit the "server" value for your setup.

apiVersion: v1
kind: Config
clusters:
  - name: webhook
    cluster:
      server: http://192.168.108.48:5000/scan
contexts:
  - name: webhook
    context:
      cluster: webhook
      user: ""
current-context: webhook

Mount the AdmissionConfiguration and enable the plugin in the API server manifest. Add the following flags and mount the config file; adjust paths and IPs to your environment (kube-apiserver.yaml):

---
apiVersion: v1
[...]
containers:
  - command:
      - kube-apiserver
      [...]
      - --admission-control-config-file=/etc/kubernetes/admission-control-config.yaml
      - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
      [...]
    volumeMounts:
      [...]
      - mountPath: /etc/kubernetes/admission-control-config.yaml
        name: admission-control-config
        readOnly: true
      - mountPath: /etc/kubernetes/webhook-kubeconfig.yaml
        name: webhook-kubeconfig
        readOnly: true
volumes:
  [...]
  - name: admission-control-config
    hostPath:
      path: /etc/kubernetes/admission-control-config.yaml
      type: FileOrCreate
  - name: webhook-kubeconfig
    hostPath:
      path: /etc/kubernetes/webhook-kubeconfig.yaml
      type: FileOrCreate

After the API server restarts, the cluster will begin asking our app about images during pod creation. A quick check shows an allowed image and a blocked one:

kubectl run ok --image=docker.io/alpine:3.22
pod/ok created

kubectl run nope --image=docker.io/nginx:latest
Error from server (Forbidden): pods "nope" is forbidden: one or more images rejected by webhook backend

That's the whole trick. Kubernetes asks our Flask app, the app calls Trivy, and if HIGH or CRITICAL vulnerabilities are present the admission decision is deny, so the pod never starts. It's not fancy and, as I wrote before, it's not meant for production, but it illustrates exactly how admission control can enforce image hygiene without buying an external SaaS.

[1] kubernetes.io/docs/reference/a
[2] kubernetes.io/docs/reference/a
[3] github.com/aquasecurity/trivy

For more grumpy stories visit:
1) infosec.exchange/@reynardsec/1
2) infosec.exchange/@reynardsec/1
3) infosec.exchange/@reynardsec/1
4) infosec.exchange/@reynardsec/1

#appsec #devops #kubernetes #programming #webdev #docker #containers #k8s #cybersecurity #infosec #cloud #hacking #sysadmin #sysops

grumpy cat
