A cheaper solution than PiKVM; mine arrived earlier this month after the Kickstarter ended.
The only downside is that it's inconvenient to install the Tailscale client.
iDRAC feels like antiquated overkill by comparison.
Cluster Rebuild Project
Everything deployed so far is managed via GitOps, and Renovate is functional.
Current features/components:
* external-dns
* cert-manager
* Cilium, providing Gateway API, Ingress, and load balancing
* CloudNative-PG for postgres DBs
* Forgejo
* Keycloak
* Kube-Prometheus-Stack, which also deploys Grafana dashboards and Loki
* ArgoCD
* Renovate
* Rook-Ceph (object storage, block storage, and distributed filesystem)
That's the core of the cluster done, the heart of it. The next step is to get DB backups running properly; then I'll back up the DBs on the old cluster and restore them onto the new cluster. Data transfer via backup verification!
#HomeLab #Kubernetes
A combination of IaC in a self-hosted Gitea, a self-hosted Markdown wiki (HedgeDoc) for more organized documentation, and Obsidian for local notes. Wiki and Obsidian notes are backed up to the NAS.
Someone else mentioned NetBox, which is pretty good for IPAM documentation; I use it for my lab as well.
As for diagrams, I don't have a standard, but I typically throw them in the Markdown wiki and/or Obsidian.
I have so many RSS feeds that I'm thinking of creating a RAG-based workflow with an AI for summarization and prioritization. Even reading all the headlines in all categories takes too long, so I'm hoping I can build more effective filtering with a setup like this.
I'm thinking of using n8n and a RAG DB, integrated with ChatGPT, to create a digest page that's easy to read and that filters out or automatically condenses articles on similar topics, for example when multiple news sources publish an article on the same issue that day.
With this it would be sort of like a dynamically generated, up-to-date static webpage produced on each daily run.
I could also do things like auto-translation if a feed produces articles in multiple languages, as well as sentiment flagging.
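To make the digest idea concrete, here is a minimal Python sketch of just the fetch-and-summarize step, leaving out the n8n orchestration and the RAG DB; the feed URLs and model name are placeholders, and it assumes the feedparser and openai packages plus an API key in the environment.

```python
# Sketch only: fetch a few feeds and ask an LLM to group/prioritize the headlines.
# Feed URLs and the model name are placeholders; requires `pip install feedparser openai`
# and an OPENAI_API_KEY in the environment.
import feedparser
from openai import OpenAI

FEEDS = [
    "https://example.com/security.rss",  # placeholder feeds
    "https://example.org/homelab.rss",
]

client = OpenAI()

def collect_entries() -> list[dict]:
    """Pull recent entries from every feed into one flat list of title/link pairs."""
    entries = []
    for url in FEEDS:
        for e in feedparser.parse(url).entries:
            entries.append({"title": e.get("title", ""), "link": e.get("link", "")})
    return entries

def build_digest(entries: list[dict]) -> str:
    """Ask the model to merge near-duplicate stories and return a prioritized digest."""
    headlines = "\n".join(f"- {e['title']} ({e['link']})" for e in entries)
    prompt = (
        "Group these headlines by topic, merge duplicates from different sources, "
        "and return a short prioritized digest in Markdown:\n" + headlines
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(build_digest(collect_entries()))
```

Pointing the same client at a local model behind an OpenAI-compatible endpoint would keep the whole thing self-hosted.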
What if you had excellent IaC practices, and could deploy your edge VPS dynamically via an API with cost estimations built in across providers?
My requirements would be a list of providers with: An API, good documentation, good pricing, not a massive hyperscaler cloud provider, on-demand pricing, and low cost for performance and bandwidth.
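As a toy illustration of the cost-estimation part, here is a Python sketch that picks the cheapest plan meeting minimum specs; every provider name and price in it is made up, and a real version would pull the catalog from each provider's API.

```python
# Toy cost comparison across VPS providers; every name and price here is made up.
from dataclasses import dataclass

@dataclass
class Plan:
    provider: str
    name: str
    vcpus: int
    ram_gb: int
    bandwidth_tb: float
    usd_per_hour: float

# Hypothetical catalog; a real version would pull this from each provider's API.
CATALOG = [
    Plan("provider-a", "small", 2, 4, 4.0, 0.012),
    Plan("provider-b", "edge-2", 2, 4, 3.0, 0.010),
    Plan("provider-c", "nano", 1, 2, 1.0, 0.006),
]

def cheapest(min_vcpus: int, min_ram_gb: int, min_bw_tb: float, hours: float) -> Plan:
    """Return the cheapest plan that meets the minimum specs for the given runtime."""
    candidates = [
        p for p in CATALOG
        if p.vcpus >= min_vcpus and p.ram_gb >= min_ram_gb and p.bandwidth_tb >= min_bw_tb
    ]
    return min(candidates, key=lambda p: p.usd_per_hour * hours)

if __name__ == "__main__":
    pick = cheapest(min_vcpus=2, min_ram_gb=4, min_bw_tb=2.0, hours=720)
    print(f"{pick.provider}/{pick.name}: ~${pick.usd_per_hour * 720:.2f} for a 720-hour month")
```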
Consider it in a VPN context: with TS/WG you're typically setting up a private network segment to accomplish remote connectivity.
Do you still need firewall rules? Yes, in the sense that you can still run host-based firewalls on each host. But since you don't necessarily have a firewall sitting inside a TS/WG private network, you use ACLs instead of traditional firewall rules to limit connectivity across the private subnet.
What do you gain over a local VLAN? It's a different use case. With VLANs and cross-VLAN connectivity you're typically using a firewall that's VLAN aware to control traffic, and ideally you're not using a VLAN for VPN-like connectivity. Though you can always throw all the assets you want into a dedicated VLAN, say a DMZ VLAN. It all depends on your goals and what you need remote access to. If you don't need any remote access an isolated VLAN is great for security, but if you need remote access, TS/WG is a great way to do it securely.
My favorite options:
1. Key-based authentication (easy and quick, password brute force is no longer feasible)
2. MFA for logins (a little more involved, and doesn't provide much over key-based auth unless your private key is compromised)
3. Move sshd off the public interface so port 22 only listens on localhost or the VPN interface, and enable WireGuard or Tailscale; then access SSH over WG/TS. (A little more involved, but it removes the publicly accessible SSH port altogether, and you can still chain key-based auth and MFA on top for ultra paranoia.)
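For option 3, here is a small stdlib-only Python sketch for sanity-checking the result, i.e. that SSH no longer answers on the public address but does answer on the WG/TS address; both IPs below are placeholders.

```python
# Sanity check for option 3: port 22 should be closed on the public IP but open on
# the WG/Tailscale IP. Both addresses below are placeholders.
import socket

CHECKS = {
    "public": ("203.0.113.10", 22),  # placeholder public IP: expect refused/timeout
    "vpn": ("100.64.0.1", 22),       # placeholder WG/Tailscale IP: expect an SSH banner
}

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Try to connect and read the SSH banner; report closed/filtered otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            banner = s.recv(64).decode(errors="replace").strip()
            return f"open ({banner})"
    except OSError as exc:
        return f"closed/filtered ({exc})"

for label, (host, port) in CHECKS.items():
    print(f"{label:>7}: {probe(host, port)}")
```

If the public probe still returns a banner, sshd is still bound to the public interface somewhere.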
@Fishd What I like about the Mac solution is that it can use its NPU and memory fairly efficiently, but for maximum performance you're still unlikely to beat building your own GPU AI rig. I'm testing Ollama with various models on my NVIDIA card and it's working great, but I still find use in the local models on my MacBook.
I'm hoping more solutions like exo come out that let you distribute the AI compute over multiple machines with various GPUs. For now exo itself doesn't seem very effective, and to use it efficiently the Macs need a pretty high minimum spec, which makes it not worth it for most people. If the models could load more optimally based on each machine's resource constraints it would be great, so I'm keeping an eye on distributed solutions like this.
@Fishd You can use larger models with the additional memory. Try models of various parameter sizes to test capacity. What limitations are you seeing with the M2 Max? What model parameter limits are you hitting? There are Neural Engine improvements in the later M3 and M4 chips as well. The nice thing about MacBooks for this is the way they use unified system memory.
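If it helps, here is a rough Python sketch for comparing parameter sizes on the same prompt through Ollama's local HTTP API; the model tags are just examples and need to be pulled first (e.g. with `ollama pull`).

```python
# Rough capacity/throughput comparison across model sizes via Ollama's local HTTP API.
# Model tags are examples; pull them first (e.g. `ollama pull llama3.1:8b`).
import requests  # pip install requests

MODELS = ["llama3.1:8b", "qwen2.5:14b", "qwen2.5:32b"]  # example tags
PROMPT = "Summarize in two sentences why unified memory helps local LLM inference."

for model in MODELS:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=600,
    )
    r.raise_for_status()
    data = r.json()
    tokens = data.get("eval_count", 0)
    seconds = data.get("eval_duration", 0) / 1e9  # Ollama reports nanoseconds
    rate = tokens / seconds if seconds else 0.0
    print(f"{model}: {tokens} tokens in {seconds:.1f}s (~{rate:.1f} tok/s)")
```

A model that won't fit in memory will either fail to load or fall back to painfully slow generation, which makes the parameter ceiling easy to spot.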
@gary_alderson
Not sure if you heard the bad news, but he recently passed. There's a donation address as well. He will be missed.
He did a great job. Don inspired me and a lot of us who share a passion in tech. We lost a good one. All the best to his family and community.
@train I'm sure, but you can start slow; the basics are pretty straightforward, though you can do a lot of advanced things that get into the weeds.
Just checked -- the update was Jan 16th, so yeah last week.
@train I have an older one, but I noticed recently, I think in the last week or so, that there was a big update that modernized the UI and included a bunch of other fixes. There are a ton of options with it, and you can customize a bunch from the command line as well. Let us know how it goes.
@diffrentcolours I kinda see what's going on here, but it looks like you're in for more work. The Dockerfile on the Pixelfed GitHub is for building the image, and the docker compose there is a template with seeded environment variables that live in the .env.docker file.
https://github.com/pixelfed/pixelfed/blob/dev/.env.docker
https://github.com/pixelfed/pixelfed/blob/dev/docker-compose.yml
The way the docker compose is laid out, it launches separate containers for nginx-proxy, nginxproxy/acme-companion (SSL/TLS via Let's Encrypt), the Pixelfed app itself, a worker container, the MariaDB database, and the Redis cache.
Is there a reason you want to rebuild all of those separate containers' functionality into one container? It seems like it would be easier to modify the existing docker compose to your needs. You also wouldn't have to hassle with supervisord, since the containers can be set to depend on each other, and healthchecks already exist as well.
@akmartinez You mentioned a mixed Windows/Linux server; do you plan on running a hypervisor like Proxmox? If so, you can use a tool like Proxmox Backup Server, which can also run as a VM inside Proxmox, to back up to a storage target and/or replicate to another Proxmox Backup Server elsewhere.
Cool, out of those I'm running Gitea, Home Assistant, and FreshRSS. I hadn't heard of Zoraxy or Dawarich; those look nice to try.
@estevez Nice, I have a very similar setup: 3x NUCs and a Synology NAS for shared storage. It handles pretty much all of my selfhosting needs and other workloads. What do you want to end up running on it, or what do you have already?
While you're right about who created the provider, it does what it needs to. Ideally it would be rewritten by another group, but no one else has written a better Proxmox provider afaik.
As for your cluster comment, Terraform isn't cluster-aware in that way. It's a provisioning tool. After a VM gets provisioned, Terraform isn't tracking which host it gets sent to; how would it? It doesn't even do that with VMware. Usually you want a provisioning resource pool to allocate to so the VM isn't moving around while you deploy.
@tiff A simple and reliable way is to set up an NFS share and connect it to each node, and you're pretty much good to go after that. Makes the VM transfers happen much more quickly and with much less I/O used.
What? You could have a single Docker compose file that runs all of the required Mastodon services on the desktop/laptop.
Regarding the mobile performance: I have a lot of batteries and a fan. This is a dedicated rooted old Android, so I'm not trying to play Flappy Bird at the same time.
I'm sticking with "not impossible by a long stretch of the imagination." Impractical? Again, sure.