#Arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-12-12

Got distracted when I tested a slightly older backup created with Arq 6.2.54 (the current major version is 7). The Arq 5 and 7 formats are "documented", the Arq 6 format is not. At first I thought this couldn't possibly work, but the extra undocumented byte I found in an Arq 7 metadata object simply isn't used in Arq 6. Other than that, everything seems to work just fine.

Your backup repository can contain a mix of Arq 5, 6, and 7 data. Arq 7 simply reuses data created by older versions.

#rust #arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-12-12

I just realized that I can hash the restored files instead of writing them to disk. This way I can easily build a test mode which writes a text file with all files that would be restored with their respective hashes and compare that against a golden file. Could do the same with other file metadata too… 🤔
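
A minimal sketch of that test mode, assuming the restore pipeline can hand each file's restored bytes to a function instead of a file writer. The std `DefaultHasher` stands in for whatever real hash the tool would use (e.g. SHA-256 via the `sha2` crate); `manifest_line` is an invented helper name:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// In test mode, restored bytes are hashed instead of written to disk,
// producing one "path<TAB>hash" line per file. The resulting manifest
// can then be diffed against a golden file.
fn manifest_line(path: &str, restored_bytes: &[u8]) -> String {
    let mut hasher = DefaultHasher::new();
    hasher.write(restored_bytes);
    format!("{}\t{:016x}", path, hasher.finish())
}
```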

#rust #arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-12-10

Yesterday I mostly did all kinds of refactorings, which was VERY rewarding! Most unwraps are gone, cleaned up a TON of “&str, String and PathBuf”-conversion mess and other unnecessarily complicated stuff.

I also caught myself trying to make sure not to accidentally mutate something… only to eventually realize that the compiler won't let me unless I say so 😅

Slightly better error reporting to the user is next on my list, before I want to look at some basic integration tests.

#rust #arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-12-03

oh, one more thing: There is plenty of metadata stored for files (like owner, group, permissions, extended attributes, …), but no checksums whatsoever 🙄 This means that when a large or slow restore gets interrupted, there is no good way to resume and safely skip files that have already provably been restored correctly.

That's not a good design IMO. And let me tell you, the oddities don't end there 😅
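
Concretely, if the format *did* store a per-file checksum, resume could safely skip matching files, something like this sketch (std `DefaultHasher` as a stand-in for a real cryptographic hash; `needs_restore` is an invented helper, not part of Arq):

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::Hasher;
use std::path::Path;

// With a stored per-file checksum, an interrupted restore could skip
// files that already match on disk. Arq stores no such checksum, so
// this shows what the format *could* enable, not what it does.
fn needs_restore(path: &Path, expected: u64) -> bool {
    match fs::read(path) {
        Ok(bytes) => {
            let mut h = DefaultHasher::new();
            h.write(&bytes);
            h.finish() != expected // identical content -> safe to skip
        }
        Err(_) => true, // missing or unreadable -> restore it
    }
}
```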

#rust #arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-12-03

Today I did some testing using a read-only mount of one of my real backups via "rclone serve nfs", so I don't have to implement an SFTP client. This works quite well so far.

Also: I'm quickly learning that quite a few of my assumptions about the semantics of certain repository metadata objects were incorrect. As already mentioned, the docs are quite spotty to begin with, so this is basically reverse engineering now to figure out how things are meant to be used 🙄

#rust #arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-12-01

I also got rid of a good number of "unwraps", but I'm having a hard time figuring out a good way to handle the various errors and how to structure the code accordingly.
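
For what it's worth, one common way to structure this in plain std Rust (crates like `thiserror`/`anyhow` just generate this boilerplate) is a single error enum plus `From` impls, so `?` converts automatically. `RestoreError` and `parse_header` are invented names for illustration:

```rust
use std::fmt;

// One crate-wide error enum; each variant wraps a failure category.
#[derive(Debug)]
enum RestoreError {
    Io(std::io::Error),
    Corrupt(String),
}

impl fmt::Display for RestoreError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            RestoreError::Io(e) => write!(f, "I/O error: {e}"),
            RestoreError::Corrupt(msg) => write!(f, "corrupt repository data: {msg}"),
        }
    }
}

impl std::error::Error for RestoreError {}

// From impl lets `?` convert io::Error into RestoreError automatically.
impl From<std::io::Error> for RestoreError {
    fn from(e: std::io::Error) -> Self {
        RestoreError::Io(e)
    }
}

// Callers bubble errors up with `?` instead of unwrapping.
fn parse_header(bytes: &[u8]) -> Result<u8, RestoreError> {
    bytes
        .first()
        .copied()
        .ok_or_else(|| RestoreError::Corrupt("empty object".into()))
}
```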

For the restore, I obviously have to do a ton of testing. But I'll probably first need to improve the ergonomics of the CLI arguments so I can test restoring subtrees of one of my larger "real" backups.

#rust #arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-12-01

Today, I added an "inspect" CLI subcommand to simplify debugging. It's not super ergonomic for now, but it knows the types of files in a backup repository and also how to decrypt and decompress them for inspection. Some are just JSON, others are in a custom binary format 🙄 The binary ones are parsed and printed as JSON too.
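
As a toy illustration of the JSON-vs-binary routing, a sniffing heuristic (invented here; in a real repository the object type is known from context, not guessed):

```rust
// Route a decrypted + decompressed object: if its first
// non-whitespace byte opens a JSON value, print it as-is;
// otherwise hand it to the binary parser. Purely illustrative.
fn looks_like_json(decoded: &[u8]) -> bool {
    matches!(
        decoded.iter().find(|b| !b.is_ascii_whitespace()).copied(),
        Some(b'{') | Some(b'[')
    )
}
```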

More excitingly: a (very) minimal version of "restore" is also working now 🎉 Permissions and custom attributes are ignored (for now).

#rust #arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-11-20

It doesn't look like much, but wow is my brain not wired at all for Rust (yet) 😅

I can now parse enough of all the relevant backup objects to dump a list of directories and files within a backup 🚀 The backup location and "snapshot" are hardcoded for now, so I can delay exploring the "CLI argument parsing" rabbit hole.
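
The dump itself boils down to a recursive walk over the parsed tree objects. A toy sketch with invented node types (not Arq's actual schema):

```rust
// Toy stand-in for parsed tree/blob metadata: just enough structure
// to walk a snapshot and collect every path. Field names are
// invented, not Arq's real format.
enum Node {
    File { name: String },
    Dir { name: String, children: Vec<Node> },
}

// Depth-first walk; directories get a trailing slash in the listing.
fn list_paths(node: &Node, prefix: &str, out: &mut Vec<String>) {
    match node {
        Node::File { name } => out.push(format!("{prefix}/{name}")),
        Node::Dir { name, children } => {
            let dir = format!("{prefix}/{name}");
            out.push(format!("{dir}/"));
            for child in children {
                list_paths(child, &dir, out);
            }
        }
    }
}
```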

Next step is a very simple "restore everything at given location”, ignoring file permissions, extended attributes etc.

#rust #arqbackup

Sebastian Cohnen (@tisba@ruby.social)
2025-11-19

My current pet project is to implement an open source Arq 7 compatible backup restore program. Mostly for fun, and in case they go bust or whatever.

I am -sure- the Rust code is HORRIBLE, but I can already derive the master encryption keys (PBKDF2-SHA256), decrypt objects (AES CBC) in the backup and decompress (LZ4 block format) them.
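
Of those three pieces, the LZ4 block format is simple enough to sketch with std only (the PBKDF2 and AES-CBC parts are best left to the RustCrypto crates such as `pbkdf2`, `aes`, and `cbc`). A minimal, non-hardened decoder for illustration; a real tool should use something like the `lz4_flex` crate:

```rust
// Minimal LZ4 *block* decompressor (no frame header). Each sequence
// is: token byte, literals, then a back-reference into the output.
// Not hardened against malicious input.
fn lz4_decompress(input: &[u8], expected_len: usize) -> Option<Vec<u8>> {
    let mut out = Vec::with_capacity(expected_len);
    let mut i = 0;
    while i < input.len() {
        let token = input[i];
        i += 1;
        // Literal length: high nibble; 15 means "read extension bytes".
        let mut lit_len = (token >> 4) as usize;
        if lit_len == 15 {
            loop {
                let b = *input.get(i)?;
                i += 1;
                lit_len += b as usize;
                if b != 255 { break; }
            }
        }
        out.extend_from_slice(input.get(i..i + lit_len)?);
        i += lit_len;
        if i >= input.len() { break; } // last sequence: literals only
        // Match: 2-byte little-endian offset back into the output...
        let offset = u16::from_le_bytes([*input.get(i)?, *input.get(i + 1)?]) as usize;
        i += 2;
        // ...plus a length (low nibble + 4, same extension scheme).
        let mut match_len = (token & 0x0f) as usize;
        if match_len == 15 {
            loop {
                let b = *input.get(i)?;
                i += 1;
                match_len += b as usize;
                if b != 255 { break; }
            }
        }
        match_len += 4;
        let start = out.len().checked_sub(offset)?;
        for j in 0..match_len {
            let byte = out[start + j]; // overlapping copies are allowed
            out.push(byte);
        }
    }
    (out.len() == expected_len).then_some(out)
}
```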

#rust #arqbackup

2025-08-09

Been researching more backup tools usable on #macOS.
One thing that really stands out is the apparent lack (or refusal) of support for #APFS filesystem snapshots as a backup source, even though the same tools support BTRFS or LVM2 snapshots just fine.

Would I like to try #BorgBackup or #restic or #kopia? Sure, but none of them supports working from snapshots, so… why bother :|

(I am not unhappy with #ArqBackup at all, and I'll decide whether to spend money on it once my 30-day trial is over, but the three listed above are free, OSS alternatives, so why not check them out?)

2025-06-30

In case you thought Arq Backup had useless error messages, I bring you today's example from Synology Hyper Backup 4. There is no indication of what caused this or what I'm supposed to do about it; therefore, useless.

#ArqBackup #Synology #HyperBackup

A screen capture of four Hyper Backup error logs saying "Exception occurred while backing up data", each followed by a filename, then another error saying it failed to run the backup.

2025-06-29

Oh, goody! Time for another useless #ArqBackup error message:

Error: The file “b984fcc0225de19309a55d5c38e3ddf2205b3a29db56a74e3eda2cfee2fbd6” couldn’t be opened because there is no such file.

Most of the time I just need to run the backup again, as the errors are transient. But not this one. This error happens immediately and stops the backup. Now I get to do the mysterious voodoo dance to try to recover this so I don't have to start from scratch, losing all history.

2025-05-15

More info from the same #ArqBackup log. It initialized 70 of 79 records. Why not the other 9? Maybe because of the 9 "object not found" errors? Then it reports 1,213 "unrecognized backup data objects". This is all meaningless to the user because we don't know what, if anything, we should do about it. Arq is the only program that touches its backup sets so any errors are caused by Arq itself. What other conclusion can I draw?

2025-05-15

While I use #ArqBackup and am willing to deal with its oddities, I can't recommend it to non-technical people because it issues too many bizarre error messages and exhibits unexplained behaviour. For example, yesterday it ran a backup just fine. Today the same backup has to initialize some database. It's been working for 5.5 hours at 100% CPU and has only loaded 59 of 79 records (92 million items). It's been feeling the need to do this more and more frequently. People have been reporting this problem for years.

2025-04-27

Honestly, #ArqBackup did a great job just now:

- told it to restore on top of my home directory but left "overwrite existing files" unchecked
- took maybe 10 minutes tops to scan my entire homedir (including many gigglebytes' worth of media files)
- restored all the files git knew had been removed
- presumably also restored anything else older than 2am, which got caught up in that ~=1s run of the rm -rf

Still…a good lesson was learned 😰

2025-04-24

#ArqBackup has the most "interesting" error messages. Totally useless for the user though. Note that the dev can't decide on the time format. Basically I just wait to see if the error is transient. If not, I start a whole new backup set because there is no repair mechanism.

23-Apr-2025 20:46:16 EDT Error: Backup record 2024-05-18 14:50:17 +0000 of /Users/<redacted> (BF9AF519-F284-425B-99CA-5E8C8DF75A03): unpacked blob d7073d04aa9f8603458de37a1040f8401cd00b0d (STANDARD) not in backup set db

2025-04-12

AHA!! There is an easier way - thanks to an answer from #arqbackup support! You can:
- remove the Backup Plan (no data loss!)
- remove the Storage Location (no data loss!)
- add the new Storage Location
- open the new Location - you see the backups
- Right-click on a backup and select "Adopt" to get a new Backup Plan with these settings

So: there IS a way, but it's hidden in the most artful way 😄

2025-04-11

**Holy Smokes** 🤪 I've found a solution to fix/change the path of a *Storage Location* in #ArqBackup … well … a "solution" for #developer or #nerds, that is 🫣
So, if your Synology was suddenly available as "name.location", or now without the ".location" suffix, how do you change that?

Copy /Library/Application Support/ArqAgent/server.db to a writable folder and make it writable. Open it with an SQLite browser. Change the "json" column in the "storage_location" table. Copy it back. Force-stop "ArqMonitor" and "ArqAgent". Done ✅

2025-04-11

A question about #arqbackup:
Until now, my target drive on the network was found at //server.local/. Now it's at //server/.

How the heck do I change the destination for the backups in arqbackup?! It's driving me crazy.

Please: how do I change the URL of the backup destination in arqbackup if the backup server on my local network is now available at //server/ instead of //server.local/?

Please boost 🚀

@Taffer @kboyd Arq Backup supports S3-compatible servers. I wonder if that would work for this.

#arqbackup
