Are excessive re-writes bad for SSDs?
See "Avoid excessive writes" here https://darwinsdata.com/how-do-i-fix-a-bad-ssd-block/
I think my external 2 TB #SanDisk SSD just died.
I use it mainly for analysis of weather station data.
The usual workflow suspected of "excessive writes" is:
once or twice per month, download about 1,000 to 10,000 station files and re-format them via scripts. These scripts delete and re-write each station file multiple times until the procedure is finished (see the sketch below).
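As an aside, here is a minimal sketch of what a lower-write variant of that workflow could look like; the mount point /Volumes/WeatherSSD, the *.csv pattern, and the reformat steps are all placeholders for illustration, not my actual scripts. The idea is to do every intermediate step in memory and touch the SSD only twice per file, one read and one write:

```python
import pathlib

# Assumption: station files are plain-text .csv files on the external
# SSD mounted at /Volumes/WeatherSSD (placeholder path).
STATIONS = pathlib.Path("/Volumes/WeatherSSD/stations")

def reformat(text: str) -> str:
    """Stand-in for the real multi-step re-formatting. Every step
    transforms the data in memory, so the file on the SSD is never
    deleted and re-written between steps."""
    steps = [
        lambda t: t.replace("\r\n", "\n"),  # example step: normalize line endings
        lambda t: t.strip() + "\n",         # example step: trim stray whitespace
    ]
    for step in steps:
        text = step(text)
    return text

for src in STATIONS.glob("*.csv"):
    data = reformat(src.read_text())  # one read; all work stays in RAM
    src.write_text(data)              # one final write per file per run
```

Compared to deleting and re-writing each file several times, total bytes written per run then scale with the number of files rather than files × steps.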
That workflow caused performance degradation on a regular basis. Re-formatting (mostly to exFAT) became necessary once a year, then every 6 months, and since December every third month.
I always just did the re-formatting, without checking for the cause or for "bad blocks" or anything (a sketch of such a check follows below).
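In hindsight, checking would have been cheap. A hedged sketch of such a check, assuming smartmontools is installed (e.g. via Homebrew) and the external SSD shows up as /dev/disk2; both the device path and the "-d sat" hint (often needed for USB-SATA bridges) are assumptions to verify first with `diskutil list`:

```python
import subprocess

# Assumptions: `smartctl` from smartmontools is on the PATH
# (brew install smartmontools) and the external SSD is /dev/disk2;
# confirm the device with `diskutil list` before running this.
result = subprocess.run(
    ["smartctl", "-a", "-d", "sat", "/dev/disk2"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# In the report, the attributes worth watching (names vary by vendor)
# include Reallocated_Sector_Ct and Current_Pending_Sector; rising
# values there point to failing blocks rather than a file-system issue.
```

If smartctl reports nothing over USB, the enclosure's bridge chip may simply not pass SMART through; that is a limitation of the enclosure, not proof the drive is healthy.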
Today's failure: I had just successfully re-formatted the SSD, this time to JHFS+. The re-formatting took more than an hour, which is unusually slow.
Afterwards, I tried to restore a backup** (which luckily had finished successfully, although it too had taken an unusually long time).
The restore failed with an error.
I then tried re-formatting to JHFS+ again, but this also failed with an error.* I tried multiple times, always with the same error.
Does this history mean I'll just have to replace the SSD every 3 years?
Can I extend its lifetime somehow?
Do I need to be cautious about the frequency of backups to the other SSD? I really don't want to lose that one. So far, I only write weather-data backups when performance degrades on the weather SSD, but I had thought of doing them more frequently so the backups don't take so long.
* Disk Utility app (GUI) error: "Could not write to last block." (Original German: "Auf den letzten Block konnte nicht geschrieben werden.")
** backup software: CarbonCopyCloner
German post on the same topic:
https://climatejustice.social/@anlomedad/114737265518700525