On the various Linux-based NAS boxes I've had over the years, they all ran ext4, and they all ended up with corruption.
Huh...I might need to look more into BTRFS then. Doesn't it add more wear to SSDs?
Btrfs generally causes more wear on SSDs than ext4 because of its copy-on-write (CoW) design and heavier metadata, which translate into higher write amplification. On the other hand, Btrfs supports transparent compression (e.g., zstd), which can significantly reduce the total bytes written and partially offset that extra wear. Ext4 remains the simpler, lower-overhead choice for heavy-write desktop use.
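For what it's worth, compression is just a mount option; a minimal /etc/fstab sketch (the device path and mount point are placeholders):

```
# Example fstab entry -- /dev/sdb1 and /mnt/nas are placeholders
/dev/sdb1  /mnt/nas  btrfs  compress=zstd:3,noatime  0  0
```

`zstd:3` is the default zstd level; higher levels trade CPU time for a better compression ratio.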
I haven't used an SSD-based NAS, so I can't say whether the corruption was caused by failing HDDs (though SMART reports never showed any issues).
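That's actually a point in Btrfs's favor here: it checksums data as well as metadata, so a scrub can surface silent corruption that SMART (which only watches the drive's own error counters) never reports. A sketch, assuming a Btrfs filesystem mounted at a placeholder path /mnt/nas:

```
# Read back every block and verify checksums (-B runs in the foreground and prints stats)
sudo btrfs scrub start -B /mnt/nas

# Check the result afterwards; look for nonzero csum error counts
sudo btrfs scrub status /mnt/nas
```

If a scrub reports checksum errors on a drive whose SMART data looks clean, that's exactly the kind of silent corruption ext4 can't detect on its own.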