The solution has to be crafted in the context of the real-world problem. Consider:
a. The probability of a hard disk failure.
b. The probability of a DNS box failure.
c. The probability of the back-up being not usable.
d. The probability of a tsunami/fire/earthquake hitting me.
From my experience, the numbers are roughly:
a. <0.001 and getting better all the time
b. not sure, but it doesn't matter as the data is still intact
c. 0.9
d. <0.0001
Hence, I don't bother with backups anymore. I always duplicate data at the file level and keep the copies online all the time. At the terabyte level, backups can become impractical once a backup run takes longer than the interval at which the data changes, and it's even more impractical to verify that the backup is intact and actually restorable at any given time.
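To make the "is the copy intact" point concrete, here's a minimal sketch of the kind of check I mean. It just records SHA-256 checksums for a tree and re-verifies them later; the paths, manifest filename and chunk size are placeholders, nothing DNS-323-specific.

```python
#!/usr/bin/env python3
"""Record and re-check SHA-256 checksums for a directory tree.

Rough sketch only -- paths and the manifest name are placeholders,
not anything specific to the DNS-323.
"""
import hashlib
import json
import sys
from pathlib import Path

CHUNK = 1024 * 1024  # read files 1 MiB at a time


def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(CHUNK):
            h.update(chunk)
    return h.hexdigest()


def snapshot(root: Path) -> dict:
    """Map each file's relative path to its checksum."""
    return {
        str(p.relative_to(root)): sha256_of(p)
        for p in root.rglob("*") if p.is_file()
    }


if __name__ == "__main__":
    mode, root, manifest = sys.argv[1], Path(sys.argv[2]), Path(sys.argv[3])
    if mode == "record":
        manifest.write_text(json.dumps(snapshot(root), indent=1))
    elif mode == "verify":
        want = json.loads(manifest.read_text())
        have = snapshot(root)
        bad = [p for p, h in want.items() if have.get(p) != h]
        print("OK" if not bad else f"{len(bad)} file(s) missing or changed")
```

Run it in "record" mode against the primary copy, then in "verify" mode against the duplicate (or against the primary again later) to know whether anything has rotted or gone missing.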
Granted, even on a good spinning hard disk there is still some probability that an individual sector or track will fail. So it's good to get a second DNS box and zap the data over; better still, put the second box in another location, as far away as possible.
When you have gigabytes and beyond, the important thing is to know that the copy of your data is intact and good. Storing on hard disks is a cheap and effective way to do this, since failures usually get your attention: a red light, an error message, loud noises, or some smoke.
I have a DNS-323 with 2x1.5 TB disks, and I've already got a second DNS-323 with another 2x1.5 TB disks. I haven't gotten round to finding an efficient way to duplicate the first onto the second over a narrow-pipe WAN. Like most users, I only change a small portion of the data each day. Any suggestions? Maybe I should start a new thread for this.
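For context, the naive thing I can do today is copy only files whose size or mtime changed, something like the sketch below (SRC/DST are made-up mount points, assuming both boxes' shares are reachable as local paths, e.g. over NFS/CIFS). It still re-sends whole files, which is the part that hurts on a narrow pipe.

```python
#!/usr/bin/env python3
"""Copy only new or changed files from one share to another.

Rough sketch: SRC/DST are placeholder mount points for the two boxes'
shares; it compares size and mtime only, so it re-copies whole files
rather than just the changed bytes.
"""
import shutil
from pathlib import Path

SRC = Path("/mnt/box1/Volume_1")   # placeholder: primary box's share
DST = Path("/mnt/box2/Volume_1")   # placeholder: second box's share


def needs_copy(src: Path, dst: Path) -> bool:
    if not dst.exists():
        return True
    s, d = src.stat(), dst.stat()
    return s.st_size != d.st_size or s.st_mtime > d.st_mtime


def sync(src_root: Path, dst_root: Path) -> None:
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dst = dst_root / src.relative_to(src_root)
        if needs_copy(src, dst):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)   # copy2 preserves the mtime
            print("copied", dst)


if __name__ == "__main__":
    sync(SRC, DST)
```

What I'm really after is something that only sends the changed portions of files (rsync-style delta transfer, ideally with compression), if that can be made to run on or against these boxes.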
Thanks.