To answer your question FurryNutz, yes, there could be a problem with the disk or disks (we don't know whether one or two are involved and, if two, how they are set up). I have seen this a couple of times where the disk had a large cache that didn't get written out before the disk shut down. We solved that problem with a disk firmware update, but that was a few years ago now and as far as I know the firmware on all newer disks is free of that problem.
At the moment we are fumbling about in the dark because we don't have enough information: is it one disk or two, and if two, are they set up as a RAID array? If they are a RAID array, it is possible that one of the disks has bad sectors and has run out of spare sectors for reallocation, so the two disk images no longer match. That can cause file errors when the disks come out of hibernation.
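If you want a quick sanity check on the bad-sector theory before anything gets pulled, the reallocated-sector count is visible through SMART. This is only a rough sketch, assuming smartmontools is installed on the box you test from and the drive shows up as /dev/sdX; the device name and the helper function are illustrations, not anything specific to your unit.

```python
# Rough sketch of a SMART reallocated-sector check, assuming smartmontools is
# installed and the drive is visible as /dev/sdX (device name is an example only).
import subprocess

def reallocated_sectors(device: str):
    """Return the raw Reallocated_Sector_Ct value for a drive, or None if not reported."""
    out = subprocess.run(
        ["smartctl", "-A", device],          # print the SMART attribute table
        capture_output=True, text=True, check=False
    ).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:  # attribute 5 on most ATA drives
            return int(line.split()[-1])     # last column is the raw count
    return None

count = reallocated_sectors("/dev/sda")      # example device only
if count is None:
    print("Attribute not reported - check the drive with the vendor's own tool")
elif count > 0:
    print(f"{count} sectors already reallocated - watch this disk closely")
else:
    print("No reallocated sectors reported")
```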
Our normal procedure in a case like this is to pull the disk/s, put it/them in a SATA/USB caddy and run the manufacturer's disk test tools against it/them. If everything passes, we either image the array if it is RAID 0 or copy the data off if it is RAID 1. The disk/s are then returned to the NAS, formatted, and the data restored. We then run our normal read/write tests and, if everything passes, the unit is returned to the client.
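The read/write test at the end is nothing exotic: write known data to the share and confirm it reads back bit for bit. A minimal sketch of that idea, assuming the NAS share is mounted locally; the mount point, file name and sizes are placeholders.

```python
# Minimal read-back verification sketch, assuming the NAS share is mounted at
# /mnt/nas (path and sizes are placeholders, adjust to suit).
import hashlib, os

MOUNT = "/mnt/nas"        # hypothetical mount point of the share under test
CHUNK = 1024 * 1024       # 1 MiB per write

def write_and_verify(path: str, chunks: int = 256) -> bool:
    """Write pseudo-random data, flush it to disk, read it back and compare hashes."""
    written = hashlib.sha256()
    with open(path, "wb") as f:
        for _ in range(chunks):
            block = os.urandom(CHUNK)
            written.update(block)
            f.write(block)
        f.flush()
        os.fsync(f.fileno())              # push it to the disk, not just the write cache
    # Note: the read below may be served from the OS page cache, so this is a
    # basic integrity check rather than a full surface test.
    read_back = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            read_back.update(block)
    return written.digest() == read_back.digest()

ok = write_and_verify(os.path.join(MOUNT, "rw_test.bin"))
print("read/write test passed" if ok else "read-back mismatch - possible corruption")
```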
There is also a small possibility of a network problem, but a network fault usually corrupts data all the time, not just after the disks wake from hibernation.
The biggest problem we have is a lack of information, and until we get it there is little we can do.