July 14, 2011, 11:04:00 AM— Okay, my original instructions, below, though perhaps interesting, did not exactly work. Re-testing with firmware 1.08 showed that the most important thing when converting a drive from standard to RAID1 is that, after copying the data to the new drive, the drive with the data you want to preserve must be in the LEFT bay (when looking at the drive-entry side, the side with the cover). No matter what I did to prep the drive (as described below), if the source drive is in the right bay, that is the one that gets formatted! I do not know if the "zeroing-out" is even necessary.
July 19, 2011, 02:20:33 PM— Since I wrote the original post (below), it appears that, as of f/w 1.08, the method described may not work (as noted above). So this time, I put my new drive with the copied data on it in the left slot and rebooted once to have it recognized as the main drive. Then I put the new, zeroed drive into the right slot, rebooted, and formatted; this time the correct drive was formatted. I had turned off auto-rebuild to keep that as a separate step. So it seems the 1.08 f/w will format the right drive when converting to RAID1, regardless of which drive has data.
When I rebooted after the format, there appeared to be no data on the volume and the share names were not accessible. I shut down and checked the left (data) drive, and it still had its data, so I think this is a weird conflict due to the RAID array not having been rebuilt. In my attempt not to destroy the data again, I destroyed the data, so I was not able to finish this process. :-P Perhaps someone else can validate whether moving the data-filled drive to the left slot is sufficient.
Since I now have a SATA-->USB adapter, I reformatted the drives as a clean RAID1 configuration and am copying from the old drive to the array. I do not recommend this if you have a lot of data, because it is VERY slow!
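For what it is worth, here is a minimal sketch of that kind of copy, assuming the old drive is attached via the adapter to a Linux box (or to the NAS with ffp) and mounted at /mnt/olddrive, with the target volume reachable at /mnt/nas; both paths are hypothetical, so substitute your own:

# -a preserves permissions and timestamps; -v lists files as they copy.
# /mnt/olddrive and /mnt/nas are example mount points only.
rsync -av /mnt/olddrive/ /mnt/nas/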
May 08, 2010, 09:18:35 PM— I have a theory about why the wrong drive gets formatted when trying to convert a single-drive setup to a RAID1 configuration (thereby losing all the data that you wanted to protect).
These are the normal steps I went through as I upgraded my original RAID1 from 1TB drives to 1.5TB drives:
- Remove the old drives.
- Install one new drive (in the right bay, though I am not sure it matters).
- Start up the NAS and format the new drive.
- [possibly optional] Restart the NAS to make sure that it "knows" the new drive's configuration.
- Shut down and insert one of the old RAID drives (into the left bay). Restart. The NAS will start up with two single drives.
- Copy the data from the old drive (Volume_2) to the new drive (Volume_1)... this can take days, unless you have set up the NAS for telnet or ssh access, in which case you can use rsync or cp directly (see the sketch after this list).
- Shut down the NAS and remove the old RAID drive. It is important to keep one of the two original drives safe until you are sure that the new configuration is stable.
- Install the other new drive in the NAS and restart the NAS.
- [IMPORTANT: This step, described below, is the focus of this posting]
- Open the NAS configuration web page and follow the prompts to create a RAID1 configuration, formatting the new drive in the process. Make sure that the RAID is "rebuilt."
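As referenced in the copy step above, if you have telnet or ssh access on the NAS (e.g., via the Fonz FFP utilities mentioned later), the copy can be run directly on the box rather than over the network. A minimal sketch, assuming both volumes are mounted somewhere under /mnt; the exact mount points vary by firmware, so check the output of 'mount' first:

# Confirm where Volume_1 and Volume_2 are actually mounted:
mount
# Copy everything, preserving permissions and timestamps; the trailing
# slashes matter to rsync. The paths below are examples only.
rsync -av /mnt/Volume_2/ /mnt/Volume_1/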
This is how it is supposed to work (without the "IMPORTANT" step), and for most(?) of you this might be fine. I was one of the unlucky ones who had the drive holding the copied data reformatted (if you are upgrading from a single drive to dual, without a backup, you'd be in big trouble... perhaps the following will help).
My theory is that the NAS recognizes the addition of a new drive but may not recognize which drive is "new." This might happen if the new drive is pre-formatted, or if a cursory check sees random data that it interprets as a format. If this is true, then we can take steps to make sure that the NAS does not think this is the case--i.e., make sure that the new drive is absolutely recognized as "new."
So, the "IMPORTANT" steps follow:To do this, you will need or access to a machine that you can plug the new drive into: Windows with cygwin installed, Linux, Mac, or the NAS with Fonz FFP utilities installed. The key is command line access to run the 'dd' command (there may be other ways to do it, but that was the easiest, for me).
If are using the NAS itself to perform these steps...
Whichever machine you use, once the drive is attached and you have command-line access, "zero out" the leading part of the drive; 1 MB should be sufficient. To do this, run the 'dd' command as follows (with dd's default 512-byte block size, count=2048 writes exactly 1 MB of zeros over the start of the disk):
dd if=/dev/zero of=/dev/sdb count=2048
It is very important that the 'of=' spec points at the correct drive; you can do SERIOUS damage to a device or its data if it is not correct.
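Before running dd, it is worth double-checking which device node the new drive actually got; /dev/sdb above is just an example, and your drive may appear under a different name. A quick sanity check on a Linux system:

# Lists every disk and partition the kernel sees, with sizes in 1K blocks;
# identify the new drive by its capacity.
cat /proc/partitions
# For a drive just plugged in over USB, the kernel log shows the device
# name that was assigned to it:
dmesg | tail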
If the drive is not already installed in the NAS, install it.
Restart the NAS.
Now continue on with the steps in the configuration web page to create a RAID1, reformatting the new drive.
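If you have shell access on the NAS (e.g., via ffp), you can also watch the rebuild from the command line, assuming the NAS uses the standard Linux md (software RAID) driver, which reports array state in /proc/mdstat:

# Shows the md array, its member drives, and a progress indicator
# while the mirror is resyncing.
cat /proc/mdstat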
Though this was a bit complex, I hope it helps anyone else avoid these problems.
Bill...