D-Link Forums
The Graveyard - Products No Longer Supported => D-Link Storage => DNS-323 => Topic started by: Scay on April 28, 2008, 02:52:12 PM
-
Just bought the DNS-323 but didn't do my homework as well as I should have (if I had, I would have bought the similar and newer Zyxel instead...)
Anyway, it has obviously been a problem for many people with the 1.04 firmware that using a RAID 1 configuration causes the system to go into "Degraded mode", flagging one disk as faulty for some reason. I just bought my DNS-323 and I'm using it with 2x Seagate 7200.11 1TB discs in RAID 1. After every shutdown there is a 50% probability the system goes into "Degraded mode", forcing me to either reformat the entire system or use telnet via fun_plug to re-add the "faulty" drive to the RAID 1 (see the sketch at the end of this post). VERY VERY VERY annoying. This is a known issue - when can we expect a fix for it? There are lots of theories regarding what's causing it, one theory being the use of disks larger than ~400GB.
More info can be found here: http://forum.dsmg600.info/t1703-shows-degraded-after-upgrade-1.04.html
Would like some input on this as soon as possible.
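(For anyone stuck in this state, here is roughly what the telnet/fun_plug workaround looks like. This is only a sketch, assuming the unit's mirror is the Linux md array /dev/md0 and that the half the firmware dropped is /dev/sdb2 - the actual device names may differ on your unit, so check /proc/mdstat first. It also assumes mdadm is available through fun_plug.)

    # Check the current state of the RAID 1 array
    cat /proc/mdstat
    # Re-add the partition the firmware flagged as "faulty"
    # (assumes the array is /dev/md0 and the dropped half is /dev/sdb2)
    mdadm /dev/md0 --add /dev/sdb2
    # Watch the rebuild progress
    cat /proc/mdstat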
-
Nice to see ECF replying to quite a few topics! Too bad you forgot this one...
What's the current status on a firmware update that takes care of this problem?
-
So after the firmware upgrade, you format the drives and they operate normally until you power the unit down, and then you hit the degraded mode?
-
The issue you seem to be having has not been confirmed as an issue. I have not been able to replicate your problem of it going to degraded and then back to working randomly. There are some posts on this and other forums, but they seem to be reporting the DNS-323 showing the array as degraded after a firmware upgrade only. Our support site states the following about upgrading firmware:
"New firmware for the DNS-323 often affects the way hard drives are formatted and the way files are handled.
To avoid potential complications and loss of files, please back up all files to another location before upgrading firmware.
After the upgrade, re-format your hard drives if upgrading from Shipping firmware 1.00 or 1.01 and if hard drives are formatted in EXT3.
Once the hard drives have been re-formatted, it will be safe to move your files back onto the DNS."
Have you tried reformatting the drives using 1.04 firmware to see if the issue persists?
-
I have re-formatted the drives many times to get rid of this issue, but it still comes back to degraded every now and then... And of course I re-formatted with the latest firmware installed.
Lycan: Correct, and it doesn't go back to degraded every time....seems a bit random. No custom applications used at the time either.
-
I have a theory as to a possible cause of this problem - however, since I have not experienced the problem, I cannot prove or disprove the theory - perhaps someone who is experiencing the problem can do so.
First, a bit of background - my first 250GB SATA drives were Maxtors, and they worked fine in my desktop, but not in the server I had originally purchased them for - the server never detected them at boot up. After some searching I discovered that the problem was related to a "delayed spin up" function - the drives did not spin up immediately on power up, but would do so either on a command from the controller or after a random delay. This is done to reduce the startup load on the power supply, by staggering the drive spinup and spreading the current demand over a longer period. In the case of my server, the drives were never ready when the server checked for them, and the solution was a jumper setting that disabled the "delayed spin up" function.
Here's the theory now - is it possible that, like my server, the DNS-323 is checking for the drive before the drive is operationally ready? It sees one drive, but not the other, and so reports a degraded status?
And now - how to test - if you have been experiencing this problem, can you check your drive documentation to see if the drive offers a "delayed spin up" function, and if it does, can you disable the function and see if the problem continues?
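(If you can attach the drive to a Linux machine - or get a shell on the DNS-323 via fun_plug - and hdparm is available, the drive's IDENTIFY data will tell you whether "power-up in standby", the ATA name for delayed spin up, is supported and enabled. A sketch, with the device name being an assumption:)

    # Look for the "Power-Up In Standby feature set" line in the IDENTIFY data;
    # a leading "*" in hdparm's output means the feature is currently enabled
    hdparm -I /dev/sda | grep -i standby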
-
I agree. Good Post!
-
I'm also having the degraded pink / amber light issue using firmware 1.04. Below is what I've done so far:
I'm using two drives of 500 gigs each, Seagate ST3500320AS
I shut down and boot the DNS using the blue square button on the front panel.
I haven't used anything besides the DNS built-in features (no fun_plug or whatever).
The DNS is plugged into a UPS, so electrical disturbances are greatly reduced, and no outages occurred during testing.
I bought the unit and the 2 Seagate drives together.
Updated the firmware to 1.04
Formatted RAID 1
Configured the DNS
Sent stuff to it
Shut it down for the night
Degraded status the next day.
Formatted a couple of times, same result each time.
I've noticed, however, that standard formatting (standalone drives in the DNS) doesn't suffer the degraded status of RAID 1.
Went back to the store with the 2 drives and the DNS. Technicians tested both hard drives and certified that they are in good condition (I bought them at the same time as the DNS, and I still have the invoice for the technician's labor time).
Came back home with a new DNS (they replaced it).
Redownloaded firmware 1.04 and compared it with the previous download - nothing wrong with either of the 2 downloads (see the checksum sketch at the end of this post).
Updated to firmware 1.04
Formatted RAID 1
Configured the DNS
Sent stuff to it
Shut it down for the night
Degraded status the next day.
Reformatted RAID 1 again...
I've noticed that as long as I don't write any file to the DNS, the light / degraded issue is not triggered (shutting it down and booting it up several times, with different delays).
As soon as I write something, even as small as 50k, degraded status follows, inevitably.
Downgraded to firmware 1.03 - never had the issue up to now.
fordem:
I believe that I've answered your question. If the drives were not able to spin up fast enough, I would have the degraded issue regardless of whether I write data to the drives or not. But that's not the case here.
Also, with standard (standalone) drives I do not suffer the degraded issue. If the drives were booting up too slowly, I would suffer the degraded issue there too, but again, that's not the case.
Could someone shed a bit of light on this issue, please?
Thank you
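(Regarding the firmware comparison step above: a quick way to confirm two downloads are byte-for-byte identical is to compare checksums. The file names below are just placeholders for the two downloaded images.)

    # Identical checksums mean the two downloads are the same file
    md5sum dns323_fw104_first.bin dns323_fw104_second.bin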
-
Scay and Megistal,
Your descriptions of the problems you are experiencing with degraded RAID 1 arrays on 1.04 firmware are exactly the same as what I'm seeing. I have a hunch that this may be hardware-version related (see my post here: http://forums.dlink.com/index.php?topic=1596.0 ).
Please look at the bottom of your DNS-323's and verify if you are running version A1 or B1.
-
Quote from: Megistal
"I believe that I've answered your question. If the drives were not able to spin up fast enough, I would have the degraded issue regardless of whether I write data to the drives or not... Could someone shed a bit of light on this issue, please?"
Megistal:
I'll take a look at the issue you raise about needing to have data on the drive before the array becomes degraded - in the meantime, if your drives support delayed spinup and it is enabled, how about disabling it and letting us know what happens - the DNS-323 already has its own mechanism for staggering spinup, so there is no need to have the disk do it.
Also - you can't have a degraded (array) issue unless you have an array.
Just so that it's clear - a RAID array (anything other than RAID 0, which does not provide any redundancy) is considered "degraded" or "critical" when it no longer offers redundancy, which would be the case when a drive fails - or - if the data on the drives is out of "sync".
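(On a Linux md-RAID based box like the DNS-323, this state is visible in /proc/mdstat, reachable via telnet/fun_plug. A hypothetical example - array and partition names are assumptions:)

    cat /proc/mdstat
    # Healthy mirror - both halves present:
    #   md0 : active raid1 sda2[0] sdb2[1]
    #         486544512 blocks [2/2] [UU]
    # Degraded mirror - one half missing, no redundancy:
    #   md0 : active raid1 sda2[0]
    #         486544512 blocks [2/1] [U_]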
-
I have A1 revision on mine.
-
I have a B1 revision.
Quote from: fordem
"Also - you can't have a degraded (array) issue unless you have an array."
You're right, standalone drives cannot be in a degraded mode. However, if one drive did not start correctly for whatever reason, it would display the amber light regardless of the format type that was applied to it.
A disk's boot behavior is independent of the data written on it.
Quote from: fordem
"if your drives support delayed spinup and it is enabled, how about disabling it and letting us know what happens"
I didn't find any information about that feature on the Seagate web site for my drives.
Are you aware of any Seagate-approved tools that can do that?
-
Whether or not it will display an amber LED depends on how the DNS-323 sees it - suffice to say, in firmware revs prior to 1.04 I found that the unit's drive-failure sensing was somewhat lacking - drives that my desktop's BIOS would reject could be installed in the DNS-323, and the unit would attempt to partition and format them; the only indication of an error would be the length of time it took to format.
FWIW - I have never seen the "amber" LED, and on both occasions on which I have seen the "pink" or "white" LED, the problem was provoked by something I had done and did not relate to a physically failed drive or, as far as I can tell, a drive that was not being sensed as present.
As for your Seagate - it does support what Seagate terms "deferred spin" and it is enabled by default - it can be disabled by grounding pin 11 in the SATA power connector, which is not as easy as using a jumper - I am not certain if there is a jumper to disable it.
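(If grounding a pin sounds unappealing: with the drive attached to a Linux machine, a reasonably recent hdparm can in principle toggle the feature in software. Treat this as a sketch only - the hdparm man page marks -s as dangerous, not every drive honors it, and the device name is an assumption.)

    # Disable power-up in standby (Seagate's "deferred spin") on the drive
    # WARNING: hdparm documents -s as dangerous - double-check you have the
    # right device before running this
    hdparm -s0 /dev/sdb
    # Verify: the "Power-Up In Standby feature set" line should no longer
    # be marked enabled ("*")
    hdparm -I /dev/sdb | grep -i standby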
-
I have upgraded to firmware 1.05 and so far I don't suffer from the degraded issue anymore... Still, I'll continue testing.