D-Link Forums

The Graveyard - Products No Longer Supported => D-Link Storage => DNS-323 => Topic started by: wrlee on July 07, 2011, 12:03:54 PM

Title: Slow disk to disk copy
Post by: wrlee on July 07, 2011, 12:03:54 PM
I've set up my NAS to allow me to SSH into it, and I am using rsync to copy one disk to the other. Though I have drives that support 3Gb/s transfer speeds, I am getting transfer rates on large files of only 4MB/s! If I loosely translate that to 40Mb/s, that is just over 1/10th the speed it should operate at. Why is it so slow?
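
For context, the copy in question is something along these lines (flags are illustrative; the paths are the stock DNS-323 mount points - check yours with mount):

    # archive-mode copy of disk 1's contents onto disk 2
    rsync -a /mnt/HD_a2/ /mnt/HD_b2/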

Bill...
Title: Re: Slow disk to disk copy
Post by: jhtopping on July 07, 2011, 12:31:41 PM
There are probably more posts concerning speed than anything else.  Search for the word "slow".
Title: Re: Slow disk to disk copy
Post by: fordem on July 07, 2011, 03:30:53 PM
Your question is beyond the scope of this forum, since the default configuration does not allow you to SSH into the NAS and run rsync - to be able to do that you had to hack the system - so you would perhaps be better off asking your questions in a more appropriate forum.

What I can suggest is that you research rsync & its suitability for copying files - or - try running cp to copy the files - I suspect you'll find it significantly faster.

Whilst you're about it - research the sustained read/write speeds of your 3Gb/s drives - you're probably in for a rude awakening - the interface may be capable of transferring data at that speed - but the drive is incapable of writing data that fast, so on a large file transfer, once the buffer is filled, guess what happens...
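
If you want to see that for yourself, a rough sustained-write test run on the NAS itself goes something like this - /mnt/HD_a2 is the stock mount point, and conv=fsync assumes your dd supports it (the busybox one may not; if so, time a sync as well):

    # write 256MB and force it out to the platters; dd reports the rate
    dd if=/dev/zero of=/mnt/HD_a2/ddtest bs=1M count=256 conv=fsync
    rm /mnt/HD_a2/ddtest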
Title: Re: Slow disk to disk copy
Post by: wrlee on July 19, 2011, 01:53:54 PM
Most of the problem was that I was using rsync and I had no idea that rsync had so much overhead; indeed cp is much, much faster.
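
For anyone who wants to reproduce the comparison, timing the same large file both ways is enough to show the gap (paths assume the stock mount points; "bigfile" is just a placeholder):

    # time the identical single-file copy under rsync and under cp
    time rsync /mnt/HD_a2/bigfile /mnt/HD_b2/
    time cp /mnt/HD_a2/bigfile /mnt/HD_b2/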

I'd posted here because I thought others might have comments about the base speed of transfers, despite the rated speeds of the drives and hardware. I did come across a post that mentioned that external access to the NAS tops out at 23MB/s. I didn't see a description as to why.
Title: Re: Slow disk to disk copy
Post by: fordem on July 19, 2011, 04:03:36 PM
"External access" to the NAS at 50MB/sec is possible, as are disk transfers exceeding 27MB/sec.  I've gotten tired of explaining where the limitations are - use the forum search it's in there somewhere.
Title: Re: Slow disk to disk copy
Post by: Steve Pitts on July 21, 2011, 01:04:17 PM
I did come across a post that mentioned that external access to the NAS tops out at 23MB/s. I didn't see a description as to why.
The manual contains the fairly straightforward statement:

"High Performance Gigabit Ethernet Connectivity (Up to 23/15MBps or 184/120Mbps Read/Write)"

which is where the 23MB/s figure comes from. If you bought a car that advertised a top speed of 80 MPH would you take it to the garage and enquire as to why it won't do 100 MPH??
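For what it's worth, the two sets of numbers in that line are the same figures in different units - one byte is eight bits:

    23 MB/s x 8 = 184 Mb/s (read)
    15 MB/s x 8 = 120 Mb/s (write)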
Title: Re: Slow disk to disk copy
Post by: wrlee on July 21, 2011, 05:37:21 PM
You guys are awfully critical.  The problem with forums is that you have to know what to search for. Depending on the context that prompts a post, the search "question" isn't always apparent (until the answer is understood). I wasn't taking a "car" to a garage and complaining; I was trying to understand what was going on, because the performance differential did not make sense to me. I appreciate the informative side of your input, but I don't know that the attitude was necessary.

I usually try to read through the stickies, where I'd expect common, repeated questions to be answered, and I had not seen anything related to throughput performance. And I was doing a disk-to-disk copy, which ought to be fast. And, as I mentioned, I had no idea that rsync had so much overhead, so I did not think it was the culprit... thus the performance, as I observed it, would not have been explained by the quoted numbers.

The question remains: what should disk-to-disk copy performance be, relative to the speed of the drives?
Title: Re: Slow disk to disk copy
Post by: Pbalis on July 22, 2011, 05:14:07 PM
There is no real answer about what the speed "should be". It depends.

Do you have jumbo frame support on all the network devices? What software are you using? There really is no such thing as a hardware-to-hardware copy without software, and some software will be faster than others. If you copy really small files it will be impossible to get it going really fast - the file system has overhead. I can rip from a good DVD (some are not so good) at 20 MB/sec, yet copying 5 GB of icon files takes me about 36 hours.

You'll need the NIC, switch and DNS-323 all set for jumbo frame support; a switch that doesn't do it means you can't get the best performance. Your network card will have to be manually set to jumbo frames, as will the DNS-323 - neither supports it by default. On a 10/100 network without jumbo frame support you are probably about where I would expect.
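
As a sketch of the client end, assuming a Linux machine whose NIC is eth0 (check the name with ifconfig) and a card/driver that can actually handle a 9000-byte MTU:

    # raise the MTU on the client NIC to enable jumbo frames
    ifconfig eth0 mtu 9000
    # confirm the new setting took
    ifconfig eth0 | grep -i mtu

The DNS-323 end is set from its web configuration (on firmware versions that support jumbo frames), and every switch in the path has to pass them as well.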

You can search on "jumbo frame" and find a lot more details.
Title: Re: Slow disk to disk copy
Post by: Steve Pitts on July 27, 2011, 01:18:17 AM
You guys are awfully critical
Oh, don't be so precious. No one is being critical, merely pointing out the realities of the situation, which are that a) there is no way to do a disk-to-disk copy directly on the device as supplied by the manufacturer (so support for that on the official forums is a little unlikely) and b) the manufacturer advertises the device as giving up to 23MB/s. Since gigabit Ethernet is capable of far more than that, and the hard drives are probably much quicker than that too, it would appear that the limiting factor is the internal combination of hardware and software that drives the device.
Title: Re: Slow disk to disk copy
Post by: uubird on July 27, 2011, 02:04:11 AM
Most of the problem was that I was using rsync and I had no idea that rsync had so much overhead; indeed cp is much, much faster.
 :o
Title: Re: Slow disk to disk copy
Post by: bjdo on July 27, 2011, 09:27:53 PM
As to where the bottleneck is:

I have read that the bottleneck is slow disk controllers - the DNS-323 simply cannot talk to the disks any faster. Testing raw disk I/O via hdparm gives roughly the figures fordem quotes, far below the capability of most (all?) hard drives. Testing raw Ethernet I/O without disk access gives higher speeds.
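
For reference, the raw disk test being described is the usual hdparm one - this assumes hdparm is available (e.g. via fun_plug/ffp) and that the first drive appears as /dev/sda:

    # buffered disk read test - reads from the drive itself, no filesystem
    hdparm -t /dev/sda
    # cached read test, for comparison - exercises memory/CPU, not the disk
    hdparm -T /dev/sda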


Sorry, I don't have a reference for this.