
Author Topic: FW 2.02 - "Pipeline" effect; faster read performance on larger files  (Read 3538 times)

RicRoller

  • Level 2 Member
  • Posts: 45

Using DNS-320 firmware 2.02 official release, two WD30EZRX drives in RAID0

Something I observed when starting the copy of a large file: the transfer begins relatively slowly, and the speed "ramps up" over the first few seconds as the NAS builds up a buffer of data to send (the two disks in RAID0 can deliver data faster than the NAS can push it out over the LAN). But with smaller files, that pipeline never gets a chance to fill, and the transfer rate is lower; in an extreme case, copying many tiny files of only a few kB, I have seen transfer rates drop below 1 MByte/sec!

BTW, that "pipeline" effect also shows up if I run Nastester and watch the network usage graph in the Windows Task Manager. With the default 400MB test sample size, the transfer rate graph looked more like a triangle: the pipeline had not completely filled by the end of the transfer. With a larger sample size (1GB), it does have time to reach the "plateau".
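One way to see that ramp-up directly, rather than eyeballing the Task Manager graph, is to sample the read rate per interval while streaming a single large file from the mapped NAS share. This is a rough sketch; the example path and chunk size are hypothetical, not from the thread:

```python
import time

CHUNK = 1024 * 1024  # read in 1 MiB chunks

def sample_throughput(path, interval=1.0):
    """Read `path` sequentially; return one MB/s reading per elapsed interval."""
    rates = []
    with open(path, "rb") as f:
        t0 = time.monotonic()
        acc = 0
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            acc += len(data)
            elapsed = time.monotonic() - t0
            if elapsed >= interval and elapsed > 0:
                rates.append(acc / elapsed / 1e6)  # MB/s over this interval
                t0, acc = time.monotonic(), 0
    return rates

# e.g. sample_throughput(r"Z:\testfile.bin") against the mapped NAS drive;
# a rising sequence of rates would show the "pipeline" filling up.
```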

Result sets from Nastester are attached below, showing the progressive improvement in read speed with larger files once that pipeline has "filled up". I cannot test tiny files using Nastester, as it only reads back a single file (which, having just been written, is still in the hard drive cache inside the NAS, so performance is lightning-fast!)

Code:
NAS performance tester 1.4 http://www.808.dk/?nastester
Running warmup...
Running a 100MB file write on drive Z: 5 times...
Iteration 1:     17.66 MB/sec
Iteration 2:     13.16 MB/sec
Iteration 3:     18.97 MB/sec
Iteration 4:     16.39 MB/sec
Iteration 5:     15.52 MB/sec
------------------------------
Average (W):     16.34 MB/sec
------------------------------
Running a 100MB file read on drive Z: 5 times...
Iteration 1:     27.16 MB/sec
Iteration 2:     28.24 MB/sec
Iteration 3:     32.38 MB/sec
Iteration 4:     29.27 MB/sec
Iteration 5:     36.63 MB/sec
------------------------------
Average (R):     30.74 MB/sec
------------------------------
Running warmup...
Running a 400MB file write on drive Z: 5 times...
Iteration 1:     19.25 MB/sec
Iteration 2:     20.51 MB/sec
Iteration 3:     20.45 MB/sec
Iteration 4:     22.07 MB/sec
Iteration 5:     19.80 MB/sec
------------------------------
Average (W):     20.42 MB/sec
------------------------------
Running a 400MB file read on drive Z: 5 times...
Iteration 1:     31.85 MB/sec
Iteration 2:     38.38 MB/sec
Iteration 3:     36.22 MB/sec
Iteration 4:     38.97 MB/sec
Iteration 5:     34.05 MB/sec
------------------------------
Average (R):     35.89 MB/sec
------------------------------
Running warmup...
Running a 1000MB file write on drive Z: 5 times...
Iteration 1:     20.12 MB/sec
Iteration 2:     19.69 MB/sec
Iteration 3:     19.98 MB/sec
Iteration 4:     19.42 MB/sec
Iteration 5:     19.89 MB/sec
------------------------------
Average (W):     19.82 MB/sec
------------------------------
Running a 1000MB file read on drive Z: 5 times...
Iteration 1:     45.46 MB/sec
Iteration 2:     43.05 MB/sec
Iteration 3:     44.95 MB/sec
Iteration 4:     43.46 MB/sec
Iteration 5:     45.43 MB/sec
------------------------------
Average (R):     44.47 MB/sec
------------------------------

Regards,
Richard

JavaLawyer

  • BETA Tester
  • Level 15 Member
  • Posts: 12190
  • D-Link Global Forum Moderator
    • FoundFootageCritic
Re: FW 2.02 - "Pipeline" effect; faster read performance on larger files
« Reply #1 on: December 20, 2012, 02:58:45 PM »

Your observations are generally true when copying files among most storage devices (more so for mechanical HDDs). Copying small files introduces additional overhead associated with initiating each copy, which must be repeated for every small file, as opposed to once for a single large file. Further latency comes from HDD head movement: a large file is likely stored in one contiguous space on the HDD, while an equally sized group of smaller files is more likely to be scattered across different platter locations.
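The collapse in throughput for tiny files can be sketched with a toy model: each file pays a fixed setup cost (protocol round-trips, open/close, seek) on top of its streaming time. The overhead and streaming-rate numbers below are illustrative guesses, not measurements from this thread:

```python
def effective_rate(file_size_mb, overhead_s=0.05, stream_rate_mbs=40.0):
    """Effective MB/s for one file: size / (fixed per-file overhead + streaming time)."""
    return file_size_mb / (overhead_s + file_size_mb / stream_rate_mbs)

print(effective_rate(0.004))   # 4 kB file: well under 0.1 MB/s
print(effective_rate(1000.0))  # 1 GB file: close to the full 40 MB/s
```

Even a modest 50 ms of per-file overhead is enough to push a 4 kB copy below 0.1 MB/s, consistent with the sub-1 MB/s rates reported above for batches of tiny files.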
Find answers here: D-Link ShareCenter FAQ | D-Link Network Camera FAQ
There's no such thing as too many backups

RicRoller

  • Level 2 Member
  • Posts: 45
Re: FW 2.02 - "Pipeline" effect; faster read performance on larger files
« Reply #2 on: December 21, 2012, 01:31:13 PM »

Quote from JavaLawyer:
"Copying small files introduces additional overhead associated with initiating the copy which must repeat for each small file"
Nastester's read test creates and reads back just one file (on empty hard disks in the NAS, that file is highly likely to be contiguous), so I was surprised there was such a large difference in read speed between the 100MB file and the 1GB file. So IMO the "usual" overhead of accessing lots of small files (HDD seek latency etc.) shouldn't have come into play here...

The "ramping up" of the read rate over the first few seconds of reading a single file is, IMO, definitely weird; a 100MB file takes 3 to 4 seconds to transfer over a gigabit connection, so any delay due to seek latency etc. at the beginning should be a very small fraction of that and hardly noticeable.
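As a sanity check on that timing, plugging the average read rates from the Nastester results earlier in this thread into simple size/rate arithmetic:

```python
# Rates (MB/s) taken from the Nastester averages posted above.
size_mb = 100
avg_read_100mb = 30.74  # average read rate measured with 100MB files
avg_read_1gb = 44.47    # average read rate measured with 1000MB files

print(size_mb / avg_read_100mb)  # ~3.25 s: matches the "3 to 4 seconds" claim
print(size_mb / avg_read_1gb)    # ~2.25 s if the "filled pipeline" rate held throughout
```

The gap between those two figures (~1 s over a 3 s transfer) is much larger than a single seek, which supports the point that seek latency alone cannot explain the ramp-up.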

Regards,
Richard