Bug 423391 - dm stripe device has a worse reading speed - readahead is not optimal
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: device-mapper
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Zdenek Kabelac
QA Contact: Corey Marthaler
Reported: 2007-12-13 08:26 EST by Milan Broz
Modified: 2013-02-28 23:06 EST (History)
CC: 13 users

Fixed In Version: RHBA-2008-0081
Doc Type: Bug Fix
Last Closed: 2008-05-21 12:44:21 EDT

Attachments: None
Description Milan Broz 2007-12-13 08:26:40 EST
+++ This bug was initially created as a clone of Bug #147679 +++
RHEL5 clone

A dm device using the stripe target needs its readahead set
properly (similarly to the MD raid0 target).

Also, the --readahead parameter of lvcreate/lvchange is currently ignored.

The computed or user-defined readahead value should be propagated through
the libdevmapper library to the dm device (using the GETRA ioctl).

Original bug report:

Description of problem:
LVM2 has very bad read performance on a larger disk stripe. Below is
a single raw device out of an HP disk system (a DS2405, to be exact), connected
via Fibre Channel (max 400 MB/s) to an HP DL145 (Opteron) machine:

--- snipp ---
[root@opteron ~]# time dd if=/dev/zero of=/dev/sdo bs=1024k count=10000
10000+0 records in
10000+0 records out

real    2m36.544s
user    0m0.017s
sys     0m16.983s
[root@opteron ~]#
[root@opteron ~]# time dd if=/dev/sdo of=/dev/null bs=1024k count=10000
10000+0 records in
10000+0 records out

real    2m20.435s
user    0m0.023s
sys     0m19.391s
[root@opteron ~]#
--- snapp ---

This works out to 64 MB/s writing and 71 MB/s reading; that's okay.

Below is a disk stripe consisting of 14 HDDs with manual Fibre Channel
multipathing via LVM2 (on the same HP disk system):

--- snipp ---
[root@opteron ~]# time dd if=/dev/zero of=/dev/vg01/lv01 bs=1024k count=100000
100000+0 records in
100000+0 records out

real    4m44.290s
user    0m0.255s
sys     3m28.193s
[root@opteron ~]#
[root@opteron ~]# time dd if=/dev/vg01/lv01 of=/dev/null bs=1024k count=100000
100000+0 records in
100000+0 records out

real    9m32.262s
user    0m0.276s
sys     4m19.993s
[root@opteron ~]#
--- snapp ---

This works out to 350 MB/s writing but only 175 MB/s reading, a much worse
result for reading. If I use an HP-UX box, I get ~350/350 MB/s; a good
result. So the problem is in LVM2.
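The throughput figures quoted above can be recomputed from the dd transcripts: with bs=1024k the count equals the number of MB moved, and dd's "real" wall time is the denominator.

```python
def throughput_mb_s(mb_moved, real_seconds):
    """Average throughput in MB/s for a dd run."""
    return mb_moved / real_seconds

# "real" times taken from the transcripts above.
single_write = throughput_mb_s(10_000, 2 * 60 + 36.544)   # ~64 MB/s
single_read  = throughput_mb_s(10_000, 2 * 60 + 20.435)   # ~71 MB/s
stripe_write = throughput_mb_s(100_000, 4 * 60 + 44.290)  # ~352 MB/s
stripe_read  = throughput_mb_s(100_000, 9 * 60 + 32.262)  # ~175 MB/s
```

The striped read rate is just under half the striped write rate, which is the 50% gap reported below.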

Version-Release number of selected component (if applicable):

How reproducible:
Every time; see above.

Actual results:
Read performance is only 50% of the measured write speed.

Expected results:
The same read speed as the write speed, or better.
Comment 2 RHEL Product and Program Management 2007-12-13 08:34:57 EST
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 5 Zdenek Kabelac 2008-01-09 10:29:29 EST
fixed upstream
Comment 6 Milan Broz 2008-01-17 17:49:57 EST
In device-mapper-1.02.24-1.el5.
Comment 9 errata-xmlrpc 2008-05-21 12:44:21 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

