Bug 245150 - dm stripe device has worse read speed - readahead is not optimal
Summary: dm stripe device has worse read speed - readahead is not optimal
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: device-mapper
Version: 4.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: Corey Marthaler
Blocks: 147679
 
Reported: 2007-06-21 12:20 UTC by Milan Broz
Modified: 2013-03-01 04:05 UTC
CC List: 13 users

Fixed In Version: RHBA-2008-0735
Doc Type: Bug Fix
Last Closed: 2008-07-24 19:58:57 UTC




Links:
Red Hat Product Errata RHBA-2008:0735 (normal, SHIPPED_LIVE): device-mapper bug fix and enhancement update - 2008-07-23 17:06:53 UTC

Description Milan Broz 2007-06-21 12:20:03 UTC
+++ This bug was initially created as a clone of Bug #147679 +++

A dm device using the stripe target needs its readahead set
properly (similarly to the MD raid0 target).

Also, the --readahead parameter of lvcreate/lvchange is currently ignored.

The computed or user-defined readahead value should be propagated through
the libdevmapper library to the dm device (using the BLKRASET ioctl).
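
For reference, readahead can already be inspected and overridden from
userspace with blockdev(8), which drives the same BLKRAGET/BLKRASET ioctls.
A manual session like the one below (device name and values are examples
only; readahead is counted in 512-byte sectors, so 4096 sectors = 2 MiB)
shows what the tools should eventually do automatically:

--- snipp ---
[root@opteron ~]# blockdev --getra /dev/vg01/lv01
256
[root@opteron ~]# blockdev --setra 4096 /dev/vg01/lv01
[root@opteron ~]# blockdev --getra /dev/vg01/lv01
4096
--- snapp ---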

Original bug report:

Description of problem:
LVM2 has very poor read performance when using a larger disk stripe. Below is
a single raw device from an HP disk system (a DS2405, to be exact), connected
via Fibre Channel (max 400 MB/s) to an HP DL145 (Opteron) machine:

--- snipp ---
[root@opteron ~]# time dd if=/dev/zero of=/dev/sdo bs=1024k count=10000
10000+0 records in
10000+0 records out

real    2m36.544s
user    0m0.017s
sys     0m16.983s
[root@opteron ~]#
[root@opteron ~]# time dd if=/dev/sdo of=/dev/null bs=1024k count=10000
10000+0 records in
10000+0 records out

real    2m20.435s
user    0m0.023s
sys     0m19.391s
[root@opteron ~]#
--- snapp ---

This works out to 64 MB/s writing and 71 MB/s reading; that is okay.

Below is a disk stripe consisting of 14 HDDs with manual Fibre Channel
multipathing via LVM2 (on the same HP disk system):

--- snipp ---
[root@opteron ~]# time dd if=/dev/zero of=/dev/vg01/lv01 bs=1024k count=100000
100000+0 records in
100000+0 records out

real    4m44.290s
user    0m0.255s
sys     3m28.193s
[root@opteron ~]#
[root@opteron ~]# time dd if=/dev/vg01/lv01 of=/dev/null bs=1024k count=100000
100000+0 records in
100000+0 records out

real    9m32.262s
user    0m0.276s
sys     4m19.993s
[root@opteron ~]#
--- snapp ---

This works out to 350 MB/s writing but only 175 MB/s reading - a much worse
result for reading. With an HP-UX box I get ~350/350 MB/s; a good result. So
the problem is LVM2.
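
A quick check that points at readahead (hypothetical session; the 256-sector
default is only an assumed value): if the striped LV reports the same
readahead as a single raw disk, sequential reads can never keep all 14 legs
of the stripe busy, while buffered writes are merged into large requests
regardless - which would explain the write/read asymmetry above:

--- snipp ---
[root@opteron ~]# blockdev --getra /dev/sdo
256
[root@opteron ~]# blockdev --getra /dev/vg01/lv01
256
--- snapp ---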

Version-Release number of selected component (if applicable):
lvm2-2.00.31-1.0.RHEL4

How reproducible:
Every time; see above.

Actual results:
Read performance is only 50% of the measured write speed.

Expected results:
The same read speed as when writing, or even better.
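
Once the --readahead parameter is honored, the value should also be settable
through the LVM tools themselves rather than with blockdev. A hypothetical
invocation (the 'auto' keyword asks LVM to compute a value appropriate for
the stripe layout):

--- snipp ---
[root@opteron ~]# lvchange --readahead auto /dev/vg01/lv01
[root@opteron ~]# lvdisplay /dev/vg01/lv01 | grep -i 'read ahead'
--- snapp ---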

Comment 2 RHEL Program Management 2007-06-21 12:34:16 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 4 RHEL Program Management 2007-11-29 04:18:42 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 6 Zdenek Kabelac 2008-01-09 15:25:17 UTC
fixed upstream

Comment 10 errata-xmlrpc 2008-07-24 19:58:57 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2008-0735.html

