Bug 450922 - very sub-optimal default readahead settings on lvm device
Summary: very sub-optimal default readahead settings on lvm device
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Milan Broz
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-06-11 18:35 UTC by John Ellson
Modified: 2013-03-01 04:06 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-07-01 12:10:51 UTC
Type: ---
Embargoed:



Description John Ellson 2008-06-11 18:35:20 UTC
Description of problem:
I saw some recent references to performance penalties of 20-30% for LVM, and
wondered what I was getting from running LVM over 4-way RAID-0. I was
horrified to discover a 50% penalty! Googling around turned up this known fix,
which restores just about all of the performance (as measured by hdparm -t):
    https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/129488
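
(A minimal way to make the workaround stick across reboots, assuming an
rc.local-style init script is in use; the 8192-sector value is the one from
this report, not a tuned recommendation:)

# appended to /etc/rc.d/rc.local
# blockdev takes the readahead value in 512-byte sectors, so 8192 = 4 MB
blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01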

Version-Release number of selected component (if applicable):
lvm2-2.02.33-11.fc9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. hdparm -t /dev/mapper/VolGroup00-LogVol01
2. blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01
3. hdparm -t /dev/mapper/VolGroup00-LogVol01
  
Actual results:

$ blockdev --getra /dev/mapper/VolGroup00-LogVol00
256

$ hdparm -t /dev/mapper/VolGroup00-LogVol01
/dev/mapper/VolGroup00-LogVol01:
 Timing buffered disk reads:  374 MB in  3.01 seconds = 124.21 MB/sec

$ blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01

$ hdparm -t /dev/mapper/VolGroup00-LogVol01
/dev/mapper/VolGroup00-LogVol01:
 Timing buffered disk reads:  734 MB in  3.00 seconds = 244.57 MB/sec


Expected results:
No big performance penalty for LVM, at least not without big red flags to the
user!

Additional info:
Now I'm wondering why I'm using LVM over RAID-0 at all. There is no way I'm going
to extend the logical partition to another device (it already spans 4 disks).
Perhaps the right solution for me is to drop LVM and run my file system directly
on /dev/md0.

Comment 1 John Ellson 2008-06-11 18:43:13 UTC
Perhaps hdparm is misleading? I get essentially no performance increase
for my make jobs with this readahead change.
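
(Note that hdparm -t issues large sequential reads, exactly the pattern that
readahead accelerates, while a make job is mostly small scattered reads. A
sketch of a seekier check, assuming fio is available; the job parameters are
illustrative, not from this report:)

$ fio --name=randread --readonly --filename=/dev/mapper/VolGroup00-LogVol01 \
      --rw=randread --bs=4k --direct=1 --runtime=30 --time_based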

Comment 2 Milan Broz 2008-07-01 12:10:51 UTC
- hdparm runs just synchronous reads of 2MB blocks, so it should return values
similar to:
blockdev --flushbufs $DEV ; dd iflag=sync if=$DEV of=/dev/null bs=2048k count=100

- the readahead setting is now properly applied to striped LVs (RAID-0) in lvm2
2.02.39 (the values should be similar to those of the MD subsystem)

# lvcreate -i4 -L 100G -n lv_s2 vg_test
  Using default stripesize 64.00 KB
  Logical volume "lv_s2" created
# lvs -o +devices
  LV    VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  lv_s2 vg_test -wi-a- 100.00G                                      /dev/sdb(0),/dev/sdc(0),/dev/sdd(0),/dev/sde(0)

# blockdev --getra /dev/sdb
256
# blockdev --getra /dev/vg_test/lv_s2
1024
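
(The 1024 here is in 512-byte sectors: four stripes at the 256-sector
per-device default gives 4 x 256 = 1024, i.e. 512 KB of readahead. A
different value can also be stored per-LV; a sketch, assuming lvchange's
--readahead option behaves here as documented:)

# store an explicit readahead (in sectors) in the LV metadata,
# or "auto" to return to the calculated default:
lvchange --readahead 8192 /dev/vg_test/lv_s2
lvchange --readahead auto /dev/vg_test/lv_s2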

- there are still some issues when stacking devices (LVM over MD); see bug 232843


