Bug 450922

Summary: very sub-optimal default readahead settings on lvm device
Product: [Fedora] Fedora
Reporter: John Ellson <john.ellson>
Component: lvm2
Assignee: Milan Broz <mbroz>
Status: CLOSED RAWHIDE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: medium
Docs Contact:
Priority: low
Version: rawhide
CC: agk, bmarzins, bmr, dwysocha, mbroz, prockai, pvrabec
Target Milestone: ---
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2008-07-01 12:10:51 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description John Ellson 2008-06-11 18:35:20 UTC
Description of problem:
I recently saw references to 20-30% performance penalties for LVM, and wondered
what I was getting from running LVM over 4-way RAID-0. I was horrified to
discover a 50% penalty! Googling around turned up this known fix that restores
just about all of the performance (as measured by hdparm -t):
    https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/129488

Version-Release number of selected component (if applicable):
lvm2-2.02.33-11.fc9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. hdparm -t /dev/mapper/VolGroup00-LogVol01
2. blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01
3. 
  
Actual results:

$ blockdev --getra /dev/mapper/VolGroup00-LogVol00
256

$ hdparm -t /dev/mapper/VolGroup00-LogVol01
/dev/mapper/VolGroup00-LogVol01:
 Timing buffered disk reads:  374 MB in  3.01 seconds = 124.21 MB/sec

$ blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01

$ hdparm -t /dev/mapper/VolGroup00-LogVol01
/dev/mapper/VolGroup00-LogVol01:
 Timing buffered disk reads:  734 MB in  3.00 seconds = 244.57 MB/sec
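For scale: the values reported by blockdev --getra/--setra are counted in 512-byte sectors, so the two settings above correspond to 128 KiB and 4 MiB of readahead. A quick sanity check (sectors_to_kib is a hypothetical helper, not part of any tool mentioned here):

```shell
# blockdev readahead values are in 512-byte sectors.
sectors_to_kib() { echo $(( $1 * 512 / 1024 )); }
sectors_to_kib 256     # default -> 128 KiB
sectors_to_kib 8192    # tuned   -> 4096 KiB (4 MiB)
```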


Expected results:
No big performance penalties for LVM, or at least not without big red flags to
the user!

Additional info:
Now I'm wondering why I'm using LVM over raid0 at all. There is no way I'm going
to extend the logical partition to another device (4 disks). Perhaps the right
solution for me is to drop LVM and run my file system directly on /dev/md0.

Comment 1 John Ellson 2008-06-11 18:43:13 UTC
Perhaps hdparm is misleading? I get essentially no increase in performance
for my make jobs with this readahead change.

Comment 2 Milan Broz 2008-07-01 12:10:51 UTC
- hdparm just runs synchronous reads of 2 MB blocks, so it should return
values similar to:
blockdev --flushbufs $DEV ; dd iflag=sync if=$DEV of=/dev/null bs=2048k count=100
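For reference, that dd invocation reads a fixed amount of data rather than a fixed time window like hdparm -t; the total is simply bs × count (plain arithmetic, nothing lvm-specific):

```shell
bs=$(( 2048 * 1024 ))             # bs=2048k -> 2 MiB per read
count=100
total_mib=$(( bs * count / 1024 / 1024 ))
echo "${total_mib} MiB"           # 200 MiB read synchronously in total
```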

- the readahead setting is now properly set for striped LVs (RAID0) in lvm2
2.02.39 (values should be similar to the MD subsystem)

# lvcreate -i4 -L 100G -n lv_s2 vg_test
  Using default stripesize 64.00 KB
  Logical volume "lv_s2" created
# lvs -o +devices
  LV    VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  lv_s2 vg_test -wi-a- 100.00G                                       /dev/sdb(0),/dev/sdc(0),/dev/sdd(0),/dev/sde(0)

# blockdev --getra /dev/sdb
256
# blockdev --getra /dev/vg_test/lv_s2
1024
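The 1024-sector readahead on the striped LV is consistent with the stripe geometry: 4 stripes times the 256-sector per-disk default. A back-of-the-envelope check (not necessarily lvm2's exact formula):

```shell
stripes=4
per_disk_ra=256                    # default per-member readahead, in sectors
lv_ra=$(( stripes * per_disk_ra ))
echo "$lv_ra"                      # 1024 sectors (512 KiB)
```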

- there are still some issues when stacking devices (LVM over MD); see bug 232843