Description of problem:
I saw some references to performance penalties of 20-30% for LVM recently, and wondered what I was getting from running LVM over 4-way RAID-0. I was horrified to discover a 50% penalty! Googling around turned up this known fix, which restores just about all of the performance (as measured by hdparm -t):
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/129488

Version-Release number of selected component (if applicable):
lvm2-2.02.33-11.fc9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. hdparm -t /dev/mapper/VolGroup00-LogVol01
2. blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01
3. hdparm -t /dev/mapper/VolGroup00-LogVol01

Actual results:
$ blockdev --getra /dev/mapper/VolGroup00-LogVol00
256
$ hdparm -t /dev/mapper/VolGroup00-LogVol01

/dev/mapper/VolGroup00-LogVol01:
 Timing buffered disk reads:  374 MB in  3.01 seconds = 124.21 MB/sec
$ blockdev --setra 8192 /dev/mapper/VolGroup00-LogVol01
$ hdparm -t /dev/mapper/VolGroup00-LogVol01

/dev/mapper/VolGroup00-LogVol01:
 Timing buffered disk reads:  734 MB in  3.00 seconds = 244.57 MB/sec

Expected results:
No big performance penalty for LVM, at least not without big red flags to the user!

Additional info:
Now I'm wondering why I'm using LVM over RAID-0 at all. There is no way I'm going to extend the logical volume onto another device (it already spans 4 disks). Perhaps the right solution for me is to drop LVM and run my file system directly on /dev/md0.
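The workaround can be wrapped in a small helper so it is easy to re-apply after boot. A minimal sketch, assuming root and that the 8192-sector (4 MiB) value and the LV path from this report suit your setup; it skips gracefully when the path is not a block device:

```shell
#!/bin/sh
# Sketch of the readahead workaround described above. The 8192-sector
# value (4 MiB) and the LV path are the ones from this report; adjust
# for other setups. Requires root to actually change a block device.
set_ra() {
    lv="$1"
    if [ -b "$lv" ]; then
        blockdev --setra 8192 "$lv"            # readahead is in 512-byte sectors
        echo "readahead now: $(blockdev --getra "$lv") sectors"
        hdparm -t "$lv"                        # re-measure buffered reads
    else
        echo "skip: $lv is not a block device"
    fi
}

set_ra /dev/mapper/VolGroup00-LogVol01
```

Note that the setting does not survive a reboot, so a helper like this would have to be run from an init script.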
Perhaps hdparm is misleading? I get essentially no increase in performance for my make jobs with this readahead change.
- hdparm -t just runs synchronous reads of 2 MB blocks, so it should return values similar to:
  blockdev --flushbufs $DEV ; dd iflag=sync if=$DEV of=/dev/null bs=2048k count=100

- readahead is now set properly for striped LVs (RAID0) in lvm2 2.02.39 (the values should be similar to those of the MD subsystem):

  # lvcreate -i4 -L 100G -n lv_s2 vg_test
    Using default stripesize 64.00 KB
    Logical volume "lv_s2" created
  # lvs -o +devices
    LV     VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
    lv_s2  vg_test -wi-a- 100.00G                                       /dev/sdb(0),/dev/sdc(0),/dev/sdd(0),/dev/sde(0)
  # blockdev --getra /dev/sdb
  256
  # blockdev --getra /dev/vg_test/lv_s2
  1024

- there are still some issues when stacking devices (LVM over MD), see bug 232843
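The figures above are consistent with the striped LV's readahead scaling with the stripe count. A quick sanity check of the arithmetic (that lvm2 computes it as stripes times per-member readahead is an inference from this output, not a documented formula):

```shell
#!/bin/sh
# Sanity-check of the figures shown above: 4 stripes, each member at
# 256 sectors of readahead, matching the 1024 sectors reported for the
# LV. (The multiplication rule is inferred from this output, not a
# documented lvm2 formula.)
stripes=4
member_ra=256                          # sectors, from blockdev --getra /dev/sdb
lv_ra=$((stripes * member_ra))
echo "expected LV readahead: $lv_ra sectors"
```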