Bug 149321 - LVM Volume Groups created on /dev/vpathxx do not work correctly
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: lvm
Version: 3.0
Hardware: i686 Linux
Priority: medium   Severity: medium
Assigned To: Heinz Mauelshagen
QA Contact: Brian Brock
Reported: 2005-02-22 08:44 EST by Dr. Stephan Wonczak
Modified: 2007-11-30 17:07 EST

Doc Type: Bug Fix
Last Closed: 2005-02-22 10:03:43 EST
Description Dr. Stephan Wonczak 2005-02-22 08:44:48 EST

Description of problem:
The newest release (lvm-1.0.8-9) can handle IBM SDD vpath devices as
physical volumes. It even works, until the next reboot. At that
point LVM switches from using /dev/vpathxx as the physical volume to one
of the four underlying SCSI devices /dev/sdxx, thus losing all
redundancy provided by the sdd driver.
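
Why this can happen: the LVM physical volume label lives on the LUN
itself, so the same label is visible through the vpath device and
through each of its underlying SCSI paths, and whichever name vgscan
encounters first at boot wins. A quick way to see this (a sketch using
the device names from this report; the matching-UUID output is an
inference, not captured here):

[root@lvr15 root]# pvdisplay /dev/vpatha3 | grep 'PV UUID'
[root@lvr15 root]# pvdisplay /dev/sde3 | grep 'PV UUID'
# both paths report the same PV UUID, so LVM treats them as one PV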

Version-Release number of selected component (if applicable):
lvm-1.0.8-9

How reproducible:
Always

Steps to Reproduce:
1. [root@lvr15 root]# pvcreate /dev/vpatha3
pvcreate -- physical volume "/dev/vpatha3" successfully created

[root@lvr15 root]# vgcreate share_vg /dev/vpatha3
vgcreate -- INFO: using default physical extent size 32 MB
vgcreate -- INFO: maximum logical volume size is 2 Terabyte
vgcreate -- doing automatic backup of volume group "share_vg"
vgcreate -- volume group "share_vg" successfully created and activated
[root@lvr15 root]# vgdisplay -v share_vg
--- Volume group ---
VG Name               share_vg
VG Access             read/write
VG Status             available/resizable
VG #                  1
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           2 TB
Max PV                256
Cur PV                1
Act PV                1
VG Size               299.91 GB
PE Size               32 MB
Total PE              9597
Alloc PE / Size       0 / 0
Free  PE / Size       9597 / 299.91 GB
VG UUID               DjZlZD-YiKv-zDp9-lG6a-GIzu-BTMu-pMVheB

--- No logical volumes defined in "share_vg" ---


--- Physical volumes ---
PV Name (#)           /dev/vpatha3 (1)
PV Status             available / allocatable
Total PE / Free PE    9597 / 9597

2. reboot

3. [root@lvr15 root]# vgdisplay -v share_vg
--- Volume group ---
VG Name               share_vg
VG Access             read/write
VG Status             available/resizable
VG #                  1
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           2 TB
Max PV                256
Cur PV                1
Act PV                1
VG Size               299.91 GB
PE Size               32 MB
Total PE              9597
Alloc PE / Size       0 / 0
Free  PE / Size       9597 / 299.91 GB
VG UUID               DjZlZD-YiKv-zDp9-lG6a-GIzu-BTMu-pMVheB

--- No logical volumes defined in "share_vg" ---


--- Physical volumes ---
PV Name (#)           /dev/sde3 (1)
PV Status             available / allocatable
Total PE / Free PE    9597 / 9597


Actual Results:  share_vg is using /dev/sde3 as its physical volume.

Expected Results:  share_vg should use /dev/vpatha3 as its physical volume.

Additional info:

According to our analysis, LVM is set up very early in the boot
process (in rc.sysinit), before IBMsdd is loaded. Unfortunately, the
drivers are located on yet another logical volume group and are thus
not accessible at that point in time.
Either the volume group 'share_vg' should not be activated, since the
physical volume named in its configuration is missing, or it should be
possible to change the physical volume path from one device to another
(e.g. /dev/sde3 to /dev/vpatha3) while the system is up (taking
share_vg offline for this would be acceptable); a rough sketch of the
first option follows.
In any case, with this bug unfixed it is not feasible to use vpath
devices as LVM physical volumes.
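
A minimal sketch of the first option, as it might look in an init
script (the module name sdd-mod and the placement are assumptions, not
taken from the actual RHEL3 rc.sysinit):

# only activate the vpath-backed group once the SDD driver is present
if grep -q '^sdd-mod' /proc/modules; then
    vgchange -a y share_vg
else
    echo "IBMsdd not loaded; deferring activation of share_vg" >&2
fi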
Comment 1 Heinz Mauelshagen 2005-02-22 10:03:43 EST
Fixed in 1.0.8-12.2 with RHEL3 U5.
Comment 2 Dr. Stephan Wonczak 2005-02-22 10:10:41 EST
Nice... Now how do we get this package for testing? Can you give us a URL?
Comment 3 Heinz Mauelshagen 2005-02-22 10:24:49 EST
RHEL3 U5 GA is planned for May 4.
Comment 4 Dr. Stephan Wonczak 2005-02-22 10:30:33 EST
Well, this does not help us very much. We need to put a few systems
into production, and we have a few other machines already in
production which currently suffer from this problem!
Waiting for two months is no solution! (A pointer on just how this
problem is resolved would be good as well.) We need to test *now*!
Comment 5 Heinz Mauelshagen 2005-02-23 08:36:41 EST
Resolved by introducing a device filter that keeps LVM from accessing
the SCSI devices that make up a vpath device.
If you *need* early access to 1.0.8-12.2, please contact your TAM.
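
For anyone wondering what the shape of such a filter is: on RHEL3 the
fix lives inside the LVM1 tools themselves, but the equivalent
exclusion in LVM2 would be expressed in /etc/lvm/lvm.conf. A sketch by
analogy only; the patterns are assumptions:

devices {
    # accept vpath devices, reject the underlying SCSI paths
    filter = [ "a|^/dev/vpath|", "r|^/dev/sd|" ]
}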
Comment 6 heath seals 2005-04-22 16:51:57 EDT
This is still an issue after upgrading to lvm-1.0.8-12.2:

$ rpm -qa lvm
lvm-1.0.8-12.2

$ sudo /sbin/pvcreate /dev/vpatha
pvcreate -- physical volume "/dev/vpatha" successfully created

$ sudo /sbin/vgcreate vg-test /dev/vpatha
vgcreate -- INFO: using default physical extent size 32 MB
vgcreate -- INFO: maximum logical volume size is 2 Terabyte
vgcreate -- doing automatic backup of volume group "vg-test"
vgcreate -- volume group "vg-test" successfully created and activated

$ sudo /sbin/vgdisplay -v vg-test
--- Volume group ---
VG Name               vg-test
VG Access             read/write
VG Status             available/resizable
VG #                  1
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           2 TB
Max PV                256
Cur PV                1
Act PV                1
VG Size               5.94 GB
PE Size               32 MB
Total PE              190
Alloc PE / Size       0 / 0
Free  PE / Size       190 / 5.94 GB
VG UUID               cpM0d1-Rkky-WQ36-XP3I-U7D8-3SLn-i2nCTp

--- No logical volumes defined in "vg-test" ---


--- Physical volumes ---
PV Name (#)           /dev/vpatha (1)
PV Status             available / allocatable
Total PE / Free PE    190 / 190



-----------REBOOT------------


$ sudo /sbin/vgdisplay -v vg-test
--- Volume group ---
VG Name               vg-test
VG Access             read/write
VG Status             available/resizable
VG #                  1
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           2 TB
Max PV                256
Cur PV                1
Act PV                1
VG Size               5.94 GB
PE Size               32 MB
Total PE              190
Alloc PE / Size       0 / 0
Free  PE / Size       190 / 5.94 GB
VG UUID               cpM0d1-Rkky-WQ36-XP3I-U7D8-3SLn-i2nCTp

--- No logical volumes defined in "vg-test" ---


--- Physical volumes ---
PV Name (#)           /dev/sdh (1)
PV Status             available / allocatable
Total PE / Free PE    190 / 190
Comment 7 Dr. Stephan Wonczak 2005-04-25 02:50:14 EDT
For the record: the sequence

vgchange -a n <volume-group> ; vgscan ; vgchange -a y <volume-group>

fixes this problem and restores the /dev/vpath device to the volume
group. SDD has to be running at that time, of course.
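
Wrapped up as a small script for convenience (a sketch only; the
volume group name is passed as an argument, and the sdd-mod module
check is an assumption):

#!/bin/sh
# re-point a volume group from a raw SCSI path back to its vpath device
VG=$1
grep -q '^sdd-mod' /proc/modules || { echo "SDD not running" >&2; exit 1; }
vgchange -a n "$VG"   # deactivate so the PV can be re-resolved
vgscan                # rescan devices and rebuild /etc/lvmtab
vgchange -a y "$VG"   # reactivate, now on the /dev/vpath device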
Comment 12 John P Dwyer 2005-05-04 13:46:58 EDT
The volume groups are first activated in the linuxrc inside the initrd
when the system boots; that is why we can have all partitions except
/boot on LVM. But how can LVM know which LVs may be activated at that
point and which should wait? /etc is not mounted yet.

I discovered this after experimenting with mdadm for multipathing,
putting an LV on top of /dev/md0, only to find that on reboot the LV
was now on /dev/sdg1. I did note that the vgchange/vgscan/vgchange
sequence suggested by Dr. Wonczak worked.
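
For reference, the md multipath experiment mentioned above looks
roughly like this (a sketch; the device names are hypothetical, and it
exhibits the same reboot symptom described here):

# build a multipath md device over two paths to the same LUN
mdadm --create /dev/md0 --level=multipath --raid-devices=2 \
      /dev/sdf1 /dev/sdg1
pvcreate /dev/md0                # put the PV on the md device
vgcreate vg-test /dev/md0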
