LVMcache currently calls lvm pvs with each PV as an argument; this can lead to inconsistent results if the first PV does not hold an active mda (LVM BZ#836663)
Description of problem:
When creating or extending a V1 or V2 storage domain, the LVMcache within vdsm provides an incorrect listing of the PVs within a given VG due to a bug in the pvs command (BZ#836663). This then leads to incorrect or missing PV entries (MDT_PVn) within the metadata for the storage domain. This becomes an issue when we attempt to create a vdisk with its LV spanning or residing on such a PV, as we are then unable to find the PV within our own storage domain metadata. The result is a 'meta data mapping failed' error (MetaDataMappingError - 754) from vdsm.
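For illustration, a minimal sketch of the call pattern at issue, assuming a hypothetical wrapper around the lvm CLI (this is not vdsm's actual LVMcache code):

import subprocess

def pvs_for(devices):
    # Call pvs with each PV as an explicit argument, as LVMcache does.
    # Per BZ#836663, if the first device listed holds no active mda, lvm
    # can report the remaining PVs as orphans (empty VG column), and that
    # bad listing is what ends up cached and written out as the MDT_PVn
    # metadata entries.
    cmd = ["pvs", "--noheadings", "--separator", "|",
           "-o", "pv_name,vg_name,pv_uuid"] + list(devices)
    return subprocess.check_output(cmd).decode()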
Version-Release number of selected component (if applicable):
All versions of vdsm where lvm 2.02.69 or greater is also present on the system.
How reproducible:
Always
Steps to Reproduce:
1. Create or extend an SD so that it holds more than a single PV.
2. Review the metadata present for the PVs.
3. PV data is either missing or incorrect.
Actual results:
LVMcache provides LvMetadataRW or VgMetadataRW with an incorrect listing of the PVs actually present within the storage domain. This leads to either incorrect or missing PV entries in the metadata of both V1- and V2-based SDs.
Expected results:
As this is a pvs bug, there are a number of things we could change here (one option is sketched after this list):
- Ensure pvs is always called with the PV holding the active mda within the VG listed first.
- Stop calling pvs with the individual PVs as arguments. This would result in a complete scan, but would require additional logic to ensure we only look at PVs in the VG we care about.
- Switch to using vgs -o +pv_name ${vguuid}.
- Add additional checking within LVMcache to ensure that the number of PVs reported by pvs matches what we expect.
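A minimal sketch combining the last two ideas (a sanity check with a full-scan fallback), written against the lvm CLI with hypothetical helper names (pvs_fields, vg_pvs_checked); this is illustrative only, not vdsm's actual LVMcache API:

import subprocess

def pvs_fields(devices):
    # Return (pv_name, vg_name) pairs from pvs; no devices means a full scan.
    cmd = ["pvs", "--noheadings", "--separator", "|",
           "-o", "pv_name,vg_name"] + list(devices)
    out = subprocess.check_output(cmd).decode()
    return [tuple(line.strip().split("|", 1))
            for line in out.splitlines() if line.strip()]

def vg_pvs_checked(pv_paths, vg_name):
    # Query the listed PVs, falling back to a complete scan when the
    # per-device report looks inconsistent (wrong PV count, or a PV that
    # does not claim membership of the expected VG).
    rows = pvs_fields(pv_paths)
    if len(rows) != len(pv_paths) or any(vg != vg_name for _, vg in rows):
        # Full scan, filtered down to the VG we care about.
        rows = [(pv, vg) for pv, vg in pvs_fields([]) if vg == vg_name]
    return rows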
Additional info:
The following example is for a V2 storage domain with 5 PVs:
# vgs -o +pv_name
  VG                                   #PV #LV #SN Attr   VSize   VFree   PV
  480a7532-cf70-4fc3-9341-65b9f3a0fa19   5   8   0 wz--n- 123.12g 114.25g /dev/mapper/1IET_00010002
  480a7532-cf70-4fc3-9341-65b9f3a0fa19   5   8   0 wz--n- 123.12g 114.25g /dev/mapper/1IET_0001000a
  480a7532-cf70-4fc3-9341-65b9f3a0fa19   5   8   0 wz--n- 123.12g 114.25g /dev/mapper/1IET_00010009
  480a7532-cf70-4fc3-9341-65b9f3a0fa19   5   8   0 wz--n- 123.12g 114.25g /dev/mapper/1IET_00010001
  480a7532-cf70-4fc3-9341-65b9f3a0fa19   5   8   0 wz--n- 123.12g 114.25g /dev/mapper/1IET_00010004
  HostVG                                 1   2   0 wz--n-   1.23g       0 /dev/mapper/1ATA_QEMU_HARDDISK_QM00001p4
In my case only a single PV line was added to the VG tags:
# vgs -o +tags | grep MDT | sed -e 's/\,/\n/g' | grep PV
MDT_PV0=pv:1IET_00010002&44&uuid:hXHZco-TkdH-KDpI-XZks-AuW7-6tfQ-X9j0ws&44&pestart:0&44&pecount:197&44&mapoffset:0
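For readability: the &44& sequences appear to be vdsm's escaping of commas (ASCII code 44), since the tag value itself lives inside a comma-separated tag list. A small hypothetical decoder (not vdsm's own parser):

def decode_md_tag(value):
    # Split an MDT_PVn tag value into its key:value fields, assuming
    # "&44&" encodes a comma (chr(44)); illustrative helper only.
    return dict(field.split(":", 1) for field in value.split("&44&"))

tag = ("pv:1IET_00010002&44&uuid:hXHZco-TkdH-KDpI-XZks-AuW7-6tfQ-X9j0ws"
       "&44&pestart:0&44&pecount:197&44&mapoffset:0")
print(decode_md_tag(tag))
# {'pv': '1IET_00010002', 'uuid': 'hXHZco-...', 'pestart': '0',
#  'pecount': '197', 'mapoffset': '0'}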
When calling pvs alone we see:
# pvs
  PV                                        VG                                   Fmt  Attr PSize  PFree
  /dev/mapper/1ATA_QEMU_HARDDISK_QM00001p4  HostVG                               lvm2 a--   1.23g      0
  /dev/mapper/1IET_00010001                 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 24.62g
  /dev/mapper/1IET_00010002                 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 15.75g
  /dev/mapper/1IET_00010004                 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 24.62g
  /dev/mapper/1IET_00010009                 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 24.62g
  /dev/mapper/1IET_0001000a                 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 24.62g
When calling pvs with each PV as an argument, in the same manner as the LVMcache, and providing the PV with the MDA last, we get:
# pvs /dev/mapper/1IET_0001000{1,4,9,a,2}
  PV                        VG                                   Fmt  Attr PSize  PFree
  /dev/mapper/1IET_00010001                                      lvm2 a--  25.00g 25.00g
  /dev/mapper/1IET_00010002 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 15.75g
  /dev/mapper/1IET_00010004                                      lvm2 a--  25.00g 25.00g
  /dev/mapper/1IET_00010009                                      lvm2 a--  25.00g 25.00g
  /dev/mapper/1IET_0001000a                                      lvm2 a--  25.00g 25.00g
Calling pvs with this PV first gives us the correct results:
# pvs /dev/mapper/1IET_0001000{2,4,9,a,1}
  PV                        VG                                   Fmt  Attr PSize  PFree
  /dev/mapper/1IET_00010001 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 24.62g
  /dev/mapper/1IET_00010002 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 15.75g
  /dev/mapper/1IET_00010004 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 24.62g
  /dev/mapper/1IET_00010009 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 24.62g
  /dev/mapper/1IET_0001000a 480a7532-cf70-4fc3-9341-65b9f3a0fa19 lvm2 a--  24.62g 24.62g
Haim, there is something I don't understand here.
In V2 storage domains we store this info for backward-compatibility purposes only, so functionally this should not affect anything. Can you get a MetaDataMappingError using V2?
My limited testing today against V2 domains with missing MDT_PVn lines appears to confirm that they are not impacted by this. I was able to create a vdisk with its LV spanning multiple PVs without any errors being reported.
# lvs -o seg_all 480a7532-cf70-4fc3-9341-65b9f3a0fa19/7490f674-1878-4b48-96c4-dedae978b28c
  Type   #Str Stripe Stripe Region Region Chunk Chunk  Start Start SSize  Seg Tags PE Ranges                       Devices
  linear    1      0      0      0      0     0     0      0     0 24.62g          /dev/mapper/1IET_0001000a:0-196 /dev/mapper/1IET_0001000a(0)
  linear    1      0      0      0      0     0     0 24.62g   197 24.62g          /dev/mapper/1IET_00010009:0-196 /dev/mapper/1IET_00010009(0)
  linear    1      0      0      0      0     0     0 49.25g   394 24.62g          /dev/mapper/1IET_00010001:0-196 /dev/mapper/1IET_00010001(0)
  linear    1      0      0      0      0     0     0 73.88g   591 24.62g          /dev/mapper/1IET_00010004:0-196 /dev/mapper/1IET_00010004(0)
  linear    1      0      0      0      0     0     0 98.50g   788  1.50g          /dev/mapper/1IET_00010002:71-82 /dev/mapper/1IET_00010002(71)
Comment 7  Eduardo Warszawski  2012-07-03 15:32:05 UTC
*** This bug has been marked as a duplicate of bug 798635 ***