+++ This bug was initially created as a clone of Bug #820237 +++

Clone for lvm2-cluster build.

Description of problem:
If the VG contains a PV with a zero PE count, vgcfgrestore on that VG fails with "Floating point exception", caused by a division by zero.

Background: We use LVM heavily to store block-device backups. Because the PVs are usually very busy and metadata manipulation requires direct access to the metadata sector (which may take considerable time to complete), we created a metadata-only PV on a separate drive.

Version-Release number of selected component (if applicable):
  LVM version:     2.02.87(2)-RHEL6 (2011-10-12)
  Library version: 1.02.66-RHEL6 (2011-10-12)
  Driver version:  4.22.6

How reproducible:
Always

Steps to Reproduce:
1. Create /dev/<metadata_device> with size 200M.
2. Create /dev/<data_device> of any size.
3. pvcreate --metadatasize 128m /dev/<metadata_device>
4. pvcreate --metadatasize 128m /dev/<data_device>
5. vgcreate -s 128m myVG /dev/<metadata_device> /dev/<data_device>
6. vgcfgbackup myVG
7. vgcfgrestore myVG

Actual results:
The last command fails with a "Floating point exception" error.

Expected results:
vgcfgrestore should write the VG metadata successfully.

Additional info:
The issue can be worked around by omitting the zero-length PV from the backup file; see the file contents below. We solved it in our case by applying that workaround, then making the metadata PV bigger and setting allocatable to "n" afterwards.
Backup file contents:

# Generated by LVM2 version 2.02.87(2)-RHEL6 (2011-10-12): Wed May 9 10:13:28 2012

contents = "Text Format Volume Group"
version = 1

description = "vgcfgbackup -f myvg.vg myvg"

creation_host = "localhost.localdomain"	# Linux localhost.localdomain 2.6.32-220.el6.x86_64 #1 SMP Tue Dec 6 19:48:22 GMT 2011 x86_64
creation_time = 1336551208	# Wed May 9 10:13:28 2012

myvg {
	id = "ouGQeP-8bPl-udT8-lm8a-Nd3A-WprB-pOGXfj"
	seqno = 1
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 262144	# 128 Megabytes
	max_lv = 0
	max_pv = 0
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "hbMvQA-zTeD-CKPc-WE8o-umlm-Z6GJ-iyALRr"
			device = "/dev/vg_test/pv1"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 20971520	# 10 Gigabytes
			pe_start = 264192
			pe_count = 78	# 9.75 Gigabytes
		}

		pv1 {
			id = "K1BwfU-2LuG-8aeW-IXHR-RLf7-d36V-DMERDr"
			device = "/dev/vg_test/pv_metadata"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 409600	# 200 Megabytes
			pe_start = 264192
			pe_count = 0	# 0 Kilobytes
		}
	}
}

Backup file for workaround:

# Generated by LVM2 version 2.02.87(2)-RHEL6 (2011-10-12): Wed May 9 10:13:28 2012

contents = "Text Format Volume Group"
version = 1

description = "vgcfgbackup -f myvg.vg myvg"

creation_host = "localhost.localdomain"	# Linux localhost.localdomain 2.6.32-220.el6.x86_64 #1 SMP Tue Dec 6 19:48:22 GMT 2011 x86_64
creation_time = 1336551208	# Wed May 9 10:13:28 2012

myvg {
	id = "ouGQeP-8bPl-udT8-lm8a-Nd3A-WprB-pOGXfj"
	seqno = 1
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 262144	# 128 Megabytes
	max_lv = 0
	max_pv = 0
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "hbMvQA-zTeD-CKPc-WE8o-umlm-Z6GJ-iyALRr"
			device = "/dev/vg_test/pv1"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 20971520	# 10 Gigabytes
			pe_start = 264192
			pe_count = 78	# 9.75 Gigabytes
		}
	}
}
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux release for currently deployed products. This request is not yet committed for inclusion in a release.
Fixed in lvm2-cluster-2.02.88-8.el5
Tested with:
  lvm2-cluster-2.02.88-8.el5
  cmirror-1.1.39-15.el5
  device-mapper-multipath-0.4.7-49.el5
  device-mapper-1.02.67-2.el5
  device-mapper-event-1.02.67-2.el5

Creating the suggested VG and PVs and running the backup/restore procedure produced no errors, as expected.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-0024.html