Description of problem:

[root@host-073 ~]# pvscan
  PV /dev/vda2   VG rhel_host-073   lvm2 [7.51 GiB / 40.00 MiB free]
  PV /dev/sda1   VG VG              lvm2 [12.50 GiB / 12.50 GiB free]
  PV /dev/sdb1   VG VG              lvm2 [12.50 GiB / 12.50 GiB free]
  Total: 3 [32.51 GiB] / in use: 3 [32.51 GiB] / in no VG: 0 [0   ]

[root@host-073 ~]# pvchange -x n /dev/sda1
  Physical volume "/dev/sda1" changed
  1 physical volume changed / 0 physical volumes not changed

[root@host-073 ~]# pvchange -x n /dev/sda1
  Physical volume "/dev/sda1" is already unallocatable.
  Physical volume /dev/sda1 not changed
  0 physical volumes changed / 1 physical volume not changed

[root@host-073 ~]# vgcfgbackup -f bar VG
  Volume group "VG" successfully backed up.

physical_volumes {
    pv0 {
        id = "1pAC4Z-iYID-7x6n-J2Wi-KMxo-qYi0-TGW6Kf"
        device = "/dev/sda1"    # Hint only
        status = []
        flags = []
        dev_size = 26218017     # 12.5017 Gigabytes
        pe_start = 2048
        pe_count = 3200         # 12.5 Gigabytes
    }
    pv1 {
        id = "RUXSeH-oo0l-oddQ-8bae-o7JB-fJkx-Gl2tHI"
        device = "/dev/sdb1"    # Hint only
        status = ["ALLOCATABLE"]
        flags = []
        dev_size = 26218017     # 12.5017 Gigabytes
        pe_start = 2048
        pe_count = 3200         # 12.5 Gigabytes
    }
}

[root@host-073 ~]# pvs -a -o +devices /dev/sda1 /dev/sdb1
  PV         VG   Fmt  Attr PSize  PFree  Devices
  /dev/sda1  VG   lvm2 ---  12.50g 12.50g    <--- note the loss of the 'a' attr
  /dev/sdb1  VG   lvm2 a--  12.50g 12.50g

[root@host-073 ~]# vgcfgrestore -f bar VG
  Restored volume group VG

[root@host-073 ~]# pvs -a -o +devices /dev/sda1 /dev/sdb1
  PV         VG   Fmt  Attr PSize  PFree  Devices
  /dev/sda1  VG   lvm2 a--  12.50g 12.50g
  /dev/sdb1  VG   lvm2 a--  12.50g 12.50g

[root@host-073 ~]# vgcfgbackup -f bar2 VG
  Volume group "VG" successfully backed up.
physical_volumes {
    pv0 {
        id = "1pAC4Z-iYID-7x6n-J2Wi-KMxo-qYi0-TGW6Kf"
        device = "/dev/sda1"    # Hint only
        status = ["ALLOCATABLE"]
        flags = []
        dev_size = 26218017     # 12.5017 Gigabytes
        pe_start = 2048
        pe_count = 3200         # 12.5 Gigabytes
    }
    pv1 {
        id = "RUXSeH-oo0l-oddQ-8bae-o7JB-fJkx-Gl2tHI"
        device = "/dev/sdb1"    # Hint only
        status = ["ALLOCATABLE"]
        flags = []
        dev_size = 26218017     # 12.5017 Gigabytes
        pe_start = 2048
        pe_count = 3200         # 12.5 Gigabytes
    }
}

Version-Release number of selected component (if applicable):
3.10.0-327.4.4.el7.x86_64
lvm2-2.02.130-5.el7                         BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-libs-2.02.130-5.el7                    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-cluster-2.02.130-5.el7                 BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-1.02.107-5.el7                BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-libs-1.02.107-5.el7           BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-1.02.107-5.el7          BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-libs-1.02.107-5.el7     BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7   BUILT: Thu Aug 13 09:58:10 CDT 2015

How reproducible:
Every time
Apparently a bug dating back to the early days of LVM2! Restore always sets every PV to allocatable. Fixed by moving this code outside pv_setup and only doing it for new PVs, not when restoring backups.

https://git.fedorahosted.org/cgit/lvm2.git/patch/?id=01228b692be6850645b91811bbf30366241b036c
https://www.redhat.com/archives/lvm-devel/2016-January/msg00041.html
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.
Fix verified in the latest rpms.

3.10.0-497.el7.x86_64
lvm2-2.02.164-3.el7                         BUILT: Wed Aug 24 05:20:41 CDT 2016
lvm2-libs-2.02.164-3.el7                    BUILT: Wed Aug 24 05:20:41 CDT 2016
lvm2-cluster-2.02.164-3.el7                 BUILT: Wed Aug 24 05:20:41 CDT 2016
device-mapper-1.02.133-3.el7                BUILT: Wed Aug 24 05:20:41 CDT 2016
device-mapper-libs-1.02.133-3.el7           BUILT: Wed Aug 24 05:20:41 CDT 2016
device-mapper-event-1.02.133-3.el7          BUILT: Wed Aug 24 05:20:41 CDT 2016
device-mapper-event-libs-1.02.133-3.el7     BUILT: Wed Aug 24 05:20:41 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7   BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.164-3.el7                      BUILT: Wed Aug 24 05:20:41 CDT 2016

[root@host-116 ~]# vgcreate VG /dev/sd[ab]1
  Volume group "VG" successfully created

[root@host-116 ~]# pvscan
  PV /dev/vda2   VG rhel_host-116   lvm2 [7.00 GiB / 0    free]
  PV /dev/sda1   VG VG              lvm2 [9.99 GiB / 9.99 GiB free]
  PV /dev/sdb1   VG VG              lvm2 [9.99 GiB / 9.99 GiB free]
  Total: 3 [26.98 GiB] / in use: 3 [26.98 GiB] / in no VG: 0 [0   ]

[root@host-116 ~]# pvchange -x n /dev/sda1
  Physical volume "/dev/sda1" changed
  1 physical volume changed / 0 physical volumes not changed

[root@host-116 ~]# pvchange -x n /dev/sda1
  Physical volume "/dev/sda1" is already unallocatable.
  Physical volume /dev/sda1 not changed
  0 physical volumes changed / 1 physical volume not changed

[root@host-116 ~]# vgcfgbackup -f bar VG
  Volume group "VG" successfully backed up.
physical_volumes {
    pv0 {
        id = "gnOCVA-YeaN-Wgjw-8g97-sX3c-bcBy-icE1yR"
        device = "/dev/sda1"    # Hint only
        status = []
        flags = []
        dev_size = 20964762     # 9.99678 Gigabytes
        pe_start = 2048
        pe_count = 2558         # 9.99219 Gigabytes
    }
    pv1 {
        id = "Dz3aCf-2mRT-dwLv-9Bqg-8xw6-2xcf-Tktvsx"
        device = "/dev/sdb1"    # Hint only
        status = ["ALLOCATABLE"]
        flags = []
        dev_size = 20964762     # 9.99678 Gigabytes
        pe_start = 2048
        pe_count = 2558         # 9.99219 Gigabytes
    }
}

[root@host-116 ~]# pvs -a -o +devices /dev/sda1 /dev/sdb1
  PV         VG   Fmt  Attr PSize PFree Devices
  /dev/sda1  VG   lvm2 u--  9.99g 9.99g
  /dev/sdb1  VG   lvm2 a--  9.99g 9.99g

[root@host-116 ~]# vgcfgrestore -f bar VG
  Restored volume group VG

[root@host-116 ~]# pvs -a -o +devices /dev/sda1 /dev/sdb1
  PV         VG   Fmt  Attr PSize PFree Devices
  /dev/sda1  VG   lvm2 u--  9.99g 9.99g    <-- * Still unallocatable *
  /dev/sdb1  VG   lvm2 a--  9.99g 9.99g

[root@host-116 ~]# vgcfgbackup -f bar2 VG
  Volume group "VG" successfully backed up.

physical_volumes {
    pv0 {
        id = "gnOCVA-YeaN-Wgjw-8g97-sX3c-bcBy-icE1yR"
        device = "/dev/sda1"    # Hint only
        status = []             <-- * Still unallocatable *
        flags = []
        dev_size = 20964762     # 9.99678 Gigabytes
        pe_start = 2048
        pe_count = 2558         # 9.99219 Gigabytes
    }
    pv1 {
        id = "Dz3aCf-2mRT-dwLv-9Bqg-8xw6-2xcf-Tktvsx"
        device = "/dev/sdb1"    # Hint only
        status = ["ALLOCATABLE"]
        flags = []
        dev_size = 20964762     # 9.99678 Gigabytes
        pe_start = 2048
        pe_count = 2558         # 9.99219 Gigabytes
    }
}
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html