Description of problem:

[root@host-125 ~]# lvcreate --type raid10 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.

[root@host-125 ~]# lvs -a -o +devices
  LV                VG   Attr       LSize   Cpy%Sync Devices
  raidlv            test rwi-a-r--- 108.00m 100.00   raidlv_rimage_0(0),raidlv_rimage_1(0),raidlv_rimage_2(0),raidlv_rimage_3(0),raidlv_rimage_4(0),raidlv_rimage_5(0)
  [raidlv_rimage_0] test iwi-aor---  36.00m          /dev/sda1(1)
  [raidlv_rimage_1] test iwi-aor---  36.00m          /dev/sdb1(1)
  [raidlv_rimage_2] test iwi-aor---  36.00m          /dev/sdc1(1)
  [raidlv_rimage_3] test iwi-aor---  36.00m          /dev/sdd1(1)
  [raidlv_rimage_4] test iwi-aor---  36.00m          /dev/sde1(1)
  [raidlv_rimage_5] test iwi-aor---  36.00m          /dev/sdf1(1)
  [raidlv_rmeta_0]  test ewi-aor---   4.00m          /dev/sda1(0)
  [raidlv_rmeta_1]  test ewi-aor---   4.00m          /dev/sdb1(0)
  [raidlv_rmeta_2]  test ewi-aor---   4.00m          /dev/sdc1(0)
  [raidlv_rmeta_3]  test ewi-aor---   4.00m          /dev/sdd1(0)
  [raidlv_rmeta_4]  test ewi-aor---   4.00m          /dev/sde1(0)
  [raidlv_rmeta_5]  test ewi-aor---   4.00m          /dev/sdf1(0)

[root@host-125 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.

[root@host-125 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.

[root@host-125 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.

[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6FcRBi2VwnSzUAxhqQNncC3t3LpALKqd4-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6FcRBi2VwnSzUAxhqQNncC3t3LpALKqd4-cdata: device not found.
  test/raidlv_cdata: raid10 segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6FcRBi2VwnSzUAxhqQNncC3t3LpALKqd4-cdata: device not found.
  Logical volume test/origin is now cached.
[root@host-125 ~]# lvs -a -o +devices
  LV                      VG   Attr       LSize   Pool     Origin Data% Cpy%Sync Devices
  [lvol0_pmspare]         test ewi-------  12.00m                                /dev/sda1(63)
  origin                  test Cwi-a-C--- 200.00m [raidlv]        0.00  100.00   origin_corig(0)
  [origin_corig]          test owi-aoC--- 200.00m                                /dev/sda1(10)
  [raidlv]                test Cwi---C--- 108.00m                                raidlv_cdata(0)
  [raidlv_cdata]          test Cwi-aor--- 108.00m                       100.00   raidlv_cdata_rimage_0(0),raidlv_cdata_rimage_1(0),raidlv_cdata_rimage_2(0),raidlv_cdata_rimage_3(0),raidlv_cdata_rimage_4(0),raidlv_cdata_rimage_5(0)
  [raidlv_cdata_rimage_0] test iwi-aor---  36.00m                                /dev/sda1(1)
  [raidlv_cdata_rimage_1] test iwi-aor---  36.00m                                /dev/sdb1(1)
  [raidlv_cdata_rimage_2] test iwi-aor---  36.00m                                /dev/sdc1(1)
  [raidlv_cdata_rimage_3] test iwi-aor---  36.00m                                /dev/sdd1(1)
  [raidlv_cdata_rimage_4] test iwi-aor---  36.00m                                /dev/sde1(1)
  [raidlv_cdata_rimage_5] test iwi-aor---  36.00m                                /dev/sdf1(1)
  [raidlv_cdata_rmeta_0]  test ewi-aor---   4.00m                                /dev/sda1(0)
  [raidlv_cdata_rmeta_1]  test ewi-aor---   4.00m                                /dev/sdb1(0)
  [raidlv_cdata_rmeta_2]  test ewi-aor---   4.00m                                /dev/sdc1(0)
  [raidlv_cdata_rmeta_3]  test ewi-aor---   4.00m                                /dev/sdd1(0)
  [raidlv_cdata_rmeta_4]  test ewi-aor---   4.00m                                /dev/sde1(0)
  [raidlv_cdata_rmeta_5]  test ewi-aor---   4.00m                                /dev/sdf1(0)
  [raidlv_cmeta]          test ewi-ao----  12.00m                                /dev/sda1(60)

Version-Release number of selected component (if applicable):
3.10.0-493.el7.bz1367223.x86_64

lvm2-2.02.164-2.el7                        BUILT: Tue Aug 16 05:43:50 CDT 2016
lvm2-libs-2.02.164-2.el7                   BUILT: Tue Aug 16 05:43:50 CDT 2016
lvm2-cluster-2.02.164-2.el7                BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-1.02.133-2.el7               BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-libs-1.02.133-2.el7          BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-event-1.02.133-2.el7         BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-event-libs-1.02.133-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7  BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.164-2.el7                     BUILT: Tue Aug 16 05:43:50 CDT 2016
sanlock-3.4.0-1.el7                        BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7                    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.164-2.el7                  BUILT: Tue Aug 16 05:43:50 CDT 2016
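For reference, the reproducer condenses to the following command sequence, taken from the transcript above (the VG "test", the RAID parameters, and the LV sizes are just the ones used in this setup):

# Create a raid LV that will become the cache-pool data, plus origin and metadata LVs.
lvcreate --type raid10 -i 3 -n raidlv -L 100M test
lvcreate -n origin -L 200M test
lvcreate -n meta -L 12M test

# First conversion: turn raidlv + meta into a cache pool.
lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv

# Second conversion: attach the pool to the origin. This is the step that
# emits the "device not found" / "monitoring function failed" messages.
lvconvert --yes --type cache --cachepool test/raidlv test/origin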
Same issue with raid1 pool volumes:

[root@host-125 ~]# lvcreate --type raid1 -m 1 -n raidlv -L 100M test
  Logical volume "raidlv" created.

[root@host-125 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.

[root@host-125 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.

[root@host-125 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.

[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  test/raidlv_cdata: raid1 segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  Logical volume test/origin is now cached.
Appears to affect all raid types.

# raid6
[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  test/raidlv_cdata: raid6 segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  Logical volume test/origin is now cached.

# raid0
[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  test/raidlv_cdata: raid0_meta segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  Logical volume test/origin is now cached.
The first lvconvert is leaving the new LVs inactive. The second lvconvert seems unable to cope with this.
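A minimal way to see that state, assuming the lv_active reporting field is available in this build, is to list the pool sub-LVs between the two conversions:

# After the cache-pool conversion, show activation state of the pool sub-LVs.
# The Active column (or the 5th lv_attr character, 'a' vs '-') indicates
# whether raidlv_cdata / raidlv_cmeta were left inactive.
lvs -a -o lv_name,lv_attr,lv_active test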
The errors are happening when internal LVs are activated for wiping. Monitoring should be disabled for this.
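Until the fix is in place, a possible workaround (untested here) is to suppress dmeventd monitoring for the command that prints the errors, using a one-off config override, so the temporary activation for wiping does not try to register the raid sub-LVs:

# Disable activation/monitoring for just this command; the -cdata/-cmeta
# sub-LVs are then activated for wiping without the dmeventd registration
# attempt that currently fails.
lvconvert --yes --type cache --cachepool test/raidlv test/origin \
          --config 'activation/monitoring=0'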
Addressed with upstream commit: https://www.redhat.com/archives/lvm-devel/2016-August/msg00107.html
Marking verified in the latest rpms.

3.10.0-501.el7.x86_64

lvm2-2.02.165-1.el7                        BUILT: Wed Sep 7 11:04:22 CDT 2016
lvm2-libs-2.02.165-1.el7                   BUILT: Wed Sep 7 11:04:22 CDT 2016
lvm2-cluster-2.02.165-1.el7                BUILT: Wed Sep 7 11:04:22 CDT 2016
device-mapper-1.02.134-1.el7               BUILT: Wed Sep 7 11:04:22 CDT 2016
device-mapper-libs-1.02.134-1.el7          BUILT: Wed Sep 7 11:04:22 CDT 2016
device-mapper-event-1.02.134-1.el7         BUILT: Wed Sep 7 11:04:22 CDT 2016
device-mapper-event-libs-1.02.134-1.el7    BUILT: Wed Sep 7 11:04:22 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7  BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.165-1.el7                     BUILT: Wed Sep 7 11:04:22 CDT 2016
sanlock-3.4.0-1.el7                        BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7                    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.165-1.el7                  BUILT: Wed Sep 7 11:04:22 CDT 2016

[root@host-118 ~]# lvcreate --type raid1 -m 1 -n raidlv -L 100M test
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.

[root@host-118 ~]# lvcreate --type raid10 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.

[root@host-118 ~]# lvcreate --type raid0 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.

[root@host-118 ~]# lvcreate --type raid6 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html