Bug 1368272
| Summary: | "Failed to monitor" errors when caching origin volume using a raid pool volume | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Severity: | medium |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, prockai, zkabelac |
| Version: | 7.3 | Keywords: | Regression |
| Target Milestone: | rc | Target Release: | --- |
| Hardware: | x86_64 | OS: | Linux |
| Fixed In Version: | lvm2-2.02.164-4.el7 | Doc Type: | No Doc Update |
| Doc Text: | In-release bug fixed. | Type: | Bug |
| Last Closed: | 2016-11-04 04:18:01 UTC | | |
Description (Corey Marthaler, 2016-08-18 23:27:20 UTC)
Same issue with raid1 pool volumes:

```
[root@host-125 ~]# lvcreate --type raid1 -m 1 -n raidlv -L 100M test
  Logical volume "raidlv" created.
[root@host-125 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-125 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-125 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  test/raidlv_cdata: raid1 segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  Logical volume test/origin is now cached.
```

This appears to affect all raid types.

# raid6

```
[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  test/raidlv_cdata: raid6 segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  Logical volume test/origin is now cached.
```
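The failure mode in these transcripts is the cache-pool's internal raid sub-LV (`raidlv_cdata`) failing to register with dmeventd monitoring. As a hedged aside, one way to spot unmonitored internal LVs is to filter `lvs -a -o lv_name,seg_monitor` output. The sketch below runs against canned sample output; the sample lines and the `lvs_output` helper are illustrative, not captured from the bug's hosts:

```shell
#!/bin/sh
# Sample 'lvs -a -o lv_name,seg_monitor --noheadings' output; on a real
# system, replace the here-document with the actual lvs command.  The
# volume names follow this bug's reproducer (test/raidlv, test/origin);
# the exact sample values are illustrative.
lvs_output() {
cat <<'EOF'
  origin
  raidlv_cdata    monitored
  raidlv_cmeta    not monitored
EOF
}

# Print internal cache-pool LVs (..._cdata/..._cmeta) whose segment is
# not reported as monitored.
lvs_output | awk '$1 ~ /_(cdata|cmeta)$/ && $2 != "monitored" { print $1 }'
# prints: raidlv_cmeta
```

On a fixed system, both `_cdata` and `_cmeta` sub-LVs should report `monitored` once the cached LV is active.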
# raid0

```
[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  test/raidlv_cdata: raid0_meta segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  Logical volume test/origin is now cached.
```

The first lvconvert leaves the new LVs inactive, and the second lvconvert cannot cope with this: the errors happen when the internal LVs are activated for wiping, and monitoring should be disabled for that activation.

Addressed with upstream commit:
https://www.redhat.com/archives/lvm-devel/2016-August/msg00107.html

Marking verified in the latest rpms.

```
3.10.0-501.el7.x86_64

lvm2-2.02.165-1.el7                        BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-libs-2.02.165-1.el7                   BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-cluster-2.02.165-1.el7                BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-1.02.134-1.el7               BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-libs-1.02.134-1.el7          BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-1.02.134-1.el7         BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-libs-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7  BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.165-1.el7                     BUILT: Wed Sep  7 11:04:22 CDT 2016
sanlock-3.4.0-1.el7                        BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7                    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.165-1.el7                  BUILT: Wed Sep  7 11:04:22 CDT 2016
```

# raid1

```
[root@host-118 ~]# lvcreate --type raid1 -m 1 -n raidlv -L 100M test
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.
```

# raid10

```
[root@host-118 ~]# lvcreate --type raid10 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.
```

# raid0

```
[root@host-118 ~]# lvcreate --type raid0 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.
```

# raid6

```
[root@host-118 ~]# lvcreate --type raid6 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html
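For readers checking whether an installed lvm2 already carries this fix (Fixed In Version: lvm2-2.02.164-4.el7), a rough comparison with `sort -V` can help. This is a sketch, not rpm's exact version-comparison algorithm; the `version_at_least` helper name and the hardcoded `installed` value are my own illustration:

```shell
#!/bin/sh
# Succeed (exit 0) if version $1 is >= version $2, using sort -V for
# version-style ordering.  This is a rough check and does not implement
# full rpmvercmp semantics (epochs, tildes, etc.).
version_at_least() {
    test "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2"
}

fixed="2.02.164-4"        # "Fixed In Version" from this bug (lvm2-2.02.164-4.el7)
installed="2.02.165-1"    # illustrative; obtain yours from: rpm -q lvm2

if version_at_least "$installed" "$fixed"; then
    echo "lvm2 $installed should contain the monitoring fix"
else
    echo "lvm2 $installed predates the fix (need >= $fixed)"
fi
```

The verified build above (lvm2-2.02.165-1.el7) is newer than the fixed-in version, consistent with the clean transcripts.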