Bug 1368272 - "Failed to monitor" errors when caching origin volume using a raid pool volume
Summary: "Failed to monitor" errors when caching origin volume using a raid pool volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-18 23:27 UTC by Corey Marthaler
Modified: 2021-09-03 12:52 UTC
CC: 7 users

Fixed In Version: lvm2-2.02.164-4.el7
Doc Type: No Doc Update
Doc Text:
In-release bug fixed.
Clone Of:
Environment:
Last Closed: 2016-11-04 04:18:01 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
  System: Red Hat Product Errata
  ID: RHBA-2016:1445
  Private: 0
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: lvm2 bug fix and enhancement update
  Last Updated: 2016-11-03 13:46:41 UTC

Description Corey Marthaler 2016-08-18 23:27:20 UTC
Description of problem:
[root@host-125 ~]# lvcreate --type raid10 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.

[root@host-125 ~]# lvs -a -o +devices
  LV                VG   Attr       LSize   Cpy%Sync Devices
  raidlv            test rwi-a-r--- 108.00m 100.00   raidlv_rimage_0(0),raidlv_rimage_1(0),raidlv_rimage_2(0),raidlv_rimage_3(0),raidlv_rimage_4(0),raidlv_rimage_5(0)
  [raidlv_rimage_0] test iwi-aor---  36.00m          /dev/sda1(1)
  [raidlv_rimage_1] test iwi-aor---  36.00m          /dev/sdb1(1)
  [raidlv_rimage_2] test iwi-aor---  36.00m          /dev/sdc1(1)
  [raidlv_rimage_3] test iwi-aor---  36.00m          /dev/sdd1(1)
  [raidlv_rimage_4] test iwi-aor---  36.00m          /dev/sde1(1)
  [raidlv_rimage_5] test iwi-aor---  36.00m          /dev/sdf1(1)
  [raidlv_rmeta_0]  test ewi-aor---   4.00m          /dev/sda1(0)
  [raidlv_rmeta_1]  test ewi-aor---   4.00m          /dev/sdb1(0)
  [raidlv_rmeta_2]  test ewi-aor---   4.00m          /dev/sdc1(0)
  [raidlv_rmeta_3]  test ewi-aor---   4.00m          /dev/sdd1(0)
  [raidlv_rmeta_4]  test ewi-aor---   4.00m          /dev/sde1(0)
  [raidlv_rmeta_5]  test ewi-aor---   4.00m          /dev/sdf1(0)

[root@host-125 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.

[root@host-125 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.

[root@host-125 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.

[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6FcRBi2VwnSzUAxhqQNncC3t3LpALKqd4-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6FcRBi2VwnSzUAxhqQNncC3t3LpALKqd4-cdata: device not found.
  test/raidlv_cdata: raid10 segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6FcRBi2VwnSzUAxhqQNncC3t3LpALKqd4-cdata: device not found.
  Logical volume test/origin is now cached.

[root@host-125 ~]# lvs -a -o +devices
  LV                      VG    Attr       LSize   Pool     Origin Data% Cpy%Sync Devices
  [lvol0_pmspare]         test  ewi-------  12.00m                                /dev/sda1(63)
  origin                  test  Cwi-a-C--- 200.00m [raidlv]        0.00  100.00   origin_corig(0)
  [origin_corig]          test  owi-aoC--- 200.00m                                /dev/sda1(10)
  [raidlv]                test  Cwi---C--- 108.00m                                raidlv_cdata(0)
  [raidlv_cdata]          test  Cwi-aor--- 108.00m                       100.00   raidlv_cdata_rimage_0(0),raidlv_cdata_rimage_1(0),raidlv_cdata_rimage_2(0),raidlv_cdata_rimage_3(0),raidlv_cdata_rimage_4(0),raidlv_cdata_rimage_5(0)
  [raidlv_cdata_rimage_0] test  iwi-aor---  36.00m                                /dev/sda1(1)
  [raidlv_cdata_rimage_1] test  iwi-aor---  36.00m                                /dev/sdb1(1)
  [raidlv_cdata_rimage_2] test  iwi-aor---  36.00m                                /dev/sdc1(1)
  [raidlv_cdata_rimage_3] test  iwi-aor---  36.00m                                /dev/sdd1(1)
  [raidlv_cdata_rimage_4] test  iwi-aor---  36.00m                                /dev/sde1(1)
  [raidlv_cdata_rimage_5] test  iwi-aor---  36.00m                                /dev/sdf1(1)
  [raidlv_cdata_rmeta_0]  test  ewi-aor---   4.00m                                /dev/sda1(0)
  [raidlv_cdata_rmeta_1]  test  ewi-aor---   4.00m                                /dev/sdb1(0)
  [raidlv_cdata_rmeta_2]  test  ewi-aor---   4.00m                                /dev/sdc1(0)
  [raidlv_cdata_rmeta_3]  test  ewi-aor---   4.00m                                /dev/sdd1(0)
  [raidlv_cdata_rmeta_4]  test  ewi-aor---   4.00m                                /dev/sde1(0)
  [raidlv_cdata_rmeta_5]  test  ewi-aor---   4.00m                                /dev/sdf1(0)
  [raidlv_cmeta]          test  ewi-ao----  12.00m                                /dev/sda1(60)

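A condensed reproducer sketch of the steps above (assumptions: an existing VG named "test" with at least six free PVs, and the package versions listed below):

  lvcreate --type raid10 -i 3 -n raidlv -L 100M test
  lvcreate -n origin -L 200M test
  lvcreate -n meta -L 12M test
  lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  # the next command triggers the "Failed to monitor test/raidlv_cdata" errors
  lvconvert --yes --type cache --cachepool test/raidlv test/origin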

Version-Release number of selected component (if applicable):
3.10.0-493.el7.bz1367223.x86_64

lvm2-2.02.164-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
lvm2-libs-2.02.164-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
lvm2-cluster-2.02.164-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-1.02.133-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-libs-1.02.133-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-event-1.02.133-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-event-libs-1.02.133-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.164-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.164-2.el7    BUILT: Tue Aug 16 05:43:50 CDT 2016

Comment 1 Corey Marthaler 2016-08-18 23:29:44 UTC
Same issue with raid1 pool volumes.

[root@host-125 ~]# lvcreate --type raid1 -m 1 -n raidlv -L 100M test
  Logical volume "raidlv" created.
[root@host-125 ~]# vcreate -n origin -L 200M test
-bash: vcreate: command not found
[root@host-125 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-125 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-125 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.

[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  test/raidlv_cdata: raid1 segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6HuzYwkL5IDEpKqcyBjDiYMH9BuIZpFRd-cdata: device not found.
  Logical volume test/origin is now cached.

Comment 4 Corey Marthaler 2016-08-18 23:33:12 UTC
This appears to affect all raid types.

# raid6
[root@host-125 ~]#  lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  test/raidlv_cdata: raid6 segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv6XOy2Z5HGgbdsbUxpmN3NQWooLdn2eN6N-cdata: device not found.
  Logical volume test/origin is now cached.


# raid0
[root@host-125 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  test/raidlv_cdata: raid0_meta segment monitoring function failed.
  Failed to monitor test/raidlv_cdata
  _get_device_info: LVM-2I8lI3TnFnvLpRCBLx7FP9NzUxoHGrv69wpgyWoB8HpFinEZ7GX9zb3UevCRFNe9-cdata: device not found.
  Logical volume test/origin is now cached.
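
A hedged loop sketch that exercises each raid type in one pass (assumes a scratch VG named "test" with enough free PVs for every type; "lvremove -ff test" wipes all LVs in that VG between iterations):

  for rtype in raid0 raid1 raid5 raid6 raid10; do
      case "$rtype" in
          raid1) opts="-m 1" ;;   # raid1 takes a mirror count, not a stripe count
          *)     opts="-i 3" ;;
      esac
      lvcreate --type "$rtype" $opts -n raidlv -L 100M test
      lvcreate -n origin -L 200M test
      lvcreate -n meta -L 12M test
      lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
      # look for the monitoring errors on the cache conversion
      lvconvert --yes --type cache --cachepool test/raidlv test/origin 2>&1 | grep -i "failed to monitor"
      lvremove -ff test
  done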

Comment 5 Alasdair Kergon 2016-08-22 13:20:19 UTC
The first lvconvert is leaving the new LVs inactive.
The second lvconvert seems unable to cope with this.
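
The inactive state can be seen directly in the lv_attr field (a sketch; the fifth attr character is "a" when an LV is active and "-" when it is not):

  lvs -a -o name,attr test
  #   [raidlv]   Cwi---C---   <- the cache pool from the first lvconvert, still inactive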

Comment 6 Alasdair Kergon 2016-08-22 16:43:40 UTC
The errors are happening when internal LVs are activated for wiping.  Monitoring should be disabled for this.
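
On an affected build, a possible way to sidestep the noise is to run the conversion with dmeventd monitoring switched off for that one command (a workaround sketch using the standard --config override, not the actual fix):

  lvconvert --config 'activation { monitoring = 0 }' --yes --type cache --cachepool test/raidlv test/origin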

Comment 7 Zdenek Kabelac 2016-08-31 14:08:38 UTC
Addressed with upstream commit:

https://www.redhat.com/archives/lvm-devel/2016-August/msg00107.html

Comment 9 Corey Marthaler 2016-09-07 22:44:03 UTC
Marking verified in the latest rpms.

3.10.0-501.el7.x86_64

lvm2-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-libs-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-cluster-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-libs-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-libs-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016





[root@host-118 ~]# lvcreate --type raid1 -m 1 -n raidlv -L 100M test
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.



[root@host-118 ~]# lvcreate --type raid10 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.



[root@host-118 ~]# lvcreate --type raid0 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.




[root@host-118 ~]# lvcreate --type raid6 -i 3 -n raidlv -L 100M test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB (27 extents).
  Logical volume "raidlv" created.
[root@host-118 ~]# lvcreate -n origin -L 200M test
  Logical volume "origin" created.
[root@host-118 ~]# lvcreate -n meta -L 12M test
  Logical volume "meta" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachemode writethrough --poolmetadata test/meta test/raidlv
  WARNING: Converting logical volume test/raidlv and test/meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted test/raidlv to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool test/raidlv test/origin
  Logical volume test/origin is now cached.
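
To double-check that monitoring is actually set up after the conversion, the seg_monitor field can be inspected (a sketch; "monitored" is expected where the earlier builds printed "Failed to monitor"):

  lvs -a -o name,attr,seg_monitor test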

Comment 11 errata-xmlrpc 2016-11-04 04:18:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html

