Bug 1382141
Summary: exclusive activation of cached thin pool device is not maintained when attempting to merge thin snaps

Product: Red Hat Enterprise Linux 7
Component: lvm2
lvm2 sub component: Thin Provisioning
Version: 7.3
Hardware: x86_64
OS: Linux
Severity: high
Priority: unspecified
Status: CLOSED WONTFIX
Type: Bug
Reporter: Corey Marthaler <cmarthal>
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: cluster-qe <cluster-qe>
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, thornber, zkabelac
Target Milestone: rc
Last Closed: 2020-12-15 07:46:54 UTC
Description
Corey Marthaler
2016-10-05 21:09:17 UTC
Another result (82 and 83 are the non exclusively active cluster nodes):

[...]
lvcreate --activate ey -k n -s /dev/snapper_thinp/origin -n invalid2

Filling snapshot /dev/snapper_thinp/invalid2
dd if=/dev/zero of=/dev/snapper_thinp/invalid2 bs=1M count=101
dd: error writing ‘/dev/snapper_thinp/invalid2’: No space left on device
101+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 6.44011 s, 16.3 MB/s

Attempt to merge back an invalidated snapshot volume
lvconvert --merge /dev/snapper_thinp/invalid2 --yes
  Error locking on node 3: Check of pool snapper_thinp/cpool failed (status:1). Manual repair required!
  Error locking on node 2: Check of pool snapper_thinp/cpool failed (status:1). Manual repair required!
  Failed to reactivate origin snapper_thinp/origin.
couldn't merge invalidated snap

Oct 5 15:44:04 host-082 kernel: device-mapper: space map common: bitmap check failed: blocknr 34359738877 != wanted 5
Oct 5 15:44:04 host-082 kernel: device-mapper: block manager: sm_bitmap validator check failed for block 5
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: 253:6: metadata operation 'dm_cache_set_dirty' failed: error = -15
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: 253:6: aborting current metadata transaction
Oct 5 15:44:04 host-082 kernel: device-mapper: space map common: index_check failed: csum 828282714 != wanted 828193943
Oct 5 15:44:04 host-082 kernel: device-mapper: block manager: index validator check failed for block 19
Oct 5 15:44:04 host-082 kernel: device-mapper: transaction manager: couldn't open metadata space map
Oct 5 15:44:04 host-082 kernel: device-mapper: cache metadata: tm_open_with_sm failed
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: 253:6: failed to abort metadata transaction
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: unable to read needs_check flag, setting failure mode
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: 253:6: switching cache to fail mode
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: unable to read needs_check flag, setting failure mode
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: 253:6: could not write dirty bitset
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: 253:6: could not write discard bitset
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: 253:6: could not write hints
Oct 5 15:44:04 host-082 kernel: device-mapper: cache: 253:6: could not write cache metadata
Oct 5 15:44:04 host-083 kernel: device-mapper: space map common: bitmap check failed: blocknr 34359738877 != wanted 5
Oct 5 15:44:04 host-083 kernel: device-mapper: block manager: sm_bitmap validator check failed for block 5
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: 253:6: metadata operation 'dm_cache_set_dirty' failed: error = -15
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: 253:6: aborting current metadata transaction
Oct 5 15:44:04 host-083 kernel: device-mapper: space map common: index_check failed: csum 828282714 != wanted 828193943
Oct 5 15:44:04 host-083 kernel: device-mapper: block manager: index validator check failed for block 19
Oct 5 15:44:04 host-083 kernel: device-mapper: transaction manager: couldn't open metadata space map
Oct 5 15:44:04 host-083 kernel: device-mapper: cache metadata: tm_open_with_sm failed
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: 253:6: failed to abort metadata transaction
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: unable to read needs_check flag, setting failure mode
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: 253:6: switching cache to fail mode
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: unable to read needs_check flag, setting failure mode
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: 253:6: could not write dirty bitset
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: 253:6: could not write discard bitset
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: 253:6: could not write hints
Oct 5 15:44:04 host-083 kernel: device-mapper: cache: 253:6: could not write cache metadata

Simpler set of cmds to reproduce:

[root@host-081 ~]# pcs status
Cluster name: STSRHTS555
Stack: corosync
Current DC: host-083 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Wed Oct 5 17:08:07 2016
Last change: Wed Oct 5 15:50:05 2016 by root via cibadmin on host-081

3 nodes and 9 resources configured

Online: [ host-081 host-082 host-083 ]

Full list of resources:
 fence-host-081 (stonith:fence_xvm): Started host-081
 fence-host-082 (stonith:fence_xvm): Started host-082
 fence-host-083 (stonith:fence_xvm): Started host-083
 Clone Set: dlm-clone [dlm]
     Started: [ host-081 host-082 host-083 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ host-081 host-082 host-083 ]

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

[root@host-081 ~]# pvcreate /dev/sdf1 /dev/sda1 /dev/sdh1 /dev/sdg1 /dev/sdc1 /dev/sdb1 /dev/sdd1
  Physical volume "/dev/sdf1" successfully created.
  Physical volume "/dev/sda1" successfully created.
  Physical volume "/dev/sdh1" successfully created.
  Physical volume "/dev/sdg1" successfully created.
  Physical volume "/dev/sdc1" successfully created.
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sdd1" successfully created.

[root@host-081 ~]# vgcreate snapper_thinp /dev/sdf1 /dev/sda1 /dev/sdh1 /dev/sdg1 /dev/sdc1 /dev/sdb1 /dev/sdd1
  Clustered volume group "snapper_thinp" successfully created

[root@host-081 ~]# lvcreate --activate ey --profile thin-performance --zero y -L 4M -n meta snapper_thinp /dev/sda1
  Logical volume "meta" created.

[root@host-081 ~]# lvcreate --activate ey --profile thin-performance --zero y -L 500M -n POOL snapper_thinp /dev/sda1
  Logical volume "POOL" created.

[root@host-081 ~]# lvcreate --activate ey --zero y -L 400M -n cpool snapper_thinp /dev/sdb1
  Logical volume "cpool" created.
[root@host-081 ~]# lvcreate --activate ey --zero y -L 8M -n cpool_meta snapper_thinp /dev/sdb1
  Logical volume "cpool_meta" created.

[root@host-081 ~]# lvconvert --yes --type cache-pool --poolmetadata snapper_thinp/cpool_meta snapper_thinp/cpool
  WARNING: Converting logical volume snapper_thinp/cpool and snapper_thinp/cpool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted snapper_thinp/cpool to cache pool.

[root@host-081 ~]# lvconvert --yes --type cache --cachepool snapper_thinp/cpool snapper_thinp/POOL
  Logical volume snapper_thinp/POOL is now cached.

[root@host-081 ~]# lvconvert --zero y --thinpool snapper_thinp/POOL --poolmetadata meta --yes
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted snapper_thinp/POOL to thin pool.

[root@host-081 ~]# lvcreate --activate ey --virtualsize 100M -T snapper_thinp/POOL -n origin
  Using default stripesize 64.00 KiB.
  Logical volume "origin" created.

[root@host-081 ~]# lvcreate --activate ey -k n -s /dev/snapper_thinp/origin -n invalid1
  Using default stripesize 64.00 KiB.
  Logical volume "invalid1" created.
# EXCLUSIVE ACTIVE
[root@host-081 ~]# lvs -a -o +devices
  LV                 VG            Attr       LSize   Pool    Origin             Data%  Meta%  Cpy%Sync  Devices
  POOL               snapper_thinp twi-aotz-- 500.00m                            0.00   0.98             POOL_tdata(0)
  [POOL_tdata]       snapper_thinp Cwi-aoC--- 500.00m [cpool] [POOL_tdata_corig] 0.03   1.37   0.00      POOL_tdata_corig(0)
  [POOL_tdata_corig] snapper_thinp owi-aoC--- 500.00m                                                    /dev/sda1(1)
  [POOL_tmeta]       snapper_thinp ewi-ao----   4.00m                                                    /dev/sda1(0)
  [cpool]            snapper_thinp Cwi---C--- 400.00m                            0.03   1.37   0.00      cpool_cdata(0)
  [cpool_cdata]      snapper_thinp Cwi-ao---- 400.00m                                                    /dev/sdb1(0)
  [cpool_cmeta]      snapper_thinp ewi-ao----   8.00m                                                    /dev/sdb1(100)
  invalid1           snapper_thinp Vwi-a-tz-- 100.00m POOL    origin             0.00
  [lvol0_pmspare]    snapper_thinp ewi-------   8.00m                                                    /dev/sdf1(0)
  origin             snapper_thinp Vwi-a-tz-- 100.00m POOL                       0.00

# INACTIVE
[root@host-082 ~]# lvs -a -o +devices
  LV                 VG            Attr       LSize   Pool    Origin             Data%  Meta%  Cpy%Sync  Devices
  POOL               snapper_thinp twi---tz-- 500.00m                                                    POOL_tdata(0)
  [POOL_tdata]       snapper_thinp Cwi---C--- 500.00m [cpool] [POOL_tdata_corig]                         POOL_tdata_corig(0)
  [POOL_tdata_corig] snapper_thinp owi---C--- 500.00m                                                    /dev/sdf1(1)
  [POOL_tmeta]       snapper_thinp ewi-------   4.00m                                                    /dev/sdf1(0)
  [cpool]            snapper_thinp Cwi---C--- 400.00m                                                    cpool_cdata(0)
  [cpool_cdata]      snapper_thinp Cwi------- 400.00m                                                    /dev/sdh1(0)
  [cpool_cmeta]      snapper_thinp ewi-------   8.00m                                                    /dev/sdh1(100)
  invalid1           snapper_thinp Vwi---tz-- 100.00m POOL    origin
  [lvol0_pmspare]    snapper_thinp ewi-------   8.00m                                                    /dev/sda1(0)
  origin             snapper_thinp Vwi---tz-- 100.00m POOL

# INACTIVE
[root@host-083 ~]# lvs -a -o +devices
  LV                 VG            Attr       LSize   Pool    Origin             Data%  Meta%  Cpy%Sync  Devices
  POOL               snapper_thinp twi---tz-- 500.00m                                                    POOL_tdata(0)
  [POOL_tdata]       snapper_thinp Cwi---C--- 500.00m [cpool] [POOL_tdata_corig]                         POOL_tdata_corig(0)
  [POOL_tdata_corig] snapper_thinp owi---C--- 500.00m                                                    /dev/sda1(1)
  [POOL_tmeta]       snapper_thinp ewi-------   4.00m                                                    /dev/sda1(0)
  [cpool]            snapper_thinp Cwi---C--- 400.00m                                                    cpool_cdata(0)
  [cpool_cdata]      snapper_thinp Cwi------- 400.00m                                                    /dev/sdb1(0)
  [cpool_cmeta]      snapper_thinp ewi-------   8.00m                                                    /dev/sdb1(100)
  invalid1           snapper_thinp Vwi---tz-- 100.00m POOL    origin
  [lvol0_pmspare]    snapper_thinp ewi-------   8.00m                                                    /dev/sdh1(0)
  origin             snapper_thinp Vwi---tz-- 100.00m POOL

# After this all nodes will report the volumes as active
[root@host-081 ~]# lvconvert --merge /dev/snapper_thinp/invalid1 --yes
  Merging of thin snapshot snapper_thinp/origin will occur on next activation of snapper_thinp/invalid1.

# ACTIVE
[root@host-081 ~]# lvs -a -o +devices
  LV                 VG            Attr       LSize   Pool    Origin             Data%  Meta%  Cpy%Sync  Devices
  POOL               snapper_thinp twi-aotz-- 500.00m                            0.00   0.98             POOL_tdata(0)
  [POOL_tdata]       snapper_thinp Cwi-aoC--- 500.00m [cpool] [POOL_tdata_corig] 0.03   1.37   0.00      POOL_tdata_corig(0)
  [POOL_tdata_corig] snapper_thinp owi-aoC--- 500.00m                                                    /dev/sda1(1)
  [POOL_tmeta]       snapper_thinp ewi-ao----   4.00m                                                    /dev/sda1(0)
  [cpool]            snapper_thinp Cwi---C--- 400.00m                            0.03   1.37   0.00      cpool_cdata(0)
  [cpool_cdata]      snapper_thinp Cwi-ao---- 400.00m                                                    /dev/sdb1(0)
  [cpool_cmeta]      snapper_thinp ewi-ao----   8.00m                                                    /dev/sdb1(100)
  [lvol0_pmspare]    snapper_thinp ewi-------   8.00m                                                    /dev/sdf1(0)
  origin             snapper_thinp Vwi-a-tz-- 100.00m POOL                       0.00

# ACTIVE
[root@host-082 ~]# lvs -a -o +devices
  LV                 VG            Attr       LSize   Pool    Origin             Data%  Meta%  Cpy%Sync  Devices
  POOL               snapper_thinp twi---tz-- 500.00m                            0.00   0.98             POOL_tdata(0)
  [POOL_tdata]       snapper_thinp Cwi-aoC--- 500.00m [cpool] [POOL_tdata_corig] 0.03   1.37   100.00    POOL_tdata_corig(0)
  [POOL_tdata_corig] snapper_thinp owi-aoC--- 500.00m                                                    /dev/sdf1(1)
  [POOL_tmeta]       snapper_thinp ewi-ao----   4.00m                                                    /dev/sdf1(0)
  [cpool]            snapper_thinp Cwi---C--- 400.00m                            0.03   1.37   100.00    cpool_cdata(0)
  [cpool_cdata]      snapper_thinp Cwi-ao---- 400.00m                                                    /dev/sdh1(0)
  [cpool_cmeta]      snapper_thinp ewi-ao----   8.00m                                                    /dev/sdh1(100)
  [lvol0_pmspare]    snapper_thinp ewi-------   8.00m                                                    /dev/sda1(0)
  origin             snapper_thinp Vwi-a-tz-- 100.00m POOL                       0.00

# ACTIVE
[root@host-083 ~]# lvs -a -o +devices
  LV                 VG            Attr       LSize   Pool    Origin             Data%  Meta%  Cpy%Sync  Devices
  POOL               snapper_thinp twi---tz-- 500.00m                            0.00   0.98             POOL_tdata(0)
  [POOL_tdata]       snapper_thinp Cwi-aoC--- 500.00m [cpool] [POOL_tdata_corig] 0.03   1.37   100.00    POOL_tdata_corig(0)
  [POOL_tdata_corig] snapper_thinp owi-aoC--- 500.00m                                                    /dev/sda1(1)
  [POOL_tmeta]       snapper_thinp ewi-ao----   4.00m                                                    /dev/sda1(0)
  [cpool]            snapper_thinp Cwi---C--- 400.00m                            0.03   1.37   100.00    cpool_cdata(0)
  [cpool_cdata]      snapper_thinp Cwi-ao---- 400.00m                                                    /dev/sdb1(0)
  [cpool_cmeta]      snapper_thinp ewi-ao----   8.00m                                                    /dev/sdb1(100)
  [lvol0_pmspare]    snapper_thinp ewi-------   8.00m                                                    /dev/sdh1(0)
  origin             snapper_thinp Vwi-a-tz-- 100.00m POOL                       0.00

Created attachment 1207702 [details]
verbose lvconvert attempt
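For convenience, the reproduction commands above can be condensed into one script. This is a hypothetical sketch, not a tested reproducer: it assumes the same VG/LV names and PVs as the report and a running 3-node clvmd cluster where the pool is exclusively active on one node. The DRY_RUN guard and the run() wrapper are additions of this sketch; with DRY_RUN=1 (the default here) the commands are only printed.

```shell
#!/bin/sh
# Sketch of the reproduction sequence from this report. Hypothetical wrapper:
# with DRY_RUN=1 each command is echoed instead of executed; unset DRY_RUN on
# a real cluster to run the actual lvm2 commands.
DRY_RUN=${DRY_RUN:-1}
CMDS=""

run() {
    # record and either print or execute the command
    CMDS="$CMDS
+ $*"
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

VG=snapper_thinp

# thin-pool metadata and data LVs, plus the cache-pool pieces (as in the report)
run lvcreate --activate ey --profile thin-performance --zero y -L 4M -n meta "$VG" /dev/sda1
run lvcreate --activate ey --profile thin-performance --zero y -L 500M -n POOL "$VG" /dev/sda1
run lvcreate --activate ey --zero y -L 400M -n cpool "$VG" /dev/sdb1
run lvcreate --activate ey --zero y -L 8M -n cpool_meta "$VG" /dev/sdb1
run lvconvert --yes --type cache-pool --poolmetadata "$VG/cpool_meta" "$VG/cpool"
run lvconvert --yes --type cache --cachepool "$VG/cpool" "$VG/POOL"
run lvconvert --zero y --thinpool "$VG/POOL" --poolmetadata meta --yes
run lvcreate --activate ey --virtualsize 100M -T "$VG/POOL" -n origin
run lvcreate --activate ey -k n -s "/dev/$VG/origin" -n invalid1
# the step after which all nodes report the volumes as active
run lvconvert --merge "/dev/$VG/invalid1" --yes
```

Running it with DRY_RUN=1 prints the command sequence so it can be reviewed (or compared against the transcript above) before touching real devices.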
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.