Bug 1330933
Summary: | mirror creation with tag fails during zeroing when activation volume_list has corresponding tag restriction | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 6 | Reporter: | michal novacek <mnovacek> |
Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team> |
lvm2 sub component: | Changing Logical Volumes (RHEL6) | QA Contact: | cluster-qe <cluster-qe> |
Status: | CLOSED ERRATA | Docs Contact: | |
Severity: | unspecified | ||
Priority: | unspecified | CC: | agk, cmarthal, heinzm, jbrassow, msnitzer, prajnoha, prockai, rbednar, zkabelac |
Version: | 6.8 | ||
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | lvm2-2.02.143-9.el6 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2017-03-21 12:02:47 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1331889 |
Description
michal novacek
2016-04-27 10:45:19 UTC
> [root@virt-145 ~]# \
> lvcreate -ay --addtag abcd \
> --config activation{volume_list=[\"@$(hostname -f)\"]} \
> --name raidlv --type raid1 --extents 100%VG --nosync raidvg
>
There is a mistake in the lvcreate command: the --addtag parameter should not be "abcd" but "$(hostname -f)". It got lost during bug creation.
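For context, volume_list entries are matched against an LV roughly as follows (per lvm.conf(5)): "vgname" allows any LV in that VG, "vgname/lvname" a specific LV, "@tag" any LV or VG carrying that tag, and "@*" any LV whose tags intersect the host tag list. A simplified Python sketch of this matching logic (not LVM's actual implementation; names and signature are illustrative only) shows why the mistaken tag "abcd" is refused while the hostname tag is allowed:

```python
def activation_allowed(volume_list, vg_name, lv_name, tags, host_tags=()):
    """Simplified sketch of lvm.conf volume_list matching.

    volume_list entries:
      "vgname"        - any LV in that VG
      "vgname/lvname" - that specific LV
      "@tag"          - any LV (or its VG) carrying that tag
      "@*"            - any LV whose tags intersect the host tag list
    """
    for entry in volume_list:
        if entry == "@*":
            # special case: match against the host's own tag list
            if set(tags) & set(host_tags):
                return True
        elif entry.startswith("@"):
            # tag match: strip the "@" and compare against LV/VG tags
            if entry[1:] in tags:
                return True
        elif "/" in entry:
            # exact vgname/lvname match
            if entry == f"{vg_name}/{lv_name}":
                return True
        elif entry == vg_name:
            # whole-VG match
            return True
    return False
```

With volume_list=["@virt-145.example.com"], an LV tagged "virt-145.example.com" activates, while one tagged only "abcd" does not, which is the failure originally reported.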
So the case reported is behaving correctly - that command will fail because it expects to be able to activate the LV, but the specified configuration prevents the activation.

Now, if we look at the case where the *same* tag is provided, the first activation, for zeroing, fails - the device needs to have the tag temporarily applied and then removed afterwards. All other code that performs temporary activations for similar purposes also needs to be checked. This appears related to bug 1161347.

Yes - as a workaround for initial volume creation, use an empty volume list and deactivate, then add the tag and activate again later. This is a workaround for now; tags continue to be respected. The sequence of operations/failure path remains poor.

https://www.redhat.com/archives/lvm-devel/2016-April/msg00147.html
https://git.fedorahosted.org/cgit/lvm2.git/patch/?id=c76df666c903b59f069292c4c1507b1ac37a5590

lvcreate --addtag tag1 now works if volume_list allows tag1 to be activated. A wider review is pushed to upstream bug 1331889.

Will this fix also fix the lvconvert scenarios that fail with respect to HA LVM volumes?

    # here the existing mirror log failed with a log fault policy of allocate,
    # so a new log should have been properly added.
    May  5 10:01:27 host-077 lvm[5887]: Monitoring mirror device revolution_9-mirror_1 for events.
    May  5 10:01:27 host-077 lvm[5887]: Mirror status: 1 of 3 images failed.
    May  5 10:01:27 host-077 lvm[5887]: Mirror log status: 1 of 1 images failed.
    May  5 10:01:27 host-077 lvm[5887]: Trying to up-convert to 2 images, 1 logs.
    May  5 10:01:27 host-077 lvm[5887]: Volume "revolution_9/mirror_1_mlog" is not active locally.
    May  5 10:01:27 host-077 lvm[5887]: Aborting. Failed to wipe mirror log.
    May  5 10:01:27 host-077 lvm[5887]: Failed to initialise mirror log.
    May  5 10:01:27 host-077 lvm[5887]: Trying to up-convert to 2 images, 0 logs.
    May  5 10:01:35 host-077 lvm[5887]: Monitoring mirror device revolution_9-mirror_1 for events.
    May  5 10:01:35 host-077 lvm[5887]: WARNING: Failed to replace 1 of 1 logs in volume mirror_1
    May  5 10:01:36 host-077 lvm[5887]: Repair of mirrored device revolution_9-mirror_1 finished successfully.

I'm not sure exactly what sequence of commands HA LVM is using. There's also bug 1332909, which might have some connection - or not. If you find similar failures on lvconvert commands after using the new code here, please open fresh bug(s) and I'll see if a similar fix can be applied to other code paths.

Marking verified with the latest rpms. Activation of a mirror LV now works properly when the volume_list value matches a tag. The workaround of using an empty volume list for LV creation is no longer needed.

-----------------------------------------------------------------------
Before fix: lvm2-2.02.143-7.el6_8.1

    # lvcreate -ay --addtag $(hostname -f) \
    >   --config activation{volume_list=[\"@$(hostname -f)\"]} \
    >   --name raidlv --type raid1 --extents 100%VG --nosync vg
      WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
      Volume "vg/raidlv_rmeta_0" is not active locally.
      Failed to zero vg/raidlv_rmeta_0

-----------------------------------------------------------------------
After fix: lvm2-2.02.143-9.el6

    # lvcreate -ay --addtag $(hostname -f) \
    >   --config activation{volume_list=[\"@$(hostname -f)\"]} \
    >   --name raidlv --type raid1 --extents 100%VG --nosync vg
      WARNING: New raid1 won't be synchronised. Don't read what you didn't write!

    # lvs -a
      LV                VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
      raidlv            vg Rwi-a-r--- 39.99g                                   100.00
      [raidlv_rimage_0] vg iwi-aor--- 39.99g
      [raidlv_rimage_1] vg iwi-aor--- 39.99g
      [raidlv_rmeta_0]  vg ewi-aor---  4.00m
      [raidlv_rmeta_1]  vg ewi-aor---  4.00m
    ...
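For reference, the workaround mentioned above (needed only before the fix) can be sketched in shell. The VG/LV names are hypothetical, and "use an empty volume list" is read here as creating the LV without the volume_list restriction so the zeroing activation succeeds; this is a sketch, not a verified recipe:

```shell
# Hypothetical names: VG "raidvg", LV "raidlv".
TAG=$(hostname -f)

# 1. Create without the volume_list restriction so zeroing can activate.
lvcreate -ay --name raidlv --type raid1 --extents 100%VG --nosync raidvg

# 2. Deactivate, add the host tag, then re-activate under the restriction.
lvchange -an raidvg/raidlv
lvchange --addtag "$TAG" raidvg/raidlv
lvchange -ay --config "activation{volume_list=[\"@$TAG\"]}" raidvg/raidlv
```

With the fixed lvm2-2.02.143-9.el6, the single lvcreate invocation shown in the verification transcript works directly and this sequence is unnecessary.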
-----------------------------------------------------------------------
    2.6.32-663.el6.x86_64
    lvm2-2.02.143-9.el6                              BUILT: Thu Nov 10 10:21:10 CET 2016
    lvm2-libs-2.02.143-9.el6                         BUILT: Thu Nov 10 10:21:10 CET 2016
    lvm2-cluster-2.02.143-9.el6                      BUILT: Thu Nov 10 10:21:10 CET 2016
    udev-147-2.73.el6_8.2                            BUILT: Tue Aug 30 15:17:19 CEST 2016
    device-mapper-1.02.117-9.el6                     BUILT: Thu Nov 10 10:21:10 CET 2016
    device-mapper-libs-1.02.117-9.el6                BUILT: Thu Nov 10 10:21:10 CET 2016
    device-mapper-event-1.02.117-9.el6               BUILT: Thu Nov 10 10:21:10 CET 2016
    device-mapper-event-libs-1.02.117-9.el6          BUILT: Thu Nov 10 10:21:10 CET 2016
    device-mapper-persistent-data-0.6.2-0.1.rc7.el6  BUILT: Tue Mar 22 14:58:09 CET 2016
    cmirror-2.02.143-9.el6                           BUILT: Thu Nov 10 10:21:10 CET 2016

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0798.html