Description of problem:
I was unable to activate a raid LV in an HA-LVM setup after its creation. See additional info for commands. I'm logging this as an lvm bug because no pacemaker resource agent is involved yet. The problem also affects the other --type raidX variants.

Version-Release number of selected component (if applicable):
lvm2-2.02.143-7.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. See additional info

Actual results:
The mirrored LV cannot be activated.

Expected results:
The mirrored LV is active.

Additional info:

[root@virt-145 ~]# lvs -a
  LV      VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root vg_virt163 -wi-ao----   6.79g
  lv_swap vg_virt163 -wi-ao---- 824.00m

[root@virt-145 ~]# vgs -a
  VG         #PV #LV #SN Attr   VSize  VFree
  raidvg       6   0   0 wz--n- 29.95g 29.95g
  vg_virt163   1   2   0 wz--n-  7.59g      0

[root@virt-145 ~]# lvcreate -ay --addtag abcd \
> --config activation{volume_list=[\"@$(hostname -f)\"]} \
> --name raidlv --type raid1 --extents 100%VG --nosync raidvg
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
  Volume "raidvg/raidlv_rmeta_0" is not active locally.
  Failed to zero raidvg/raidlv_rmeta_0
[root@virt-145 ~]# echo $?
5

[root@virt-145 ~]# lvs -a
  LV                VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raidlv            raidvg     Rwi---r---       0
  [raidlv_rimage_0] raidvg     Iwi---r---  14.97g
  [raidlv_rimage_1] raidvg     Iwi---r---  14.97g
  raidlv_rmeta_0    raidvg     ewi---r---   4.00m
  raidlv_rmeta_1    raidvg     ewi---r---   4.00m
  lv_root           vg_virt163 -wi-ao----   6.79g
  lv_swap           vg_virt163 -wi-ao---- 824.00m

[root@virt-145 ~]# lvs @$(hostname -f)
  LV     VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raidlv raidvg Rwi---r---     0

[root@virt-145 ~]# lvchange -ay --config activation{volume_list=[\"@$(hostname -f)\"]} raidvg/raidlv
[root@virt-145 ~]# echo $?
0

[root@virt-145 ~]# lvdisplay /dev/vg/raidlv | grep 'LV Status'
  LV Status              NOT available

---

[root@virt-145 ~]# lvmconfig
config {
	checks=1
	abort_on_errors=0
	profile_dir="/etc/lvm/profile"
}
local {
}
dmeventd {
	mirror_library="libdevmapper-event-lvm2mirror.so"
	snapshot_library="libdevmapper-event-lvm2snapshot.so"
	thin_library="libdevmapper-event-lvm2thin.so"
}
activation {
	volume_list=["vg_virt163","@virt-145.cluster-qe.lab.eng.brq.redhat.com"]
	checks=0
	udev_sync=1
	udev_rules=1
	verify_udev_operations=0
	retry_deactivation=1
	missing_stripe_filler="error"
	use_linear_target=1
	reserved_stack=64
	reserved_memory=8192
	process_priority=-18
	raid_region_size=512
	readahead="auto"
	raid_fault_policy="warn"
	mirror_image_fault_policy="remove"
	mirror_log_fault_policy="allocate"
	snapshot_autoextend_threshold=100
	snapshot_autoextend_percent=20
	thin_pool_autoextend_threshold=100
	thin_pool_autoextend_percent=20
	use_mlockall=0
	monitoring=1
	polling_interval=15
	activation_mode="degraded"
}
global {
	umask=63
	test=0
	units="h"
	si_unit_consistency=1
	suffix=1
	activation=1
	proc="/proc"
	etc="/etc"
	locking_type=1
	wait_for_locks=1
	fallback_to_clustered_locking=1
	fallback_to_local_locking=1
	locking_dir="/var/lock/lvm"
	prioritise_write_locks=1
	abort_on_internal_errors=0
	detect_internal_vg_cache_corruption=0
	metadata_read_only=0
	mirror_segtype_default="mirror"
	raid10_segtype_default="mirror"
	sparse_segtype_default="snapshot"
	use_lvmetad=0
	use_lvmlockd=0
	system_id_source="none"
	use_lvmpolld=0
}
shell {
	history_size=100
}
backup {
	backup=1
	backup_dir="/etc/lvm/backup"
	archive=1
	archive_dir="/etc/lvm/archive"
	retain_min=10
	retain_days=30
}
log {
	verbose=0
	silent=0
	syslog=1
	overwrite=0
	level=0
	indent=1
	command_names=0
	prefix=" "
	activation=0
	debug_classes=["memory","devices","activation","allocation","lvmetad","metadata","cache","locking","lvmpolld"]
}
allocation {
	maximise_cling=1
	use_blkid_wiping=0
	wipe_signatures_when_zeroing_new_lvs=1
	mirror_logs_require_separate_pvs=0
	cache_pool_metadata_require_separate_pvs=0
	thin_pool_metadata_require_separate_pvs=0
}
devices {
	dir="/dev"
	scan="/dev"
	obtain_device_list_from_udev=0
	external_device_info_source="none"
	preferred_names=["^/dev/mpath/","^/dev/mapper/mpath","^/dev/[hs]d"]
	cache_dir="/etc/lvm/cache"
	cache_file_prefix=""
	write_cache_state=1
	sysfs_scan=1
	multipath_component_detection=1
	md_component_detection=1
	fw_raid_component_detection=0
	md_chunk_alignment=1
	data_alignment_detection=1
	data_alignment=0
	data_alignment_offset_detection=1
	ignore_suspended_devices=0
	ignore_lvm_mirrors=1
	disable_after_error_count=0
	require_restorefile_with_uuid=1
	pv_min_size=2048
	issue_discards=0
}
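For context: activation/volume_list restricts which LVs lvm commands may activate, and an entry of the form "@tag" permits activation only of LVs (or VGs) carrying that tag. A quick way to check which LVs carry the host tag, using the same @tag selection syntax as the lvs call in the transcript above (the lv_tags output field is standard; the hostname-derived tag is just the HA-LVM convention used in this report):

# lvs -o lv_name,vg_name,lv_tags @"$(hostname -f)"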
> [root@virt-145 ~]# \
> lvcreate -ay --addtag abcd \
> --config activation{volume_list=[\"@$(hostname -f)\"]} \
> --name raidlv --type raid1 --extents 100%VG --nosync raidvg

There is a mistake in the lvcreate command: the --addtag parameter should not be "abcd" but "$(hostname -f)". It got lost in bug creation.
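For reference, the intended command with the matching tag (the same form is used later during verification; quoting the whole --config argument is optional but avoids shell interpretation of the braces):

# lvcreate -ay --addtag "$(hostname -f)" \
>   --config "activation{volume_list=[\"@$(hostname -f)\"]}" \
>   --name raidlv --type raid1 --extents 100%VG --nosync raidvg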
So the case as reported is behaving correctly: that command will fail because lvcreate expects to be able to activate the LV, but the specified configuration prevents that. However, in the case where the *same* tag is provided, the first activation (for zeroing) still fails - the device needs to have the tag temporarily applied and then removed afterwards. We also need to check all other code that performs temporary activations for similar purposes.
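A shell-level illustration of that temporary-tag idea (a sketch only: lvcreate has to do this internally, since hidden sub-LVs such as raidlv_rmeta_0 cannot be tagged or activated directly from the command line; vg/somelv is a placeholder):

# lvchange --addtag "$(hostname -f)" vg/somelv     (temporarily satisfy volume_list)
# lvchange -ay vg/somelv                           (activation is now permitted)
# dd if=/dev/zero of=/dev/vg/somelv bs=4K count=1  (zero the start of the device)
# lvchange -an vg/somelv
# lvchange --deltag "$(hostname -f)" vg/somelv     (drop the temporary tag)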
Appears related to bug 1161347.
Yep - as a workaround for the initial volume creation: use an empty volume list, then deactivate, add the tag, and activate again later.
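A sketch of that workaround using the names from this report (assuming the tag restriction comes only from the command-line --config override, so that omitting it leaves activation unrestricted; with a restrictive volume_list in lvm.conf the override must instead permit the VG):

# 1. Create the raid LV with no tag restriction in effect, so the
#    temporary activation for zeroing the rmeta sub-LVs can succeed:
lvcreate -ay --name raidlv --type raid1 --extents 100%VG --nosync raidvg

# 2. Deactivate, add the host tag, then reactivate under the tag-based
#    volume_list used by HA-LVM:
lvchange -an raidvg/raidlv
lvchange --addtag "$(hostname -f)" raidvg/raidlv
lvchange -ay --config "activation{volume_list=[\"@$(hostname -f)\"]}" raidvg/raidlv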
Fixed as a workaround for now, continuing to respect tags; the sequence of operations/failure path remains poor.

https://www.redhat.com/archives/lvm-devel/2016-April/msg00147.html
https://git.fedorahosted.org/cgit/lvm2.git/patch/?id=c76df666c903b59f069292c4c1507b1ac37a5590
lvcreate --addtag tag1 now works if volume_list allows tag1 to be activated.
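For example, this now succeeds (a minimal sketch; vg1, lv1, tag1, and the size are placeholders):

# lvcreate -ay --addtag tag1 \
>   --config 'activation{volume_list=["@tag1"]}' \
>   --name lv1 --type raid1 --size 1G --nosync vg1

The temporary activation for wiping the sub-LVs succeeds because the new LV already carries a tag permitted by volume_list.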
A wider review has been pushed to upstream bug 1331889.
Will this fix also cover the lvconvert scenarios that fail with respect to HA-LVM volumes?

# Here the existing mirror log failed with a log fault policy of "allocate", so a new log should have been properly added.

May  5 10:01:27 host-077 lvm[5887]: Monitoring mirror device revolution_9-mirror_1 for events.
May  5 10:01:27 host-077 lvm[5887]: Mirror status: 1 of 3 images failed.
May  5 10:01:27 host-077 lvm[5887]: Mirror log status: 1 of 1 images failed.
May  5 10:01:27 host-077 lvm[5887]: Trying to up-convert to 2 images, 1 logs.
May  5 10:01:27 host-077 lvm[5887]: Volume "revolution_9/mirror_1_mlog" is not active locally.
May  5 10:01:27 host-077 lvm[5887]: Aborting. Failed to wipe mirror log.
May  5 10:01:27 host-077 lvm[5887]: Failed to initialise mirror log.
May  5 10:01:27 host-077 lvm[5887]: Trying to up-convert to 2 images, 0 logs.
May  5 10:01:35 host-077 lvm[5887]: Monitoring mirror device revolution_9-mirror_1 for events.
May  5 10:01:35 host-077 lvm[5887]: WARNING: Failed to replace 1 of 1 logs in volume mirror_1
May  5 10:01:36 host-077 lvm[5887]: Repair of mirrored device revolution_9-mirror_1 finished successfully.
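For reference, the manual equivalent of that dmeventd repair attempt would be roughly the following, using the names from the log (a sketch, not a verified reproducer):

# lvconvert --repair revolution_9/mirror_1

The failure looks like the same pattern: the temporary activation of the new mirror log mirror_1_mlog for wiping is blocked, so the log cannot be initialised.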
I'm not sure exactly what sequence of commands HA LVM is using. There's also bug 1332909 which might have some connection - or not.
If you find similar failures on lvconvert commands, after using the new code here, please open fresh bug(s) and I'll see if a similar fix can be applied to other code paths.
Marking verified with the latest rpms. Activation of a mirror LV now works properly when the volume_list value matches a tag. The workaround of using an empty volume list for LV creation is no longer needed.

-----------------------------------------------------------------------

Before fix: lvm2-2.02.143-7.el6_8.1

# lvcreate -ay --addtag $(hostname -f) \
> --config activation{volume_list=[\"@$(hostname -f)\"]} \
> --name raidlv --type raid1 --extents 100%VG --nosync vg
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
  Volume "vg/raidlv_rmeta_0" is not active locally.
  Failed to zero vg/raidlv_rmeta_0

-----------------------------------------------------------------------

After fix: lvm2-2.02.143-9.el6

# lvcreate -ay --addtag $(hostname -f) \
> --config activation{volume_list=[\"@$(hostname -f)\"]} \
> --name raidlv --type raid1 --extents 100%VG --nosync vg
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!

# lvs -a
  LV                VG Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raidlv            vg Rwi-a-r--- 39.99g                                   100.00
  [raidlv_rimage_0] vg iwi-aor--- 39.99g
  [raidlv_rimage_1] vg iwi-aor--- 39.99g
  [raidlv_rmeta_0]  vg ewi-aor---  4.00m
  [raidlv_rmeta_1]  vg ewi-aor---  4.00m
  ...

-----------------------------------------------------------------------

2.6.32-663.el6.x86_64

lvm2-2.02.143-9.el6                          BUILT: Thu Nov 10 10:21:10 CET 2016
lvm2-libs-2.02.143-9.el6                     BUILT: Thu Nov 10 10:21:10 CET 2016
lvm2-cluster-2.02.143-9.el6                  BUILT: Thu Nov 10 10:21:10 CET 2016
udev-147-2.73.el6_8.2                        BUILT: Tue Aug 30 15:17:19 CEST 2016
device-mapper-1.02.117-9.el6                 BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-libs-1.02.117-9.el6            BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-event-1.02.117-9.el6           BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-event-libs-1.02.117-9.el6      BUILT: Thu Nov 10 10:21:10 CET 2016
device-mapper-persistent-data-0.6.2-0.1.rc7.el6  BUILT: Tue Mar 22 14:58:09 CET 2016
cmirror-2.02.143-9.el6                       BUILT: Thu Nov 10 10:21:10 CET 2016
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2017-0798.html