+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1474566 +++

======================================================================

Description of problem:

The VDSM SOS plugin, in vdsm/lib/sos/vdsm.py.in, currently (even in master) does this to capture LVM information for sosreports:

  self.collectExtOutput("/sbin/lvm vgs -v -o +tags")
  self.collectExtOutput("/sbin/lvm lvs -v -o +tags")
  self.collectExtOutput("/sbin/lvm pvs -v -o +all")

With the recent changes to the lvm filter and lvmetad, this needs to be reviewed so that sosreports contain correct and meaningful data.

At least two things concern me:

1) In 4.0.6 and lower, lvmetad might be running. This can result in the sosreport showing cached, stale LVM metadata, leading to incorrect support decisions.

2) Where the customer has manually set up LVM filtering (as recommended), or in the future where the filter might be configured automatically, these commands may fail to capture relevant data. See the illustrative filter below.

See also https://gerrit.ovirt.org/#/c/79698/

(Originally by Germano Veit Michel)
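[Illustration of concern (2), assuming a hypothetical local device name: a host configured with the recommended lvm filter typically whitelists only its local devices in /etc/lvm/lvm.conf and rejects everything else, so the plain commands above would not report the RHV Storage Domain PVs/VGs/LVs at all.]

  # hypothetical lvm.conf fragment -- accepts only the host's local boot LUN,
  # rejects all other devices (including the shared RHV Storage Domain LUNs)
  filter = [ "a|^/dev/sda2$|", "r|.*|" ]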
Is this on track for 4.1.5?

(Originally by Yaniv Kaul)
Hi Ala,

Following up from your email... I think we need:

1) lvmetad=0 on each command, so that we don't read stale metadata on RHV 4.0.6 and older.

2) A filter that adds the devices of the RHV Storage Domains (because the host will be blacklisting them in lvm.conf). Otherwise sosreports might miss important data and we would need to ask the customer again.

I think we need to add something similar to what VDSM does on each LVM command (see LVMCONF_TEMPLATE in vdsm/storage/lvm.py). As an example, I think each of those 3 commands in comment #0 should have a config like the below appended:

  --config 'devices { filter = [ "a|<RHV LUNS HERE?>|", "r|.*|" ] } global { use_lvmetad=0 }'

Or maybe use an "add all" filter to simplify it, in case that won't cause problems.

I hope this clarifies the BZ.

Thanks

(Originally by Germano Veit Michel)
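[A minimal sketch of what this could look like in vdsm/lib/sos/vdsm.py.in; this is not the actual patch from https://gerrit.ovirt.org/#/c/79698/, and the permissive /dev/mapper filter and the LVM_CONFIG name are illustrative assumptions only.]

  # Hypothetical sketch -- the merged change may differ.
  # Bypass lvmetad and widen the device filter so shared-storage PVs/VGs/LVs
  # are reported even when the host's lvm.conf hides them.
  LVM_CONFIG = ("--config 'global { use_lvmetad=0 } "
                "devices { filter = [ \"a|^/dev/mapper/.*|\", \"r|.*|\" ] }'")

  self.collectExtOutput("/sbin/lvm vgs -v -o +tags " + LVM_CONFIG)
  self.collectExtOutput("/sbin/lvm lvs -v -o +tags " + LVM_CONFIG)
  self.collectExtOutput("/sbin/lvm pvs -v -o +all " + LVM_CONFIG)

[Judging by the file names in the verification comment below, the shipped version also appears to set locking_type=0 and several devices options (preferred_names, ignore_suspended_devices, write_cache_state, disable_after_error_count), similar to VDSM's own LVM command wrapper.]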
Thanks Germano. I uploaded two patches targeting these requirements. (Originally by Ala Hino)
If there's another 4.1.5 build, this should be included in it, but I won't block on it.

(Originally by Allon Mureinik)
Is this on track for 4.1.6?
Verified on vdsm 4.19.33-1. Current output:

> ll lvm_*
-rw-r--r--. 1 root root 13519 Oct 15 11:15 lvm_lvs_-v_-o_tags_--config_global_locking_type_0_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_filter_a_.dev.mapper.._r
-rw-r--r--. 1 root root 13531 Oct 15 11:15 lvm_pvs_-v_-o_all_--config_global_locking_type_0_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_filter_a_.dev.mapper.._r
-rw-r--r--. 1 root root 6622 Oct 15 11:15 lvm_vgs_-v_-o_tags_--config_global_locking_type_0_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_filter_a_.dev.mapper.._r
Bug was verified; removing the needinfo request
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:3139