Bug 2093993 - vdsm plugin does not collect LVM information of RHV SDs if the host uses LVM devices
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: sos
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Pavel Moravec
QA Contact: Miroslav Hradílek
URL:
Whiteboard:
Duplicates: 2121254
Depends On:
Blocks: 902971 2121774
 
Reported: 2022-06-06 14:12 UTC by Juan Orti
Modified: 2022-11-08 12:39 UTC (History)
CC List: 13 users

Fixed In Version: sos-4.3-3.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2121774
Environment:
Last Closed: 2022-11-08 10:50:39 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github sosreport sos pull 2952 0 None open [vdsm] Set LVM option use_devicesfile=0 2022-06-06 14:47:49 UTC
Red Hat Issue Tracker RHELPLAN-124366 0 None None None 2022-06-06 14:14:02 UTC
Red Hat Knowledge Base (Solution) 6963687 0 None None None 2022-06-17 13:43:42 UTC
Red Hat Product Errata RHBA-2022:7732 0 None None None 2022-11-08 10:51:24 UTC

Description Juan Orti 2022-06-06 14:12:07 UTC
Description of problem:
Since RHV 4.4 SP1, RHV hypervisors use the LVM devices file instead of an LVM filter to control access to block devices; see:

Bug 2012830 - [RFE] Use lvm devices instead of lvm filter on RHEL 8.6 / CentOS Stream 9 
https://bugzilla.redhat.com/show_bug.cgi?id=2012830

After this change, the vdsm plugin no longer collects the LVM information of the volume groups used for RHV storage domains.

Version-Release number of selected component (if applicable):
sos-4.2-19.el8_6.noarch

How reproducible:
Always

Steps to Reproduce:
1. In a RHV environment, upgrade a hypervisor to RHEL 8.6, vdsm-4.50.0.13-1.el8ev.x86_64
2. Verify that LVM is configured with lvmdevices

    /etc/lvm/lvm.conf:

    devices {
        use_devicesfile = 1
    }

3. Generate a sosreport.
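The devices-file setting from step 2 can be probed with a short script. This is a minimal sketch, not part of the reproducer: it checks a sample lvm.conf-style file written to a temporary path; on a real hypervisor you would point `CONF` at /etc/lvm/lvm.conf instead.

```shell
# Sketch: detect whether the LVM devices file is enabled in an
# lvm.conf-style config. The sample below mirrors the snippet from
# step 2; replace CONF with /etc/lvm/lvm.conf on a real host.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
devices {
    use_devicesfile = 1
}
EOF
if grep -Eq '^[[:space:]]*use_devicesfile[[:space:]]*=[[:space:]]*1' "$CONF"; then
    DEVICESFILE=enabled
else
    DEVICESFILE=disabled
fi
echo "use_devicesfile: $DEVICESFILE"
rm -f "$CONF"
```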

Actual results:
These LVM commands are run:

2022-06-06 13:13:56,336 INFO: [plugin:vdsm] added cmd output 'lvm vgs -v -o +tags --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''
2022-06-06 13:13:56,337 INFO: [plugin:vdsm] added cmd output 'lvm lvs -v -o +tags --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''
2022-06-06 13:13:56,337 INFO: [plugin:vdsm] added cmd output 'lvm pvs -v -o +all --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''

We are only getting the information of the system VG, and not other VGs used for RHV storage domains: 

$ cat sos_commands/vdsm/lvm_lvs_-v_-o_tags_--config_global_locking_type_0_metadata_read_only_1_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_filter_a_.dev.disk.by-id.dm-uuid-mpath-_r 
  Reloading config files
  WARNING: locking_type (0) is deprecated, using --nolocking.
  WARNING: File locking is disabled.
  Please remove the lvm.conf filter, it is ignored with the devices file.
  LV   VG          #Seg Attr       LSize   Maj Min KMaj KMin Pool Origin Data%  Meta%  Move Cpy%Sync Log Convert LV UUID                                LProfile LV Tags
  home rhel_unused    1 -wi-ao---- 223.87g  -1  -1  253   12                                                     846ZeV-yiAs-BV6d-fcGe-lqei-yQPC-tiEpkI                 
  root rhel_unused    1 -wi-ao----  50.00g  -1  -1  253    0                                                     mGFC86-iuhN-c0Oi-emKW-TUIo-K8z0-Fga2PN                 
  swap rhel_unused    1 -wi-ao----   4.00g  -1  -1  253    1                                                     DpWVqy-d6gf-Tye2-1Quf-Hn0n-F2M5-cyanKU                 
  Reloading config files

Expected results:
All information about all existing VGs, LVs, and PVs is collected, as before.

Additional info:
Having this LVM information is critical for support as it's very commonly used to debug storage issues.

The LVM command should be updated to include the option "use_devicesfile=0", for example:

lvm vgs -v -o +tags --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { use_devicesfile=0 preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }'
lvm lvs -v -o +tags --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { use_devicesfile=0 preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }'
lvm pvs -v -o +all --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { use_devicesfile=0 preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }'
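The three proposed commands differ from the current ones only in the added `use_devicesfile=0` inside the `devices { }` section. As a sketch, they can be generated from one shared `--config` string (the variable names here are illustrative, not sos's actual code; sos itself is written in Python):

```shell
# Sketch of the proposed fix: prepend use_devicesfile=0 to the devices
# section so LVM ignores the devices file, honours the filter, and
# reports the RHV storage-domain VGs again.
LVM_GLOBAL='locking_type=0 metadata_read_only=1 use_lvmetad=0'
LVM_DEVICES='use_devicesfile=0 preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"]'
LVM_CONFIG="global { $LVM_GLOBAL } devices { $LVM_DEVICES }"
# Print the three LVM commands the plugin would run.
for sub in 'vgs -v -o +tags' 'lvs -v -o +tags' 'pvs -v -o +all'; do
    echo "lvm $sub --config '$LVM_CONFIG'"
done
```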

Comment 1 Pavel Moravec 2022-06-07 06:14:06 UTC
Thanks for raising the upstream PR. Assuming it is merged within a few weeks, the bugfix will be automatically backported to RHEL 8.8. No earlier backport is planned due to QE capacity.

Comment 4 Marina Kalinin 2022-06-17 14:27:00 UTC
Chris, it is not urgent, but it is highly important.
This is the new way vdsm works with LVM in the latest version of RHV (RHV 4.4 SP1, released last month). Customers are already reporting issues related to this new functionality, but the sosreport contains no information that can help resolve them. This results in extra iterations between us and the customer to resolve each issue.
Had this patch been merged, we would have this information in the sosreport right away and would be able to provide resolutions faster. The issues in question can range anywhere from production outages down to failed day-1 deployments. So I consider this a high priority to fix.

Thank you!

Comment 7 Pavel Moravec 2022-08-03 20:27:52 UTC
Hello,
sos in RHEL 8.8 is expected to fix the bug.

To help sos QE resources, would you be able to verify whether a candidate package fixes the bug properly?

I expect a candidate build to be ready in several weeks, and there will be no rush to execute the verification.

Thanks in advance for potential cooperation.

Comment 8 Juan Orti 2022-08-22 13:21:30 UTC
(In reply to Pavel Moravec from comment #7)

> To help sos QE resources, would you be able to verify if a candidate package
> does fix the bug properly?

Yes, no problem. If you provide me the rpm package, I have several environments where I can test it easily.

Comment 9 Marcus West 2022-08-25 01:32:47 UTC
*** Bug 2121254 has been marked as a duplicate of this bug. ***

Comment 21 Germano Veit Michel 2022-08-31 01:15:04 UTC
Works here:

[root@rhvh-1 sosreport-rhvh-1-2022-08-31-vskwwov]# cat sos_commands/vdsm/lvm_lvs_-v_-o_tags_--config_global_locking_type_0_metadata_read_only_1_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_use_devicesfile_0_filter_a_.dev.disk.by-id.dm-uuid-mpath-_ 
  Reloading config files
  WARNING: locking_type (0) is deprecated, using --nolocking.
  WARNING: File locking is disabled.
  LV                                   VG                                   #Seg Attr       LSize   Maj Min KMaj KMin Pool Origin Data%  Meta%  Move Cpy%Sync Log Convert LV UUID                                LProfile LV Tags                                                                             
  0663a83f-64e1-4570-af99-48d500989f86 f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi------- 128.00m  -1  -1   -1   -1                                                     BVPFLO-NHb0-t27i-ky7Q-1FVz-foN9-quIHvK          IU_eae7f395-20f6-4e68-8745-4d6d92182b9a,MD_1,PU_00000000-0000-0000-0000-000000000000
  cc33fab2-15ab-4e49-b3b1-6eea9b17026b f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi------- 128.00m  -1  -1   -1   -1                                                     kBUnrn-4zEA-piwl-oPcs-AFfD-VS9r-ueWwMa          IU_eed5b967-81f0-4891-b2da-a9fca6bdb539,MD_2,PU_00000000-0000-0000-0000-000000000000
  ids                                  f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi-ao---- 128.00m  -1  -1  253    1                                                     odbDqc-tNKu-FisV-f8eM-lzeS-Yq5r-MgqWY4                                                                                              
  inbox                                f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi-a----- 128.00m  -1  -1  253    4                                                     Ojj5cx-zdJs-ck7D-uQST-BDwa-ziaS-uNnOID                                                                                              
  leases                               f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi-a-----   2.00g  -1  -1  253    2                                                     Uzl9Ey-MHg8-D18n-VfI9-o6fp-9TDh-SBP2Jz                                                                                              
  master                               f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi-a-----   1.00g  -1  -1  253    7                                                     YWye6m-JD25-khZH-Pn7X-M1lv-JgE3-xIhNNU                                                                                              
  metadata                             f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi-a----- 128.00m  -1  -1  253    3                                                     0MCwCh-QQk8-c69G-myX5-CY1m-kptE-qvQlcF                                                                                              
  outbox                               f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi-a----- 128.00m  -1  -1  253    5                                                     4XOqab-pIiF-pwuR-K9R9-M8Dl-Vcyh-N2oc63                                                                                              
  xleases                              f357bd0d-12d5-43a3-b793-239fe57516c3    1 -wi-a-----   1.00g  -1  -1  253    6                                                     x0sD7v-JxFZ-ggTs-xI8F-bpNP-r59H-6cGsGA                                                                                              
  Reloading config files

[root@rhvh-1 sosreport-rhvh-1-2022-08-31-vskwwov]# rpm -q sos
sos-4.3-3.el8.noarch

Comment 23 Juan Orti 2022-08-31 11:00:21 UTC
Works for me too:

# cat sos_commands/vdsm/lvm_lvs_-v_-o_tags_--config_global_locking_type_0_metadata_read_only_1_use_lvmetad_0_devices_preferred_names_.dev.mapper._ignore_suspended_devices_1_write_cache_state_0_disable_after_error_count_3_use_devicesfile_0_filter_a_.dev.disk.by-id.dm-uuid-mpath-_ 
  Reloading config files
  WARNING: locking_type (0) is deprecated, using --nolocking.
  WARNING: File locking is disabled.
  LV                                   VG                                   #Seg Attr       LSize   Maj Min KMaj KMin Pool Origin Data%  Meta%  Move Cpy%Sync Log Convert LV UUID                                LProfile LV Tags                                                                                         
  97275924-03c8-417d-9dfe-ea16f7d7b044 1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a-----   1.00g  -1  -1  253   27                                                     iEk8az-gXZd-1l6q-ZSYY-VTDm-J2AK-oWmCWU          IU_b8e4504d-408b-4e05-837c-45390ab1839e,MD_6,PU_00000000-0000-0000-0000-000000000000            
  bf0f7ed6-7c94-468b-9065-1dc9e04affa9 1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-ao----   1.00g  -1  -1  253   28                                                     LXEvpy-nuWA-wrpE-R5sW-HL9Y-Y5iS-fFwv6e          IU_4739de3a-f140-455f-990f-ac0de0803dc5,MD_4,PU_00000000-0000-0000-0000-000000000000            
  c7100117-bf10-4086-b8e5-3972ba5859a9 1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a-----   1.00g  -1  -1  253   31                                                     eIPlo2-OyKN-C2vj-mOQk-YpSe-pPBe-UodRbE          IU_1c53b91f-1655-4963-9ecd-cbc8a5749954,MD_5,PU_00000000-0000-0000-0000-000000000000            
  c8d642ce-4f78-4c0c-a9e0-f8c569ec7e5e 1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a----- 128.00m  -1  -1  253   29                                                     T7oDfy-ZgGk-Pf8W-mbJU-461Y-2khF-rpwcJD          IU_a1a00f7b-bd10-48e8-9249-739130c919f7,MD_1,PU_00000000-0000-0000-0000-000000000000            
  cfff2166-d2d5-473b-8b02-249d19908e59 1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-------   1.25g  -1  -1   -1   -1                                                     sAOvvI-m7DM-orJO-Q0vE-xk8C-98VV-KeUfvt          IU_bfbeb711-0330-48f0-8c95-a25a452a42dc,MD_7,PU_00000000-0000-0000-0000-000000000000            
  d55dbb4b-8095-4ce5-a87d-799215dd97e2 1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-ao----  59.00g  -1  -1  253   32                                                     yVlWn4-9qOi-4i9V-JyGD-OIpP-fJb8-6uoM5f          IU_57749b70-5e0d-46ee-b710-f06813d520eb,MD_3,PU_00000000-0000-0000-0000-000000000000            
  d7235d72-4c8a-4c36-8e0b-31ea552ff565 1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a----- 128.00m  -1  -1  253   30                                                     Aqsfky-cAEk-RrOX-cyeU-ECUm-dWFG-Zg0hOy          IU_5d8dcfe9-8436-43bb-9784-e14b7f643f5c,MD_2,PU_00000000-0000-0000-0000-000000000000            
  dd875669-9c1e-44f3-acec-7acf3e74440f 1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-------   7.62g  -1  -1   -1   -1                                                     vRSJhi-lIG2-Ilks-tJOM-ZexP-2VFp-vY3gMl          IU_1cbf3037-d439-40fc-a25b-6706930189fa,MD_8,PU_00000000-0000-0000-0000-000000000000            
  ids                                  1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-ao---- 128.00m  -1  -1  253   20                                                     k1BvhE-VKEF-Asjf-YNqH-AsaB-DhTX-4g2Tnu                                                                                                          
  inbox                                1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a----- 128.00m  -1  -1  253   23                                                     EY1mnh-BfMV-BWXG-qEQZ-f30s-Nb4d-l99JPD                                                                                                          
  leases                               1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a-----   2.00g  -1  -1  253   21                                                     oG61CT-bWaC-5HJa-CtSm-zKfr-aGZP-J9nbtw                                                                                                          
  master                               1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a-----   1.00g  -1  -1  253   26                                                     0xACSw-MHBW-30wf-IKcC-Q5Mu-eewr-t8xMae                                                                                                          
  metadata                             1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a----- 128.00m  -1  -1  253   22                                                     FV4U6E-pQOV-n3YV-zLkX-Gfys-eQ9P-y5Okc7                                                                                                          
  outbox                               1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a----- 128.00m  -1  -1  253   24                                                     Afwjfo-xRvY-oBZ6-Tz3c-Al5f-sobB-n2yhf0                                                                                                          
  xleases                              1ace8186-df46-4795-9f7c-6552283a3690    1 -wi-a-----   1.00g  -1  -1  253   25                                                     n5PWe2-Bfln-xhzI-GLbJ-arTL-bQBj-XDQ22s                                                                                                          

# grep 'plugin:vdsm' sos_logs/sos.log |grep lvm
2022-08-31 10:54:02,341 INFO: [plugin:vdsm] added cmd output 'lvm vgs -v -o +tags --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 use_devicesfile=0 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''
2022-08-31 10:54:02,341 INFO: [plugin:vdsm] added cmd output 'lvm lvs -v -o +tags --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 use_devicesfile=0 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''
2022-08-31 10:54:02,341 INFO: [plugin:vdsm] added cmd output 'lvm pvs -v -o +all --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 use_devicesfile=0 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''
2022-08-31 10:55:24,722 INFO: [plugin:vdsm] collecting output of 'lvm vgs -v -o +tags --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 use_devicesfile=0 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''
2022-08-31 10:55:24,952 INFO: [plugin:vdsm] collecting output of 'lvm lvs -v -o +tags --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 use_devicesfile=0 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''
2022-08-31 10:55:25,076 INFO: [plugin:vdsm] collecting output of 'lvm pvs -v -o +all --config 'global { locking_type=0 metadata_read_only=1 use_lvmetad=0 } devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 use_devicesfile=0 filter=["a|^/dev/disk/by-id/dm-uuid-mpath-|", "r|.+|"] }''

# rpm -q sos
sos-4.3-3.el8.noarch

Comment 28 errata-xmlrpc 2022-11-08 10:50:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (sos bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:7732

