Bug 1147217 - LVM metad is providing incorrect information
Summary: LVM metad is providing incorrect information
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Ryan Barry
QA Contact: Virtualization Bugs
URL:
Whiteboard: node
Depends On:
Blocks: rhevh-7.0 rhev35betablocker rhev35rcblocker rhev35gablocker
 
Reported: 2014-09-28 07:27 UTC by cshao
Modified: 2023-09-14 02:48 UTC (History)
19 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-02-11 21:02:03 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Attachments
ovirt.log (42.62 KB, text/plain)
2014-09-28 07:27 UTC, cshao
ovirt-node.log (36.13 KB, text/plain)
2014-09-28 07:27 UTC, cshao
ovirt.log (46.38 KB, text/plain)
2014-09-29 02:18 UTC, Ying Cui
pvs -o+vg_uuid (13.48 KB, text/plain)
2014-09-29 18:15 UTC, Ryan Barry
pvs -o+pv_uuid (13.48 KB, text/plain)
2014-09-29 18:15 UTC, Ryan Barry
pvscan -vvvv (33.80 KB, text/plain)
2014-09-29 18:16 UTC, Ryan Barry


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2015:0160 0 normal SHIPPED_LIVE ovirt-node bug fix and enhancement update 2015-02-12 01:34:52 UTC
oVirt gerrit 33540 0 master MERGED Don't use lvmetad Never

Description cshao 2014-09-28 07:27:05 UTC
Created attachment 941956 [details]
ovirt.log

Description of problem:
Running "lvs; pvs; vgs" produces no LVM output:

# lvs
  No device found for PV fowJcF-UfzI-1WP8-TSfy-d94F-UV4C-pqk5tM.
  No volume groups found
# pvs
  No device found for PV fowJcF-UfzI-1WP8-TSfy-d94F-UV4C-pqk5tM.
# vgs
  No device found for PV fowJcF-UfzI-1WP8-TSfy-d94F-UV4C-pqk5tM.
  No volume groups found
# pvscan 
  No device found for PV fowJcF-UfzI-1WP8-TSfy-d94F-UV4C-pqk5tM.
  No device found for PV fowJcF-UfzI-1WP8-TSfy-d94F-UV4C-pqk5tM.
  No matching physical volumes found

# multipath -ll
Sep 28 07:10:10 | multipath.conf +5, invalid keyword: getuid_callout
Sep 28 07:10:10 | multipath.conf +18, invalid keyword: getuid_callout
Hitachi_HDT721032SLA380_STA2L7MT1ZZRKB dm-0 ATA     ,Hitachi HDT72103
size=298G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:0:0:0 sda 8:0 active ready running


Version-Release number of selected component (if applicable):
rhev-hypervisor7-7.0-20140926.0.iso
ovirt-node-3.1.0-0.17.20140925git29c3403.el7.noarch

How reproducible:
60%

Steps to Reproduce:
1. Install RHEV-H on a physical machine.
2. Configure the network.
3. Run "lvs; pvs; vgs; multipath -ll"

Actual results:
1. Running "lvs; pvs; vgs" produces no LVM output.
2. An "invalid keyword: getuid_callout" warning appears when running multipath -ll.

Expected results:
1. Running "lvs; pvs; vgs" lists the LVM output.
2. No warnings appear when running multipath -ll.

Additional info:
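On the getuid_callout warnings: RHEL 7's multipath dropped that keyword, and device identification is configured with uid_attribute instead. A minimal multipath.conf sketch (illustrative only, not the file shipped on RHEV-H):

```
defaults {
    # old RHEL 6 style keyword, rejected by RHEL 7's multipath:
    # getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"

    # RHEL 7 replacement: identify devices by a udev attribute
    uid_attribute "ID_SERIAL"
}
```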

Comment 1 cshao 2014-09-28 07:27:37 UTC
Created attachment 941957 [details]
ovirt-node.log

Comment 2 Ying Cui 2014-09-29 02:05:25 UTC
@Shao Chen, did you hit this issue on a clean install or a dirty install?

Comment 3 Ying Cui 2014-09-29 02:17:37 UTC
I reproduced this issue during a RHEV-H reinstall; log info below.
This problem may stem from the bug #1095081 fix, please check whether they are related.

====ovirt.log=====

2014-09-28 09:44:20,950 - INFO - storage - Setting value for 256 to 256
2014-09-28 09:44:20,950 - DEBUG - ovirtfunctions - Translating: /dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS
2014-09-28 09:44:20,950 - DEBUG - ovirtfunctions - Translating: /dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS
2014-09-28 09:44:21,024 - DEBUG - ovirtfunctions - Translating: /dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS
2014-09-28 09:44:21,024 - INFO - ovirtfunctions - Wiping partitions on: /dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS->/dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS
2014-09-28 09:44:21,024 - INFO - ovirtfunctions - Removing HostVG
2014-09-28 09:44:21,024 - INFO - ovirtfunctions - Wiping old boot sector
Error: Partition(s) on /dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS are being used.
/dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
/dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS: calling ioclt to re-read partition table: Invalid argument
2014-09-28 09:44:21,328 - DEBUG - ovirtfunctions - sync
2014-09-28 09:44:21,328 - DEBUG - ovirtfunctions -
2014-09-28 09:44:21,332 - DEBUG - ovirtfunctions - kpartx -a '/dev/mapper/TOSHIBA_DT01ACA100_33A7V28MS'

... ...
... ...
... ...
Sep 29 01:54:30 Starting ovirt-awake.
Sep 29 01:54:30 Completed ovirt-awake: RETVAL=0
Sep 29 01:54:30 Starting ovirt
Sep 29 01:54:30 Completed ovirt
Sep 29 01:54:30 Starting ovirt-post
ERROR: unable to read system id.
Sep 29 01:54:30 Synchronizing log files
Loading /lib/kbd/keymaps/i386/qwerty/us.map.gz
2014-09-29 01:54:34,168 - INFO - ovirtfunctions - Won't update /etc/hosts, it's not empty.
Sep 29 01:54:34 Hardware virtualization detected
  No device found for PV 86It3f-0evW-0PnH-G03j-7Olu-HOtb-K0WVpY.

Comment 4 Ying Cui 2014-09-29 02:18:26 UTC
Created attachment 942120 [details]
ovirt.log

Comment 5 cshao 2014-09-29 06:09:38 UTC
(In reply to Ying Cui from comment #2)
> @Shao Chen, did you hit this issue on a clean install or a dirty install?

ycui,

Both clean and dirty installs can hit this issue.

Comment 8 Ryan Barry 2014-09-29 18:15:35 UTC
Created attachment 942432 [details]
pvs -o+vg_uuid

Comment 9 Ryan Barry 2014-09-29 18:15:59 UTC
Created attachment 942433 [details]
pvs -o+pv_uuid

Comment 10 Ryan Barry 2014-09-29 18:16:14 UTC
Created attachment 942434 [details]
pvscan -vvvv

Comment 11 Ryan Barry 2014-09-29 18:19:39 UTC
Peter -

We're experiencing a possible duplicate of bug #1018852 on EL7 RHEV-H images.

Disabling lvmetad (use_lvmetad = 0 in lvm.conf) makes LVM work as expected.

Any ideas?

Comment 17 Petr Rockai 2014-09-30 07:09:23 UTC
As far as I can tell, this is basically the same problem as bug 1139216: the filtering in pvscan --cache was set up incorrectly, so lvmetad ends up caching the mpath *component* device. The client code has its filter set up as it should be, so it ignores that component device, but lvmetad won't offer any other, and you get an empty intersection. This should be fixed by upstream commit 80ac8f37d6ac5f8c5228678d4ee07187b5d4db7b. Anyway, it is not related to ovirt, but to multipathing.
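To illustrate the mismatch described above: the client side applies a filter along these lines, accepting the multipath device and rejecting its /dev/sd* component paths, while (before the upstream fix) pvscan --cache did not. A hypothetical lvm.conf excerpt, not the actual RHEV-H configuration:

```
devices {
    # accept device-mapper (multipath) paths, reject everything else,
    # so the mpath *component* devices (/dev/sda, ...) are ignored
    filter = [ "a|^/dev/mapper/.*|", "r|.*|" ]
}
```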

Comment 18 Fabian Deutsch 2014-09-30 11:41:14 UTC
Petr, is there already a request to pull the fix into RHEL 7.0? Once it lands there, we can pull it into RHEV-H.

Comment 19 Fabian Deutsch 2014-09-30 12:55:25 UTC
vdsm explicitly disables lvmetad, so we can block/disable it as well.

Comment 20 Nir Soffer 2014-09-30 13:13:26 UTC
(In reply to Fabian Deutsch from comment #19)
> vdsm explicitly disables lvmetad, so we can block/disable it as well.

Vdsm uses the following command line option when running lvm, disabling use of lvmetad:

    lvm <command> --config "global {use_lvmetad=0}" [options] args

But since RHEV-H controls lvm.conf, you can just set use_lvmetad = 0 in lvm.conf instead.
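The persistent variant above can be sketched as follows. This demo edits a throwaway copy; on a real host the file is /etc/lvm/lvm.conf and the change (plus a pvscan --cache to refresh) needs root. The file name and sed pattern here are assumptions for illustration, not the actual ovirt-node patch:

```shell
# Demo on a local copy of the relevant lvm.conf stanza (hypothetical file name).
conf=lvm.conf.demo
printf 'global {\n    use_lvmetad = 1\n}\n' > "$conf"

# Disable lvmetad so lvm tools scan devices directly instead of
# trusting the daemon's (possibly mis-filtered) cache.
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' "$conf"

grep use_lvmetad "$conf"   # now shows: use_lvmetad = 0
```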

Comment 21 cshao 2014-10-08 10:10:03 UTC
Test version:
rhev-hypervisor7-7.0-20141006.0.el7ev
ovirt-node-3.1.0-0.20.20141006gitc421e04.el7.noarch

Test steps:
1. Install RHEV-H on a physical machine.
2. Configure the network.
3. Run "lvs; pvs; vgs; multipath -ll"

Test result:
# lvs
  LV      VG     Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  Config  HostVG -wi-ao----  8.00m                                             
  Data    HostVG -wi-ao---- 28.84g                                             
  Logging HostVG -wi-ao----  2.00g                                             
  Swap    HostVG -wi-ao----  7.71g                                             
[root@dhcp-8-231 admin]# pvs
  PV                                                  VG     Fmt  Attr PSize   PFree  
  /dev/mapper/Hitachi_HDT721032SLA380_STA2L7MT1ZZRKB4 HostVG lvm2 a--  297.37g 258.81g
[root@dhcp-8-231 admin]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree  
  HostVG   1   4   0 wz--n- 297.37g 258.81g

LVM metad now provides correct information, so the bug is fixed in the above version.

Comment 23 cshao 2014-12-18 07:20:58 UTC
Test version:
rhev-hypervisor7-7.0-20141212.0.iso
ovirt-node-3.1.0-0.34.20141210git0c9c493.el7.noarch

Test result:
# lvs
  LV      VG     Attr       LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  Config  HostVG -wi-ao---- 8.00m                                             
  Data    HostVG -wi-ao---- 5.49g                                             
  Logging HostVG -wi-ao---- 2.00g                                             
  Swap    HostVG -wi-ao---- 3.87g                                             
[root@dell-pet105-02 admin]# pvs
  PV         VG     Fmt  Attr PSize  PFree  
  /dev/sdb4  HostVG lvm2 a--  11.76g 400.00m
[root@dell-pet105-02 admin]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree  
  HostVG   1   4   0 wz--n- 11.76g 400.00m

# multipath -ll
Dec 18 07:20:02 | multipath.conf +5, invalid keyword: getuid_callout
Dec 18 07:20:02 | multipath.conf +18, invalid keyword: getuid_callout
Dec 18 07:20:02 | multipath.conf +37, invalid keyword: getuid_callout
WDC_WD2502ABYS-18B7A0_WD-WCAT19558392 dm-6 ATA     ,WDC WD2502ABYS-1
size=233G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:0:0:0 sda 8:0  active ready running
36090a038d0f731381e035566b2497f85 dm-7 EQLOGIC ,100E-00         
size=30G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 4:0:1:0 sdc 8:32 active ready running


LVM metad now provides correct information, so the bug is fixed in the above build. Changing bug status to VERIFIED.

Comment 25 errata-xmlrpc 2015-02-11 21:02:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2015-0160.html

Comment 26 Red Hat Bugzilla 2023-09-14 02:48:17 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

