Bug 1223538 - VDSM reports "lvm vgs failed" warning when DC contains ISO domain
Summary: VDSM reports "lvm vgs failed" warning when DC contains ISO domain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.5.1
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ovirt-4.1.1
Assignee: Fred Rolland
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-20 19:37 UTC by Gordon Watson
Modified: 2019-08-15 04:40 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-25 00:42:00 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:




Links
System                  ID              Private  Priority  Status        Summary                                     Last Updated
Red Hat Product Errata  RHEA-2017:0998  0        normal    SHIPPED_LIVE  VDSM bug fix and enhancement update 4.1 GA  2017-04-18 20:11:39 UTC
oVirt gerrit            71139           0        None      None          None                                        2017-01-25 09:55:52 UTC
oVirt gerrit            72142           0        None      None          None                                        2017-02-13 10:22:51 UTC

Description Gordon Watson 2015-05-20 19:37:10 UTC
Description of problem:

If an ISO domain is added to a Data Center, then VDSM reports the following messages:

Thread-78::DEBUG::2015-05-20 15:26:22,328::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  Volume group "354d508d-149d-47a3-8c62-a3dacc9d598b" not found\n  Skipping volume group 354d508d-149d-47a3-8c62-a3dacc9d598b\n'; <rc> = 5
Thread-78::WARNING::2015-05-20 15:26:22,331::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  Volume group "354d508d-149d-47a3-8c62-a3dacc9d598b" not found', '  Skipping volume group 354d508d-149d-47a3-8c62-a3dacc9d598b']

where '354d508d-149d-47a3-8c62-a3dacc9d598b' is the UUID of the ISO domain.


If the ISO domain is removed, the messages above are not reported.
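
For illustration only (this is not VDSM's code, and all names and the lookup order below are hypothetical), the general pattern that produces the warning looks roughly like this: a lookup that cannot classify a storage domain UUID falls through to an LVM 'vgs' call, and since a file-based ISO domain has no volume group, the call fails with rc=5 and the warning is logged.

# Minimal sketch, assuming a fallback from file-domain lookup to an LVM scan.
# Not VDSM code; names and lookup order are hypothetical.
import logging
import subprocess

log = logging.getLogger("Storage.LVM")

def _reload_vg(sd_uuid):
    # Roughly equivalent to running `vgs <sd_uuid>` on the host; for an
    # NFS/ISO domain no such VG exists, so vgs exits non-zero (rc=5) and
    # a warning very much like the one above gets logged.
    try:
        proc = subprocess.run(["vgs", "--noheadings", sd_uuid],
                              capture_output=True, text=True)
    except FileNotFoundError:
        return False  # lvm2 tools not installed
    if proc.returncode != 0:
        log.warning("lvm vgs failed: %s %s %s", proc.returncode,
                    proc.stdout.splitlines(), proc.stderr.splitlines())
    return proc.returncode == 0

def classify_domain(sd_uuid, file_domain_uuids):
    # Hypothetical lookup order: if the UUID is not recognized as a file
    # domain first, the LVM path is hit and the noise above appears on
    # every refresh/activation.
    if sd_uuid in file_domain_uuids:
        return "file"
    if _reload_vg(sd_uuid):
        return "block"
    return None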



Version-Release number of selected component (if applicable):

RHEV 3.5.1
RHEL 6.x/7.x hosts with 'vdsm-4.16'


How reproducible:

Every time for me.


Steps to Reproduce:
1. Create a DC with a host and an NFS storage domain, for example.
2. Add an ISO domain.
3. Place host in maintenance mode.
4. Re-activate the host.
5. The above messages are reported (a log-check sketch follows this list).
6. Remove the ISO domain.
7. Repeat steps 3 and 4.
8. Messages are not reported.
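
To confirm steps 5 and 8 on the host, the warning can be pulled out of the VDSM log after each activation. A minimal log-check sketch, assuming the usual default log location (/var/log/vdsm/vdsm.log):

# Scan vdsm.log for the "lvm vgs failed ... Volume group not found" warning
# and print the offending domain UUID. Log path is the assumed default.
import re

LOG_PATH = "/var/log/vdsm/vdsm.log"
PATTERN = re.compile(r'lvm vgs failed: .*Volume group "(?P<uuid>[0-9a-f-]{36})" not found')

with open(LOG_PATH) as log_file:
    for line in log_file:
        match = PATTERN.search(line)
        if match:
            print("lvm vgs warning for domain", match.group("uuid"))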


Actual results:

The 'lvm vgs failed' warning is logged for the ISO domain's UUID each time the host is activated.

Expected results:

No LVM warning should be logged for a file-based (ISO) domain, since it has no volume group.

Additional info:

Comment 3 Yaniv Lavi 2016-11-30 08:53:56 UTC
Can you test this with the latest 4.x?

Comment 4 Raz Tamir 2016-11-30 16:13:16 UTC
Thanks, Kevin, for testing.
Yaniv,
the bug still exists in 4.1.

Comment 5 Yaniv Kaul 2017-02-13 07:08:43 UTC
This is merged to master. Is there a backport for 4.1?

Comment 6 Fred Rolland 2017-02-13 07:41:06 UTC
No, there is no backport.

Do you want it there?

Comment 7 Yaniv Kaul 2017-02-13 07:49:48 UTC
(In reply to Fred Rolland from comment #6)
> No, there is no backport.
> 
> Do you want it there?

That's what the target milestone says... If it's safe and useful (and it seems to be, coming from the field), yes please.

Comment 9 Kevin Alon Goldblatt 2017-02-22 16:58:54 UTC
Verified with the following code:
----------------------------------
ovirt-engine-4.1.1-0.1.el7.noarch
rhevm-4.1.1-0.1.el7.noarch
vdsm-4.19.5-1.el7ev.x86_64


Verified with the following scenario:
-------------------------------------------------
Steps to Reproduce:
1. Create a DC with a host and an NFS storage domain, for example.
2. Add an ISO domain.
3. Place host in maintenance mode.
4. Re-activate the host.
5. The above messages are NO LONGER REPORTED.
6. Remove the ISO domain.
7. Repeat steps 3 and 4.
8. Messages are not reported.


Moving to VERIFIED!

