Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1223538

Summary: VDSM reports "lvm vgs failed" warning when DC contains ISO domain
Product: Red Hat Enterprise Virtualization Manager
Reporter: Gordon Watson <gwatson>
Component: vdsm
Assignee: Fred Rolland <frolland>
Status: CLOSED ERRATA
QA Contact: Kevin Alon Goldblatt <kgoldbla>
Severity: low
Docs Contact:
Priority: unspecified
Version: 3.5.1
CC: bazulay, frolland, gklein, kgoldbla, lpeer, lsurette, ratamir, rtamir, srevivo, tnisan, ykaul, ylavi
Target Milestone: ovirt-4.1.1
Target Release: ---
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-04-25 00:42:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Gordon Watson 2015-05-20 19:37:10 UTC
Description of problem:

If an ISO domain is added to a Data Center, the following messages are reported by VDSM:

Thread-78::DEBUG::2015-05-20 15:26:22,328::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  Volume group "354d508d-149d-47a3-8c62-a3dacc9d598b" not found\n  Skipping volume group 354d508d-149d-47a3-8c62-a3dacc9d598b\n'; <rc> = 5
Thread-78::WARNING::2015-05-20 15:26:22,331::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  Volume group "354d508d-149d-47a3-8c62-a3dacc9d598b" not found', '  Skipping volume group 354d508d-149d-47a3-8c62-a3dacc9d598b']

where '354d508d-149d-47a3-8c62-a3dacc9d598b' is the UUID of the ISO domain.


If the ISO domain is removed, the messages above are not reported.
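
For reference, the warning is VDSM's _reloadvgs surfacing the exit status of the 'lvm vgs' command it ran for the ISO domain's UUID; because an ISO domain is file-based, no volume group with that UUID exists and LVM exits with status 5, as shown in the FAILED line above. Below is a minimal sketch (not VDSM code) that reproduces the underlying LVM call outside VDSM, assuming root access on the host and substituting the UUID of your own ISO domain:

# Minimal sketch (not VDSM code) reproducing the underlying LVM call.
# Assumes root on the hypervisor; substitute the UUID of your ISO domain.
import subprocess

ISO_DOMAIN_UUID = "354d508d-149d-47a3-8c62-a3dacc9d598b"  # file-based domain, so no VG exists

result = subprocess.run(
    ["lvm", "vgs", ISO_DOMAIN_UUID],
    capture_output=True,
    text=True,
)

# LVM exits with status 5 and prints 'Volume group "<uuid>" not found' on stderr;
# this is the output that _reloadvgs wraps in the "lvm vgs failed" warning.
print("rc =", result.returncode)
print(result.stderr.strip())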



Version-Release number of selected component (if applicable):

RHEV 3.5.1
RHEL 6.x/7.x hosts with 'vdsm-4.16'


How reproducible:

Every time for me.


Steps to Reproduce:
1. Create a DC with a host and an NFS storage domain, for example.
2. Add an ISO domain.
3. Place host in maintenance mode.
4. Re-activate the host.
5. Above messages are reported (a log-scan helper is sketched after this list).
6. Remove the ISO domain.
7. Repeat steps 3 and 4.
8. Messages are not reported.
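
To make step 5 easier to check, here is a hedged helper (not part of the original report) that scans the VDSM log for the warning. The log path and UUID are assumptions; adjust them for your host and ISO domain:

# Hedged helper for step 5: scan the VDSM log for the "lvm vgs failed"
# warning that mentions the ISO domain's UUID.
# Path and UUID are assumptions; adjust for your host and domain.
ISO_DOMAIN_UUID = "354d508d-149d-47a3-8c62-a3dacc9d598b"
VDSM_LOG = "/var/log/vdsm/vdsm.log"

with open(VDSM_LOG, errors="replace") as log:
    hits = [line.rstrip() for line in log
            if "lvm vgs failed" in line and ISO_DOMAIN_UUID in line]

# Before the fix: one hit per host activation while the ISO domain is attached.
# After the fix: no hits.
print(f"{len(hits)} matching warning(s) found")
for line in hits:
    print(line)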


Actual results:

VDSM logs the "lvm vgs failed" warning for the ISO domain's UUID every time the host is activated.

Expected results:

No LVM warning should be logged for a file-based ISO domain, since it has no volume group.
Additional info:

Comment 3 Yaniv Lavi 2016-11-30 08:53:56 UTC
Can you test this with the latest 4.x?

Comment 4 Raz Tamir 2016-11-30 16:13:16 UTC
Thanks Kevin for testing.
Yaniv,
the bug still exists in 4.1

Comment 5 Yaniv Kaul 2017-02-13 07:08:43 UTC
This is merged to master. Is there a backport for 4.1?

Comment 6 Fred Rolland 2017-02-13 07:41:06 UTC
No, there is no backport.

Do you want it there?

Comment 7 Yaniv Kaul 2017-02-13 07:49:48 UTC
(In reply to Fred Rolland from comment #6)
> No, there is no backport.
> 
> Do you want it there?

That's what the target milestone says... If it's safe and useful (and it seems to be, coming from the field), yes please.

Comment 9 Kevin Alon Goldblatt 2017-02-22 16:58:54 UTC
Verified with the following code:
----------------------------------
ovirt-engine-4.1.1-0.1.el7.noarch
rhevm-4.1.1-0.1.el7.noarch
vdsm-4.19.5-1.el7ev.x86_64


Verified with the following scenario:
-------------------------------------------------
Steps to Reproduce:
1. Create a DC with a host and an NFS storage domain, for example.
2. Add an ISO domain.
3. Place host in maintenance mode.
4. Re-activate the host.
5. Above messages are NO LONGER REPORTED
6. Remove the ISO domain.
7. Repeat steps 3 and 4.
8. Messages are not reported.


Moving to VERIFIED!