Bug 741093

Summary: VDSM - Storage: getStoragePoolInfo takes several minutes when NFS master domain is blocked
Product: Red Hat Enterprise Virtualization Manager
Component: vdsm
Version: unspecified
Hardware: Unspecified
OS: Linux
Status: CLOSED WORKSFORME
Severity: medium
Priority: medium
Target Release: 3.1.0
Whiteboard: storage
oVirt Team: Storage
Reporter: Daniel Paikov <dpaikov>
Assignee: Ayal Baron <abaron>
QA Contact: yeylon <yeylon>
CC: abaron, amureini, bazulay, hateya, iheim, lpeer, scohen, srevivo, ykaul
Doc Type: Bug Fix
Last Closed: 2013-04-04 10:52:50 UTC
Attachments: vdsm.log

Description Daniel Paikov 2011-09-25 09:54:54 UTC
On a setup with multiple NFS storage domains, when the master domain is blocked, getStoragePoolInfo takes 3-4 minutes. This makes RHEVM mark all domains in the DC as status = Unknown, even though the DC is Up and there's an elected SPM.
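
For reference, the blocked-master condition can be simulated by dropping traffic to the NFS server that backs the master domain and then timing the call. The sketch below is a generic reproduction outline, not the exact steps used in this report; NFS_SERVER is a placeholder address to substitute.

    # Block traffic to the NFS server backing the master domain
    # (NFS_SERVER is a placeholder; substitute the real address).
    NFS_SERVER=192.0.2.10
    iptables -A OUTPUT -d "$NFS_SERVER" -j DROP

    # Time the call with the pool UUID from this report.
    time vdsClient -s 0 getStoragePoolInfo 2a81780e-6197-4c29-ac48-d68a308ea924

    # Restore connectivity afterwards.
    iptables -D OUTPUT -d "$NFS_SERVER" -j DROP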

[root@orange-vdsf data-center]# time vdsClient -s 0 getStoragePoolInfo 2a81780e-6197-4c29-ac48-d68a308ea924
        name = NFS-Local
        isoprefix = 
        pool_status = connected
        lver = 0
        domains = bb0cdb60-d2b6-43a9-9422-df681aa6da9e:Active,6ef7f818-3a17-4551-b0a4-9a395da649ca:Active
        master_uuid = 6ef7f818-3a17-4551-b0a4-9a395da649ca
        version = 0
        spm_id = 1
        type = NFS
        master_ver = 662
        bb0cdb60-d2b6-43a9-9422-df681aa6da9e = {'status': 'Active', 'alerts': []}
        6ef7f818-3a17-4551-b0a4-9a395da649ca = {'status': 'Active', 'diskfree': '7242776576', 'alerts': [], 'disktotal': '15733620736'}


real    3m44.208s
user    0m0.099s
sys     0m0.042s

Comment 1 Daniel Paikov 2011-09-25 09:55:50 UTC
Created attachment 524780
vdsm.log

Comment 3 Dan Kenigsberg 2011-10-03 16:47:27 UTC
Could it be that you are using a pre-2.6.32-198 kernel? Which glibc version are you running? Bug 689223 may have caused this.
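
(For reference, a quick way to check those versions on the host, using standard commands:)

    uname -r       # running kernel version
    rpm -q glibc   # installed glibc package version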

Comment 4 Daniel Paikov 2011-10-05 08:51:06 UTC
(In reply to comment #3)
> Could it be that you are using a pre-2.6.32-198 kernel? Which glibc version
> are you running? Bug 689223 may have caused this.

No, ilvovsky already thought of that and we tested with the newest kernel/glibc at the time.

Comment 5 RHEL Program Management 2011-10-07 16:10:51 UTC
Since RHEL 6.2 External Beta has begun and this bug remains
unresolved, it has been rejected, as it was not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 6 RHEL Program Management 2012-05-03 04:57:51 UTC
Since RHEL 6.3 External Beta has begun and this bug remains
unresolved, it has been rejected, as it was not proposed as an
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 9 Sean Cohen 2013-04-03 11:50:58 UTC
Please check if this is still relevant.

Comment 10 Daniel Paikov 2013-04-04 10:41:28 UTC
(In reply to comment #9)
> Please check if this is still relevant.

Doesn't reproduce anymore. Closing the BZ.