On a setup with multiple NFS storage domains, when the master domain is blocked, getStoragePoolInfo takes 3-4 minutes. This makes RHEVM mark all domains in the DC as status = Unknown, even though the DC is Up and there is an elected SPM.

[root@orange-vdsf data-center]# time vdsClient -s 0 getStoragePoolInfo 2a81780e-6197-4c29-ac48-d68a308ea924
	name = NFS-Local
	isoprefix =
	pool_status = connected
	lver = 0
	domains = bb0cdb60-d2b6-43a9-9422-df681aa6da9e:Active,6ef7f818-3a17-4551-b0a4-9a395da649ca:Active
	master_uuid = 6ef7f818-3a17-4551-b0a4-9a395da649ca
	version = 0
	spm_id = 1
	type = NFS
	master_ver = 662
	bb0cdb60-d2b6-43a9-9422-df681aa6da9e = {'status': 'Active', 'alerts': []}
	6ef7f818-3a17-4551-b0a4-9a395da649ca = {'status': 'Active', 'diskfree': '7242776576', 'alerts': [], 'disktotal': '15733620736'}

real	3m44.208s
user	0m0.099s
sys	0m0.042s
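For illustration, the stall pattern described above can be avoided by probing each domain's status concurrently with a per-domain timeout, so one blocked NFS domain reports as Unknown instead of holding up the whole pool-info call. This is a minimal hypothetical sketch, not vdsm's actual implementation; all names (probe_domain, get_pool_domain_statuses, the timeout value) are made up for the example.

```python
# Hypothetical sketch: probe storage domains in parallel so a single blocked
# (e.g. unreachable NFS) domain cannot stall the pool-info response.
# Not vdsm code; names and structure are illustrative assumptions.
import concurrent.futures
import time


def probe_domain(uuid, delay):
    # Stand-in for a real stat() of the domain's metadata on shared storage;
    # 'delay' simulates how long the storage takes to respond.
    time.sleep(delay)
    return {'status': 'Active'}


def get_pool_domain_statuses(domains, timeout=5.0):
    """domains: dict of uuid -> simulated probe latency in seconds.

    Returns a dict of uuid -> status dict; a probe that exceeds
    'timeout' is reported as Unknown instead of blocking the caller.
    """
    statuses = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(domains)) as pool:
        futures = {pool.submit(probe_domain, u, d): u for u, d in domains.items()}
        for fut, uuid in futures.items():
            try:
                statuses[uuid] = fut.result(timeout=timeout)
            except concurrent.futures.TimeoutError:
                # Blocked domain: degrade just this domain's status.
                statuses[uuid] = {'status': 'Unknown'}
    return statuses
```

With sequential probing, a blocked master domain makes every caller wait for the full NFS timeout; with the concurrent variant above, the overall latency is bounded by the per-domain timeout regardless of how many domains are blocked.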
Created attachment 524780 [details] vdsm.log
Could it be that you are using a pre-2.6.32-198 kernel? Which glibc version do you have? Bug 689223 may have caused this.
(In reply to comment #3)
> could it be that you are using a pre- 2.6.32-198 kernel? which is your glibc
> version? bug 689223 may have caused this.

No, ilvovsky already thought of that, and we tested with the newest kernel/glibc at the time.
Since RHEL 6.2 External Beta has begun, and this bug remains unresolved, it has been rejected as it is not proposed as exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.
Since RHEL 6.3 External Beta has begun, and this bug remains unresolved, it has been rejected as it is not proposed as exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.
Please check if still relevant
(In reply to comment #9)
> Please check if still relevant

Doesn't reproduce anymore. Closing the BZ.