Bug 721399 - [vdsm][performance] After fencing, vdsm takes 4 minutes to come up
Summary: [vdsm][performance] After fencing, vdsm takes 4 minutes to come up
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Eduardo Warszawski
QA Contact: Haim
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-07-14 13:57 UTC by Moran Goldboim
Modified: 2014-01-13 00:49 UTC
CC: 8 users

Fixed In Version: vdsm-4.9-92
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-12-06 07:31:35 UTC
Target Upstream Version:


Attachments
vdsm log (1.51 MB, application/x-gzip)
2011-07-14 13:57 UTC, Moran Goldboim


Links
Red Hat Product Errata RHEA-2011:1782 (normal, SHIPPED_LIVE): new packages: vdsm. Last updated 2011-12-06 11:55:51 UTC.

Description Moran Goldboim 2011-07-14 13:57:48 UTC
Created attachment 512904 [details]
vdsm log

Description of problem:
After being fenced, vdsm takes 4 minutes to come back up. The issue occurred on an FC deployment with 50 storage domains: sending an offline status to the master domain caused vdsm to be fenced, and it then took 4 minutes to come back up.

Version-Release number of selected component (if applicable):
vdsm-4.9-81.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. Take the master SD offline and make sure all of its paths are offline (echo offline > /sys/block/sdby/device/state for each path device; see the sketch after these steps).
2. vdsm will then be fenced.
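
A minimal sketch of step 1, assuming the master SD sits on a multipath device whose WWID is known (the WWID below is a placeholder, not a value from this bug) and that its path devices appear under the dm device's slaves directory in sysfs:

#!/bin/bash
# Sketch: offline every SCSI path backing the master SD's multipath device.
# Substitute the real WWID of the master SD LUN for the placeholder.
WWID=<master-sd-wwid>
DM=$(basename "$(readlink -f "/dev/mapper/$WWID")")   # resolves to e.g. dm-12
for slave in /sys/block/"$DM"/slaves/*; do
    dev=$(basename "$slave")                          # e.g. sdby
    echo offline > "/sys/block/$dev/device/state"     # take this path offline
done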
  
Actual results:


Expected results:


Additional info:
MainThread::INFO::2011-07-14 15:19:43,268::vdsm::71::vds::(run) I am the actual vdsm 4.9-81
Thread-11::DEBUG::2011-07-14 15:23:45,274::supervdsm::43::SuperVdsmProxy::(__init__) Connected to Super Vdsm
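
The two lines above bracket the slow startup: vdsm announces itself at 15:19:43 but only connects to Super Vdsm at 15:23:45, about 4 minutes later. A quick way to pull those markers out of the attached log (a sketch; assumes the attachment is unpacked to vdsm.log):

# Sketch: extract the two startup markers that bracket the ~4 minute gap.
grep -E 'I am the actual|Connected to Super Vdsm' vdsm.log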

Comment 2 Dan Kenigsberg 2011-07-15 18:54:44 UTC
I don't think there's anything we can do about it, as most of the time is wasted inside pvs:

MainThread::DEBUG::2011-07-14 15:19:43,512::lvm::359::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { pref
MainThread::DEBUG::2011-07-14 15:23:38,006::lvm::359::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  /dev/mapper/36006016066102900184f9fa5
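
The pvs command is issued at 15:19:43 and only returns at 15:23:38, accounting for almost the whole delay. The slow scan can be reproduced outside vdsm to confirm the time is spent in LVM itself; a sketch, with an illustrative devices section rather than vdsm's exact --config string (which is truncated in the log above):

# Sketch: time a pvs scan similar to the one vdsm runs at startup.
# The devices section is illustrative, not vdsm's exact filter.
time sudo -n /sbin/lvm pvs \
    --config 'devices { preferred_names=["^/dev/mapper/"] filter=["a|.*|"] }' \
    -o pv_name,vg_name,pv_size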

However, connectStoragePool keeps taking ages (6 minutes) to complete, which confuses rhevm. The log shows 34 connectStoragePool calls interlaced with 10 disconnectStoragePool calls. Omer, why are disconnectStoragePool calls sent at all?
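
The call counts can be checked against the attached log; a sketch, assuming the unpacked log is named vdsm.log:

# Sketch: count the lines mentioning each pool verb.
# -w keeps 'connectStoragePool' from also matching 'disconnectStoragePool'.
grep -wc 'connectStoragePool' vdsm.log
grep -c 'disconnectStoragePool' vdsm.log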

Comment 3 Omer Frenkel 2011-07-28 09:09:59 UTC
It's hard to say without the rhevm log, but guessing at the likely flow, I'd say this is the only host up in the pool, so the attempt to select an SPM every 10 seconds issues the connectStoragePool calls.
As for the disconnectStoragePool calls - since there are only 10, I can guess they are caused either by the user moving the host to maintenance, by a reconstruct master (which requires a disconnectStoragePool), or by the user trying to activate a host (not the first one, so less likely) that then moves to non-operational.

Comment 5 Eduardo Warszawski 2011-08-02 10:39:45 UTC
http://gerrit.usersys.redhat.com/#change,775

Comment 6 Haim 2011-09-25 15:56:43 UTC
(In reply to comment #5)
> http://gerrit.usersys.redhat.com/#change,775

Verified on 4.9-104. 

vdsm now initializes faster by responding to getVdsCaps regardless of the storage connection state; this eases the pain described above.
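
With the fix, a freshly restarted vdsm answers capability queries before its storage connections are up, so the manager no longer blocks on the slow pvs scan. A quick check from the host (a sketch using the vdsClient CLI shipped with vdsm of this era):

# Sketch: query the restarted host over the local vdsm API.
# With vdsm-4.9-92 and later this should return promptly even while
# storage connections are still being established.
vdsClient -s 0 getVdsCaps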

Comment 7 errata-xmlrpc 2011-12-06 07:31:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2011-1782.html

