Bug 843798 - SPM host after setting SPM priority to -1 and subsequent vdsm restart still acts as SPM.
Summary: SPM host after setting SPM priority to -1 and subsequent vdsm restart still acts as SPM.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.1.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Nobody's working on this, feel free to take it
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-07-27 11:53 UTC by Petr Dufek
Modified: 2013-02-27 06:38 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Release Note
Doc Text:
Hosts must be moved to maintenance mode before changing their Storage Pool Manager (SPM) priority to '-1', otherwise the change will not take effect. An SPM priority of '-1' indicates the host must never be considered for the SPM role.
Clone Of:
Environment:
Last Closed: 2012-07-28 19:40:27 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:


Attachments
logs (1.48 MB, application/x-gzip)
2012-07-27 11:53 UTC, Petr Dufek

Description Petr Dufek 2012-07-27 11:53:11 UTC
Created attachment 600750 [details]
logs

Description of problem:

A host that holds the SPM role still acts as SPM after its SPM priority is set to -1 and vdsm is restarted.

Version-Release number of selected component (if applicable):

Host:   vdsm-4.9.6-21.0.el6_3.x86_64
        libvirt-0.9.10-21.el6_3.3.x86_64
RHEVM:  rhevm-3.1.0-6.el6ev.noarch


Steps to Reproduce:
1. install 3 hosts
2. set the SPM priorities to (-1, -1, 2)
3. deactivate/activate the hosts
4. wait until the 3rd host becomes SPM
5. set the 3rd host's SPM priority to -1 in the DB, so that all hosts now have an SPM priority of -1 (see the sketch after these steps)
6. restart VDSM on the 3rd host
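
For illustration of step 5 only, a minimal sketch that flips one host's SPM priority to -1 directly in the engine database. It assumes a PostgreSQL "engine" database with a vds_static table and vds_spm_priority / vds_name columns; the table, column, host name, and connection details are assumptions, not taken from this report.

#!/usr/bin/env python
# Hypothetical sketch of step 5: force one host's SPM priority to -1 in the
# engine DB. Table/column names (vds_static, vds_spm_priority, vds_name) and
# the connection parameters are assumptions, not confirmed by this bug.
import psycopg2

conn = psycopg2.connect(dbname="engine", user="engine",
                        password="engine", host="localhost")
try:
    with conn:                          # commit on success, rollback on error
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE vds_static SET vds_spm_priority = %s WHERE vds_name = %s",
                (-1, "host3"),          # "host3" stands in for the 3rd host
            )
            print("rows updated:", cur.rowcount)
finally:
    conn.close()

In practice the documented route (comment 1 and the release note below) is to move the host to maintenance first rather than editing the database while it still holds the SPM role.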
  
Actual results:
- the 3rd host still acts as SPM

Expected results:
- no host acts as SPM

Additional info:
attached logs from host: vdsm.log, spm-lock.log
attached logs from RHEVM: engine.log

spm-lock.log:
----------------
[2012-07-27 13:18:35] releasing lease spUUID=04875b86-a253-4c98-9053-c8d711c75847 id=3 lease_path=/rhev/data-center/mnt/10.34.63.199:_pd01/04875b86-a253-4c98-9053-c8d711c75847/dom_md/leases

vdsm.log:
---------------
MainThread::INFO::2012-07-27 13:18:35,488::vmChannels::135::vds::(stop) VM channels listener was stopped.
MainThread::DEBUG::2012-07-27 13:18:35,503::task::588::TaskManager.Task::(_updateState) Task=`17285ebb-d33d-4638-8a7d-1829b8ed4fb9`::moving from state init -> state preparing
MainThread::INFO::2012-07-27 13:18:35,503::logUtils::37::dispatcher::(wrapper) Run and protect: prepareForShutdown(options=None)
Thread-11::DEBUG::2012-07-27 13:18:35,505::storageServer::617::ConnectionMonitor::(_monitorConnections) Monitoring stopped
MainThread::WARNING::2012-07-27 13:18:35,654::hsm::2979::Storage.HSM::(__releaseLocks) Found lease locks, releasing
MainThread::DEBUG::2012-07-27 13:18:36,720::taskManager::80::TaskManager::(prepareForShutdown) Request to stop all tasks
MainThread::INFO::2012-07-27 13:18:36,725::logUtils::39::dispatcher::(wrapper) Run and protect: prepareForShutdown, Return response: None
MainThread::DEBUG::2012-07-27 13:18:36,725::task::1172::TaskManager.Task::(prepare) Task=`17285ebb-d33d-4638-8a7d-1829b8ed4fb9`::finished: None
MainThread::DEBUG::2012-07-27 13:18:36,726::task::588::TaskManager.Task::(_updateState) Task=`17285ebb-d33d-4638-8a7d-1829b8ed4fb9`::moving from state preparing -> state finished

engine.log:
---------------
2012-07-27 13:19:06,236 INFO  [org.ovirt.engine.core.bll.LoginUserCommand] (ajp-/0.0.0.0:8009-4) Running command: LoginUserCommand internal: false.
2012-07-27 13:19:06,246 WARN  [org.ovirt.engine.core.bll.GetConfigurationValueQuery] (ajp-/0.0.0.0:8009-4) calling GetConfigurationValueQuery (ApplicationMode) with null version, using default general for version
2012-07-27 13:19:08,514 INFO  [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (QuartzScheduler_Worker-49) [1510efa9] Running command: SetStoragePoolStatusCommand internal: true. Entities affected :  ID: 30a479e2-8dc6-4c98-9f2e-f254d4aa477b Type: StoragePool
2012-07-27 13:19:08,529 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (QuartzScheduler_Worker-49) [1510efa9] IrsBroker::Failed::GetStoragePoolInfoVDS due to: ConnectException: Connection refused
2012-07-27 13:19:09,382 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-63) [760cc54d] ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 1b6a40d0-d7c7-11e1-a6ab-001a4a013f69 : 10.34.63.137, VDS Network Error, continuing.
VDSNetworkException:
2012-07-27 13:19:11,392 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-44) [182bc200] ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 1b6a40d0-d7c7-11e1-a6ab-001a4a013f69 : 10.34.63.137, VDS Network Error, continuing.
VDSNetworkException:

Comment 1 Itamar Heim 2012-07-28 19:40:27 UTC
I assume the logic which detects the last SPM will resume with this one.
I don't think it's too much of an issue; it can be covered by a release note stating that when setting a host's SPM priority to -1, the host should be moved to maintenance first for the change to take effect.

Comment 2 Stephen Gordon 2012-08-09 14:33:34 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Hosts must be moved to maintenance mode before changing their Storage Pool Manager (SPM) priority to '-1', otherwise the change will not take effect. An SPM priority of '-1' indicates the host must never be considered for the SPM role.
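
To illustrate the workflow this note describes (maintenance first, then change the priority), a minimal sketch against the RHEV-M REST API. It assumes host deactivate/activate actions and an <spm><priority> element on the host resource; the endpoint paths, payload shape, credentials, and host id are assumptions for illustration, not confirmed by this bug.

# Hypothetical sketch: move the host to maintenance, set its SPM priority
# to -1, then reactivate it. Endpoint paths, the <spm><priority> payload,
# the credentials, and the host id are assumptions, not taken from this bug.
import requests

BASE = "https://rhevm.example.com/api"                  # assumed engine URL
AUTH = ("admin@internal", "password")                   # assumed credentials
HOST = BASE + "/hosts/HOST_UUID"                        # placeholder host id
XML = {"Content-Type": "application/xml"}

# 1. Maintenance first, so the host releases the SPM role cleanly.
requests.post(HOST + "/deactivate", data="<action/>", headers=XML,
              auth=AUTH, verify=False)

# 2. Only then lower the SPM priority to -1 (never consider this host for SPM).
requests.put(HOST, data="<host><spm><priority>-1</priority></spm></host>",
             headers=XML, auth=AUTH, verify=False)

# 3. Bring the host back once the change has been applied.
requests.post(HOST + "/activate", data="<action/>", headers=XML,
              auth=AUTH, verify=False)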

Comment 3 Stephen Gordon 2012-09-12 11:26:15 UTC
Adding the flag in line with Miki's request to use it for filtering release notes.

