Bug 843798 - SPM host after setting SPM priority to -1 and subsequent vdsm restart still acts as SPM.
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.1.0
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: Nobody's working on this, feel free to take it
Depends On:
Blocks:
Reported: 2012-07-27 07:53 EDT by Petr Dufek
Modified: 2013-02-27 01:38 EST (History)
CC: 9 users

See Also:
Fixed In Version:
Doc Type: Release Note
Doc Text:
Hosts must be moved to maintenance mode before changing their Storage Pool Manager (SPM) priority to '-1', otherwise the change will not take effect. An SPM priority of '-1' indicates the host must never be considered for the SPM role.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-07-28 15:40:27 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
logs (1.48 MB, application/x-gzip)
2012-07-27 07:53 EDT, Petr Dufek

Description Petr Dufek 2012-07-27 07:53:11 EDT
Created attachment 600750 [details]
logs

Description of problem:

SPM host after setting SPM priority to -1 and subsequent vdsm restart still acts as SPM.

Version-Release number of selected component (if applicable):

Host:   vdsm-4.9.6-21.0.el6_3.x86_64
        libvirt-0.9.10-21.el6_3.3.x86_64
RHEVM:  rhevm-3.1.0-6.el6ev.noarch


Steps to Reproduce:
1. install 3 hosts
2. set the hosts' SPM priorities to (-1, -1, 2)
3. deactivate/activate the hosts
4. wait until the 3rd host becomes SPM
5. set the 3rd host's SPM priority to -1 directly in the DB, so all hosts now have SPM priority -1 (see the sketch after these steps)
6. restart VDSM on the 3rd host
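
For reference, a minimal Python sketch of steps 5 and 6. The database name ("engine"), the vds_static.vds_spm_priority column, and the host name are assumptions about the RHEV-M 3.1 setup, not values taken from this bug:

#!/usr/bin/env python
# Hedged sketch of steps 5-6. The database name ("engine"), the
# vds_static.vds_spm_priority column and the host name are assumptions,
# not values taken from this bug.
import subprocess

HOST_NAME = "host3.example.com"   # hypothetical 3rd host

# Step 5: force the SPM priority to -1 directly in the engine database.
subprocess.check_call([
    "psql", "-U", "postgres", "-d", "engine", "-c",
    "UPDATE vds_static SET vds_spm_priority = -1 "
    "WHERE vds_name = '%s';" % HOST_NAME,
])

# Step 6: restart VDSM on that host (run the command there, or via ssh).
subprocess.check_call(["ssh", "root@" + HOST_NAME, "service", "vdsmd", "restart"])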
  
Actual results:
- the 3rd host is still SPM

Expected results:
- no host acts as SPM

Additional info:
attached logs from host: vdsm.log, spm-lock.log
attached logs from RHEVM: engine.log

spm-lock.log:
----------------
[2012-07-27 13:18:35] releasing lease spUUID=04875b86-a253-4c98-9053-c8d711c75847 id=3 lease_path=/rhev/data-center/mnt/10.34.63.199:_pd01/04875b86-a253-4c98-9053-c8d711c75847/dom_md/leases

vdsm.log:
---------------
MainThread::INFO::2012-07-27 13:18:35,488::vmChannels::135::vds::(stop) VM channels listener was stopped.
MainThread::DEBUG::2012-07-27 13:18:35,503::task::588::TaskManager.Task::(_updateState) Task=`17285ebb-d33d-4638-8a7d-1829b8ed4fb9`::moving from state init -> state preparing
MainThread::INFO::2012-07-27 13:18:35,503::logUtils::37::dispatcher::(wrapper) Run and protect: prepareForShutdown(options=None)
Thread-11::DEBUG::2012-07-27 13:18:35,505::storageServer::617::ConnectionMonitor::(_monitorConnections) Monitoring stopped
MainThread::WARNING::2012-07-27 13:18:35,654::hsm::2979::Storage.HSM::(__releaseLocks) Found lease locks, releasing
MainThread::DEBUG::2012-07-27 13:18:36,720::taskManager::80::TaskManager::(prepareForShutdown) Request to stop all tasks
MainThread::INFO::2012-07-27 13:18:36,725::logUtils::39::dispatcher::(wrapper) Run and protect: prepareForShutdown, Return response: None
MainThread::DEBUG::2012-07-27 13:18:36,725::task::1172::TaskManager.Task::(prepare) Task=`17285ebb-d33d-4638-8a7d-1829b8ed4fb9`::finished: None
MainThread::DEBUG::2012-07-27 13:18:36,726::task::588::TaskManager.Task::(_updateState) Task=`17285ebb-d33d-4638-8a7d-1829b8ed4fb9`::moving from state preparing -> state finished

engine.log:
---------------
2012-07-27 13:19:06,236 INFO  [org.ovirt.engine.core.bll.LoginUserCommand] (ajp-/0.0.0.0:8009-4) Running command: LoginUserCommand internal: false.
2012-07-27 13:19:06,246 WARN  [org.ovirt.engine.core.bll.GetConfigurationValueQuery] (ajp-/0.0.0.0:8009-4) calling GetConfigurationValueQuery (ApplicationMode) with null version, using default general for version
2012-07-27 13:19:08,514 INFO  [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (QuartzScheduler_Worker-49) [1510efa9] Running command: SetStoragePoolStatusCommand internal: true. Entities affected :  ID: 30a479e2-8dc6-4c98-9f2e-f254d4aa477b Type: StoragePool
2012-07-27 13:19:08,529 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (QuartzScheduler_Worker-49) [1510efa9] IrsBroker::Failed::GetStoragePoolInfoVDS due to: ConnectException: Connection refused
2012-07-27 13:19:09,382 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-63) [760cc54d] ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 1b6a40d0-d7c7-11e1-a6ab-001a4a013f69 : 10.34.63.137, VDS Network Error, continuing.
VDSNetworkException:
2012-07-27 13:19:11,392 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-44) [182bc200] ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 1b6a40d0-d7c7-11e1-a6ab-001a4a013f69 : 10.34.63.137, VDS Network Error, continuing.
VDSNetworkException:
Comment 1 Itamar Heim 2012-07-28 15:40:27 EDT
I assume the logic which detects the last SPM will resume with this one.
I don't think it's too much of an issue; it can be covered by a release note stating that when setting a host's SPM priority to -1, the host should be moved to maintenance first for the change to take effect.
Comment 2 Stephen Gordon 2012-08-09 10:33:34 EDT
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Hosts must be moved to maintenance mode before changing their Storage Pool Manager (SPM) priority to '-1', otherwise the change will not take effect. An SPM priority of '-1' indicates the host must never be considered for the SPM role.
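
For reference, a minimal sketch of this workaround against the RHEV-M REST API: move the host to maintenance, change its SPM priority, then reactivate it. The engine URL, credentials, host UUID and the <storage_manager priority="..."/> element are illustrative assumptions, not confirmed against the 3.1 API:

#!/usr/bin/env python
# Hedged sketch: maintenance first, then change the SPM priority, then activate.
# The engine URL, credentials, host UUID and the <storage_manager priority="..."/>
# element are illustrative assumptions, not confirmed against the 3.1 REST API.
import requests

ENGINE = "https://rhevm.example.com"      # hypothetical engine address
AUTH = ("admin@internal", "password")     # hypothetical credentials
HOST_ID = "00000000-0000-0000-0000-000000000000"  # placeholder host UUID
XML = {"Content-Type": "application/xml"}

# 1. Move the host to maintenance so the priority change can take effect.
requests.post("%s/api/hosts/%s/deactivate" % (ENGINE, HOST_ID),
              data="<action/>", headers=XML, auth=AUTH, verify=False)

# 2. Set the SPM priority to -1 ("never SPM") while the host is in maintenance.
requests.put("%s/api/hosts/%s" % (ENGINE, HOST_ID),
             data='<host><storage_manager priority="-1"/></host>',
             headers=XML, auth=AUTH, verify=False)

# 3. Bring the host back up.
requests.post("%s/api/hosts/%s/activate" % (ENGINE, HOST_ID),
              data="<action/>", headers=XML, auth=AUTH, verify=False)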
Comment 3 Stephen Gordon 2012-09-12 07:26:15 EDT
Adding flag in line with Miki's request to use it for filtering release notes.
