Bug 989544 - hypervisors rotate SPM constantly
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Platform: All (Linux)
Priority: high  Severity: high
Target Release: 3.5.0
Assigned To: Sergey Gotliv
QA Contact: Aharon Canan
Keywords: Triaged
Reported: 2013-07-29 09:44 EDT by Allan Voss
Modified: 2016-02-10 12:12 EST
CC: 9 users

Doc Type: Bug Fix
Last Closed: 2014-05-07 04:57:43 EDT
Type: Bug
oVirt Team: Storage

Attachments: None
Description Allan Voss 2013-07-29 09:44:29 EDT
Description of problem:
When an OVF file from a third-party source was manually added to an export domain, the hypervisors constantly rotated the SPM role.

Version-Release number of selected component (if applicable):
vdsm 4.10

How reproducible:

Steps to Reproduce:
1. Manually insert an OVF file into an export domain.
2. Activate the export domain.
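A minimal sketch of step 1, dropping a third-party OVF into an export storage domain's directory layout. The real domain is NFS-mounted under /rhev/data-center/mnt/...; the local stand-in path, the VM UUID, and the stub OVF content below are all hypothetical, chosen so the sketch is safe to run:

```python
# Hypothetical sketch: a real export domain keeps OVF descriptors under
# master/vms/<vm-uuid>/<vm-uuid>.ovf. The DOMAIN path and VM_UUID here
# are stand-ins, not values from this bug.
from pathlib import Path

DOMAIN = Path("/tmp/export-domain-demo")          # stand-in for the NFS mount
VM_UUID = "123e4567-e89b-12d3-a456-426614174000"  # hypothetical VM UUID

ovf_dir = DOMAIN / "master" / "vms" / VM_UUID
ovf_dir.mkdir(parents=True, exist_ok=True)

# Write a minimal stub OVF envelope in place of the third-party file.
(ovf_dir / f"{VM_UUID}.ovf").write_text(
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/"/>\n'
)
```

After this, activating the export domain in the manager should trigger the behavior described below.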

Actual results:
As soon as a hypervisor takes the SPM role, SPM fails and another host contends for it.

Expected results:
No SPM failures, even if something is wrong with the export domain.

Additional info:
Comment 9 Ayal Baron 2013-10-13 04:26:21 EDT
SPM rotation seems unrelated to the OVF changes; the logs show that some operations are taking a *very* long time.
For example, vgchange (among other operations) is taking more than 5(!) minutes in this setup, while other LVM commands are running fine:
9a631a12-8ff5-4599-9ae7-047de6d9ddb3::DEBUG::2013-08-15 21:25:31,954::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a%360a980004433374b485d444149685574|360a980004433374b485d444149685633%\', \'r%.*%\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --deltag MDT_POOL_SPM_ID=-1 --deltag MDT__SHA_CKSUM=7074e67f094d9140ed99d4f5aa36b671f96f0c5f --deltag MDT_POOL_SPM_LVER=21 --addtag MDT_POOL_SPM_ID=1 --addtag MDT__SHA_CKSUM=9e19c4270492f4fccddb27c8cfc2a0b7e5de35df --addtag MDT_POOL_SPM_LVER=22 2a7361c7-71a5-4f11-b8fa-372bf3dfdeee' (cwd None)
9a631a12-8ff5-4599-9ae7-047de6d9ddb3::DEBUG::2013-08-15 21:30:35,905::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0

The first time I see it is: 2013-08-15 17:00:57,789::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgchange
Comment 10 Eyal Edri 2014-02-10 04:51:22 EST
Moving to 3.3.2, since 3.3.1 was built and moved to QE.
