Bug 989544 - hypervisors rotate SPM constantly
Summary: hypervisors rotate SPM constantly
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.1.5
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Sergey Gotliv
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-07-29 13:44 UTC by Allie DeVolder
Modified: 2019-05-20 11:04 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-05-07 08:57:43 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments:

Description Allie DeVolder 2013-07-29 13:44:29 UTC
Description of problem:
When an OVF file was manually added to an export domain from a third-party source, the hypervisors constantly rotated SPM status.

Version-Release number of selected component (if applicable):
vdsm 4.10

How reproducible:
Very

Steps to Reproduce:
1. Insert an OVF file into the export domain (see the sketch after these steps)
2. Activate export domain
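
(For reference, a minimal sketch of step 1. The mount point and UUIDs below are placeholders, not values from this bug's environment; on an NFS export domain the per-VM metadata is expected under master/vms/<vm-uuid>/<vm-uuid>.ovf.)

  # Hypothetical sketch of step 1 -- all paths and UUIDs are placeholders,
  # not taken from this bug's logs.
  import os
  import shutil

  export_root = "/rhev/data-center/mnt/nfs.example.com:_export"
  domain_uuid = "00000000-0000-0000-0000-000000000000"  # placeholder
  vm_uuid = "11111111-1111-1111-1111-111111111111"      # placeholder

  # Export domains keep one directory per VM under master/vms, holding
  # that VM's OVF descriptor.
  ovf_dir = os.path.join(export_root, domain_uuid, "master", "vms", vm_uuid)
  os.makedirs(ovf_dir)
  shutil.copy("third_party_appliance.ovf",
              os.path.join(ovf_dir, vm_uuid + ".ovf"))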

Actual results:
As soon as a hypervisor takes the SPM role, SPM fails and another host contends for it.

Expected results:
No SPM failures, even if there is something wrong with the export domain.
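
(The expectation amounts to defensive handling: a foreign or malformed OVF should be skipped with a warning rather than allowed to fail the SPM. A rough illustration of that idea, using plain ElementTree parsing rather than VDSM's actual OVF reader:)

  # Illustrative only -- not VDSM's actual OVF processing code. Parse each
  # OVF under the export domain's vms directory and skip files that fail,
  # instead of letting one bad file raise through the SPM start sequence.
  import glob
  import logging
  import os
  import xml.etree.ElementTree as ET

  def scan_ovf_files(vms_dir):
      vms = []
      for path in glob.glob(os.path.join(vms_dir, "*", "*.ovf")):
          try:
              vms.append(ET.parse(path).getroot())
          except ET.ParseError:
              # A third-party or corrupt OVF must not abort the scan.
              logging.warning("skipping unparsable OVF: %s", path)
      return vms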

Additional info:

Comment 9 Ayal Baron 2013-10-13 08:26:21 UTC
SPM rotation seems unrelated to the OVF changes; the logs show operations taking a *very* long time.
For example, vgchange (and other operations) is taking more than 5(!) minutes in this setup, while other lvm commands are running fine:
  9a631a12-8ff5-4599-9ae7-047de6d9ddb3::DEBUG::2013-08-15 21:25:31,954::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgchange --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'a%360a980004433374b485d444149685574|360a980004433374b485d444149685633%\', \'r%.*%\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --deltag MDT_POOL_SPM_ID=-1 --deltag MDT__SHA_CKSUM=7074e67f094d9140ed99d4f5aa36b671f96f0c5f --deltag MDT_POOL_SPM_LVER=21 --addtag MDT_POOL_SPM_ID=1 --addtag MDT__SHA_CKSUM=9e19c4270492f4fccddb27c8cfc2a0b7e5de35df --addtag MDT_POOL_SPM_LVER=22 2a7361c7-71a5-4f11-b8fa-372bf3dfdeee' (cwd None)
  9a631a12-8ff5-4599-9ae7-047de6d9ddb3::DEBUG::2013-08-15 21:30:35,905::misc::83::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0


The first time I see it is: 2013-08-15 17:00:57,789::misc::83::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgchange
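
(For anyone triaging similar logs: each Storage.Misc.excCmd command line is followed by a SUCCESS/FAILED result line carrying the same task UUID, so slow commands can be found by diffing the two timestamps. A rough helper, assuming only the log format quoted above; the threshold is arbitrary:)

  # Sketch: report excCmd invocations in a vdsm.log that took longer than
  # a threshold, pairing each command line with its result line via the
  # task UUID prefix. Assumes the log format quoted in this comment.
  import re
  from datetime import datetime

  LINE = re.compile(
      r"^\s*(?P<task>[0-9a-f-]{36})::DEBUG::"
      r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})::misc::\d+::"
      r"Storage\.Misc\.excCmd::\(<lambda>\) (?P<rest>.*)")

  def slow_commands(log_path, threshold_seconds=60):
      pending = {}  # task uuid -> (start timestamp, command text)
      with open(log_path) as log:
          for raw in log:
              m = LINE.match(raw)
              if not m:
                  continue
              ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S,%f")
              rest = m.group("rest")
              if rest.startswith(("SUCCESS", "FAILED")):
                  started = pending.pop(m.group("task"), None)
                  if started:
                      elapsed = (ts - started[0]).total_seconds()
                      if elapsed > threshold_seconds:
                          print("%6.0fs  %s" % (elapsed, started[1][:100]))
              else:
                  pending[m.group("task")] = (ts, rest)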

Comment 10 Eyal Edri 2014-02-10 09:51:22 UTC
Moving to 3.3.2, since 3.3.1 was built and moved to QE.

