Bug 848401 - [backend] Editing a data center whose only storage domain is the master prevents it from being removed afterwards
Keywords:
Status: CLOSED DUPLICATE of bug 845310
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Nobody's working on this, feel free to take it
QA Contact: Pavel Novotny
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-08-15 13:37 UTC by Pavel Novotny
Modified: 2012-08-15 13:42 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-08-15 13:42:45 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:


Attachments
VDSM log (88.78 KB, text/x-log), 2012-08-15 13:38 UTC, Pavel Novotny

Description Pavel Novotny 2012-08-15 13:37:08 UTC
Description of problem:
In the RHEV-M webadmin, when you put the last (master) data storage domain of a data center into maintenance, update the DC (its description, for example) and then try to remove the DC, the removal fails and the DC status switches from Maintenance to Non-Responsive. An exception is also thrown in the engine log.

Version-Release number of selected component (if applicable):
rhevm-3.1.0-11.el6ev (SI13.3)

How reproducible:
Always

Steps to Reproduce:
1. Create a new data center and add a cluster with a host in it.
2. Create a new NFS (master) data domain in the DC.
3. Put the master domain into maintenance.
4. In the Data Centers tab, edit the DC's description.
5. Try to remove the DC.

Note: You can swap steps 3 and 4; it has no effect on the result. A scripted version of steps 3-5 is sketched below.
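
For reference, here is a minimal sketch of steps 3-5 driven through the RHEV-M 3.1 REST API instead of the webadmin UI. This is an illustration only: the manager URL, credentials and the storage domain UUID are placeholders (the data center UUID is taken from the engine.log excerpt below), and it assumes the data center, cluster, host and NFS master domain from steps 1-2 already exist.

{{{
#!/usr/bin/env python
# Hedged sketch: reproduce steps 3-5 via the RHEV-M 3.1 REST API.
# BASE_URL, AUTH and SD_ID are placeholders for your own environment.
import requests

BASE_URL = "https://rhevm.example.com/api"      # hypothetical manager URL
AUTH = ("admin@internal", "password")           # placeholder credentials
DC_ID = "ae32878d-2639-47d7-b751-0ced2b0cd4d2"  # DC UUID from the engine.log below
SD_ID = "11111111-1111-1111-1111-111111111111"  # placeholder master domain UUID
XML = {"Content-Type": "application/xml"}

# Step 3: put the master storage domain into maintenance (deactivate).
r = requests.post("%s/datacenters/%s/storagedomains/%s/deactivate"
                  % (BASE_URL, DC_ID, SD_ID),
                  data="<action/>", headers=XML, auth=AUTH, verify=False)
print("deactivate: %s" % r.status_code)

# Step 4: edit the data center's description.
r = requests.put("%s/datacenters/%s" % (BASE_URL, DC_ID),
                 data="<data_center><description>edited</description></data_center>",
                 headers=XML, auth=AUTH, verify=False)
print("update: %s" % r.status_code)

# Step 5: try to remove the data center. Per this bug, the DELETE fails
# and the DC switches from Maintenance to Non-Responsive.
r = requests.delete("%s/datacenters/%s" % (BASE_URL, DC_ID),
                    auth=AUTH, verify=False)
print("delete: %s %s" % (r.status_code, r.text))
}}}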
  
Actual results:
Removal fails and the DC status changes to 'Non-Responsive'.
In the Events tab you can see the message "Invalid status on Data Center DC1. Setting status to Non-Responsive."
There is also an exception in engine.log, "IRSErrorException: SpmStart failed" (see Additional info), and related error messages in vdsm.log.


Expected results:
Data center is successfully removed.


Additional info:
This is a regression relative to rhevm-3.1.0-6.el6ev (SI10).

Attached logs: engine.log, vdsm.log.
Both logs begin at the time the NFS master domain was put into maintenance (step 3).

engine.log exception:
{{{
2012-08-15 12:00:29,709 ERROR [org.ovirt.engine.core.bll.storage.RemoveStoragePoolCommand] (pool-3-thread-49) [51431555] Failed destroy storage pool with id ae32878d-2639-47d7-b751-0ced2b0cd4d2 and after that failed to stop spm because of org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed
}}}
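
When triaging, a quick sketch like the following can pull the relevant error lines out of engine.log; the log path below is an assumed default, so adjust it for your installation.

{{{
# Hedged sketch: filter engine.log for the SpmStart / RemoveStoragePoolCommand
# errors quoted above. The path is an assumed default location.
import re

LOG = "/var/log/ovirt-engine/engine.log"  # assumed default engine log path
PATTERN = re.compile(r"IrsSpmStartFailedException|Failed destroy storage pool")

with open(LOG) as f:
    for line in f:
        if PATTERN.search(line):
            print(line.rstrip())
}}}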

Comment 1 Pavel Novotny 2012-08-15 13:38:00 UTC
Created attachment 604607 [details]
VDSM log

Comment 2 Haim 2012-08-15 13:42:45 UTC

*** This bug has been marked as a duplicate of bug 845310 ***

