Red Hat Bugzilla – Bug 848401
[backend] Editing a data center whose only storage domain is the master prevents it from being removed afterwards
Last modified: 2012-08-15 09:42:45 EDT
Description of problem:
In the RHEV-M webadmin, when you put the master data domain of a data center (its only storage domain) into maintenance, update the DC (its description, for example), and then try to remove the DC, the removal fails and the DC status switches from Maintenance to Non-Responsive. An exception is also thrown in the engine log.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a new data center and add a cluster with a host in it.
2. Create a new NFS (master) data domain in the DC.
3. Put the master domain into maintenance.
4. On the Data Centers tab, edit the DC's description.
5. Now try to remove the DC.
Note: You can swap steps 3 and 4; it has no effect on the result.
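For scripted reproduction, the steps above can be sketched against the oVirt REST API (the storage-domain `deactivate` action and the data-center endpoints exist in this product line; the engine URL, credentials, and storage-domain ID below are illustrative placeholders, and the DC ID is taken from the attached engine.log). The script prints the requests as a dry run rather than sending them:

```shell
#!/bin/sh
# Dry-run sketch of the reproduction steps via the oVirt REST API.
# ENGINE, the credentials, and SD_ID are illustrative placeholders;
# DC_ID is the storage pool id that appears in the attached engine.log.
ENGINE="https://rhevm.example.com/api"
DC_ID="ae32878d-2639-47d7-b751-0ced2b0cd4d2"
SD_ID="MASTER_STORAGE_DOMAIN_ID"

req() {
    # Print (rather than send) the REST call; $1=method $2=URL $3=optional XML body
    echo "curl -k -u admin@internal:password -X $1 \"$2\"${3:+ -d '$3'}"
}

# Step 3: deactivate (put into maintenance) the master data domain
req POST "$ENGINE/datacenters/$DC_ID/storagedomains/$SD_ID/deactivate" "<action/>"
# Step 4: edit the data center's description
req PUT "$ENGINE/datacenters/$DC_ID" "<data_center><description>edited</description></data_center>"
# Step 5: try to remove the data center (the call whose failure is reported here)
req DELETE "$ENGINE/datacenters/$DC_ID"
```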
Actual results:
Removal fails and the DC status is changed to 'Non-Responsive'.
In the Events tab you can see the message "Invalid status on Data Center DC1. Setting status to Non-Responsive.".
There is also an exception in engine.log, "IRSErrorException: SpmStart failed" (see Additional info), and error messages in vdsm.log as well.
Expected results:
The data center is removed successfully.
This is a regression against rhevm-3.1.0-6.el6ev (SI10).
Attached logs: engine.log and vdsm.log.
Both logs begin at the time the NFS master domain was put into maintenance (step 3).
2012-08-15 12:00:29,709 ERROR [org.ovirt.engine.core.bll.storage.RemoveStoragePoolCommand] (pool-3-thread-49)  Failed destroy storage pool with id ae32878d-2639-47d7-b751-0ced2b0cd4d2 and after that failed to stop spm because of org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed
Created attachment 604607
*** This bug has been marked as a duplicate of bug 845310 ***