Description of problem:
Currently, force remove fails when there are tasks running. Force removal should succeed in removing a locked VM even while tasks are running.

Version-Release number of selected component (if applicable):
si20

How reproducible:
100%

Steps to Reproduce:
1. Lock a VM (e.g. by creating a template from it)
2. Force remove the locked VM

Actual results:
Removal fails with: "Cannot force remove VM when there are running tasks."

Expected results:
The VM is removed.

Additional info:
A hedged reproduction sketch of step 2 via the REST API follows below.
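This is a minimal sketch, not a confirmed reproducer: it assumes a RHEV-M REST API reachable at https://rhevm.example.com/api, admin@internal credentials, a placeholder VM id, and that a forced removal is requested by a DELETE carrying an <action><force>true</force></action> body. All of these are assumptions and may differ per version.

# Hypothetical reproduction of step 2 (force remove a locked VM) over the REST API.
import requests

ENGINE = "https://rhevm.example.com/api"    # hypothetical engine URL
AUTH = ("admin@internal", "password")       # hypothetical credentials
VM_ID = "REPLACE-WITH-LOCKED-VM-ID"         # id of the VM locked in step 1

# DELETE the VM, asking for a forced removal in the action body.
resp = requests.delete(
    "%s/vms/%s" % (ENGINE, VM_ID),
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    data="<action><force>true</force></action>",
    verify=False,
)
print(resp.status_code)
print(resp.text)  # currently expected to contain "Cannot force remove VM when there are running tasks."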
Logs, please. Also, is this really expected?
(In reply to comment #1)
> Logs, please. Also, is this really expected?

Until we have a cancel option on this task, is this really a bug?
(In reply to comment #2)
> (In reply to comment #1)
> > Logs, please. Also, is this really expected?
>
> Until we have a cancel option on this task, is this really a bug?

You can wait 50 hours and hope the task dies.
Not easy. Allowing a task to be canceled in the middle might bring a lot of issues. Let's revisit in 3.2.

BTW, having r/o and r/w locks should help mitigate some of these issues.
I recommend adding an option to remove a VM forcibly without taking the storage domain status, task status, or any other logic into consideration.
(In reply to comment #5)
> I recommend adding an option to remove a VM forcibly without taking the
> storage domain status, task status, or any other logic into consideration.

Shouldn't this be covered by the lock release utility? Since we don't have cancel task, I prefer this to be done under GSS supervision (they may also help with proper cleanup of the task). Removing a single VM with leftovers is not the same as force removing a storage domain, where you can later clean things up on the storage side.

Removing the 3.2 target - this needs to be revisited for 3.3 with a proper solution based on cancel tasks, or comment #4, or both.
I encountered a new situation where floating disks failed to be removed. I suggest adding the option to forcibly remove floating disks as well (at least from the API); see the sketch below.
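A minimal sketch of the suggestion above, assuming the engine exposes floating (unattached) disks under a top-level /disks collection and that a plain DELETE removes one; the URL, credentials, and disk id are hypothetical placeholders.

import requests

ENGINE = "https://rhevm.example.com/api"    # hypothetical engine URL
AUTH = ("admin@internal", "password")       # hypothetical credentials
DISK_ID = "REPLACE-WITH-FLOATING-DISK-ID"   # id of the unattached (floating) disk

# Attempt to delete the floating disk through the top-level disks collection;
# this is the removal that reportedly fails today.
resp = requests.delete(
    "%s/disks/%s" % (ENGINE, DISK_ID),
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    verify=False,
)
print(resp.status_code)
print(resp.text)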
Closing old bugs. If this issue is still relevant/important in the current version, please re-open the bug.