Steps to reproduce:
1. Create a new pool with 1 VM.
2. Detach this VM from the pool.
3. Select the pool and press Edit.

The edit pool dialog hangs forever.

Additional info, frontend exceptions:

java.lang.NullPointerException: null
    at org.ovirt.engine.ui.uicommonweb.models.vms.ExistingPoolModelBehavior.Template_SelectedItemChanged(ExistingPoolModelBehavior.java:37)
    at org.ovirt.engine.ui.uicommonweb.models.vms.UnitVmModel.Template_SelectedItemChanged(UnitVmModel.java:1441)

java.lang.NullPointerException: null
    at org.ovirt.engine.ui.uicommonweb.models.vms.ExistingPoolModelBehavior.ChangeDefualtHost(ExistingPoolModelBehavior.java:25)
    at org.ovirt.engine.ui.uicommonweb.models.vms.VmModelBehaviorBase$8.OnSuccess(VmModelBehaviorBase.java:409)
The problem here is that most of the data, like memory, OS type, etc., is not stored on the vm_pools entity but only on the VMs of the pool. So when we want some info about the pool, we take an arbitrary VM from that pool and read the info from it. Consequently, when the pool has no VMs, we know nearly nothing about the pool itself.

I see the following possibilities:
1. Don't let the user detach all the VMs from the pool.
2. When the user detaches all the VMs from the pool, delete the pool automatically.
3. When the pool has no VMs, disable the "edit" button, so only "delete" will be enabled for the pool.
4. Store all the info about the pool on the vm_pools entity, so we don't rely on the VMs of the pool.
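To make the failure mode concrete, here is a minimal, self-contained sketch of the "read pool info from an arbitrary VM" pattern described above. The class and method names are illustrative only (they are not the actual oVirt engine classes): with no guard, an empty pool yields a null "representative VM" and dereferencing it produces exactly the kind of NullPointerException shown in the frontend stack traces.

```java
import java.util.List;

class PoolInfoSketch {
    static class Vm {
        final int memoryMb;
        final String osType;
        Vm(int memoryMb, String osType) {
            this.memoryMb = memoryMb;
            this.osType = osType;
        }
    }

    // Mimics "get a random VM from that pool and read the info from it":
    // returns null when the pool has no VMs.
    static Vm representativeVm(List<Vm> poolVms) {
        return poolVms.isEmpty() ? null : poolVms.get(0);
    }

    static String describePool(List<Vm> poolVms) {
        Vm vm = representativeVm(poolVms);
        // Without this null check, vm.osType below would throw a
        // NullPointerException for an empty pool.
        if (vm == null) {
            return "pool has no VMs; properties unknown";
        }
        return vm.osType + ", " + vm.memoryMb + " MB";
    }
}
```

This is only a model of the problem, not a proposed fix; the options listed above are about avoiding the empty-pool state (or disabling edit for it) rather than null-guarding every property read.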
Created attachment 607448 [details]
backend logs

I don't quite see why backend logs are needed, as this is not a backend bug, but attaching anyway. They cover:
- start of the application
- creating a new pool with 1 VM
- detaching this VM
- clicking Edit on this pool (the edit pool dialog hangs)
- stopping the application
After a discussion with Einav we came to the following conclusion: since the vm_pools entity cannot store all the values, it does not make sense to let the user edit pools which have no VMs. So, when the user selects a pool which has no VMs, the "edit" button will be disabled, with a title (tooltip) describing why.

@Miki: Is this OK with you? If yes, could you please provide the title message?
Regarding bug 500413 - note that it was closed as NOT-A-BUG, on the grounds that the user might want to add VMs to the empty pool later on; however, since an empty pool holds no information, there is no point in letting the user edit it.

The suggestion in comment #6 ("have a warning when you choose to remove all the VMs which would warn you that you loose your pool this way, and if the user accepts it, than also delete the pool itself") sounds like a great solution:

When attempting to detach x VMs from the pool, compare x to the number of VMs already assigned to the pool. If equal:
- show a "note" in the "are you sure" dialog that "you chose to detach all VMs from the pool; this will remove the pool object from the system" (or similar)
- [after clicking "ok"] detach all VMs from the pool + remove the pool object itself.

There can be a corner case in which you choose to remove all VMs from the pool while, in the background (due to a previous/parallel action from another client), new VMs are being added to it, which can lead to unexpected results; not a typical scenario + probably no harsh results, though.
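The proposed flow above can be sketched in a few lines. This is a hedged illustration with hypothetical names, not the real engine/UI code: compare the number of VMs being detached with the pool's current VM count, and when they match, add the extra note to the confirmation dialog and mark the pool for removal.

```java
class DetachFlowSketch {
    // Extra note for the "are you sure" dialog; null when detaching
    // only some of the pool's VMs, so no special warning is needed.
    static String confirmationNote(int vmsToDetach, int vmsInPool) {
        if (vmsToDetach >= vmsInPool) {
            return "You chose to detach all VMs from the pool; "
                 + "this will remove the pool object from the system.";
        }
        return null;
    }

    // After the user confirms: should the pool object itself be removed
    // in addition to detaching the VMs?
    static boolean shouldRemovePool(int vmsToDetach, int vmsInPool) {
        return vmsToDetach >= vmsInPool;
    }
}
```

Note that this client-side comparison is only a hint; as discussed below, the actual pool removal still has to be re-validated on the backend because the count may have changed in the meantime.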
(In reply to comment #7)
> There can be a corner case in which you choose to remove all VMs from the
> pool while, in the background (due to previous/parallel action from another
> client), new VMs are being added to it, which can lead to unexpected
> results; not a typical scenario + probably no harsh results, though.

Still, we want to make sure we won't delete the pool in such a case.
in upstream: http://gerrit.ovirt.org/#/c/7583/
(In reply to comment #8)
> (In reply to comment #7)
> > There can be a corner case in which you choose to remove all VMs from the
> > pool while, in the background (due to previous/parallel action from another
> > client), new VMs are being added to it, which can lead to unexpected
> > results; not a typical scenario + probably no harsh results, though.
> though we want to make sure we won't delete the pool in such case

I assume that the "remove vm-pool" action triggered by detaching the (allegedly) last VMs from the pool will simply fail (in CanDoAction or similar), because these were not really the last VMs in the pool. So in this state the pool will be kept.
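The assumption above can be sketched as follows. This is loosely modeled on the engine's CanDoAction-style validation pattern, but the class and method names here are hypothetical: the pool removal re-checks the pool's VM count at validation time, so if a parallel client added VMs after the detach was requested, the removal fails and the pool is kept.

```java
class RemovePoolValidationSketch {
    // Stand-in for a CanDoAction-style validation step: the pool may
    // only be removed if it is truly empty at execution time.
    static boolean canRemovePool(int vmsCurrentlyInPool) {
        return vmsCurrentlyInPool == 0;
    }

    static String tryRemovePool(int vmsCurrentlyInPool) {
        if (!canRemovePool(vmsCurrentlyInPool)) {
            // Covers the corner case: another client added VMs in the
            // background, so the "last VMs" were not really the last.
            return "validation failed: pool is not empty, keeping it";
        }
        return "pool removed";
    }
}
```

The key design point is that the emptiness check happens inside the removal action itself, not in the client that requested the detach, which closes the race window described in comment #7.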
merged upstream: 3a0b93dd38c5767bf9e66cc8ba3e1d08472c331f