Bug 1070835
| Summary: | Editing VM clears the VNIC profiles | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Pablo Iranzo Gómez <pablo.iranzo> |
| Component: | ovirt-engine-webadmin-portal | Assignee: | Lior Vernia <lvernia> |
| Status: | CLOSED ERRATA | QA Contact: | Martin Pavlik <mpavlik> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 3.3.0 | CC: | acathrow, bazulay, ecohen, iheim, jhunsaker, lbopf, lpeer, lvernia, masayag, myakove, nyechiel, pablo.iranzo, Rhev-m-bugs, yeylon |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | 3.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | network | | |
| Fixed In Version: | av6 | Doc Type: | Known Issue |
| Doc Text: | Previously, editing a virtual machine in a non-default cluster without a network attachment reset the VNIC (virtual network interface card) profiles. This happened because the cluster-selection logic could run twice during initialization of the dialog, firing two sets of backend queries. Now the correct cluster is selected the first time, only one set of backend queries runs, and the VNIC profile is no longer cleared (see the sketch below this table). | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| Clones: | 1075119, 1078215 (view as bug list) | Environment: | |
| Last Closed: | 2014-06-09 15:04:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Network | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1075119 | | |
| Bug Blocks: | 1078215, 1090946 | | |
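The Doc Text above describes a double-initialization pattern: the Edit VM dialog could select a cluster twice, and the second round of backend queries rebuilt the VNIC profile list, dropping the profile already attached to the VM. The following is a minimal sketch of that pattern and of the guard the fix describes; all class and method names here are hypothetical and simplified, not the actual ovirt-engine GWT code.

```java
import java.util.List;

public class EditVmDialogModel {

    private String selectedClusterId;
    private List<String> vnicProfiles; // profiles offered for the VM's NICs
    private boolean initialized;

    // Re-querying on every cluster change is what cleared the profiles:
    // each query replaces the list, so a profile selected from the old
    // list is no longer present in the new one.
    public void onClusterSelected(String clusterId) {
        if (clusterId.equals(selectedClusterId)) {
            return; // same cluster, skip the redundant backend round-trip
        }
        selectedClusterId = clusterId;
        vnicProfiles = queryVnicProfilesFromBackend(clusterId);
    }

    // Flow per the Doc Text fix: resolve the VM's own cluster before
    // firing any profile query, so only one set of backend queries runs
    // and the profile attached to the VM stays selectable.
    public void initialize(String vmClusterId) {
        if (initialized) {
            return; // guard against the dialog being initialized twice
        }
        initialized = true;
        onClusterSelected(vmClusterId); // single query, correct cluster
    }

    // Stand-in for the asynchronous backend query (hypothetical).
    private List<String> queryVnicProfilesFromBackend(String clusterId) {
        return List.of("rhevm-profile@" + clusterId);
    }
}
```

In the buggy flow, initialization would effectively call `onClusterSelected` first with a default cluster and only afterwards with the VM's real cluster, so the profile list was rebuilt twice and the VM's attached profile was lost in between; selecting the correct cluster on the first pass removes the second rebuild entirely.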
Description
Pablo Iranzo Gómez 2014-02-27 14:54:46 UTC
---

This sounds like something I've fixed in the past, which as far as I remember should be in RHEV-3.3. I'll try to reproduce on a clean deployment and see if I can pinpoint the problem.

---

Lior, did you have the chance to test on your side? It would be nice to have an estimate for a fix. Thanks!

---

Hi Pablo,

Yes, on a clean deployment I couldn't reproduce this behavior. Does it happen with all VMs? With some of them? Does it only happen once a VM has been run and stopped, or does it happen on freshly-created VMs? Any additional information could prove helpful.

Yours, Lior.

---

Also, is it possible that the customer is running some beta version (apologies, the stated rpm version tells me nothing), i.e. that my test took place on a different version than theirs?

---

Lior,

I was able to reproduce on one of my machines with RHEVMdbviewer. That machine was installed using the official channels (no beta there), and after restoring the customer database we ran into this issue. I've just tried with the one that was stopped in that database snapshot, but as per their last update, this is happening with more than one VM, as they usually change parameters like the VLAN and the RAM assigned to them.

Do you want me to make the DB available somewhere for testing?

Thanks, Pablo

---

Hi Pablo,

What about when using the same machine with another DB? I'm just trying to establish that this issue is related to the state of their DB (which wouldn't have been my first guess). In which case, an excerpt from the engine log following opening the "Edit VM" dialog would be helpful.

All VMs, or just more than one? What about when creating a new VM from scratch with a couple of interfaces, then editing that?

Thanks!
Lior.

---

(In reply to Lior Vernia from comment #9)

Lior,

I'm attaching the engine.log with the following marks in it:

    ### ssh to RHEV-M
    echo "START VM EDIT" >> /var/log/ovirt-engine/engine.log
    ### Then, using the RHEV-M UI, click Edit on an existing VM, edit the RAM, etc., and click OK to reproduce the issue in the logs.
    ### ssh to RHEV-M
    echo "STOP VM EDIT" >> /var/log/ovirt-engine/engine.log

    ### ssh to RHEV-M
    echo "START VM CREATE" >> /var/log/ovirt-engine/engine.log
    ### Then, in the RHEV-M UI, create a new VM, defining everything needed (RAM, disks, etc.).
    ### ssh to RHEV-M
    echo "STOP VM CREATE" >> /var/log/ovirt-engine/engine.log

    ### ssh to RHEV-M
    echo "START NEW-VM EDIT" >> /var/log/ovirt-engine/engine.log
    ### Then, in RHEV-M, edit the newly created VM and check whether the NIC profile is filled in in the dialog; change the RAM, for example, and click OK to save.
    ### ssh to RHEV-M
    echo "STOP NEW-VM EDIT" >> /var/log/ovirt-engine/engine.log

I'm seeing:

    2014-03-05 12:31:27,288 INFO [org.ovirt.engine.core.bll.MultipleActionsRunner] (ajp-/127.0.0.1:8702-9) MultipleActionsRunner of type AddVmInterface invoked with no actions
    2014-03-05 12:31:27,356 INFO [org.ovirt.engine.core.bll.network.vm.UpdateVmInterfaceCommand] (pool-4-thread-48) [5f3ad88c] Running command: UpdateVmInterfaceCommand internal: false. Entities affected : ID: fd136a67-8dbf-4760-97c6-04be9fd73514 Type: VM
    2014-03-05 12:31:27,370 INFO [org.ovirt.engine.core.bll.MultipleActionsRunner] (ajp-/127.0.0.1:8702-11) MultipleActionsRunner of type RemoveVmInterface invoked with no actions
    2014-03-05 12:31:27,370 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-4-thread-48) [5f3ad88c] Correlation ID: 5f3ad88c, Call Stack: null, Custom Event ID: -1, Message: Interface nic1 (VirtIO) was updated for VM testvm. (User: admin@internal)

I'm checking the behaviour on the UI side with the customer and will update later. Let me know if I should check anything else.

Pablo

---

Hi,

The issue appeared on both the old and the new VM. What should be the next steps for further diagnosing this?

Thanks,

---

Since I can't see much wrong in the engine log, I'm assuming it is something wrong in the GUI code itself that is somehow dependent on the state of the deployment (as it doesn't seem to happen on a clean one). I will probably need to recreate the customer's DB on my environment to debug this properly.

---

Hi Pablo,

I found the problem. There were actually two bugs: one more common and easier to fix, another less common and more difficult to understand. I have fixes pending for both; I just need to get them merged on other branches before they can be merged for 3.3.z. I'm estimating it might take about a week to merge them everywhere.

Nir, I think this might be a 3.3.2 blocker, what do you think?

Lior.

---

Thanks Lior,

Let me know when we have further progress so I can report to the customer.

Thanks, Pablo

PS: If you have the Gerrit commits for the fixes, that would be great!

---

Added trackers (of course the patches might still change until they're merged).

---

*** Bug 1075119 has been marked as a duplicate of this bug. ***

---

This is too late for 3.3.2. I am changing the target version to 3.4.0, and we will include it in the next z-stream release, which is 3.3.3. As there is a workaround for this, we can report it as a known issue.

Lior, can you please fill in the Doc Text info?

Thanks, Nir

---

See the Doc Text field to understand exactly under what conditions this occurs.

---

Verified in av6.

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0506.html