
Bug 947240

Summary: virt-manager migration calls virDomainMigrateSetMaxDowntime(dom, 30, 12345678)
Product: Red Hat Enterprise Linux 6
Reporter: Eric Blake <eblake>
Component: python-virtinst
Assignee: virt-mgr-maint
Status: CLOSED WONTFIX
QA Contact: Virtualization Bugs <virt-bugs>
Severity: unspecified
Priority: unspecified
Docs Contact:
Version: 6.4
CC: acathrow, cwei, dallan, eblake, gscrivan, hyao, lcui, mjenner, mzhan, tzheng
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-02-20 14:47:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments:
.virt-manager/virt-manager.log contents during my test run (flags: none)

Description Eric Blake 2013-04-01 21:16:29 UTC
Description of problem:
After pressing 'Migrate...' but before the migration options popup box appears, virt-manager ends up sending two separate calls to virDomainMigrateSetMaxDowntime to the source side, both with an invalid flags value of 0xbc614e (decimal 12345678).


Version-Release number of selected component (if applicable):
virt-manager-0.9.0-18.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. open virt-manager with connections to two machines both running libvirtd, and start a guest on one of the two machines
2. in virt-manager, right-click on the running domain, and hit 'Migrate...'
3. inspect libvirtd logs
  
Actual results:
When I put a breakpoint in libvirtd, I was able to see that after hitting Migrate but before the migrate popup box appeared, two calls arrived on the machine with the running domain, each with an invalid flags argument, leaving a message like this twice in the libvirtd log:

2013-04-01 20:51:04.935+0000: 592: error : qemuDomainMigrateSetMaxDowntime:10522 : unsupported flags (0xbc614e) in function qemuDomainMigrateSetMaxDowntime
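For reference, the flags value in the log is simply the probe's decimal constant rendered in hex; a quick check (plain Python, no libvirt needed) confirms the two numbers are the same value:

```python
# virt-manager's probe passes the decimal constant 12345678;
# libvirtd reports rejected flags in hex as 0xbc614e.
bogus_flags = 12345678
print(hex(bogus_flags))  # 0xbc614e
```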


Expected results:
virt-manager shouldn't call libvirtd with an obviously bogus flags value; and since the call is failing (twice), there are probably further issues with virt-manager not being able to drive migration speed correctly.

Additional info:
Upstream commit febecf40 mentions:
+    def support_downtime(self):
+        # Note: this function has side effect
+        # if domain supports downtime, the downtime may be overriden to 30ms
+        return support.check_domain_support(self._backend,
+                        support.SUPPORT_DOMAIN_MIGRATE_DOWNTIME)
This call may intentionally pass a bogus flag, probing whether the API exists while using a flag that will be rejected so that no change is actually made, but it still looks suspicious when re-reading the logs.
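The probing pattern at work can be modeled without libvirt: call the API with flags no release is expected to accept, then classify the failure. A "not supported" error means the API is missing, while a "bad flags" rejection proves the API exists without modifying any state. A simplified sketch (the stub classes and helper names below are illustrative, not virtinst's actual code):

```python
# Simplified model of virtinst's "bogus flags" capability probe.
# The real code calls virDomain.migrateSetMaxDowntime through the
# libvirt-python bindings; stubs stand in here so the control flow
# can be shown standalone.

BOGUS_FLAGS = 12345678  # no libvirt release defines this flag combination

class NoSupportError(Exception):
    """Stands in for a libvirtError carrying VIR_ERR_NO_SUPPORT."""

class InvalidFlagsError(Exception):
    """Stands in for a libvirtError rejecting unsupported flags."""

class FakeDomain:
    """Stub domain that implements the API and rejects unknown flags."""
    def migrateSetMaxDowntime(self, downtime, flags):
        if flags != 0:
            raise InvalidFlagsError("unsupported flags (0x%x)" % flags)
        self.downtime = downtime  # only a flags=0 call would change state

class OldDomain:
    """Stub domain modeling libvirt without the API at all."""
    def migrateSetMaxDowntime(self, downtime, flags):
        raise NoSupportError("this function is not supported")

def supports_downtime(dom):
    """Probe: API missing -> False; bad-flags rejection -> True.

    The bogus flags guarantee the call fails on a supporting
    libvirtd, so the current downtime is never overwritten.
    """
    try:
        dom.migrateSetMaxDowntime(30, BOGUS_FLAGS)
    except NoSupportError:
        return False
    except InvalidFlagsError:
        return True
    return True  # unexpectedly succeeded; the API clearly exists

print(supports_downtime(FakeDomain()), supports_downtime(OldDomain()))  # True False
```

The side effect noted in the commit message follows from the second return path: the probe only proves support because the server rejected the flags, which is exactly the "scary" error line that ends up in the libvirtd log.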

Comment 1 Eric Blake 2013-04-01 21:20:23 UTC
Created attachment 730478 [details]
.virt-manager/virt-manager.log contents during my test run

Comment 2 Eric Blake 2013-04-01 21:31:02 UTC
It looks like this code in python-virtinst.git:virtinst/support.py is responsible:

    SUPPORT_DOMAIN_MIGRATE_DOWNTIME : {
        "function" : "virDomain.migrateSetMaxDowntime",
        # Use a bogus flags value, so that we don't overwrite existing
        # downtime value
        "args" : (30, 12345678),
    },

But its choice of bogus flags value might not remain bogus in a future version of libvirt.  Previously, libvirt added a GetMaxSpeed() counterpart so that SetMaxSpeed() would no longer be a write-only interface; perhaps we should do the same and add a GetMaxDowntime() function, so that where it exists, probing with a bogus value is no longer necessary.  The fallback for older libvirt would of course still have to rely on a bogus value, but we would at least be future-proofed against the introduction of flags that actually mean something; furthermore, it would avoid littering libvirtd logs with a scary message about misuse of the API when talking to newer libvirt.
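The future-proof probe suggested above would prefer a read-only Get call and fall back to the bogus-flags Set probe only on older libvirt. A stub-based sketch (the stub methods mirror libvirt-python's camelCase naming, but the helper and class names are hypothetical, not virtinst code):

```python
# Sketch of the proposed probe: try a read-only Get counterpart first,
# falling back to the bogus-flags Set probe only when Get is absent.

BOGUS_FLAGS = 12345678

class NoSupportError(Exception):
    """Stands in for a libvirtError carrying VIR_ERR_NO_SUPPORT."""

class NewDomain:
    """Models a libvirt with a read-only Get counterpart."""
    def migrateGetMaxDowntime(self, flags=0):
        return 30  # current downtime in ms; no state is modified

class LegacyDomain:
    """Models a libvirt with only the write-only Set interface."""
    def migrateSetMaxDowntime(self, downtime, flags):
        if flags != 0:
            raise ValueError("unsupported flags (0x%x)" % flags)

def supports_downtime(dom):
    # Preferred path: a Get call proves support with no side effects
    # and no "unsupported flags" noise in the libvirtd log.
    if hasattr(dom, "migrateGetMaxDowntime"):
        try:
            dom.migrateGetMaxDowntime()
            return True
        except NoSupportError:
            return False
    # Fallback for older libvirt: the bogus-flags Set probe.
    try:
        dom.migrateSetMaxDowntime(30, BOGUS_FLAGS)
    except NoSupportError:
        return False
    except ValueError:
        pass  # rejected flags still prove the API exists
    return True

print(supports_downtime(NewDomain()), supports_downtime(LegacyDomain()))  # True True
```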

There's also the question of why the function is being called twice per migration attempt, so while I'm moving this bug to python-virtinst at the moment, it may need cloning to libvirt and/or virt-manager.

Comment 3 hyao@redhat.com 2013-07-04 09:24:48 UTC
I can't reproduce this bug with the following packages:
# rpm -qa virt-manager libvirt
virt-manager-0.9.0-18.el6.x86_64
libvirt-0.10.2-18.el6_4.9.x86_64

I also can't reproduce it on RHEL 7:
# rpm -qa virt-manager libvirt 
libvirt-1.1.0-1.el7.x86_64
virt-manager-0.10.0-1.el7.noarch

Comment 4 hyao@redhat.com 2013-07-04 09:30:10 UTC
Could you please offer more details about the related packages? Thanks.

Comment 5 Eric Blake 2013-07-11 21:18:20 UTC
(In reply to hyao from comment #4)
> Could you please offer more details about the related packages. Thanks

I was definitely testing a devel version of libvirt at the time I reported this bug; maybe it's reproducible with virt-manager-0.9.0-18 but a newer libvirt-1.1.0-1, if you can manage to test that mixed environment.  In other words, the bogus call from virt-manager-0.9.0-18 is present whether or not libvirt logs it; it may be that libvirt 0.10.2-18 didn't log it and that you need a newer libvirt to get the log message.

Comment 6 Giuseppe Scrivano 2014-02-20 14:47:52 UTC
I think it is safe to close this for RHEL-6.6 and deal with it upstream.