Bug 475778 - [RHEL 5.3 Xen]: Guest hang on FV save/restore
Summary: [RHEL 5.3 Xen]: Guest hang on FV save/restore
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel-xen
Version: 5.3
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Don Dutile (Red Hat)
QA Contact: Martin Jenner
URL:
Whiteboard:
Depends On:
Blocks: 475849
 
Reported: 2008-12-10 13:14 UTC by Chris Lalancette
Modified: 2009-01-20 19:49 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-01-20 19:49:26 UTC
Target Upstream Version:
Embargoed:


Attachments
Proposed patch; seems to be working on test case (save/restore w/parallel make in background) (2.51 KB, patch)
2008-12-10 19:20 UTC, Don Dutile (Red Hat)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2009:0225 0 normal SHIPPED_LIVE Important: Red Hat Enterprise Linux 5.3 kernel security and bug fix update 2009-01-20 16:06:24 UTC

Description Chris Lalancette 2008-12-10 13:14:35 UTC
Description of problem:
I was testing out FV save/restore functionality on RHEL-5.3, with dom0 x86_64 kernel 2.6.18-125.el5, xen-3.0.3-73, and FV x86_64 kernel 2.6.18-125.el5.  My FV guest is assigned 8 vcpus, and no PV-on-HVM devices (this is important).  I successfully boot up my guest, and then execute:

# xm save rhel5fv_x86_64 /var/lib/xen/save/rhel5fv_x86_64-save

That appears to successfully complete.  Then I do:

# xm restore /var/lib/xen/save/rhel5fv_x86_64-save

That also appears to complete successfully, except that all 8 virtual CPUs just spin and the guest never comes back.
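
A small driver loop along these lines can repeat the same cycle for stress testing. This is only an illustrative sketch, assuming xm is in $PATH and using the guest and save-file names above:

#!/usr/bin/env python
# Illustrative sketch only: repeatedly save and restore the guest named
# above.  Assumes "xm" is in $PATH and the guest is already running.
import subprocess
import sys
import time

GUEST = "rhel5fv_x86_64"
SAVEFILE = "/var/lib/xen/save/%s-save" % GUEST

for i in range(10):
    print "save/restore iteration %d" % i
    if subprocess.call(["xm", "save", GUEST, SAVEFILE]) != 0:
        sys.exit("xm save failed")
    if subprocess.call(["xm", "restore", SAVEFILE]) != 0:
        sys.exit("xm restore failed")
    # Give the restored guest time to settle; a hung guest shows up as
    # vcpus spinning at 100% with an unresponsive console.
    time.sleep(30)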

I took a core-dump of the stuck domain, and saw this:

PID: 3003   TASK: ffff81002fea5040  CPU: 0   COMMAND: "suspend"
 #0 [ffff81002aa49ca0] schedule at ffffffff80063035
 #1 [ffff81002aa49ca8] thread_return at ffffffff80063097
 #2 [ffff81002aa49d68] __next_cpu at ffffffff80148b8b
 #3 [ffff81002aa49db8] physflat_send_IPI_allbutself at ffffffff80079441
 #4 [ffff81002aa49e18] __smp_call_function at ffffffff800759f6
 #5 [ffff81002aa49e68] smp_call_function at ffffffff80075b2c
 #6 [ffff81002aa49e98] __xen_suspend at ffffffff881b69a1
 #7 [ffff81002aa49ed8] xen_suspend at ffffffff881b6615
 #8 [ffff81002aa49ee8] kthread at ffffffff800324b3
 #9 [ffff81002aa49f48] kernel_thread at ffffffff8005dfb1

PID: 0      TASK: ffff81002fc34100  CPU: 1   COMMAND: "swapper"
 #0 [ffff81002fc55e18] schedule at ffffffff80063035
 #1 [ffff81002fc55e20] thread_return at ffffffff80063097
 #2 [ffff81002fc55ec8] default_idle at ffffffff8006b2b3
 #3 [ffff81002fc55ef0] cpu_idle at ffffffff80048e69

(repeat CPU 1 entry for the other 6 CPUs)

So, here's what happened:
1)  The xm save command caused a "suspend" value to be written to xenbus.
2)  This, in turn, fired a watch in the guest kernel.  The guest kernel responded by starting to run the xen_suspend() function, which does an smp_call_function() to quiesce all of the other CPUs.
3)  However, before this action could complete inside the guest, the tools in dom0 decided that this was an HVM domain and, further, *did not realize that PV-on-HVM drivers were active in the guest*.  Because of this, the tools simply pulled the plug: they saved the memory and killed the domain.
4)  On restore, everything was put back in place, but the guest kernel still thought it was in the middle of shutting down.  At that point, the guest was completely wedged.

In the tools, here's what's happening:
    def shutdown(self, reason):
        if not reason in shutdown_reasons.values():
            raise XendError('Invalid reason: %s' % reason)
        if self.domid == 0:
            raise XendError("Can't specify Domain-0")
        self.storeDom("control/shutdown", reason)

        # HVM domain shuts itself down only if it has PV drivers
        if self.is_hvm():
            hvm_pvdrv = xc.hvm_get_param(self.domid, HVM_PARAM_CALLBACK_IRQ)
            if not hvm_pvdrv:
                code = reverse_shutdown_reasons[reason]
                xc.domain_destroy_hook(self.domid)
                log.info("HVM save:remote shutdown dom %d!", self.domid)
                xc.domain_shutdown(self.domid, code)

As you can see, the shutdown() method queries HVM_PARAM_CALLBACK_IRQ to see whether it has a value.  This value is stored in the hypervisor.  However, for some reason (which I haven't yet determined), it comes back as false, so the tools go on to pull the plug on the domain without waiting for it to shut itself down.
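
For reference, the same parameter can be read directly from dom0 with the binding xend uses.  The following is only an illustrative diagnostic sketch, assuming the xen.lowlevel.xc module is importable in dom0 and that HVM_PARAM_CALLBACK_IRQ is parameter number 0 (its value in xen/include/public/hvm/params.h):

# Illustrative diagnostic sketch (not part of xend): read the callback-IRQ
# parameter the tools use to decide whether PV drivers are present.
# Assumes xen.lowlevel.xc is importable in dom0 and that
# HVM_PARAM_CALLBACK_IRQ == 0, per xen/include/public/hvm/params.h.
import sys
import xen.lowlevel.xc

HVM_PARAM_CALLBACK_IRQ = 0

domid = int(sys.argv[1])
xc = xen.lowlevel.xc.xc()
value = xc.hvm_get_param(domid, HVM_PARAM_CALLBACK_IRQ)
# A falsy value means xend will hard-destroy the domain on save instead of
# waiting for the guest to suspend itself.
print "dom%d HVM_PARAM_CALLBACK_IRQ = %r" % (domid, value)

If this prints 0 for a guest whose kernel is in fact servicing the control/shutdown watch, the tools take the destructive branch shown above.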

Note that I believe the RHEL-4 kernel has the same problem, although I have not yet verified that this is the case.

Comment 1 Don Dutile (Red Hat) 2008-12-10 19:20:52 UTC
Created attachment 326538 [details]
Proposed patch; seems to be working on test case (save/restore w/parallel make in background)

From upstream xen-unstable, cset 18669

Comment 3 Chris Lalancette 2008-12-12 08:31:11 UTC
Just for the record, my initial analysis was totally incorrect; I was confused by some error messages in xend.log.  What's actually going on is much simpler: it's a simple deadlock in the guest kernel on the suspend_lock.  The above-mentioned cset fixes it by removing the lock, and in some more rigorous testing it seemed to work very well.
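
The comment does not spell out the exact lock cycle, so the following is only a toy model of the failure class (a non-recursive lock that is still held when the suspend path needs it again); the function names are hypothetical and this is not the kernel code:

# Toy model only -- NOT the RHEL-5 kernel code, and the exact suspend_lock
# cycle is not described in this report.  It just illustrates the failure
# class: a non-recursive lock that is still held when the suspend path
# needs it again, so the second attempt can never make progress.
import threading

suspend_lock = threading.Lock()

def first_suspend_request():
    # The first request takes the lock and, in the buggy scenario, never
    # reaches the point where it would release it.
    suspend_lock.acquire()

def second_suspend_request():
    # Non-blocking acquire so the demo terminates instead of hanging the
    # way the real guest did.
    if not suspend_lock.acquire(False):
        print "suspend_lock still held -- this attempt would block forever"

first_suspend_request()
second_suspend_request()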

Chris Lalancette

Comment 5 Don Zickus 2008-12-16 19:15:48 UTC
in kernel-2.6.18-127.el5
You can download this test kernel from http://people.redhat.com/dzickus/el5

Comment 8 errata-xmlrpc 2009-01-20 19:49:26 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2009-0225.html

