Bug 700565 - RHEL6.1 32bit xen hvm guest crash randomly
Summary: RHEL6.1 32bit xen hvm guest crash randomly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel-xen
Version: 5.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: 5.7
Assignee: Igor Mammedov
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 697793
Depends On:
Blocks: 514489 653816 705057 712884 712885
 
Reported: 2011-04-28 16:49 UTC by Qixiang Wan
Modified: 2013-01-08 13:31 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
A bug was found in the way the x86_emulate() function handled the IMUL instruction in the Xen hypervisor. On systems that have no support for hardware assisted paging (such as those running CPUs that do not have support for Intel Extended Page Tables or AMD Rapid Virtualization Indexing), or have it disabled, this bug could cause fully-virtualized guests to crash or lead to silent memory corruption. In reported cases, this issue occurred when booting fully-virtualized Red Hat Enterprise Linux 6.1 guests with memory cgroups enabled.
Clone Of:
Environment:
Last Closed: 2012-02-21 03:46:39 UTC
Target Upstream Version:
Embargoed:


Attachments
RHEL6.1 i386 hvm guest crash on xen (17.80 KB, text/plain)
2011-04-29 08:22 UTC, Qixiang Wan
cgroup related backtrace (2.92 KB, text/plain)
2011-05-04 16:04 UTC, Andrew Jones
rhel6.1 guest with rhel6.0 release kernel crashed (18.05 KB, text/plain)
2011-05-13 07:59 UTC, Qixiang Wan
[RHEL5.8 Xen PATCH] Fix x86_emulate() handling of imul with immediate operands (3.04 KB, patch)
2011-06-10 13:59 UTC, Igor Mammedov
[RHEL5.8 Xen PATCH v2] Fix x86_emulate() handling of imul with immediate operands (3.87 KB, patch)
2011-06-10 15:14 UTC, Igor Mammedov


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2012:0150 0 normal SHIPPED_LIVE Moderate: Red Hat Enterprise Linux 5.8 kernel update 2012-02-21 07:35:24 UTC

Description Qixiang Wan 2011-04-28 16:49:55 UTC
Description of problem:
While using a RHEL6.1 32-bit HVM guest over xen, the guest may crash randomly; this can happen during boot or shutdown. It's not 100% reproducible, about 5% for me, and it seems more reproducible when there are multiple guests on the same host. The guest has 4 vcpus for me; I haven't tried with a single vcpu.

Guest kernel is 2.6.32-131.0.9.el6.i686. 
Host is RHEL5.6 x86_64: kernel-xen-2.6.18-238.el5 + xen-3.0.3-129.el5

Not sure whether this issue exists with 2.6.32-131.0.8; at least I didn't hit it while booting/rebooting a 2.6.32-131.0.8 guest about 30 times.

Also not sure whether it exists on a 32-bit host; I will do more investigation and update here.

Version-Release number of selected component (if applicable):
RHEL6.1 2.6.32-131.0.9.el6.i686

How reproducible:
5% for me

Steps to Reproduce:
1. create RHEL6.1 32bit HVM guest:
$ cat rhel.cfg
name='rhel'
maxmem = 300
memory = 300
vcpus = 4
builder = "hvm"
kernel = "/usr/lib/xen/boot/hvmloader"
boot = "c"
pae = 1
acpi = 1
apic = 1
localtime = 0
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "rename-restart"
sdl = 0
vnc = 1
vncunused = 1
vnclisten = "0.0.0.0"
device_model = "/usr/lib64/xen/bin/qemu-dm"
disk = [ "file:/root/cfgs/RHEL61/RHEL-Server-6.1-32-hvm.raw,hda,w" ] 
vif = [ "mac=00:16:36:01:10:30,bridge=xenbr0,script=vif-bridge" ]
serial = "pty"

2. Reboot the guest repeatedly if you don't see a crash during boot.

  
Actual results:
guest crash

Expected results:
guest should not crash

Additional info:

Comment 4 Andrew Jones 2011-04-29 08:11:09 UTC
The memory looks a bit small. I think we should use a 512M minimum for rhel6 guests. I'll still look at the core dump, but if your 5% goes to 0% using 512M or more, then this is a low priority issue.

Comment 5 Qixiang Wan 2011-04-29 08:20:58 UTC
(In reply to comment #4)
> The memory looks a bit small. I think we should use a 512M minimum for rhel6
> guests. I'll still look at the core dump, but if your 5% goes to 0% using 512M
> or more, then this is a low priority issue.

I set memory to 300 just because I wanted to capture a small core dump file; it can be reproduced with 1024M of memory. I've just got a more detailed guest console log.

Comment 6 Qixiang Wan 2011-04-29 08:22:44 UTC
Created attachment 495731 [details]
RHEL6.1 i386 hvm guest crash on xen

(guest kernel is 2.6.32-131.0.8.el6.0af9d2f as I reset git HEAD to 
0af9d2f [mm] Prevent page_fault at do_mm_track_pte+0xc when Stratus dirty page tracking is active )

Comment 7 Qixiang Wan 2011-04-29 08:56:48 UTC
Just confirmed the issue exists with 2.6.32-131.0.8.el6.i686.

Comment 8 Qixiang Wan 2011-04-29 12:20:39 UTC
Confirmed the issue exists with 2.6.32-131.el6.i686.

Comment 9 Andrew Jones 2011-04-29 13:15:56 UTC
It bugged in mem_cgroup_create. We should check whether this is a regression from RHEL 6.0, since it's possible that one of the many updates to mm/memcontrol.c introduced this, although I don't see anything right off that looks suspicious, as most were cleanup patches.

Comment 10 Qixiang Wan 2011-04-29 13:17:19 UTC
Reproduced with the RHEL6.0 release kernel 2.6.32-71.el6.i686.

Comment 11 Qixiang Wan 2011-04-29 14:12:38 UTC
Seems it can only be reproduced with multiple vcpus; I have tried booting/rebooting the guest (kernel-2.6.32-71.el6.i686) 30+ times and the guest works well.

Comment 12 Andrew Jones 2011-04-29 14:13:57 UTC
OK, not a regression, and it only occurs ~5% of the time. I'm moving this to 6.2/6.1.z; it's not blocker material.

Comment 13 Andrew Jones 2011-05-04 16:03:26 UTC
I just saw a similar random crash on shutdown of a 32-on-64 rhel 6.1 (2.6.32-131.0.10.el6.i686) guest. I'll attach the backtrace. Hopefully this really is only ~5% or less...

Comment 14 Andrew Jones 2011-05-04 16:04:25 UTC
Created attachment 496833 [details]
cgroup related backtrace

Comment 15 Igor Mammedov 2011-05-11 16:56:54 UTC
Tried to reproduce it on a host running 2.6.18-257 + xen-3.0.3-129 with an HVM guest running 2.6.32-131.0.13.el6.i686; so far 75 reboots without a crash. Will leave it running overnight.

Qixiang,
Can you reproduce it with the specified host/guest versions on your test box? To force the guest to reboot perpetually, I've just put '/sbin/init 6' at the end of /etc/rc.local, and set the guest config to preserve a crashed domain, so that the test stops on a guest crash.

Here is my guest config:
name = "rh6_32fv"
uuid = "1d73df6f-9c15-b0aa-ff5d-ec8907259657"
maxmem = 512
memory = 512
vcpus = 4
builder = "hvm"
kernel = "/usr/lib/xen/boot/hvmloader"
boot = "c"
pae = 1
acpi = 1
apic = 1
localtime = 0
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "preserve"
device_model = "/usr/lib64/xen/bin/qemu-dm"
sdl = 0
vnc = 1
vncunused = 1
vnclisten = "0.0.0.0"
keymap = "en-us"
disk = [ "phy:/dev/main/rh61.32,hda,w" ]
vif = [ "mac=00:16:36:20:c1:99,bridge=xenbr0,script=vif-bridge" ]
parallel = "none"
serial = "pty"

Comment 16 Qixiang Wan 2011-05-12 13:48:08 UTC
Hi Igor,
I'm trying on an AMD host and an Intel host, but haven't triggered the crash yet. If it can't be reproduced on these 2 hosts, I'll try tomorrow to find the host where I originally hit the issue.

Comment 17 Igor Mammedov 2011-05-12 14:34:13 UTC
Hi Qixiang,

I've made a second attempt to reproduce today, without any luck getting a crashed guest in 140 consecutive reboots. So I'll give up on reproducing for now, or at least for today.

The only difference between your guest config and mine is the disk backend, if we don't count the xen/kernel version differences.

Comment 18 Qixiang Wan 2011-05-13 06:56:28 UTC
Hi Igor,

I've reproduced the crash on 2 Intel hosts and 1 AMD host with RHEL6.1 32-bit HVM guests (kernel-2.6.32-131.0.15.el6.i686 and kernel-2.6.32-131.0.10.el6.i686); rebooting just 10~20 times can trigger the crash (4 vcpus for all guests).

Hosts:
Dual-Core AMD Opteron(tm) Processor 1220 - kernel-xen-2.6.18-238.el5
Intel(R) Xeon(R) CPU E5405  @ 2.00GHz - kernel-xen-2.6.18-260.el5
Intel(R) Core(TM)2 Quad CPU Q9400  @ 2.66GHz - kernel-xen-2.6.18-260.el5

I was trying to reproduce it with the RHEL6.0 release yesterday in comment 16, but didn't see any crash after 300 reboots, so there must be something wrong with my comment 10; this looks like a regression introduced in RHEL6.1. I'm going to investigate further what went wrong with my test in comment 10.

Comment 19 Qixiang Wan 2011-05-13 07:59:24 UTC
Created attachment 498717 [details]
rhel6.1 guest with rhel6.0 release kernel crashed

FYI, I have reproduced the error with RHEL6.1 + kernel-2.6.32-71.el6.i686 (the RHEL6.0 release kernel). Here is the log.

And according to the test I did yesterday, the crash can't be reproduced by rebooting a RHEL6.0 pre-installed guest (tried rebooting 300~400 times). So something other than the kernel could be making the difference.

Comment 20 Andrew Jones 2011-05-13 08:23:15 UTC
Likely the cgconfig and cgred services aren't turned on by default with 6.0, but they are with 6.1. We should try a round of 6.1 testing with those services turned off.

Comment 21 Andrew Jones 2011-05-13 08:30:52 UTC
Also, possibly the reproducer could be just

while :; do
  service cgconfig start
  service cgconfig stop
done

Comment 22 Qixiang Wan 2011-05-13 09:31:26 UTC
(In reply to comment #21)
> Also, possibly the reproducer could be just
> 
> while :; do
>   service cgconfig start
>   service cgconfig stop
> done

Tried with the following 2 methods, but failed to reproduce it:

1. with RHEL6.0 + kernel-2.6.32-131.0.15.el6.i686:
chkconfig --level 345 cgconfig on
chkconfig --level 345 cgred on

then rebooted the guest 20+ times; haven't hit the crash so far

2. with RHEL6.1 (kernel-2.6.32-131.0.15.el6.i686):
failed to get the crash after i > 1000
----------------------------------
i=0
while sleep 0.5;
do
    echo $((i++))
    /etc/init.d/cgconfig start
    /etc/init.d/cgred start
    sleep 0.5
    /etc/init.d/cgconfig stop
    /etc/init.d/cgconfig stop
done
----------------------------------

Comment 23 Qixiang Wan 2011-05-13 09:34:20 UTC
(In reply to comment #22)
>     sleep 0.5
>     /etc/init.d/cgconfig stop
>     /etc/init.d/cgconfig stop
> done
> ----------------------------------

Typo in the comment above: the second 'cgconfig' in the stop section should be 'cgred'.

Comment 24 Andrew Jones 2011-05-17 08:03:32 UTC
I was able to reproduce the original backtrace on my own machine within 10 reboots. I then shut off the cgconfig service and ran a reboot loop all night. It survived 430 reboots with no problem. That doesn't answer the question of why it breaks, but it's enough to create a release note for 6.1. I'll write one now and then we'll continue investigating this.

As far as the investigation goes, the start/stop experiment probably didn't work because the cgroup filesystem needs to actually be used in some way. libvirtd will use it if it's there, and libvirtd is also started on boot (which it now is by default for 6.1 Server). Maybe we can look at what libvirtd is doing (mkdir -> cgroup_mkdir) and then write a simpler reproducer.
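
A minimal sketch of such a reproducer is below. This is an assumption, not a verified trigger: it only exercises the same mkdir -> cgroup_mkdir -> mem_cgroup_create and rmdir -> mem_cgroup_force_empty paths, and it assumes the memory controller is mounted at /cgroup/memory (the cgconfig default location on RHEL 6).
----------------------------------
/* bz700565-repro.c: hammer cgroup create/destroy from userspace.
 * Hypothetical reproducer sketch; adjust the mount point to match
 * your cgconfig.conf. */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    char path[256];
    unsigned long i;

    for (i = 0; ; i++) {
        snprintf(path, sizeof(path), "/cgroup/memory/bz700565-%lu", i % 16);
        /* mkdir() ends up in cgroup_mkdir() -> mem_cgroup_create() */
        if (mkdir(path, 0755) && errno != EEXIST) {
            perror("mkdir");
            return 1;
        }
        /* rmdir() tears the group down via mem_cgroup_force_empty() */
        if (rmdir(path)) {
            perror("rmdir");
            return 1;
        }
        if (i % 1000 == 0)
            printf("%lu iterations\n", i);
    }
}
----------------------------------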

Comment 25 Andrew Jones 2011-05-17 12:21:22 UTC
At this time, I still don't know why this problem is limited to 32-bit HVM guests, although that's what testing shows. I'll write the release note more generally to be on the safe side.

Comment 27 Andrew Jones 2011-05-17 12:27:54 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
RHEL 6.1 Xen guests may experience crashes if cgroups are used. To disable their use, ensure that the cgconfig service is turned off for all runlevels.

Comment 29 Igor Mammedov 2011-05-27 08:52:40 UTC
Andrew,

Crash dump analysis shows that there is some race condition in the memory cgroups code.
It is easier to reproduce on a system that has several running guests, with the test guest using vcpu overcommitting (i.e. on a system with 4 cpus I used 16 vcpus for the test guest, and I am able to reproduce the crash in 2-8 reboots).

Problem happens in 'mem_cgroup_force_empty_list'

pc = list_entry(list->prev, struct page_cgroup, lru);

where list->prev == 0, so the list entry is either not yet initialised or this memory region was zeroed by another thread. The next attempt to dereference 'pc' in 'mem_cgroup_move_parent' then leads to a page fault.
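
For reference, a minimal userspace illustration of why a zeroed prev pointer yields exactly pc = fffffff4 (the struct layout below is an assumption matching the i686 build, where offsetof(struct page_cgroup, lru) == 12):
----------------------------------
/* Sketch of the list_entry()/container_of() arithmetic; not kernel code. */
#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

struct page_cgroup {
    unsigned long flags;      /* 4 bytes on i686 */
    void *mem_cgroup;         /* 4 bytes */
    void *page;               /* 4 bytes */
    struct list_head lru;     /* so lru sits at offset 12 = 0xc */
};

/* list_entry() just subtracts the member offset from the pointer */
#define list_entry(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

int main(void)
{
    struct list_head zeroed = { NULL, NULL }; /* list->prev == 0, as in the dump */
    struct page_cgroup *pc = list_entry(zeroed.prev, struct page_cgroup, lru);

    /* On a 32-bit build this prints 0xfffffff4, i.e. (unsigned)-12: the
     * bogus pointer whose dereference then oopses. */
    printf("pc = %p\n", (void *)pc);
    return 0;
}
----------------------------------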


So far guest crashes happen only inside the memory cgroups code. We should probably correct the technote to something less restrictive:

RHEL 6.1 Xen guests may experience crashes if memory cgroups are used with more than 1 vcpu. To disable their use, ensure that the kernel command line has the following option set when the vcpu count is > 1: "cgroup_disable=memory".

Comment 31 Igor Mammedov 2011-05-31 07:23:51 UTC
Description of the env where the bug was reproduced:

intel-q9400-8-2.englab.nay.redhat.com 4 cpus
RHEL6.1x64 hvm guest 4 vcpus, with 4 cpu hog tasks in background.
RHEL6.1x32 hvm guest, 16 vcpus, xen_emul_unplug=never, with reboot task

This env can reproduce the bug in ~4-6 reboots.

Comment 32 Igor Mammedov 2011-06-03 12:36:31 UTC
To reproduce the bug, the HVM guest must run in shadow paging mode (i.e. put hap=0 on the xen kernel command line if the box supports hap).

Comment 34 Igor Mammedov 2011-06-07 08:25:23 UTC
    Technical note updated. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    Diffed Contents:
@@ -1,4 +1,4 @@
-RHEL 6.1 Xen guests may experience crashes if memory cgroups are used with more that 1 vcpu on a host without EPT or NPT features.
+RHEL 6.1 Xen guests may experience crashes if memory cgroups are used with more than 1 vcpu on a host without EPT or NPT features.
 Possible workarounds:
   1 - Run guest on a hap enabled host if memory cgroups feature is required.
   2 - Disable memory cgroups. Add to guest's kernel command line the following option "cgroup_disable=memory".

Comment 35 Igor Mammedov 2011-06-07 14:17:01 UTC
Well, I'm giving up for now for lack of ideas about what to do next; any ideas
are welcome. The following just documents the current state.
========== 
Discussion if cgroup race is possible https://lkml.org/lkml/2011/6/1/416

--------
Wasn't able to reproduce the bug on SLES. There is no way to turn off hap globally there, but they have a per-guest hap config option. Even with hap disabled in the guest's config the bug doesn't pop up; however, I'm not sure hap was actually turned off.

--------
Built a custom kernel with an extra printk right after the allocation of mem_cgroup_per_node to print out the physical address, something like this:

@@ -3348,6 +3350,7 @@ static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *mem, int node)
        pn = kmalloc_node(sizeof(*pn), GFP_KERNEL, tmp);
        if (!pn)
                return 1;
+       printk(KERN_DEBUG "XXX: pn: %p, phy: %x", pn, (u32)virt_to_phys(pn));


After crashing in mem_cgroup_force_empty, vtop of the offending pn almost always shows the same phys addr as at the time it was allocated. However, on one of the reboots the phys addr of pn was different from the 'allocated' one.

crash analysis doesn't show much (my lack of experience???)

The problem starts in mem_cgroup_force_empty_list:

           pc = list_entry(list->prev, struct page_cgroup, lru);

where list->prev == 0 => we get pc = fffffff4, and dereferencing it leads to the Oops.

invalid pn contents (each mem_cgroup_per_zone is 31 32-bit words, i.e. 124 = 0x7c bytes):
crash> rd f3446a00 62
f3446a00:  00000000 00000000 00000000 00000000   ................
f3446a10:  00000000 00000000 00000000 00000000   ................
f3446a20:  00000000 00000000 00000000 00000000   ................
f3446a30:  00000000 00000000 00000000 00000000   ................
f3446a40:  00000000 00000000 00000000 00000000   ................
f3446a50:  00000000 00000000 00000000 00000000   ................
f3446a60:  00000000 00000000 00000000 00000000   ................
f3446a70:  00000000 00000000 f36ef800 f3446a7c   ..........n.|jD.
f3446a80:  f3446a7c f3446a84 f3446a84 f3446a8c   |jD..jD..jD..jD.
f3446a90:  f3446a8c f3446a94 f3446a94 f3446a9c   .jD..jD..jD..jD.
f3446aa0:  f3446a9c 00000000 00000000 00000000   .jD.............
f3446ab0:  00000000 00000000 00000000 00000000   ................
f3446ac0:  00000000 00000000 00000000 00000000   ................
f3446ad0:  00000000 00000000 00000000 00000000   ................
f3446ae0:  00000000 00000000 00000000 00000000   ................
f3446af0:  00000000 f36ef800

crash> struct mem_cgroup f36ef800
struct mem_cgroup {
...
info = {
    nodeinfo = {0xf3446a00}
  },
...

It looks like a very targeted corruption of the first zone, except for the
last field, while the second zone and the rest are perfectly normal
(i.e. have empty initialized lists).

Comment 36 Igor Mammedov 2011-06-07 15:40:24 UTC
With the debug kernel it sometimes crashes in mem_cgroup_create:

XXX: pn: f208dc00, phy: 3208dc00
XXX: pn: f2e85a00, phy: 32e85a00
BUG: unable to handle kernel paging request at 9b74e240
IP: [<c080b95f>] mem_cgroup_create+0xef/0x350
*pdpt = 0000000033542001 *pde = 0000000000000000 
Oops: 0002 [#1] SMP 
...

Pid: 1823, comm: libvirtd Tainted: G           ---------------- T (2.6.32.700565 #21) HVM domU
EIP: 0060:[<c080b95f>] EFLAGS: 00210297 CPU: 3
EIP is at mem_cgroup_create+0xef/0x350
EAX: 9b74e240 EBX: f2e85a00 ECX: 00000001 EDX: 00000001
ESI: a88c8840 EDI: a88c8840 EBP: f201deb4 ESP: f201de8c
 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
Process libvirtd (pid: 1823, ti=f201c000 task=f3642ab0 task.ti=f201c000)
Stack:
 c09579b2 f2e85a00 32e85a00 f3455800 00000000 f2e85a00 f2c14ac0 c0a5a820
<0> fffffff4 f2c14ac0 f201def8 c049d3a7 00000000 00000000 00000000 000001ed
<0> f2c14ac8 f5fa4400 f24fe954 f3502000 f2c14e40 f24f5608 f3502010 f2c14ac0
Call Trace:
 [<c049d3a7>] cgroup_mkdir+0xf7/0x450
 [<c05318e3>] vfs_mkdir+0x93/0xf0
 [<c0533787>] ? lookup_hash+0x27/0x30
 [<c053390e>] sys_mkdirat+0xde/0x100
 [<c04b5d4d>] ? call_rcu_sched+0xd/0x10
 [<c04b5d58>] ? call_rcu+0x8/0x10
 [<c047ab9f>] ? __put_cred+0x2f/0x50
 [<c0524ded>] ? sys_faccessat+0x14d/0x180
 [<c0523fb7>] ? filp_close+0x47/0x70
 [<c0533950>] sys_mkdir+0x20/0x30
 [<c0409b5f>] sysenter_do_call+0x12/0x28


Looking at the core, the EDI value looks invalid/impossible:
crash> dis 0xc080b93e 15
0xc080b93e <mem_cgroup_create+206>:     movl   $0x0,-0x18(%ebp)
0xc080b945 <mem_cgroup_create+213>:     mov    %esi,-0x1c(%ebp)
0xc080b948 <mem_cgroup_create+216>:     imul   $0x7c,-0x18(%ebp),%edi
0xc080b94c <mem_cgroup_create+220>:     xor    %ecx,%ecx
0xc080b94e <mem_cgroup_create+222>:     xor    %edx,%edx
0xc080b950 <mem_cgroup_create+224>:     lea    (%edi,%edx,8),%esi
0xc080b953 <mem_cgroup_create+227>:     add    $0x1,%ecx
0xc080b956 <mem_cgroup_create+230>:     lea    (%ebx,%esi,1),%eax
0xc080b959 <mem_cgroup_create+233>:     add    $0x1,%edx
0xc080b95c <mem_cgroup_create+236>:     cmp    $0x5,%ecx
0xc080b95f <mem_cgroup_create+239>:     mov    %eax,(%ebx,%esi,1)
0xc080b962 <mem_cgroup_create+242>:     mov    %eax,0x4(%eax)
0xc080b965 <mem_cgroup_create+245>:     jne    0xc080b950
0xc080b967 <mem_cgroup_create+247>:     mov    -0x14(%ebp),%eax
0xc080b96a <mem_cgroup_create+250>:     movl   $0x0,0x6c(%eax)

EDI on the first iteration should be 0; however, it is a88c8840 according to the Oops dump, and looking at -0x18(%ebp) in the core we see 0 as well:

crash> x/xw 0xf201deb4-0x18
0xf201de9c:     0x00000000

So it looks like either EDI was incorrectly restored, or -0x18(%ebp) held that weird value at the moment 0xc080b948 was executed.

Comment 37 Igor Mammedov 2011-06-07 16:37:08 UTC
It is possible that the EDI value from comment 36 points to another accessible page, so the writes

0xc080b95f <mem_cgroup_create+239>:     mov    %eax,(%ebx,%esi,1)
0xc080b962 <mem_cgroup_create+242>:     mov    %eax,0x4(%eax)

go to that page, and then after the list-init loop it uses the correct pn offset from -0x14(%ebp) and initialises the remaining fields of the structure on the correct page.

                mz->usage_in_excess = 0;
                mz->on_tree = false;
                mz->mem = mem;

0xc080b967 <mem_cgroup_create+247>:     mov    -0x14(%ebp),%eax
0xc080b96a <mem_cgroup_create+250>:     movl   $0x0,0x6c(%eax)
0xc080b971 <mem_cgroup_create+257>:     movl   $0x0,0x70(%eax)
0xc080b978 <mem_cgroup_create+264>:     movb   $0x0,0x74(%eax)
0xc080b97c <mem_cgroup_create+268>:     mov    -0x1c(%ebp),%edx
0xc080b97f <mem_cgroup_create+271>:     mov    %edx,0x78(%eax)
0xc080b982 <mem_cgroup_create+274>:     add    $0x7c,%eax
0xc080b985 <mem_cgroup_create+277>:     addl   $0x1,-0x18(%ebp)
0xc080b989 <mem_cgroup_create+281>:     cmpl   $0x4,-0x18(%ebp)
0xc080b98d <mem_cgroup_create+285>:     mov    %eax,-0x14(%ebp)
0xc080b990 <mem_cgroup_create+288>:     jne    0xc080b948

which could lead to the state described by comment 35  (i.e. 0-ed list entries)
and the originally reported Oops in mem_cgroup_force_empty.
Afterwards it looks like:

0xc080b985 <mem_cgroup_create+277>:     addl   $0x1,-0x18(%ebp)

-0x18(%ebp) is read correctly and the remaining 3 mz entries are initialized as expected.

So the question is why and how
0xc080b948 <mem_cgroup_create+216>:     imul   $0x7c,-0x18(%ebp),%edi
can be screwed up.

Comment 39 Igor Mammedov 2011-06-10 13:57:23 UTC
At 11:01 +0100 on 10 Jun (1307703699), Tim Deegan wrote:
> > Actually, looking at the disassembly you posted, it looks more like it
> > might be an emulator bug in Xen; if Xen finds itself emulating the IMUL
> > instruction and either gets the logic wrong or does the memory access
> > wrong, it could cause that failure.  And one reason that Xen emulates
> > instructions is if the memory operand is on a pagetable that's shadowed
> > (which might be a page that was recently a pagetable). 
> > 
> > ISTR that even though the RHEL xen reports a 3.0.x version it has quite
> > a lot of backports in it.  Does it have this patch?
> > http://hg.uk.xensource.com/xen-3.1-testing.hg/rev/e8fca4c42d05
> Oops, that URL doesn't work; I meant this:
> http://xenbits.xen.org/xen-3.1-testing.hg/rev/e8fca4c42d05
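
To make the suspected failure mode concrete, here is an illustrative sketch (not the actual Xen source; the register layout and names are assumptions) of emulating the immediate-operand form "imul $imm, r/m32, reg32" (opcodes 0x69/0x6b), i.e. the "imul $0x7c,-0x18(%ebp),%edi" from comment 36:
----------------------------------
#include <stdint.h>

struct emu_regs { uint32_t gpr[8]; };  /* eax..edi; hypothetical layout */

/* Correct semantics: the product of the r/m operand and the immediate
 * lands in the ModRM *register* operand; the r/m operand is only read,
 * never written. */
void emulate_imul_imm(struct emu_regs *r,
                      unsigned int modrm_reg,  /* destination register */
                      const uint32_t *rm_src,  /* decoded r/m operand */
                      int32_t imm)
{
    int64_t product = (int64_t)(int32_t)*rm_src * (int64_t)imm;

    r->gpr[modrm_reg] = (uint32_t)product;
    /* (a real emulator also sets CF/OF when the product overflows
     * the signed 32-bit range) */

    /* The bug class the symptoms point to: treating the r/m operand as
     * the destination instead, e.g.
     *     *(uint32_t *)rm_src = (uint32_t)product;
     * which would write the product into the shadowed guest page (the
     * targeted corruption in comment 35) and leave the destination
     * register stale (the bogus EDI in comment 36). */
}
----------------------------------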

Comment 40 Igor Mammedov 2011-06-10 13:59:18 UTC
Created attachment 504122 [details]
[RHEL5.8 Xen PATCH] Fix x86_emulate() handling of imul with immediate operands

Comment 42 Igor Mammedov 2011-06-10 15:14:41 UTC
Created attachment 504145 [details]
[RHEL5.8 Xen PATCH v2] Fix x86_emulate() handling of imul with immediate operands

-- changes since v1 patch --
 - Improved description
 - Replaced tabs indentation with spaces
----------------------------

Comment 47 Igor Mammedov 2011-07-12 16:27:56 UTC
*** Bug 697793 has been marked as a duplicate of this bug. ***

Comment 49 Jarod Wilson 2011-08-23 14:01:05 UTC
Patch(es) available in kernel-2.6.18-282.el5
You can download this test kernel (or newer) from http://people.redhat.com/jwilson/el5
Detailed testing feedback is always welcomed.

Comment 50 Martin Prpič 2011-08-31 16:03:46 UTC
    Technical note updated. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    Diffed Contents:
@@ -1,4 +1,9 @@
-RHEL 6.1 Xen guests may experience crashes if memory cgroups are used with more than 1 vcpu on a host without EPT or NPT features.
-Possible workarounds:
-  1 - Run guest on a hap enabled host if memory cgroups feature is required.
-  2 - Disable memory cgroups. Add to guest's kernel command line the following option "cgroup_disable=memory".
+A bug was found in the way the x86_emulate() function handled the IMUL
+instruction in the Xen hypervisor. On systems without support for hardware
+assisted paging (HAP), such as those running CPUs that do not have support
+for (or those that have it disabled) Intel Extended Page Tables (EPT) or
+AMD Virtualization (AMD-V) Rapid Virtualization Indexing (RVI), this bug
+could cause fully-virtualized guests to crash or lead to silent memory
+corruption. In reported cases, this issue occurred when booting
+fully-virtualized Red Hat Enterprise Linux 6.1 guests with memory cgroups
+enabled.

Comment 51 Paolo Bonzini 2011-08-31 16:46:31 UTC
    Technical note updated. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    Diffed Contents:
@@ -1,9 +1,2 @@
-A bug was found in the way the x86_emulate() function handled the IMUL
-instruction in the Xen hypervisor. On systems without support for hardware
-assisted paging (HAP), such as those running CPUs that do not have support
-for (or those that have it disabled) Intel Extended Page Tables (EPT) or
-AMD Virtualization (AMD-V) Rapid Virtualization Indexing (RVI), this bug
-could cause fully-virtualized guests to crash or lead to silent memory
-corruption. In reported cases, this issue occurred when booting
-fully-virtualized Red Hat Enterprise Linux 6.1 guests with memory cgroups
-enabled.
+A bug was found in the way the x86_emulate() function handled the IMUL instruction in the Xen hypervisor. On systems that have no support for hardware assisted paging (such as those running CPUs that do not have support
+for Intel Extended Page Tables or AMD Rapid Virtualization Indexing), or have it disabled, this bug could cause fully-virtualized guests to crash or lead to silent memory corruption. In reported cases, this issue occurred when booting fully-virtualized Red Hat Enterprise Linux 6.1 guests with memory cgroups enabled.

Comment 52 Qixiang Wan 2011-12-07 09:27:21 UTC
Verified with kernel-xen-2.6.18-300.el5.

With the 5.7 GA kernel on the host, I can reproduce the crash with 4 RHEL6.1 32-bit HVM guests running on an Intel Q9400 host (4 cpus, 8G memory): kept rebooting one of the guests (with the cgconfig service enabled) and it crashed within 5 reboots. After updating the host kernel to 2.6.18-300.el5, the guest didn't crash after 80+ reboots.

Comment 53 errata-xmlrpc 2012-02-21 03:46:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-0150.html

