Bug 1004167
Summary: | KVM fails with "KVM internal error. Suberror: 2" | ||
---|---|---|---|
Product: | [Fedora] Fedora | Reporter: | klaas.buist |
Component: | kernel | Assignee: | Kernel Maintainer List <kernel-maint> |
Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
Severity: | medium | Docs Contact: | |
Priority: | unspecified | ||
Version: | 20 | CC: | acathrow, bsarathy, drjones, fsimonce, gansalmon, hhuang, itamar, jonathan, juzhang, kernel-maint, klaas.buist, madhu.chinakonda, marcelo.barbosa, mbooth, mkenneth, ohadlevy, pbonzini, qzhang, rjones, virt-maint |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2014-06-18 14:03:44 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
klaas.buist
2013-09-04 06:49:00 UTC
I've never seen an error anything like this. Is "KVM internal error. Suberror: 2 [etc]" printed out by the appliance kernel or by the /usr/libexec/qemu-kvm process?

We hit a similar error during a system_reset of a guest, but it is not always reproducible. Bug 1002794 - KVM internal error. Suberror: 1 when doing system_reset

(In reply to Richard W.M. Jones from comment #1)
> I've never seen an error anything like this. Is
> "KVM internal error. Suberror: 2 [etc]" printed out by the
> appliance kernel or by the /usr/libexec/qemu-kvm process?

qemu outputs the error, but it does so due to kvm returning KVM_EXIT_INTERNAL_ERROR for its exit reason. Unfortunately there are many reasons this exit reason could be returned. We need to identify a reliable way to reproduce this, and then trace kvm while reproducing it. It appears to reproduce 100% for the reporter, so maybe it's machine-specific? Klaas, can you please paste the output of /proc/cpuinfo here?

Here is the cpuinfo of the machine I'm running libguestfs in:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel Xeon E312xx (Sandy Bridge)
stepping        : 1
cpu MHz         : 2195.016
cache size      : 4096 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon rep_good unfair_spinlock pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms
bogomips        : 4390.03
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

And this is from its host (maybe that is relevant as well):

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Core(TM) i7-3632QM CPU @ 2.20GHz
stepping        : 9
microcode       : 0x19
cpu MHz         : 2574.000
cache size      : 6144 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips        : 4389.80
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

Is there anything printed by the kernel (dmesg) when the error occurs? You could also use an alternate qemu, e.g. one compiled from upstream sources, and just set LIBGUESTFS_QEMU to point to the alternate qemu:

export LIBGUESTFS_QEMU=/path/to/d/qemu/x86_64-softmmu/qemu-system-x86_64
libguestfs-test-tool

No messages are printed by the kernel on host or guest.

I have tried with both the 1.5.3 and 1.6.0 versions of qemu and they both fail in a similar way. When the problem happens the qemu process seems to be stuck, so I killed it with a SIGSEGV to get a core dump of it. It shows the following stack trace:

Core was generated by `libguestfs-test-tool'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f2b730a7513 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:82
82      T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
Missing separate debuginfos, use: debuginfo-install libidn-1.18-2.el6.x86_64 yajl-1.0.7-3.el6.x86_64
(gdb) bt
#0  0x00007f2b730a7513 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:82
#1  0x00007f2b735eb358 in guestfs___recv_from_daemon (g=0x8f0620, size_rtn=0x7fffa6894f2c, buf_rtn=0x7fffa6894ef0) at proto.c:584
#2  0x00007f2b735e7dd4 in launch_appliance (g=0x8f0620) at launch.c:967
#3  0x00007f2b7359113c in guestfs_launch (g=<value optimized out>) at actions.c:1123
#4  0x000000000040210d in ?? ()
#5  0x0000007c00000001 in ?? ()
#6  0x00000000004022f0 in ?? ()
#7  0x0000000000000000 in ?? ()

I am planning on also trying a later version of qemu on the host to see if that makes a difference.

Using the latest version of qemu (1.5.3) on the host also does not seem to make a difference. libguestfs-test-tool is still failing consistently inside the client.

(In reply to klaas.buist from comment #7)
> No messages are printed by the kernel on host or guest.
>
> I have tried with both the 1.5.3 and 1.6.0 versions of qemu and they both
> fail in a similar way. When the problem happens the qemu process seems to be
> stuck, so I killed it with a SIGSEGV to get a core dump of it.
> It shows the following stack trace:
>
> Core was generated by `libguestfs-test-tool'.
> Program terminated with signal 11, Segmentation fault.
> #0  0x00007f2b730a7513 in __select_nocancel () at
> ../sysdeps/unix/syscall-template.S:82
> 82      T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
> Missing separate debuginfos, use: debuginfo-install libidn-1.18-2.el6.x86_64
> yajl-1.0.7-3.el6.x86_64
> (gdb) bt
> #0  0x00007f2b730a7513 in __select_nocancel () at
> ../sysdeps/unix/syscall-template.S:82
> #1  0x00007f2b735eb358 in guestfs___recv_from_daemon (g=0x8f0620,
> size_rtn=0x7fffa6894f2c, buf_rtn=0x7fffa6894ef0) at proto.c:584
> #2  0x00007f2b735e7dd4 in launch_appliance (g=0x8f0620) at launch.c:967
> #3  0x00007f2b7359113c in guestfs_launch (g=<value optimized out>) at
> actions.c:1123
> #4  0x000000000040210d in ?? ()
> #5  0x0000007c00000001 in ?? ()
> #6  0x00000000004022f0 in ?? ()
> #7  0x0000000000000000 in ?? ()

That's the stack trace of libguestfs-test-tool which isn't really telling us anything -- it just says that libguestfs is blocked waiting for an answer from qemu. You need to get a stack trace from qemu itself.

Ahh, here it is; unfortunately it does not show much info yet, even though the executable is not stripped.

# gdb --core=core.1664 --exec=/usr/local/bin/qemu-system-x86_64
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
warning: core file may not match specified executable file.
[New Thread 1664]
[New Thread 1667]
Missing separate debuginfo for
Try: yum --disablerepo='*' --enablerepo='*-debug*' install /usr/lib/debug/.build-id/05/4c5697ea4022cf320747aabbf8120fe1246ff6
Reading symbols from /lib64/librt-2.12.so...Reading symbols from /usr/lib/debug/lib64/librt-2.12.so.debug...done.
done.
Loaded symbols for /lib64/librt-2.12.so
Reading symbols from /lib64/libgthread-2.0.so.0.2200.5...Reading symbols from /usr/lib/debug/lib64/libgthread-2.0.so.0.2200.5.debug...done.
done.
Loaded symbols for /lib64/libgthread-2.0.so.0.2200.5
Reading symbols from /lib64/libglib-2.0.so.0.2200.5...Reading symbols from /usr/lib/debug/lib64/libglib-2.0.so.0.2200.5.debug...done.
done.
Loaded symbols for /lib64/libglib-2.0.so.0.2200.5
Reading symbols from /lib64/libutil-2.12.so...Reading symbols from /usr/lib/debug/lib64/libutil-2.12.so.debug...done.
done.
Loaded symbols for /lib64/libutil-2.12.so
Reading symbols from /lib64/libz.so.1.2.3...Reading symbols from /usr/lib/debug/lib64/libz.so.1.2.3.debug...done.
done.
Loaded symbols for /lib64/libz.so.1.2.3
Reading symbols from /lib64/libm-2.12.so...Reading symbols from /usr/lib/debug/lib64/libm-2.12.so.debug...done.
done.
Loaded symbols for /lib64/libm-2.12.so
Reading symbols from /lib64/libpthread-2.12.so...Reading symbols from /usr/lib/debug/lib64/libpthread-2.12.so.debug...done.
[Thread debugging using libthread_db enabled]
done.
Loaded symbols for /lib64/libpthread-2.12.so
Reading symbols from /lib64/libc-2.12.so...Reading symbols from /usr/lib/debug/lib64/libc-2.12.so.debug...done.
done.
Loaded symbols for /lib64/libc-2.12.so
Reading symbols from /lib64/ld-2.12.so...Reading symbols from /usr/lib/debug/lib64/ld-2.12.so.debug...done.
done.
Loaded symbols for /lib64/ld-2.12.so
Core was generated by `/usr/local/bin/qemu-system-x86_64 -global virtio-blk-pci.scsi=off -nodefconfig'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f26a7f26293 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
87        int result = INLINE_SYSCALL (poll, 3, CHECK_N (fds, nfds), nfds, timeout);
(gdb) bt
#0  0x00007f26a7f26293 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007f26a95b41f6 in ?? ()
#2  0x0000001900000003 in ?? ()
#3  0xffffffff00000000 in ?? ()
#4  0x0000001968d41e60 in ?? ()
#5  0xe0869fe4664878a2 in ?? ()
#6  0x00007fff68d41e60 in ?? ()
#7  0x00007f26a95b4299 in ?? ()
#8  0x00007fff68d41e60 in ?? ()
#9  0x00000000a9638e63 in ?? ()
#10 0x00000002ffffffff in ?? ()
#11 0xe0869fe4664878a2 in ?? ()
#12 0x00007fff68d41e80 in ?? ()
#13 0x00007f26a9638ed9 in ?? ()
#14 0x00007f2600000001 in ?? ()
#15 0xe0869fe4664878a2 in ?? ()
#16 0x00007fff68d421e0 in ?? ()
#17 0x00007f26a96401e4 in ?? ()
#18 0x00007f2600000017 in ?? ()
#19 0x00007f26a7e48cec in ?? () from /lib64/libc-2.12.so
#20 0x0000000000000000 in ?? ()

(In reply to klaas.buist from comment #10)
> # gdb --core=core.1664 --exec=/usr/local/bin/qemu-system-x86_64

This is some random version of qemu? TBH I've no idea what this bug is, but it could be something specific to the CentOS kernel. Have you tried looking for similar reports in the CentOS bug tracker, or seeing if a fresh CentOS install can run 'libguestfs-test-tool'?

This is version 1.5.3 of qemu; version 1.6.0 gives similar traces. During testing with the non-stripped versions I had one or two occasions of successful libguestfs-test-tool runs, but most of the time the runs would fail. Could this be indicating some timing issue? I carried out these tests on a freshly installed CentOS VM, and I did not find anything similar in the CentOS bug tracker.

After going back from kernel 3.10 to a 3.9 version on the Fedora 19 host, libguestfs-test-tool is running successfully all the time. So it appears something got broken between kernel 3.9.5-301.fc19 and 3.10.10-200.fc19 on the host.

Same problem here (with regular qemu-kvm, not libguestfs):

KVM internal error. Suberror: 2
extra data[0]: 80000202
extra data[1]: 80000202
rax 00000000c3300100 rbx 00000000c33080c0 rcx 00000000c33080c0 rdx 00000000c0408995
rsi 0000000000000001 rdi 00000000ffffffff rsp 00000000f70a1ebc rbp 00000000f7086ab0
r8  0000000000000000 r9  0000000000000000 r10 0000000000000000 r11 0000000000000000
r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000
rip 00000000c0830abb rflags 00000006
cs 0060 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type b l 0 g 1 avl 0)
ds 007b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0)
es 007b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0)
ss 0068 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
fs 00d8 (027f9000/ffffffff p 1 dpl 0 db 0 s 1 type 3 l 0 g 1 avl 0)
gs 00e0 (c3307f80/00000018 p 1 dpl 0 db 1 s 1 type 1 l 0 g 0 avl 0)
tr 0080 (c3305dc0/0000206b p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0)
ldt 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0)
gdt c3300000/ff
idt c0a2b000/7ff
cr0 8005003b cr2 0 cr3 a09000 cr4 6f0 cr8 0 efer 800

Even with a newer kernel: kernel-3.11.0-200.fc19.x86_64

Klaas, thanks for taking the time to enter a bug report with us. We appreciate the feedback and look to use reports such as this to guide our efforts at improving our products. That being said, we're not able to guarantee the timeliness or suitability of a resolution for issues entered here because this is not a mechanism for requesting support. If this issue is critical or in any way time sensitive, please raise a ticket through your regular Red Hat support channels to make certain it receives the proper attention and prioritization to assure a timely resolution.
For information on how to contact the Red Hat production support team, please visit: https://www.redhat.com/support/process/production/#howto

Do I understand correctly that you are running nested guest and this nested guest fails with internal error?

(In reply to Gleb Natapov from comment #16)
> Do I understand correctly that you are running nested guest and this nested
> guest fails with internal error?

qemu-kvm in L1 is reporting:

KVM internal error. Suberror: 2
extra data[0]: 80000202
extra data[1]: 80000202
...

L2 just hangs there (frozen). No errors in L0.

(In reply to Gleb Natapov from comment #16)
> Do I understand correctly that you are running nested guest and this nested
> guest fails with internal error?

Yes, but in my case only when using libguestfs. I did not encounter the problem when starting 'normal' KVM VMs (using openstack).

(In reply to klaas.buist from comment #18)
> (In reply to Gleb Natapov from comment #16)
> > Do I understand correctly that you are running nested guest and this nested
> > guest fails with internal error?
>
> Yes, but in my case only when using libguestfs. I did not encounter the
> problem when starting 'normal' KVM VMs (using openstack).

Hi Klaas, maybe you can share here the qemu-kvm command line used by openstack, which might help us to identify what's different there and therefore what's the problem. Thanks.

(In reply to klaas.buist from comment #18)
> (In reply to Gleb Natapov from comment #16)
> > Do I understand correctly that you are running nested guest and this nested
> > guest fails with internal error?
>
> Yes, but in my case only when using libguestfs. I did not encounter the
> problem when starting 'normal' KVM VMs (using openstack).

Presumably the OpenStack VM is not nested, ie. runs on baremetal, and you're running libguestfs inside the OpenStack VM (hence nested)?
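Whether the L0 host permits nested virtualization can be checked from its shell. A minimal sketch, assuming the standard kvm_intel/kvm_amd module parameter paths; "unknown" is printed when neither module exposes the parameter (e.g. the module is not loaded):

```shell
#!/bin/sh
# Print the value of the kvm "nested" module parameter, or "unknown" if
# neither the Intel nor the AMD kvm module exposes it on this machine.
nested_status() {
    for f in /sys/module/kvm_intel/parameters/nested \
             /sys/module/kvm_amd/parameters/nested; do
        if [ -r "$f" ]; then
            cat "$f"     # typically Y/N (newer kernels) or 1/0 (older kernels)
            return 0
        fi
    done
    echo "unknown"
}

nested_status
```

On kernels of this era nested VMX was off by default; enabling it meant reloading kvm_intel with `nested=1`, so a non-enabled value here would be a simpler explanation for L2 failures than a kernel bug.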
(In reply to Federico Simoncelli from comment #19)
> (In reply to klaas.buist from comment #18)
> > (In reply to Gleb Natapov from comment #16)
> > > Do I understand correctly that you are running nested guest and this nested
> > > guest fails with internal error?
> >
> > Yes, but in my case only when using libguestfs. I did not encounter the
> > problem when starting 'normal' KVM VMs (using openstack).
>
> Hi Klaas, maybe you can share here the qemu-kvm command line used by
> openstack, which might help us to identify what's different there and
> therefore what's the problem. Thanks.

Here is the command as used by openstack to launch a VM. This VM is running fine:

qemu 7945 1 2 11:51 ? 00:01:24 /usr/libexec/qemu-kvm -name instance-0000000e -S -M rhel6.4.0 -no-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -uuid 3a06ff96-e21d-4b60-b1f2-d8b7d461cdc4 -smbios type=1,manufacturer=Red Hat,, Inc.,product=Red Hat OpenStack Nova,version=2013.1.3-3.el6ost,serial=6fcbde20-64bb-074d-8788-8778f826b615,uuid=3a06ff96-e21d-4b60-b1f2-d8b7d461cdc4 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000000e.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/3a06ff96-e21d-4b60-b1f2-d8b7d461cdc4/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:26:13:32,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/3a06ff96-e21d-4b60-b1f2-d8b7d461cdc4/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 192.168.100.20:0 -k en-us -vga cirrus -incoming fd:22 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

For comparison, this is the (stuck) libguestfs qemu-kvm:

root 25275 25093 17 12:46 pts/0 00:00:08 /usr/libexec/qemu-kvm -global virtio-blk-pci.scsi=off -nodefconfig -nodefaults -nographic -drive file=/tmp/libguestfs-test-tool-sda-od2bTz,cache=none,format=raw,if=virtio -nodefconfig -machine accel=kvm:tcg -m 500 -no-reboot -device virtio-serial -serial stdio -device sga -chardev socket,path=/tmp/libguestfsm1fkfw/guestfsd.sock,id=channel0 -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 -kernel /var/tmp/.guestfs-0/kernel.25093 -initrd /var/tmp/.guestfs-0/initrd.25093 -append panic=1 console=ttyS0 udevtimeout=300 no_timer_check acpi=off printk.time=1 cgroup_disable=memory selinux=0 guestfs_verbose=1 TERM=xterm-256color -drive file=/var/tmp/.guestfs-0/root.25093,snapshot=on,if=virtio,cache=unsafe

(In reply to Richard W.M. Jones from comment #20)
> (In reply to klaas.buist from comment #18)
> > Yes, but in my case only when using libguestfs. I did not encounter the
> > problem when starting 'normal' KVM VMs (using openstack).
>
> Presumably the OpenStack VM is not nested, ie. runs on
> baremetal, and you're running libguestfs inside the
> OpenStack VM (hence nested)?

I have openstack running inside a VM (for evaluation). libguestfs is run inside that VM where openstack is installed/runs.

(In reply to klaas.buist from comment #21)
> (In reply to Federico Simoncelli from comment #19)
> > Hi Klaas, maybe you can share here the qemu-kvm command line used by
> > openstack, which might help us to identify what's different there and
> > therefore what's the problem. Thanks.
>
> Here is the command as used by openstack to launch a VM. This VM is running
> fine:
>
> qemu 7945 1 2 11:51 ? 00:01:24 /usr/libexec/qemu-kvm -name
> instance-0000000e -S -M rhel6.4.0 -no-kvm -m 512 -smp

You don't see this error happening in openstack because it's not using kvm, as it uses the -no-kvm flag.

(In reply to Federico Simoncelli from comment #23)
> > qemu 7945 1 2 11:51 ? 00:01:24 /usr/libexec/qemu-kvm -name
> > instance-0000000e -S -M rhel6.4.0 -no-kvm -m 512 -smp
>
> You don't see this error happening in openstack because it's not using kvm,
> as it uses the -no-kvm flag.

Hmm, overlooked that. After changing from qemu to kvm, the VM fails to start with the same error as well.

This is a bug in the Fedora kernel's support for nested virtualization. Changing product for now, but it's probably best moved to the upstream kernel bug tracker.

*********** MASS BUG UPDATE **************

We apologize for the inconvenience. There is a large number of bugs to go through and several of them have gone stale. Due to this, we are doing a mass bug update across all of the Fedora 19 kernel bugs.

Fedora 19 has now been rebased to 3.12.6-200.fc19. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 20, and are still experiencing this issue, please change the version to Fedora 20.

If you experience different issues, please open a new bug report for those.

(In reply to Justin M. Forbes from comment #27)
> *********** MASS BUG UPDATE **************
>
> We apologize for the inconvenience. There is a large number of bugs to go
> through and several of them have gone stale. Due to this, we are doing a
> mass bug update across all of the Fedora 19 kernel bugs.
>
> Fedora 19 has now been rebased to 3.12.6-200.fc19. Please test this kernel
> update (or newer) and let us know if your issue has been resolved or if it
> is still present with the newer kernel.
>
> If you have moved on to Fedora 20, and are still experiencing this issue,
> please change the version to Fedora 20.
>
> If you experience different issues, please open a new bug report for those.

I am still seeing the issue with the latest Fedora 19 kernel 3.12.6-200.fc19.x86_64.

*********** MASS BUG UPDATE **************

We apologize for the inconvenience. There is a large number of bugs to go through and several of them have gone stale. Due to this, we are doing a mass bug update across all of the Fedora 20 kernel bugs.

Fedora 20 has now been rebased to 3.14.4-200.fc20. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you experience different issues, please open a new bug report for those.

This bug is being closed with INSUFFICIENT_DATA as there has not been a response in 2 weeks. If you are still experiencing this issue, please reopen and attach the relevant data from the latest kernel you are running and any data that might have been requested previously.