Bug 1359324 - qemu-system-x86 dumped core upon normal shutdown of guest
Summary: qemu-system-x86 dumped core upon normal shutdown of guest
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Fedora
Classification: Fedora
Component: qemu
Version: 24
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Fedora Virtualization Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: 1359325
 
Reported: 2016-07-22 20:26 UTC by Chris Murphy
Modified: 2017-03-14 20:08 UTC
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1359325 (view as bug list)
Environment:
Last Closed: 2017-03-14 20:08:47 UTC
Type: Bug
Embargoed:


Attachments
journal.log (811.08 KB, text/x-vhdl)
2016-07-22 20:26 UTC, Chris Murphy
gdb coredump (5.14 KB, text/plain)
2016-07-23 01:14 UTC, Chris Murphy

Description Chris Murphy 2016-07-22 20:26:45 UTC
Created attachment 1182937 [details]
journal.log

Description of problem:

This is bug 1 of 2. This bug is about the apparent crash of qemu-system-x86. Bug 2 of 2 will cover the side effect of this crash, in which the qcow2 image file becomes extraordinarily large (37 petabytes).



Version-Release number of selected component (if applicable):
qemu-system-x86-2.6.0-5.fc24.x86_64
4.6.4-301.fc24.x86_64

How reproducible:
Unknown, not attempted again yet


Steps to Reproduce:
1.
# qemu-img create -f qcow2 -o nocow=on uefi_opensuseleap42.2a3-1.qcow2 50G
# qemu-img create -f qcow2 -o nocow=on uefi_opensuseleap42.2a3-2.qcow2 50G
2. Both of these back virtio disks, appearing as vda and vdb in the guest.
3. In the guest, I ask YaST to use both vda and vdb, create an EFI
System partition for both drives, and the rest of the free space on
both drives become md members set to RAID level1. Then I start the
installation.
4. At some point well past midway, YaST reports an rpm I/O error. None of this
environment state was saved, except what might appear in the host's
journal. After the error, YaST wouldn't continue, so I chose the power
off option, and it powered off the VM cleanly.

The obvious problem is the difference in qcow2 file sizes, as if
the md RAID setup didn't work correctly. But I'm going to set aside
the second qcow2 not having been written to at all since it was created,
and deal with this 37-petabyte qcow2.

5. When I change to a Fedora 24 Workstation ISO to boot the VM, it
fails to start, complaining about qcow2 corruption.

Actual results:

There are two results:

Jul 22 13:24:30 f24m systemd-coredump[3914]: Process 3829 (qemu-system-x86) of user 107 dumped core.
                                             
                                             Stack trace of thread 3829:
                                             #0  0x00007f9faceec6f5 n/a (n/a)


And also

[root@f24m images]# ll
total 59765472
-rw-r-----. 1 qemu qemu        1541406720 Jul 21 10:54 Fedora-Workstation-Live-x86_64-24-1.2.iso
-rw-r--r--. 1 qemu qemu        1433403392 Jul 20 13:28 Fedora-Workstation-Live-x86_64-Rawhide-20160718.n.0.iso
-rw-r-----. 1 qemu qemu        4647288832 Jul 22 10:43 openSUSE-Leap-42.2-DVD-x86_64-Build0109-Media.iso
-rw-r--r--. 1 root root 40537894204538880 Jul 22 13:23 uefi_opensuseleap42.2a3-1.qcow2
-rw-r--r--. 1 root root            197632 Jul 22 08:46 uefi_opensuseleap42.2a3-2.qcow2

Yes that's a 37 Petabyte file.
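An apparent size that large does not necessarily mean the blocks are allocated on disk: `ls -l` reports apparent size, while `du` reports actual allocation. A minimal sketch of the distinction, using a throwaway sparse file (the filename is made up for illustration):

```shell
# Create a sparse file: 1 GiB apparent size, essentially no allocated blocks
truncate -s 1G sparse-demo.img

ls -l sparse-demo.img   # apparent size: 1073741824 bytes
du -h sparse-demo.img   # allocated space: close to 0

rm sparse-demo.img
```

Running the same comparison on the 37 PB qcow2 would show whether it is merely sparse or genuinely consuming space.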


Expected results:

qemu shouldn't crash, nor should it corrupt its image files (while also making them 37 PB in size).


Additional info:

coredump file is ~185MiB
https://drive.google.com/open?id=0B_2Asp8DGjJ9UHNJSXJBUTBCTzg

[root@f24m images]# coredumpctl -o qemu-system-x86.coredump dump /usr/bin/qemu-system-x86_64
           PID: 3829 (qemu-system-x86)
           UID: 107 (qemu)
           GID: 107 (qemu)
        Signal: 6 (ABRT)
     Timestamp: Fri 2016-07-22 13:24:21 MDT (40min ago)
  Command Line: /usr/bin/qemu-system-x86_64 -machine accel=kvm -name UEFI,debug-threads=on -S -machine pc-i440fx-2.4,accel=kvm,usb=off,vmport=off -cpu SandyBridge -drive file=/usr/share/edk2/ovmf/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/var/lib/libvirt/qemu/nvram/UEFI_VARS.fd,if=pflash,format=raw,unit=1 -m 3072 -realtime mlock=off -smp 3,sockets=3,cores=1,threads=1 -uuid 11831a99-fad2-4e1f-8a31-f521cbf91ff3 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-UEFI/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2 -device ahci,id=sata0,bus=pci.0,addr=0x8 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/lib/libvirt/images/Fedora-Workstation-Live-x86_64-24-1.2.iso,format=raw,if=none,media=cdrom,id=drive-sata0-0-1,readonly=on -device ide-cd,bus=sata0.1,drive=drive-sata0-0-1,id=sata0-0-1,bootindex=1 -drive file=/var/lib/libvirt/images/uefi_opensuseleap42.2a3-1.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=unsafe,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -drive file=/var/lib/libvirt/images/uefi_opensuseleap42.2a3-1.qcow2,format=qcow2,if=none,id=drive-virtio-disk1,cache=unsafe,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xa,drive=drive-virtio-disk1,id=virtio-disk1,bootindex=3 -netdev tap,fd=25,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:fe:40:e3,bus=pci.0,addr=0x3 -chardev 
pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
    Executable: /usr/bin/qemu-system-x86_64
 Control Group: /machine.slice/machine-qemu\x2d2\x2dUEFI.scope
          Unit: machine-qemu\x2d2\x2dUEFI.scope
         Slice: machine.slice
       Boot ID: b91161300395440f96b49cd0b879488d
    Machine ID: 358f3fdc5df34832b44a6816f3b04881
      Hostname: f24m
      Coredump: /var/lib/systemd/coredump/core.qemu-system-x86.107.b91161300395440f96b49cd0b879488d.3829.1469215461000000000000.lz4
       Message: Process 3829 (qemu-system-x86) of user 107 dumped core.
                
                Stack trace of thread 3829:
                #0  0x00007f9faceec6f5 n/a (n/a)
More than one entry matches, ignoring rest.

Comment 1 Richard W.M. Jones 2016-07-22 20:39:44 UTC
So I'm guessing (because you're using nocow=on) that you are
using btrfs on the host?  I would first look for btrfs problems
on the host.  Are there any messages in the system dmesg or
system journal pointing to btrfs / host filesystem problems,
I/O errors, etc.?

Comment 2 Chris Murphy 2016-07-22 20:59:41 UTC
Host /var/lib/libvirt/images is on Btrfs. There are no Btrfs messages since mount time at last startup. There are no libata messages related to the SSD since startup. The file system passes a scrub with no errors; and also an offline btrfs check. If it's a Btrfs bug, it doesn't know about it, and is continuing to let me use the file system unabated.

Comment 3 Chris Murphy 2016-07-23 01:14:41 UTC
Created attachment 1183020 [details]
gdb coredump

Unfortunately the core dump is truncated for some reason, so this gdb attempt is probably useless.
BFD: Warning: /var/tmp/coredump-S2qIGu is truncated: expected core file size >= 4115308544, found: 2147483648.

2147483648 is 0x80000000 or exactly 2GiB. That's suspicious. Misconfiguration somewhere causing the truncation?
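The arithmetic can be checked directly in the shell:

```shell
# 2147483648 bytes in hex, as a power of two, and in GiB
printf '0x%x\n' 2147483648                        # 0x80000000
echo $(( 2147483648 == 1 << 31 ))                 # 1 (true): exactly 2^31
echo $(( 2147483648 / 1024 / 1024 / 1024 ))GiB    # 2GiB
```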

Comment 4 Zbigniew Jędrzejewski-Szmek 2016-07-23 16:28:49 UTC
2GiB is the systemd-coredump default for ProcessSizeMax= and ExternalSizeMax=. Was this coredump captured by systemd-coredump?

Comment 5 Chris Murphy 2016-07-23 19:22:02 UTC
(In reply to Zbigniew Jędrzejewski-Szmek from comment #4)
> 2GiB is the systemd-coredump default for ProcessSizeMax= and
> ExternalSizeMax=.

OK should I change both values to something higher like 4GiB? The VM is allocated 3GiB, but gdb expects a ~3.8GiB core dump file.

> Was this coredump captured by systemd-coredump?

Yes.

Comment 6 Zbigniew Jędrzejewski-Szmek 2016-07-23 19:29:40 UTC
(In reply to Chris Murphy from comment #5)
> OK should I change both values to something higher like 4GiB? The VM is
> allocated 3GiB, but gdb expects a ~3.8GiB core dump file.
Yes. But like I wrote in the e-mail thread, I don't think truncating coredumps like that makes sense.
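The limits can be raised with a drop-in for systemd-coredump; a sketch (the drop-in filename is arbitrary):

```ini
# /etc/systemd/coredump.conf.d/size.conf  (hypothetical drop-in name)
[Coredump]
ProcessSizeMax=4G
ExternalSizeMax=4G
```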

Comment 7 Chris Murphy 2016-07-23 20:17:41 UTC
OK filed bug 1359410 for the coredump file truncation.

I'll change the coredump file limits and try to reproduce this bug, to figure out why qemu crashed.

Comment 8 Chris Murphy 2016-07-23 20:38:55 UTC
Look what I found.

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='unsafe' io='threads'/>
      <source file='/var/lib/libvirt/images/uefi_opensuseleap42.2a3-1.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <boot order='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='unsafe' io='threads'/>
      <source file='/var/lib/libvirt/images/uefi_opensuseleap42.2a3-1.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <boot order='3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>

The same qcow2 file associated with two drives, and those two drives were set up in the VM to be part of an mdadm RAID 1. Well, that explains why the -2 file wasn't being written to. Neither virsh nor virt-manager warns or complains about this; the same file is permitted as backing for two virtual devices. So: a.) user error, b.) no warnings, c.) qemu blows up well after d.) totally corrupting the target qcow2, which e.) results in the qcow2 becoming astronomically large.

Comment 9 Cole Robinson 2016-07-26 21:11:24 UTC
(In reply to Chris Murphy from comment #8)
> Look what I found.
> 
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='qcow2' cache='unsafe' io='threads'/>
>       <source
> file='/var/lib/libvirt/images/uefi_opensuseleap42.2a3-1.qcow2'/>
>       <target dev='vda' bus='virtio'/>
>       <boot order='2'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x09'
> function='0x0'/>
>     </disk>
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='qcow2' cache='unsafe' io='threads'/>
>       <source
> file='/var/lib/libvirt/images/uefi_opensuseleap42.2a3-1.qcow2'/>
>       <target dev='vdb' bus='virtio'/>
>       <boot order='3'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x0a'
> function='0x0'/>
>     </disk>
> 
> The same qcow2 file associated with two drives, and those two drives were
> setup in the VM to be part of an mdadm RAID 1. Well, that explains why the
> -2 file wasn't being written to. Neither virsh nor virt-manager warn or
> complain about this.

virt-manager would have warned if you used the UI to attach the disk images to the VM. But it won't warn at start time, and neither will virsh/libvirt with the default config, as you say. You can enable virtlockd and it will catch issues like this... there have been occasional discussions about enabling it by default, but it hasn't happened yet. That's the proper place to handle this type of validation.
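Enabling virtlockd for the QEMU driver is a one-line config change plus a daemon restart; a sketch based on libvirt's lockd plugin (exact steps may vary by release):

```
# /etc/libvirt/qemu.conf
lock_manager = "lockd"
```

After restarting libvirtd (virtlockd itself is socket-activated), starting a guest whose disk image is already locked by another drive should fail with a lock error instead of silently corrupting the file.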

> It appears to permit the same file being used as
> backing for two virtual devices. So, a.) user error, b.) no warnings, c.)
> qemu blows up well after d.) totally corrupting the target qcow2, e.)
> results in qcow2 becoming astronomically large.

The interesting bit here is the qemu crash... we don't want qemu to crash even if the disk image is outrageously sized. So please update if you get a complete backtrace.

Comment 10 Cole Robinson 2017-02-16 19:53:57 UTC
Chris, have you seen this since, or captured a backtrace?

