Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1380703

Summary: qemu-kvm segfault in kvm packstack aio + ovs-dpdk
Product: Red Hat Enterprise Linux 7
Reporter: jwang
Component: qemu-kvm-rhev
Assignee: jason wang <jasowang>
Status: CLOSED CURRENTRELEASE
QA Contact: Shai Revivo <srevivo>
Severity: low
Docs Contact:
Priority: unspecified
Version: 7.2
CC: ailan, juzhang, jwang, knoel, michen, pezhang, srevivo, virt-bugs, virt-maint, xfu
Target Milestone: pre-dev-freeze
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-04-17 06:45:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description jwang 2016-09-30 11:37:15 UTC
Description of problem:
qemu-kvm segfaults when testing ovs-dpdk in a KVM virtual machine running a packstack all-in-one (aio) deployment.
 
Version-Release number of selected component (if applicable):
Host 
kernel-3.10.0-327.36.1.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64
libvirt-1.2.17-13.el7_2.5.x86_64

How reproducible:

Guest (packstack aio) packages:
kernel-3.10.0-327.36.1.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
openstack-nova-novncproxy-12.0.4-8.el7ost.noarch
openstack-glance-11.0.1-4.el7ost.noarch
openstack-cinder-7.0.2-3.el7ost.noarch
openstack-neutron-openvswitch-7.1.1-5.el7ost.noarch
openstack-nova-conductor-12.0.4-8.el7ost.noarch
openstack-neutron-7.1.1-5.el7ost.noarch
openstack-keystone-8.0.1-1.el7ost.noarch
openstack-nova-scheduler-12.0.4-8.el7ost.noarch
openstack-neutron-common-7.1.1-5.el7ost.noarch
openstack-nova-compute-12.0.4-8.el7ost.noarch
openstack-nova-cert-12.0.4-8.el7ost.noarch
openstack-nova-console-12.0.4-8.el7ost.noarch
openstack-neutron-ml2-7.1.1-5.el7ost.noarch
openstack-nova-api-12.0.4-8.el7ost.noarch
openstack-nova-common-12.0.4-8.el7ost.noarch


Steps to Reproduce:
1. Install packstack aio in a RHEL 7.2 guest
2. Configure ovs-dpdk per the documentation
3. Create an instance

Actual results:
instance state becomes SHUTOFF
qemu-kvm segfaults

Expected results:
instance state becomes ACTIVE

Additional info:

Comment 1 jwang 2016-09-30 11:37:59 UTC
id edddd91056dfe6b30e08e5a58639f7d47e073f7a
reason:         qemu-kvm killed by SIGSEGV
time:           Fri Sep 30 04:32:06 2016
cmdline:        /usr/libexec/qemu-kvm -name instance-00000004 -S -machine pc-i440fx-rhel7.2.0,accel=tcg,usb=off -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid aac388d7-02d6-4efe-b7b9-c8b84bd23a1c -smbios 'type=1,manufacturer=Red Hat,product=OpenStack Compute,version=12.0.4-8.el7ost,serial=dc4735e5-5572-473a-b6fb-079f15b39f5b,uuid=aac388d7-02d6-4efe-b7b9-c8b84bd23a1c,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-00000004/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/aac388d7-02d6-4efe-b7b9-c8b84bd23a1c/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev socket,id=charnet0,path=/var/run/openvswitch/vhucf1aa1c6-bb -netdev type=vhost-user,id=hostnet0,chardev=charnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:84:2f:76,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/aac388d7-02d6-4efe-b7b9-c8b84bd23a1c/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 0.0.0.0:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
package:        qemu-kvm-rhev-2.3.0-31.el7_2.21
uid:            107 (qemu)
count:          1
Directory:      /var/spool/abrt/ccpp-2016-09-30-04:32:06-4795
Run 'abrt-cli report /var/spool/abrt/ccpp-2016-09-30-04:32:06-4795' for creating a case in Red Hat Customer Portal

Comment 2 jwang 2016-09-30 11:38:34 UTC
(gdb) bt
#0  0x00007fb6c573c0b8 in kvm_virtio_pci_irqfd_use (proxy=proxy@entry=0x7fb6d17e2000, queue_no=queue_no@entry=0, vector=vector@entry=1) at hw/virtio/virtio-pci.c:498
#1  0x00007fb6c573d2de in virtio_pci_vq_vector_unmask (msg=..., vector=1, queue_no=0, proxy=0x7fb6d17e2000) at hw/virtio/virtio-pci.c:624
#2  virtio_pci_vector_unmask (dev=0x7fb6d17e2000, vector=1, msg=...) at hw/virtio/virtio-pci.c:660
#3  0x00007fb6c570d7ca in msix_set_notifier_for_vector (vector=1, dev=0x7fb6d17e2000) at hw/pci/msix.c:513
#4  msix_set_vector_notifiers (dev=dev@entry=0x7fb6d17e2000, use_notifier=use_notifier@entry=0x7fb6c573d130 <virtio_pci_vector_unmask>, 
    release_notifier=release_notifier@entry=0x7fb6c573d080 <virtio_pci_vector_mask>, poll_notifier=poll_notifier@entry=0x7fb6c573bf40 <virtio_pci_vector_poll>) at hw/pci/msix.c:540
#5  0x00007fb6c573d82d in virtio_pci_set_guest_notifiers (d=0x7fb6d17e2000, nvqs=2, assign=<optimized out>) at hw/virtio/virtio-pci.c:821
#6  0x00007fb6c55ed1c0 in vhost_net_start (dev=dev@entry=0x7fb6d17e9f40, ncs=0x7fb6c8601da0, total_queues=total_queues@entry=1) at /usr/src/debug/qemu-2.3.0/hw/net/vhost_net.c:353
#7  0x00007fb6c55e91e4 in virtio_net_vhost_status (status=<optimized out>, n=0x7fb6d17e9f40) at /usr/src/debug/qemu-2.3.0/hw/net/virtio-net.c:143
#8  virtio_net_set_status (vdev=<optimized out>, status=7 '\a') at /usr/src/debug/qemu-2.3.0/hw/net/virtio-net.c:162
#9  0x00007fb6c55f97dc in virtio_set_status (vdev=vdev@entry=0x7fb6d17e9f40, val=val@entry=7 '\a') at /usr/src/debug/qemu-2.3.0/hw/virtio/virtio.c:609
#10 0x00007fb6c573ca4e in virtio_ioport_write (val=7, addr=18, opaque=0x7fb6d17e2000) at hw/virtio/virtio-pci.c:283
#11 virtio_pci_config_write (opaque=0x7fb6d17e2000, addr=18, val=7, size=<optimized out>) at hw/virtio/virtio-pci.c:409
#12 0x00007fb6c55ca3d7 in memory_region_write_accessor (mr=0x7fb6d17e2880, addr=<optimized out>, value=0x7fb6ac21a338, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
    at /usr/src/debug/qemu-2.3.0/memory.c:457
#13 0x00007fb6c55ca0e9 in access_with_adjusted_size (addr=addr@entry=18, value=value@entry=0x7fb6ac21a338, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, 
    access=access@entry=0x7fb6c55ca380 <memory_region_write_accessor>, mr=mr@entry=0x7fb6d17e2880, attrs=attrs@entry=...) at /usr/src/debug/qemu-2.3.0/memory.c:516
#14 0x00007fb6c55cbb51 in memory_region_dispatch_write (mr=mr@entry=0x7fb6d17e2880, addr=18, data=7, size=1, attrs=...) at /usr/src/debug/qemu-2.3.0/memory.c:1161
#15 0x00007fb6c55976e0 in address_space_rw (as=0x7fb6c5c51cc0 <address_space_io>, addr=49266, attrs=..., buf=buf@entry=0x7fb6ac21a40c "\a\177", len=len@entry=1, is_write=is_write@entry=true)
    at /usr/src/debug/qemu-2.3.0/exec.c:2353
#16 0x00007fb6c559794b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., attrs@entry=..., buf=buf@entry=0x7fb6ac21a40c "\a\177", len=len@entry=1)
    at /usr/src/debug/qemu-2.3.0/exec.c:2415
#17 0x00007fb6c55c3a4c in cpu_outb (addr=<optimized out>, val=7 '\a') at /usr/src/debug/qemu-2.3.0/ioport.c:67
#18 0x00007fb6ad7020b0 in ?? ()
#19 0x00007fb6ac21a500 in ?? ()
#20 0x0000000000000000 in ?? ()

Comment 3 jwang 2016-09-30 13:04:10 UTC
Packstack from OSP 9 in a KVM guest gives the same result.

# yum repolist
Loaded plugins: search-disabled-repos
repo id                                                         repo name                                                        status
rhel-7-server-extras-rpms                                       rhel-7-server-extras-rpms                                          64
rhel-7-server-openstack-9-rpms                                  rhel-7-server-openstack-9-rpms                                    537
rhel-7-server-rh-common-rpms                                    rhel-7-server-rh-common-rpms                                      140
rhel-7-server-rpms                                              rhel-7-server-rpms                                               8774
repolist: 9515

[ 2507.108072] qemu-kvm[11078]: segfault at 28 ip 00007f37659930b8 sp 00007f374c471030 error 4 in qemu-kvm[7f3765754000+423000]

[root@jwang-test-02 ~(keystone_admin)]# nova list
+--------------------------------------+-------+---------+------------+-------------+------------------+
| ID                                   | Name  | Status  | Task State | Power State | Networks         |
+--------------------------------------+-------+---------+------------+-------------+------------------+
| 11ff5cd5-8fb1-4e01-8017-1902fbc0f61f | inst1 | SHUTOFF | -          | Shutdown    | flat=172.16.0.61 |
+--------------------------------------+-------+---------+------------+-------------+------------------+

Comment 4 jason wang 2016-12-26 06:06:31 UTC
Can you reproduce this issue with recent qemu-kvm-rhev?

Comment 6 juzhang 2017-02-09 07:42:29 UTC
(In reply to jason wang from comment #4)
> Can you reproduce this issue with recent qemu-kvm-rhev?

Hi Jason,

Seems jwang is not from QE team. KVM will try to reproduce this issue from qemu-kvm-rhev level.

Hi Pezhang,

Could you please reproduce this issue by using latest qemu-kvm-rhev/libvirt?

Best Regards,
Junyi

Comment 7 Pei Zhang 2017-02-10 11:00:05 UTC
(In reply to juzhang from comment #6)
> (In reply to jason wang from comment #4)
> > Can you reproduce this issue with recent qemu-kvm-rhev?
> 
> Hi Jason,
> 
> Seems jwang is not from QE team. KVM will try to reproduce this issue from
> qemu-kvm-rhev level.
> 
> Hi Pezhang,
> 
> Could you please reproduce this issue by using latest qemu-kvm-rhev/libvirt?
> 
> Best Regards,
> Junyi

Tested this bug at the qemu-kvm-rhev/libvirt layer; the issue is not reproduced.


Versions:
- Host
3.10.0-560.el7.x86_64
qemu-kvm-rhev-2.8.0-3.el7.x86_64
libvirt-3.0.0-2.el7.x86_64

- L1 guest:
3.10.0-560.el7.x86_64
qemu-kvm-rhev-2.8.0-3.el7.x86_64
libvirt-3.0.0-2.el7.x86_64
dpdk-16.11-3.el7fdb.x86_64
openvswitch-2.6.1-5.git20161206.el7fdb.x86_64

- L2 guest:
3.10.0-560.el7.x86_64

Steps:
1. On the host, set the nested virtualization module parameters
# modprobe -r kvm_intel
# modprobe kvm_intel nested=1 enable_shadow_vmcs=1 ept=1 enable_apicv=1
# cat /sys/module/kvm_intel/parameters/nested
Y

2. On the host, boot the L1 guest with the host CPU model
  <cpu mode='host-model'>
    <model fallback='forbid'/>
  </cpu>

 
3. In the L1 guest, set up ovs-dpdk
# ovs-vsctl show 
cd8e87fe-bd46-4ba1-a6a3-7db7a6ed46ef
    Bridge "ovsbr0"
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuser
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal


4. In the L1 guest, boot the L2 guest using a vhostuser interface
    <interface type='vhostuser'>
      <mac address='52:54:00:3f:69:ba'/>
      <source type='unix' path='/run/openvswitch/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

5. L2 guest works well.


So this issue is not reproduced.



Best Regards,
Pei

Comment 8 jason wang 2017-04-17 06:45:24 UTC
Closing this per comment #7. Feel free to reopen.

Thanks

Comment 9 Red Hat Bugzilla 2023-09-14 03:31:41 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days