Bug 1393322 - Guest fails boot up with ivshmem-plain and virtio-pci device
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assigned To: Paolo Bonzini
QA Contact: Pei Zhang
Keywords: TestOnly
Duplicates: 1441512 (view as bug list)
Depends On: 1373154
Blocks:
 
Reported: 2016-11-09 05:31 EST by Marcel Apfelbaum
Modified: 2017-08-01 23:35 EDT
CC List: 23 users

See Also:
Fixed In Version: qemu-kvm-rhev-2.9.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1373154
Environment:
Last Closed: 2017-08-01 19:39:45 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
Tracker ID: Red Hat Product Errata RHSA-2017:2392
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: qemu-kvm-rhev security, bug fix, and enhancement update
Last Updated: 2017-08-01 16:04:36 EDT

Comment 1 Marcel Apfelbaum 2016-11-09 05:34:49 EST
Downstream commit 01549028733315a513b1b5fcc1951fd271e8a531 was needed only for 7.3. Seabios 1.10 should not have this issue.

To QE: Please check this brew build:
    https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=12064444
and give us an OK on this BZ.

Thanks!
Marcel
Comment 3 Marcel Apfelbaum 2016-11-09 05:40:01 EST
Hi,

Please see comment #1.

Thanks,
Marcel
Comment 5 Pei Zhang 2016-11-09 22:26:40 EST
Hi Marcel and Junyi,

I tested with seabios-1.10.0-0.el7.test.x86_64; the guest can boot up and works well.

Details:
Versions:
3.10.0-520.el7.x86_64
qemu-kvm-rhev-2.6.0-28.el7.x86_64
seabios-bin-1.10.0-0.el7.test.noarch
seabios-1.10.0-0.el7.test.x86_64

Steps:
1. Boot guest with ivshmem-plain and virtio-pci, guest works well.
# /usr/libexec/qemu-kvm -name rhel7.3 \
-cpu IvyBridge,check -m 4G \
-smp 4,sockets=2,cores=2,threads=1 \
-netdev tap,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:54:00:5c:77:61,rx_queue_size=256 \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16 \
-spice port=5902,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on \
-monitor stdio \
-serial unix:/tmp/monitor,server,nowait \
-qmp tcp:0:5551,server,nowait \
-drive file=/home/pezhang/rhel7.3.qcow2,format=qcow2,if=none,id=drive-virtio-blk0,werror=stop,rerror=stop \
-device virtio-blk-pci,drive=drive-virtio-blk0,id=virtio-blk0 \
-usbdevice tablet \
-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
-device ivshmem-plain,memdev=mem
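
Not part of the original steps, but a quick sanity check on the guest side (a hedged sketch, assuming the guest has pciutils installed; the exact device descriptions may vary):

# lspci | grep -i "shared memory"    # the ivshmem-plain device
# lspci | grep -i virtio             # the virtio-net/virtio-blk devices from the command line above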


Best Regards,
Pei
Comment 8 Pei Zhang 2017-01-11 05:17:16 EST
When testing with the fixed seabios version, the guest fails to boot with the latest qemu version (which is the first rebase), but boots successfully with older versions.

Testing with versions:
3.10.0-539.el7.x86_64
seabios-1.10.1-1.el7.x86_64

qemu-kvm-rhev-2.8.0-1.el7.x86_64          Fail
qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64     Work
qemu-kvm-rhev-2.6.0-28.el7.x86_64         Work


It seems this bug is fixed in seabios; however, the qemu rebase caused a regression.
Comment 9 Pei Zhang 2017-01-11 05:21:15 EST
Hi Marcel,

Could you please check Comment 8? Should QE file a regression bug against the qemu-kvm-rhev component? Thanks.


Best Regards,
Pei
Comment 15 Marcel Apfelbaum 2017-01-31 11:30:34 EST
Hi Gerd,

Can you have a look?
It was supposed to work with SeaBIOS 1.10.
Do you have any ideas that may help?

Thanks,
Marcel
Comment 16 Gerd Hoffmann 2017-02-01 03:09:03 EST
(In reply to Marcel Apfelbaum from comment #15)
> Hi Gerd,
> 
> Can you have a look?
> It was supposed to work with SeaBIOS 1.10 .
> Do you have any idea that may help?
> 
> Thanks,
> Marcel

2.7.0 works.  2.8.0 fails, but in kvm mode only; tcg is fine.
Doesn't look like a seabios issue; going to bisect qemu ...
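
For reference, a minimal sketch of that bisect workflow (assuming an upstream qemu checkout and the reproducer command line from Comment 5, rebuilt and rerun at each step):

$ git bisect start
$ git bisect bad v2.8.0     # fails to boot in kvm mode
$ git bisect good v2.7.0    # known good
$ # build, boot the Comment 5 command line, then mark the result:
$ git bisect good           # or: git bisect bad
$ git bisect reset          # when finished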
Comment 17 Gerd Hoffmann 2017-02-01 03:27:35 EST
bisect landed at:

commit ad07cd69ecaffbaa015459a46975ab32e50df805
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Fri Oct 21 22:48:10 2016 +0200

    virtio-scsi: always use dataplane path if ioeventfd is active
    
    Override start_ioeventfd and stop_ioeventfd to start/stop the
    whole dataplane logic.  This has some positive side effects:
    
    - no need anymore for virtio_add_queue_aio (i.e. a revert of
      commit 1c627137c10ee2dcf59e0383ade8a9abfa2d4355)
    
    - no need anymore to switch from generic ioeventfd handlers to
      dataplane
    
    It detects some errors better:
    
        $ qemu-system-x86_64 -object iothread,id=io \
              -device virtio-scsi-pci,ioeventfd=off,iothread=io
        qemu-system-x86_64: -device virtio-scsi-pci,ioeventfd=off,iothread=io:
        ioeventfd is required for iothread
    
    while previously it would have started just fine.
    
    Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Comment 21 Marcel Apfelbaum 2017-02-13 06:23:13 EST
(In reply to Gerd Hoffmann from comment #17)
> bisect landed at:
> 
> commit ad07cd69ecaffbaa015459a46975ab32e50df805
> Author: Paolo Bonzini <pbonzini@redhat.com>
> Date:   Fri Oct 21 22:48:10 2016 +0200
> 
>     virtio-scsi: always use dataplane path if ioeventfd is active
>     
>     Override start_ioeventfd and stop_ioeventfd to start/stop the
>     whole dataplane logic.  This has some positive side effects:
>     
>     - no need anymore for virtio_add_queue_aio (i.e. a revert of
>       commit 1c627137c10ee2dcf59e0383ade8a9abfa2d4355)
>     
>     - no need anymore to switch from generic ioeventfd handlers to
>       dataplane
>     
>     It detects some errors better:
>     
>         $ qemu-system-x86_64 -object iothread,id=io \
>               -device virtio-scsi-pci,ioeventfd=off,iothread=io
>         qemu-system-x86_64: -device
> virtio-scsi-pci,ioeventfd=off,iothread=io:
>         ioeventfd is required for iothread
>     
>     while previously it would have started just fine.
>     
>     Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
>     Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>     Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Hi Paolo,
Can you please have a look at what's going on here?
It seems like a whole new issue, even if the symptoms are the same.

Thanks,
Marcel
Comment 22 Paolo Bonzini 2017-02-28 08:17:24 EST
The commit broke the indirect access registers that are new in virtio 1.0. Adding ivshmem-plain pushes the virtio-blk device's BAR above the 4G limit and causes seabios to use indirect access.
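
Not part of the analysis above, but one hedged way to observe that placement is from the HMP monitor opened by the "-monitor stdio" option in the Comment 5 command line:

(qemu) info pci

In the output, the virtio-blk device's 64-bit memory BAR should appear at or above 0x100000000 (the 4G boundary) when ivshmem-plain is present, per the explanation above.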
Comment 23 Sitong Liu 2017-05-01 21:50:20 EDT
*** Bug 1441512 has been marked as a duplicate of this bug. ***
Comment 24 Paolo Bonzini 2017-06-05 12:31:05 EDT
Fixed by commit e49a6618400d11e51e30328dfe8d7cafce82d4bc.
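
A hedged way to check whether a given build carries the fix:

$ git describe --contains e49a6618400d11e51e30328dfe8d7cafce82d4bc   # upstream tree: first tag containing the fix
$ rpm -q qemu-kvm-rhev    # downstream: compare against the "Fixed In Version" field above (qemu-kvm-rhev-2.9.0-1.el7)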
Comment 25 Pei Zhang 2017-06-05 20:47:29 EDT
==Verification==

Versions:
3.10.0-675.el7.x86_64
qemu-kvm-rhev-2.9.0-7.el7.x86_64
seabios-1.10.2-3.el7.x86_64
seabios-bin-1.10.2-3.el7.noarch

Steps:
1. Boot guest with ivshmem-plain and virtio-pci
Same as Comment 5.

2. Reboot/shutdown the guest several times; the guest keeps working well (a monitor-driven sketch follows below).
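
A hedged sketch of step 2, driven from the HMP monitor of the Comment 5 command line:

(qemu) system_reset        # warm reboot of the guest
(qemu) system_powerdown    # ACPI shutdown; restart the guest afterwards with the same command line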


So this bug has been fixed. Thanks.

Moving the status of this bug to 'VERIFIED'.
Comment 29 errata-xmlrpc 2017-08-01 19:39:45 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392