Bug 979931 - When the guest does S3/S4, a guest with virtio-scsi hits a call trace and reboots.
Summary: When the guest does S3/S4, a guest with virtio-scsi hits a call trace and reboots.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Fam Zheng
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: Virt-S3/S4-7.0
 
Reported: 2013-07-01 07:45 UTC by Qian Guo
Modified: 2014-10-29 07:07 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-29 07:07:46 UTC
Target Upstream Version:
Embargoed:


Attachments
vmcore-dmesg file (36.53 KB, text/plain)
2013-07-01 07:45 UTC, Qian Guo
split the vmcore to 2 files, this is the first one: vmcore_part_aa (15.00 MB, application/octet-stream)
2013-07-01 09:00 UTC, Qian Guo
split the vmcore to 2 files, this is the 2nd one: vmcore_part_ab (6.55 MB, application/octet-stream)
2013-07-01 09:03 UTC, Qian Guo

Description Qian Guo 2013-07-01 07:45:50 UTC
Created attachment 767271 [details]
vmcore-dmesg file

Description of problem:
If the guest boots with a virtio-scsi disk and then attempts S3, it reboots, and after it boots again a call trace is found in the crash dump.

Version-Release number of selected component (if applicable):
host and guest kernel version:
# uname -r
3.10.0-0.rc6.62.el7.x86_64
qemu-kvm version:
# rpm -q qemu-kvm
qemu-kvm-1.5.0-2.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Boot the guest with a virtio-scsi disk:
#/usr/libexec/qemu-kvm -cpu Penryn -enable-kvm -m 2048 -smp 4,sockets=1,cores=4,threads=1 -name rhel6u3c2 -drive file=/home/rhel7/rhel7.qcow2,if=none,id=drive-scsi0-disk0,format=qcow2,werror=stop,rerror=stop -device virtio-scsi-pci,id=scsi0,addr=0x4 -device scsi-hd,scsi-id=0,lun=0,bus=scsi0.0,drive=drive-scsi0-disk0,id=virtio-disk0 -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,mac=54:52:1b:35:3c:18,id=test -device virtio-balloon-pci,id=balloon0 -vnc :10 -vga std -boot menu=on -monitor stdio -serial unix:/tmp/qiguo1,server,nowait

2. After the guest boots up, trigger S3 inside the guest:
# echo mem > /sys/power/state
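For reference, S3 and S4 are both triggered through the same sysfs file, differing only in the token written. A minimal sketch follows; the actual write is commented out so the snippet is safe to run outside the guest (on the real guest it must be run as root):

```shell
#!/bin/sh
# S3 (suspend to RAM) is requested with "mem"; S4 (hibernate) with "disk".
for state in mem disk; do
    # On the real guest, as root, this line performs the suspend:
    # echo "$state" > /sys/power/state
    echo "would write: $state"
done
```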

Actual results:
The guest reboots directly; after it comes back up, /var/crash contains the vmcore file.
# cat vmcore-dmesg.txt
...
[    1.362208] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
[    1.362214] IP: [<ffffffffa0045d20>] __virtscsi_set_affinity+0x60/0x140 [virtio_scsi]
[    1.362217] PGD 36d32067 PUD 36d0a067 PMD 0 
[    1.362219] Oops: 0000 [#1] SMP 
[    1.362242] Modules linked in: nf_conntrack_netbios_ns nf_conntrack_broadcast ipt_MASQUERADE ip6table_nat nf_nat_ipv6 ip6table_mangle ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 iptable_nat nf_nat_ipv4 nf_nat iptable_mangle ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ip_tables sg ppdev microcode pcspkr i2c_piix4 virtio_balloon parport_pc parport i2c_core xfs libcrc32c sr_mod cdrom sd_mod ata_generic crc_t10dif pata_acpi virtio_scsi virtio_net ata_piix libata virtio_pci virtio_ring virtio floppy dm_mirror dm_region_hash dm_log dm_mod
[    1.362245] CPU: 1 PID: 6 Comm: kworker/u8:0 Not tainted 3.10.0-0.rc6.62.el7.x86_64 #1
[    1.362246] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[    1.362251] Workqueue: events_unbound async_run_entry_fn
[    1.362252] task: ffff88007c00b7a0 ti: ffff88007c034000 task.ti: ffff88007c034000
[    1.362256] RIP: 0010:[<ffffffffa0045d20>]  [<ffffffffa0045d20>] __virtscsi_set_affinity+0x60/0x140 [virtio_scsi]
[    1.362257] RSP: 0018:ffff88007c035cc0  EFLAGS: 00010202
[    1.362259] RAX: 0000000000000210 RBX: 0000000000000001 RCX: 0000000000001de4
[    1.362259] RDX: 0000000000000000 RSI: 0000000000000246 RDI: 0000000000000000
[    1.362260] RBP: ffff88007c035ce0 R08: ffff8800371cd700 R09: ffff88007cc00000
[    1.362261] R10: 0000000000000036 R11: 0000000000000000 R12: ffff88007c2456c8
[    1.362262] R13: ffff88007c243098 R14: 0000000000000000 R15: 0000000000000100
[    1.362264] FS:  0000000000000000(0000) GS:ffff88007fc80000(0000) knlGS:0000000000000000
[    1.362265] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[    1.362266] CR2: 0000000000000020 CR3: 0000000037368000 CR4: 00000000000006e0
[    1.362273] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    1.362277] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    1.362278] Stack:
[    1.362281]  ffff88003729ec00 ffff88007c2456c8 ffff88007c243098 0000000000000000
[    1.362283]  ffff88007c035d00 ffffffffa0045e2c ffff88003729ec00 ffffffffa0048000
[    1.362285]  ffff88007c035d10 ffffffffa0045e5e ffff88007c035d38 ffffffffa005618d
[    1.362286] Call Trace:
[    1.362292]  [<ffffffffa0045e2c>] virtscsi_remove_vqs+0x2c/0x50 [virtio_scsi]
[    1.362296]  [<ffffffffa0045e5e>] virtscsi_freeze+0xe/0x20 [virtio_scsi]
[    1.362299]  [<ffffffffa005618d>] virtio_pci_freeze+0x4d/0x80 [virtio_pci]
[    1.362303]  [<ffffffff813172fc>] pci_pm_suspend+0x6c/0x150
[    1.362305]  [<ffffffff81317290>] ? pci_pm_freeze+0xc0/0xc0
[    1.362308]  [<ffffffff813e23fe>] dpm_run_callback+0x2e/0x60
[    1.362311]  [<ffffffff813e32a7>] __device_suspend+0xe7/0x280
[    1.362313]  [<ffffffff813e345f>] async_suspend+0x1f/0xa0
[    1.362315]  [<ffffffff8108adf9>] async_run_entry_fn+0x39/0x120
[    1.362319]  [<ffffffff8107d096>] process_one_work+0x176/0x420
[    1.362323]  [<ffffffff8107dcbb>] worker_thread+0x11b/0x3a0
[    1.362326]  [<ffffffff8107dba0>] ? rescuer_thread+0x350/0x350
[    1.362329]  [<ffffffff81084260>] kthread+0xc0/0xd0
[    1.362336]  [<ffffffff810841a0>] ? insert_kthread_work+0x40/0x40
[    1.362340]  [<ffffffff8160c92c>] ret_from_fork+0x7c/0xb0
[    1.362342]  [<ffffffff810841a0>] ? insert_kthread_work+0x40/0x40
[    1.362362] Code: e1 39 c3 74 7e 45 84 f6 75 61 31 db 41 83 bc 24 c8 01 00 00 02 74 41 0f 1f 40 00 48 63 c3 48 83 c0 20 48 c1 e0 04 49 8b 7c 04 10 <48> 8b 47 20 48 8b 80 a0 02 00 00 48 8b 40 50 48 85 c0 74 07 be 
[    1.362366] RIP  [<ffffffffa0045d20>] __virtscsi_set_affinity+0x60/0x140 [virtio_scsi]
[    1.362367]  RSP <ffff88007c035cc0>
[    1.362367] CR2: 0000000000000020

...
I will update these files.
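The decisive frames above are the BUG line and the faulting IP in __virtscsi_set_affinity. A quick shell sketch for pulling that signature out of a saved vmcore-dmesg.txt (here the log text is inlined from the excerpt above rather than read from /var/crash):

```shell
#!/bin/sh
# Stand-in for the contents of /var/crash/.../vmcore-dmesg.txt:
log='[    1.362208] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
[    1.362214] IP: [<ffffffffa0045d20>] __virtscsi_set_affinity+0x60/0x140 [virtio_scsi]'
# Keep only the faulting-function signature:
sig=$(printf '%s\n' "$log" | grep -o '__virtscsi_set_affinity+0x60/0x140')
echo "$sig"
```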

Expected results:


Additional info:
Tested with ide and virtio-blk disks; this issue is not hit.

Comment 2 Qian Guo 2013-07-01 08:51:33 UTC
I will update the vmcore file later

Comment 3 Qian Guo 2013-07-01 09:00:27 UTC
Created attachment 767295 [details]
split the vmcore to 2 files, this is the first one: vmcore_part_aa

Comment 4 Qian Guo 2013-07-01 09:03:36 UTC
Created attachment 767296 [details]
split the vmcore to 2 files, this is the 2nd one: vmcore_part_ab

Comment 5 Qian Guo 2013-07-01 09:11:39 UTC
Tested with S4: same issue, so I changed the title accordingly.

Comment 8 Fam Zheng 2014-10-29 07:07:46 UTC
Qian and I can no longer reproduce this with recent qemu-kvm and kernel builds. Closing this BZ.

