Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there. Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED". If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user-management inquiry; the e-mail creates a ServiceNow ticket with Red Hat. Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 1033540

Summary: Guest call trace after hotplug 60 vcpus then reboot guest
Product: Red Hat Enterprise Linux 7
Reporter: langfang <flang>
Component: qemu-kvm
Assignee: Igor Mammedov <imammedo>
Status: CLOSED DUPLICATE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: medium
Version: 7.0
CC: acathrow, chayang, drjones, flang, hhuang, imammedo, juzhang, qguo, qzhang, virt-maint, xfu
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-03-10 12:41:25 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  hotplug vcpu script (flags: none)
  Guest log (flags: none)

Description langfang 2013-11-22 10:48:43 UTC
Description of problem:

Guest call trace after hotplug 60 vcpus then reboot guest

Version-Release number of selected component (if applicable):
Host:
# uname -r
3.10.0-55.el7.x86_64
# rpm -q qemu-kvm
qemu-kvm-1.5.3-19.el7.x86_64

Guest:RHEL7


How reproducible:

80%

Steps to Reproduce:
1. Boot the guest with:
   /usr/libexec/qemu-kvm -M q35 -enable-kvm -m 2G -smp 1,cores=1,threads=1,sockets=1,maxcpus=160 ...

2. Stop the guest:
   (qemu) stop

3. Hotplug 60 vcpus (script: see attachment):
   # python hotplugnic.py -f 4455 -n 60

4. Continue the guest:
   (qemu) c

5. Reboot from inside the guest:
   # reboot
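The attached hotplug script is not reproduced here. As a rough illustration only (not the attachment's actual contents), a vCPU hotplug driver for this reproducer might look like the sketch below, assuming QMP is listening on TCP port 4455 (as in the command line in "Additional info") and using the `cpu-add` QMP command that qemu-kvm 1.5.3 provides; the function names and host/port values are illustrative:

```python
import json
import socket

def build_cpu_add_commands(first_id, count):
    """Build the QMP 'cpu-add' commands needed to hotplug `count` vCPUs,
    starting at CPU id `first_id` (id 0 is the boot CPU)."""
    return [{"execute": "cpu-add", "arguments": {"id": cpu_id}}
            for cpu_id in range(first_id, first_id + count)]

def hotplug_vcpus(host, port, count):
    """Connect to the QMP TCP socket, negotiate capabilities, and
    issue one 'cpu-add' command per vCPU to hotplug."""
    sock = socket.create_connection((host, port))
    f = sock.makefile("rw")
    json.loads(f.readline())                      # consume the QMP greeting
    f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    f.flush()
    json.loads(f.readline())                      # capabilities ack
    for cmd in build_cpu_add_commands(1, count):  # boot CPU already has id 0
        f.write(json.dumps(cmd) + "\n")
        f.flush()
        print(json.loads(f.readline()))           # per-command response
    sock.close()

if __name__ == "__main__":
    # Matches the reproducer: QMP on port 4455, hotplug 60 vCPUs.
    hotplug_vcpus("127.0.0.1", 4455, 60)
```

With `maxcpus=160` and one boot CPU, hotplugging 60 more vCPUs leaves the guest at 61 online CPUs, well under the limit, so the lockup below is not a simple maxcpus overflow.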

Actual results:
After the guest reboots, the guest hits a call trace:
..
[  260.431068]  [<ffffffff81049ec0>] ? flush_tlb_func+0xb0/0xb0
[  260.431068]  [<ffffffff81049ec0>] ? flush_tlb_func+0xb0/0xb0
[  260.431068]  [<ffffffff810bf82d>] on_each_cpu+0x2d/0x60
[  260.431068]  [<ffffffff8104a35a>] flush_tlb_kernel_range+0x4a/0x70
[  260.431068]  [<ffffffff81169adc>] __purge_vmap_area_lazy+0x16c/0x1d0
[  260.431068]  [<ffffffff81169cb5>] vm_unmap_aliases+0x175/0x190
[  260.431068]  [<ffffffffa0b49000>] ? 0xffffffffa0b48fff
[  260.431068]  [<ffffffff81046657>] change_page_attr_set_clr+0xb7/0x470
[  260.431068]  [<ffffffffa0b49000>] ? 0xffffffffa0b48fff
[  260.431068]  [<ffffffff81046edf>] set_memory_ro+0x2f/0x40
[  260.431068]  [<ffffffffa0b49000>] ? 0xffffffffa0b48fff
[  260.431068]  [<ffffffff815f28f8>] set_section_ro_nx+0x39/0x71
[  260.431068]  [<ffffffff810c5786>] load_module+0xf36/0x1400
[  260.431068]  [<ffffffff8130de10>] ? ddebug_proc_write+0xf0/0xf0
[  260.431068]  [<ffffffff810c1eb4>] ? copy_module_from_fd.isra.42+0x44/0x140
[  260.431068]  [<ffffffff810c5dc6>] SyS_finit_module+0x86/0xb0
[  260.431068]  [<ffffffff81606399>] system_call_fastpath+0x16/0x1b
[  260.431068] Code: dc 93 00 89 c2 39 f0 0f 8d 2d fe ff ff 48 98 49 8b 4d 00 48 03 0c c5 00 f4 9e 81 f6 41 20 01 74 cc 0f 1f 40 00 f3 90 f6 41 20 01 <75> f8 48 63 35 b1 dc 93 00 eb b7 0f b6 4d b4 48 8b 75 c0 4c 89 
[  280.363041] BUG: soft lockup - CPU#26 stuck for 23s! [systemd-udevd:816]
[  280.364037] Modules linked in: serio_raw(+) mfd_core i2c_i801 microcode(+) virtio_balloon xfs libcrc32c cirrus syscopyarea sysfillrect ahci sysimgblt libahci drm_kms_helper virtio_net virtio_blk ttm virtio_pci drm virtio_ring libata i2c_core virtio dm_mirror dm_region_hash dm_log dm_mod
[  280.364037] CPU: 26 PID: 816 Comm: systemd-udevd Not tainted 3.10.0-11.el7.x86_64 #1
[  280.364037] Hardware name: Red Hat KVM, BIOS Bochs 01/01/2011
[  280.364037] task: ffff880075bf6420 ti: ffff8800762ba000 task.ti: ffff8800762ba000
[  280.364037] RIP: 0010:[<ffffffff810bf76e>]  [<ffffffff810bf76e>] smp_call_function_many+0x25e/0x2c0
[  280.364037] RSP: 0018:ffff8800762bbc98  EFLAGS: 00000202
[  280.364037] RAX: 0000000000000001 RBX: 0000000000000286 RCX: ffff88007c238628
[  280.364037] RDX: 0000000000000001 RSI: 00000000000000a0 RDI: 0000000000000000
[  280.364037] RBP: ffff8800762bbce8 R08: ffff88007a99e400 R09: ffff88007c5573e0
[  280.364037] R10: ffffea0001eaee00 R11: ffffffff812ee3a9 R12: ffff8800762bbc18
[  280.364037] R13: 0000000000000286 R14: 0000000000000010 R15: ffffffff81036dae
[  280.364037] FS:  00007fc1594f3880(0000) GS:ffff88007c540000(0000) knlGS:0000000000000000
[  280.364037] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[  280.364037] CR2: 00007fc159519000 CR3: 0000000074e0e000 CR4: 00000000000006e0
[  280.364037] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  280.364037] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  280.364037] Stack:
[  280.364037]  000000018104656f 00000000000150c0 ffffffff81045b30 ffff88007c9750c0
[  280.364037]  0000000000000202 ffff8800762bbdb0 ffffffff81045b30 0000000000000000
[  280.364037]  0000000000000001 0000000000000000 ffff8800762bbd10 ffffffff810bf82d
[  280.364037] Call Trace:
[  280.364037]  [<ffffffff81045b30>] ? __cpa_process_fault+0xa0/0xa0
[  280.364037]  [<ffffffff81045b30>] ? __cpa_process_fault+0xa0/0xa0
[  280.364037]  [<ffffffff810bf82d>] on_each_cpu+0x2d/0x60
[  280.364037]  [<ffffffffa0104000>] ? 0xffffffffa0103fff
[  280.364037]  [<ffffffff81046916>] change_page_attr_set_clr+0x376/0x470
[  280.364037]  [<ffffffff81046e9f>] set_memory_rw+0x2f/0x40
[  280.364037]  [<ffffffff810c0f49>] unset_module_init_ro_nx+0x59/0x80
[  280.364037]  [<ffffffff810c5840>] load_module+0xff0/0x1400
[  280.364037]  [<ffffffff8130de10>] ? ddebug_proc_write+0xf0/0xf0
[  280.364037]  [<ffffffff810c1eb4>] ? copy_module_from_fd.isra.42+0x44/0x140
[  280.364037]  [<ffffffff810c5dc6>] SyS_finit_module+0x86/0xb0
[  280.364037]  [<ffffffff81606399>] system_call_fastpath+0x16/0x1b
.


...
Expected results:

The guest reboots and continues to work normally.

Additional info:

My CLI:
/usr/libexec/qemu-kvm -M q35 -enable-kvm -m 2G -smp 1,cores=1,threads=1,sockets=1,maxcpus=160 -name rhel6 -uuid 0a41b8b4-7cb5-419a-b23e-7636e215028e -rtc base=utc,clock=host,driftfix=slew -boot c -drive file=/home/RHEL-Server-7.0-64-virtio.qcow2,if=none,id=drive-virtio-0-1,format=qcow2,cache=unsafe,werror=report,rerror=report -device virtio-blk-pci,drive=drive-virtio-0-1,id=virt0-0-1 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:50:a4:c2:c5 -vnc :1 -device virtio-balloon-pci,id=ballooning -monitor stdio -qmp tcp:0:4455,server,nowait -monitor unix:/tmp/monitor2,server,nowait -serial unix:/tmp/tty0,server,nowait -device nec-usb-xhci,id=xhci0 -device usb-host,hostbus=1,hostaddr=3,id=usb-stick

Comment 1 langfang 2013-11-22 10:51:56 UTC
Created attachment 827681 [details]
hotplug vcpu script

Comment 2 langfang 2013-11-22 10:55:44 UTC
Created attachment 827682 [details]
Guest log

Comment 4 Igor Mammedov 2013-11-25 09:43:13 UTC
Does it work with piix4 machine?

Comment 5 langfang 2013-11-25 10:24:59 UTC
(In reply to Igor Mammedov from comment #4)
> Does it work with piix4 machine?

Hi Igor,

    Hit the same problem with "pc-i440fx-rhel7.0.0".

Thanks

Comment 6 Igor Mammedov 2014-03-05 12:44:57 UTC
After testing/debugging this, the bug appears not to be related to bug 968147.
It is somewhat related to bug 1071454, but I still managed to reproduce the issue
even with that fix.

Comment 7 Igor Mammedov 2014-03-10 12:41:25 UTC

*** This bug has been marked as a duplicate of bug 1073568 ***