
Bug 1455450

Summary: Hotplug hugepage memory after do stress in guest cause "error: kvm run failed Bad address"
Product: Red Hat Enterprise Linux 7
Reporter: Yumei Huang <yuhuang>
Component: qemu-kvm-rhev
Assignee: Igor Mammedov <imammedo>
Status: CLOSED DUPLICATE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: unspecified
Priority: unspecified
Docs Contact:
Version: 7.4
CC: juzhang, knoel, virt-maint, wehuang
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-05-27 05:33:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Yumei Huang 2017-05-25 08:53:23 UTC
Description of problem:
Boot a guest with normal memory plus a present pc-dimm backed by hugepages, then run a stress test in the guest. After the stress test finishes, hotplugging another pc-dimm backed by hugepages hits "error: kvm run failed Bad address".

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.9.0-5.el7
kernel-3.10.0-666.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. Set hugepage on host

# cat /proc/meminfo  | grep -i huge
AnonHugePages:     10240 kB
HugePages_Total:    2058
HugePages_Free:     2058
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

# cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages 
515
515
514
514

# mount
none on /mnt/kvm_hugepage type hugetlbfs (rw,relatime,seclabel)


2. Boot guest with a pc-dimm backed by hugepage

# /usr/libexec/qemu-kvm -m 2G,slots=20,maxmem=30G rhel74.qcow2 \
    -numa node -numa node \
    -vnc :0 -monitor stdio \
    -netdev tap,id=tap0 -device virtio-net-pci,netdev=tap0,id=net0 \
    -qmp tcp:0:4444,server,nowait \
    -object memory-backend-file,policy=bind,mem-path=/mnt/kvm_hugepage,host-nodes=0,size=1G,id=mem-mem1 \
    -device pc-dimm,node=1,id=dimm-mem1,memdev=mem-mem1


3. Run stress test in guest
# stress --cpu 32 --io 32 --vm 32 --vm-bytes 85451200 --hdd 32 --hdd-bytes 1048576 --timeout 60
stress: info: [2090] dispatching hogs: 32 cpu, 32 io, 32 vm, 32 hdd
stress: info: [2090] successful run completed in 61s


4. After stress test finish, hotplug hugepage memory to guest

{"execute": "object-add", "arguments": {"id": "mem-plug", "qom-type": "memory-backend-file", "props": {"policy": "bind", "mem-path": "/mnt/kvm_hugepage", "host-nodes": [0], "size": 1073741824}}, "id": "eZ0ABqQz"}

{"execute": "device_add", "arguments": {"id": "dimm0", "driver": "pc-dimm", "memdev": "mem-plug"}}
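The two commands above go to the QMP socket opened with -qmp tcp:0:4444 in step 2. A minimal way to drive that from a shell is sketched below; it assumes nc and python3 are available (neither is named in the report) and validates the payloads locally before the live send.

```shell
# The hotplug payloads in standard JSON form (QMP's parser also
# tolerates the single-quoted variant shown in the report).
OBJECT_ADD='{"execute": "object-add", "arguments": {"id": "mem-plug", "qom-type": "memory-backend-file", "props": {"policy": "bind", "mem-path": "/mnt/kvm_hugepage", "host-nodes": [0], "size": 1073741824}}}'
DEVICE_ADD='{"execute": "device_add", "arguments": {"id": "dimm0", "driver": "pc-dimm", "memdev": "mem-plug"}}'

# Sanity-check that both payloads parse as JSON before sending.
echo "$OBJECT_ADD" | python3 -m json.tool > /dev/null
echo "$DEVICE_ADD" | python3 -m json.tool > /dev/null

# Live send (only works while the guest from step 2 is running):
# QMP requires capabilities negotiation before any other command.
# { echo '{"execute": "qmp_capabilities"}'; sleep 1;
#   echo "$OBJECT_ADD"; sleep 1;
#   echo "$DEVICE_ADD"; sleep 1; } | nc localhost 4444
```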


Actual results:
(qemu) error: kvm run failed Bad address
RAX=0000000000000000 RBX=ffff8bd8be801b00 RCX=0000000000000000 RDX=0000000000000040
RSI=ffff8bd9c7a00000 RDI=ffff8bd8be801b00 RBP=ffff8bd899163978 RSP=ffff8bd899163950
R8 =0000000000000005 R9 =0000000000000000 R10=0000000000000004 R11=fffffffffffffff8
R12=ffff8bd9c7a00000 R13=ffffe9b5c51e8000 R14=ffff8bd9c7a00000 R15=ffff8bd9c7a00000
RIP=ffffffff901db645 RFL=00000046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 0000000000000000 ffffffff 00000000
CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
SS =0000 0000000000000000 ffffffff 00000000
DS =0000 0000000000000000 ffffffff 00000000
FS =0000 00007fd6dc3f48c0 ffffffff 00000000
GS =0000 ffff8bd8bec00000 ffffffff 00000000
LDT=0000 0000000000000000 0000ffff 00000000
TR =0040 ffff8bd8bec140c0 00002087 00008b00 DPL=0 TSS64-busy
GDT=     ffff8bd8bec09000 0000007f
IDT=     ffffffffff529000 00000fff
CR0=80050033 CR2=000000000676a000 CR3=0000000028075000 CR4=000007f0
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000d01
Code=89 e7 49 89 cc 4c 89 fe 48 89 df e8 4f fc ff ff 48 63 43 20 <4d> 89 24 07 41 0f b7 55 1a 8b 43 18 81 e2 ff 7f 00 00 48 63 c8 0f af d0 4c 01 e1 48 63 d2

(qemu) info status
VM status: paused (internal-error)


Expected results:
Guest works well.

Additional info:

Comment 2 Yumei Huang 2017-05-26 03:33:11 UTC
This issue is only hit when the bound host node does not have enough free hugepages.
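A quick way to confirm the shortfall before hotplugging is to compare the per-node free hugepage counts against what the node-bound backend needs. This check is my addition, not from the report; the sysfs path matches the one used in step 1, and the backend binds to host node 0.

```shell
# 2MB hugepages needed for the 1G node-bound backend: 1073741824 / 2097152.
needed=$(( 1073741824 / 2097152 ))
echo "need $needed free 2MB pages on node 0"

# Free 2MB hugepages per NUMA node (node0 first); if node0 reports
# fewer than $needed, this bug's scenario is reproduced. Guarded so
# the check degrades gracefully on hosts without 2MB hugepage sysfs.
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages 2>/dev/null || true
```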

Comment 3 Yumei Huang 2017-05-26 04:48:10 UTC
Also reproducible with qemu-kvm-rhev-2.6.0-28.el7.

Comment 4 Yumei Huang 2017-05-27 05:33:24 UTC
This is a duplicate of bug 1329086; closing. Please reopen if I'm wrong.

*** This bug has been marked as a duplicate of bug 1329086 ***