Bug 2132176
| Summary: | libvirt kills virtual machine on restart when 2M and 1G hugepages are mounted [rhel-8.7.0.z] | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | RHEL Program Management Team <pgm-rhel-tools> |
| Component: | libvirt | Assignee: | Michal Privoznik <mprivozn> |
| Status: | CLOSED ERRATA | QA Contact: | liang cong <lcong> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 8.4 | CC: | ailan, duclee, dzheng, haizhao, jdenemar, jsuchane, mprivozn, virt-maint, yafu, yalzhang |
| Target Milestone: | rc | Keywords: | Triaged, Upstream, ZStream |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | libvirt-8.0.0-10.1.module+el8.7.0+17192+cbc2449b | Doc Type: | Bug Fix |
| Doc Text: | Cause: When libvirtd is restarted after a new hugetlbfs has been mounted while a guest is running, libvirt tries to create a guest-specific path in the new hugetlbfs mount point. Because of a bug in the namespace code this fails, which results in the guest being killed by libvirt. Consequence: The guest is killed on libvirtd restart. Fix: Twofold: first, the namespace code was fixed so that creating this guest-specific path now succeeds; second, the creation is postponed until it is really needed (memory hotplug). Result: Guests now survive a libvirtd restart. | Story Points: | --- |
| Clone Of: | 2123196 | Environment: | |
| Last Closed: | 2023-01-12 09:18:55 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 2123196, 2151869 | ||
| Bug Blocks: | |||
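For context, the failure described in the Doc Text can be illustrated with a short command sequence along these lines (a minimal sketch: it assumes a guest vm1 already defined with the 2M <memoryBacking> shown in the verification steps below, and the /dev/hugepages1G mount point used there; on builds without the fix, the final restart is what killed the guest):
# virsh start vm1
# mkdir /dev/hugepages1G
# mount -t hugetlbfs -o pagesize=1G hugetlbfs /dev/hugepages1G
# systemctl restart libvirtd
# virsh list --all
On a fixed build, vm1 is still listed as running after the restart; on an affected build it was destroyed because the namespace code failed to create the guest-specific path under the new mount.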
Additional info for comment #2: more info about the reproduction steps can be found in bug 2123196 comment #34.
See bug 2123196 comment #38 for more info. Just for the record here, this is a slightly different issue and it is tracked as bug 2134009.
Pre-verified on scratch build:
# rpm -q libvirt qemu-kvm
libvirt-8.0.0-11.el8_rc.8c593088fd.x86_64
qemu-kvm-6.2.0-20.module+el8.7.0+16689+53d59bc2.1.x86_64
Verify steps:
1. Define a guest with the <memoryBacking> XML below.
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB'/>
</hugepages>
</memoryBacking>
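Before the guest can start in step 2, the host needs enough 2M pages reserved for its RAM. This is a generic pre-check, not part of the recorded steps (the count 1024, i.e. 2 GiB, is only an example and should be sized to the guest):
# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# grep -E 'HugePages_(Total|Free)' /proc/meminfo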
2. Start the VM and stop libvirtd
# virsh start vm1 && systemctl stop libvirtd
Domain vm1 started
Warning: Stopping libvirtd.service, but it can still be activated by:
libvirtd.socket
libvirtd-ro.socket
libvirtd-admin.socket
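The warning is expected: libvirtd stays socket-activated, so the virsh -r call in step 4 will presumably bring the daemon back up via libvirtd-ro.socket, i.e. the daemon comes up again only after the new hugetlbfs mount from step 3, which is the situation this bug covers. An optional way to confirm the daemon is really down before step 3 (not part of the recorded steps):
# systemctl is-active libvirtd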
3. Mount 1G hugepage path
# mkdir /dev/hugepages1G
# mount -t hugetlbfs -o pagesize=1G hugetlbfs /dev/hugepages1G
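The mount alone only provides the filesystem; 1G pages also have to be reserved on the host, typically via hugepagesz=1G hugepages=N on the kernel command line (or by writing nr_hugepages at runtime where the hardware allows it). A quick check sketch, not part of the recorded steps:
# cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
# findmnt /dev/hugepages1G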
4. Run virsh list; the guest is still in the running state.
# virsh -r list --all
Id Name State
----------------------
1 vm1 running
5. Prepare a memory device hotplug XML like the one below:
# cat dimm1G.xml
<memory model='dimm'>
<source>
<pagesize unit='KiB'>1048576</pagesize>
<nodemask>0-1</nodemask>
</source>
<target>
<size unit='KiB'>1048576</size>
<node>0</node>
</target>
</memory>
6. Hotplug dimm memory device:
# virsh attach-device vm1 dimm1G.xml
Device attached successfully
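According to the Doc Text, the guest-specific directory under the new 1G mount is only created at this point (memory hotplug), not when libvirtd comes back up. A way to spot-check this, assuming the same <domid>-<name> layout seen in the error report at the bottom of this bug (the exact directory name depends on the domain ID, so the wildcard is illustrative):
# ls -ld /dev/hugepages1G/libvirt/qemu/*vm1*
# virsh dumpxml vm1 | grep -A4 "<memory model='dimm'>"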
7. Prepare a memory device hotplug XML with a 2M hugepage source, like the one below:
# cat dimm2M.xml
<memory model='dimm'>
<source>
<pagesize unit='KiB'>2048</pagesize>
<nodemask>0-1</nodemask>
</source>
<target>
<size unit='KiB'>1048576</size>
<node>0</node>
</target>
</memory>
8. Hotplug dimm memory device:
# virsh attach-device vm1 dimm2M.xml
Device attached successfully
9. Shut off the VM
# virsh destroy vm1
Domain vm1 destroyed
10. Restart libvirtd
# systemctl restart libvirtd
11. Start vm
# virsh start vm1
Domain 'vm1' started
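As an extra regression check (not among the recorded steps), restarting libvirtd once more at this point, with both the 2M and 1G hugetlbfs mounts present, should leave the freshly started guest running:
# systemctl restart libvirtd
# virsh list --all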
Also check the below scenarios:
Steps:
1. Start a 2M memory-backed guest VM -> stop libvirtd -> mount the 1G path -> start libvirtd -> hotplug a 1G dimm -> restart the VM -> restart libvirtd -> hotplug a 1G dimm
2. Mount the 1G path -> start a 2M memory-backed guest VM -> restart libvirtd -> hotplug a 1G dimm -> restart libvirtd -> restart the VM -> hotplug a 1G dimm
Tested with these settings: remember_owner=1 or 0, memfd memory backing, default memory backing, 1G hugepage memory backing, 1G hugepage path as /mnt/hugepages1G.
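A rough automation sketch of scenario 1 above (assumes vm1 already carries the 2M <memoryBacking> and that dimm1G.xml from step 5 is present; error handling and waits are omitted):
# virsh start vm1
# systemctl stop libvirtd
# mkdir -p /dev/hugepages1G
# mountpoint -q /dev/hugepages1G || mount -t hugetlbfs -o pagesize=1G hugetlbfs /dev/hugepages1G
# systemctl start libvirtd
# virsh attach-device vm1 dimm1G.xml
# virsh destroy vm1 && virsh start vm1
# systemctl restart libvirtd
# virsh attach-device vm1 dimm1G.xml
# virsh list --all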
Verified on build:
# rpm -q libvirt qemu-kvm
libvirt-8.0.0-10.1.module+el8.7.0+17192+cbc2449b.x86_64
qemu-kvm-6.2.0-20.module+el8.7.0+16905+efca5d32.2.x86_64
Verify steps:
1. Define a guest with the <memoryBacking> XML below.
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB'/>
</hugepages>
</memoryBacking>
2. Start the VM and stop libvirtd
# virsh start vm1 && systemctl stop libvirtd
Domain vm1 started
Warning: Stopping libvirtd.service, but it can still be activated by:
libvirtd.socket
libvirtd-ro.socket
libvirtd-admin.socket
3. Mount 1G hugepage path
# mkdir /dev/hugepages1G
# mount -t hugetlbfs -o pagesize=1G hugetlbfs /dev/hugepages1G
4. Run virsh list; the guest is still in the running state.
# virsh -r list --all
Id Name State
----------------------
1 vm1 running
5. Prepare a memory device hotplug XML like the one below:
# cat dimm1G.xml
<memory model='dimm'>
<source>
<pagesize unit='KiB'>1048576</pagesize>
<nodemask>0-1</nodemask>
</source>
<target>
<size unit='KiB'>1048576</size>
<node>0</node>
</target>
</memory>
6. Hotplug dimm memory device:
# virsh attach-device vm1 dimm1G.xml
Device attached successfully
7. Prepare a memory device hotplug XML with a 2M hugepage source, like the one below:
# cat dimm2M.xml
<memory model='dimm'>
<source>
<pagesize unit='KiB'>2048</pagesize>
<nodemask>0-1</nodemask>
</source>
<target>
<size unit='KiB'>1048576</size>
<node>0</node>
</target>
</memory>
8. Hotplug dimm memory device:
# virsh attach-device vm1 dimm2M.xml
Device attached successfully
9. Shut off the VM
# virsh destroy vm1
Domain vm1 destroyed
10. Restart libvirtd
# systemctl restart libvirtd
11. Start vm
# virsh start vm1
Domain 'vm1' started
Also check the below scenarios:
Steps:
1. Start a 2M memory-backed guest VM -> stop libvirtd -> mount the 1G path -> start libvirtd -> hotplug a 1G dimm -> restart the VM -> restart libvirtd -> hotplug a 1G dimm
2. Mount the 1G path -> start a 2M memory-backed guest VM -> restart libvirtd -> hotplug a 1G dimm -> restart libvirtd -> restart the VM -> hotplug a 1G dimm
Tested with these settings: remember_owner=1 or 0, memfd memory backing, default memory backing, 1G hugepage memory backing, 1G hugepage path as /mnt/hugepages1G.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:0099
Found an issue on:
# rpm -q libvirt qemu-kvm
libvirt-8.0.0-11.el8_rc.8c593088fd.x86_64
qemu-kvm-6.2.0-20.module+el8.7.0+16689+53d59bc2.1.x86_64
1. Define a guest with the <memoryBacking> XML below.
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB'/>
</hugepages>
</memoryBacking>
2. Mount a 1G hugepage path
# mount -t hugetlbfs -o pagesize=1G hugetlbfs /dev/hugepages1G
3. Start the VM
# virsh start vm1
Domain vm1 started
4. Prepare a memory device hotplug XML like the one below:
# cat dimm1G.xml
<memory model='dimm'>
<source>
<pagesize unit='KiB'>1048576</pagesize>
<nodemask>0-1</nodemask>
</source>
<target>
<size unit='KiB'>1048576</size>
<node>0</node>
</target>
</memory>
5. Attach the 1G memory device described in step 4.
# virsh attach-device vm1 dimm1G.xml
error: Failed to attach device from dimm1G.xml
error: internal error: unable to execute QEMU command 'object-add': can't open backing store /dev/hugepages1G/libvirt/qemu/3-vm1 for guest RAM: Permission denied
@mprivozn could you help to check this issue?
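A few generic checks that might help narrow down the Permission denied error (standard diagnostics, not steps recorded in this bug; 3-vm1 is the path from the error message above):
# ls -ldZ /dev/hugepages1G /dev/hugepages1G/libvirt /dev/hugepages1G/libvirt/qemu /dev/hugepages1G/libvirt/qemu/3-vm1
# ausearch -m avc -ts recent | grep -i hugepages1G
Comparing the ownership and SELinux label with the working 2M mount (e.g. under /dev/hugepages/libvirt/qemu) should show whether the new mount point was left with the wrong DAC owner or label.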