Bug 1806857
| Summary: | "An error occurred, but the cause is unknown" raised when starting VM with non-existent NUMA node in `<numatune>` | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | jiyan <jiyan> |
| Component: | libvirt | Assignee: | Michal Privoznik <mprivozn> |
| Status: | CLOSED ERRATA | QA Contact: | Jing Qi <jinqi> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.2 | CC: | dyuan, jdenemar, jsuchane, lcong, lmen, mprivozn, pkrempa, virt-maint, xuzhang, yalzhang |
| Target Milestone: | rc | Keywords: | Reopened, Triaged, Upstream |
| Target Release: | 8.0 | Flags: | pm-rhel: mirror+ |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-7.8.0-1.module+el8.6.0+12978+7d7a0321 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 2001390 (view as bug list) | Environment: | |
| Last Closed: | 2022-05-10 13:18:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | 6.8.0 |
| Embargoed: | | | |
| Bug Depends On: | 1724866 | | |
| Bug Blocks: | 2001390 | | |
Description (jiyan, 2020-02-25 07:58:36 UTC)
Merged upstream as:

    9e0d4b9240 virnuma: Report error when NUMA -> CPUs translation fails

v6.7.0-86-g9e0d4b9240

Oops, this is for RHEL. This was addressed in bug 1724866 for RHEL-AV 8.3.0.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Le sigh. This bug is fixed and we're just waiting for RHEL to pick up the rebased version. There's no reason to close this bug!

Tested with libvirt-daemon-7.8.0-1.module+el8.6.0+12982+5e169f40.x86_64 and qemu-kvm-6.1.0-3.module+el8.6.0+12982+5e169f40.x86_64 on a machine with 8 NUMA nodes:

```
available: 8 nodes (0-7)
node 0 cpus: 0 1 16 17
node 0 size: 15731 MB
node 0 free: 13377 MB
node 1 cpus: 2 3 18 19
node 1 size: 0 MB
node 1 free: 0 MB
node 2 cpus: 4 5 20 21
node 2 size: 0 MB
node 2 free: 0 MB
node 3 cpus: 6 7 22 23
node 3 size: 0 MB
node 3 free: 0 MB
node 4 cpus: 8 9 24 25
node 4 size: 16062 MB
node 4 free: 14859 MB
node 5 cpus: 10 11 26 27
node 5 size: 0 MB
node 5 free: 0 MB
node 6 cpus: 12 13 28 29
node 6 size: 0 MB
node 6 free: 0 MB
node 7 cpus: 14 15 30 31
node 7 size: 0 MB
node 7 free: 0 MB
node distances:
node   0   1   2   3   4   5   6   7
  0:  10  16  16  16  32  32  32  32
  1:  16  10  16  16  32  32  32  32
  2:  16  16  10  16  32  32  32  32
  3:  16  16  16  10  32  32  32  32
  4:  32  32  32  32  10  16  16  16
  5:  32  32  32  32  16  10  16  16
  6:  32  32  32  32  16  16  10  16
  7:  32  32  32  32  16  16  16  10
```

1. Set node 0's memory binding to the non-existent physical NUMA node '8':

   ```
   # virsh numatune avocado-vt-vm1 0 8
   ```

2. Start the VM.
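The upstream fix referenced above ("virnuma: Report error when NUMA -> CPUs translation fails") makes libvirt report a clear error when a requested NUMA node does not exist on the host, instead of the opaque "An error occurred, but the cause is unknown". The following is an illustrative Python sketch of that kind of check, not libvirt's actual C code; the names `validate_nodeset` and `NumaError` are hypothetical:

```python
# Hypothetical sketch of a "requested NUMA nodes must exist on the host"
# check, mirroring the error text libvirt now reports. Not libvirt code.

class NumaError(Exception):
    """Raised when a requested NUMA node is not present on the host."""
    pass

def validate_nodeset(requested, available):
    """Raise NumaError for the first requested node missing from the host."""
    for node in sorted(requested):
        if node not in available:
            raise NumaError("NUMA node %d is not available" % node)

host_nodes = set(range(8))             # the 8-node host from the test above
validate_nodeset({0, 4}, host_nodes)   # existing nodes: passes silently
try:
    validate_nodeset({8}, host_nodes)  # node 8 does not exist on this host
except NumaError as e:
    print(e)                           # NUMA node 8 is not available
```

With a check like this performed before the guest is launched, the failure surfaces at `virsh start` time with an actionable message, which matches the verified behavior below.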
Starting the VM failed with the expected error message:

```
# virsh start avocado-vt-vm1
error: Failed to start domain 'avocado-vt-vm1'
error: operation failed: NUMA node 8 is not available
```

Tested on:

```
# rpm -q libvirt qemu-kvm
libvirt-8.0.0-5.module+el8.6.0+14344+04da0821.x86_64
qemu-kvm-6.2.0-8.module+el8.6.0+14324+050a5215.x86_64
```

on a machine with 1 NUMA node:

```
# numactl --hard
available: 1 nodes (0)
node 0 cpus: 0 1
node 0 size: 3732 MB
node 0 free: 912 MB
node distances:
node   0
  0:  10
```

Define a VM with a non-existent NUMA node in `<numatune>`:

```
# virsh numatune qcow2_test
numa_mode      : strict
numa_nodeset   : 0,41
```

Start the VM; the error message changes to:

```
# virsh start qcow2_test
error: Failed to start domain 'qcow2_test'
error: Invalid value '0,41' for 'cpuset.mems': Invalid argument
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1759
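For context on the `Invalid value '0,41' for 'cpuset.mems'` error seen in the last verification step: `cpuset.mems` takes the kernel cpuset "list format" (comma-separated entries, each a single node or a dash range), and the kernel rejects the write with `EINVAL` when a listed node is not online, as node 41 is not on a 1-node host. A minimal sketch of parsing that list format, for illustration only (the kernel additionally validates nodes against those online, which this parser does not do):

```python
# Minimal parser for the cpuset "list format" used by cpuset.mems and
# cpuset.cpus, e.g. "0,41" or "0-3,8". Illustration only; the kernel also
# rejects nodes that are not online, hence EINVAL for "0,41" on a 1-node host.

def parse_nodelist(spec):
    """Expand a list-format string into a set of node numbers."""
    nodes = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            nodes.update(range(int(lo), int(hi) + 1))  # inclusive range
        else:
            nodes.add(int(part))
    return nodes

print(sorted(parse_nodelist("0,41")))   # [0, 41]
print(sorted(parse_nodelist("0-3,8")))  # [0, 1, 2, 3, 8]
```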