Description of problem:
[ppc][nvdimm] A guest with an nvdimm device specified on the command line should not boot when the machine option "nvdimm=off" is set.

Version-Release number of selected component (if applicable):
kernel-4.18.0-211.el8.ppc64le
qemu-kvm-5.0.0-0.module+el8.3.0+6620+5d5e1420.ppc64le

How reproducible:
always

Steps to Reproduce:
1. Boot up a guest with:
... -machine pseries,nvdimm=off -object memory-backend-file,id=mem1,share=on,mem-path=/tmp/nvdimm1,size=1G -device nvdimm,memdev=mem1,id=nv1,label-size=512M,uuid=2b9ee44f-d390-4ad2-afd9-b84b1f31dbdf ...

Actual results:
The guest boots up successfully.

Expected results:
The guest should not boot: with "-device nvdimm" specified, boot should be refused because of "nvdimm=off".

Additional info:
Just for reference, the result on x86:
# /usr/libexec/qemu-kvm -M pc,nvdimm=off -object memory-backend-file,id=mem0,size=1G,mem-path=/tmp/aa -device nvdimm,id=dimm0,memdev=mem0
qemu-kvm: -device nvdimm,id=dimm0,memdev=mem0: nvdimm is not enabled: missing 'nvdimm' in '-M'
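For context, the x86 error shown above comes from a pre-plug validation of the machine-level 'nvdimm' option, which pseries was not performing. Below is a minimal sketch of that kind of check, assuming the option is tracked in MachineState::nvdimms_state; the function name and exact wiring are illustrative, not the actual QEMU code.

static void machine_nvdimm_pre_plug_check(MachineState *ms, DeviceState *dev,
                                          Error **errp)
{
    /* illustrative sketch: reject an nvdimm device when the machine-level
     * 'nvdimm' option is off; x86 fails device creation at this point,
     * while pseries in 5.0/5.1 skipped the check entirely */
    bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM) != NULL;

    if (is_nvdimm && !ms->nvdimms_state->is_enabled) {
        error_setg(errp, "nvdimm is not enabled: missing 'nvdimm' in '-M'");
    }
}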
I posted patches upstream for this a few days ago and forgot to mention it here: https://lists.gnu.org/archive/html/qemu-devel/2020-08/msg06472.html David accepted the fixes in his ppc-for-5.2 branch.

I wasn't able to make this option behave like on other architectures, especially disabling support by default the way x86 does. The reason is that NVDIMM support for pSeries was released in 5.1 ignoring this option altogether, effectively treating support as enabled by default. This puts the fix in a strange spot: if I change the behavior to match x86, I break existing pseries-5.1 users who are using NVDIMM without specifying the 'nvdimm' option. If I leave pseries-5.1 alone and make pseries-5.2 behave like x86, I change NVDIMM behavior between pseries 5.1 and 5.2, potentially breaking people doing machine updates. What was done instead is what is reasonable in all cases and what this bug proposes: if the user explicitly puts "nvdimm=off" on the command line, NVDIMM support is disabled regardless of the machine version.
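A sketch of the compromise described above, assuming the same MachineState::nvdimms_state tracking as on x86: pseries keeps NVDIMM support enabled by default for every machine version, and the plug path only rejects nvdimm devices when the user explicitly passed nvdimm=off. The function names (spapr_instance_init, spapr_nvdimm_pre_plug) and the error wording are illustrative, not necessarily those of the merged patch.

static void spapr_instance_init(Object *obj)
{
    MachineState *ms = MACHINE(obj);

    /* unlike x86, keep NVDIMM support on by default so existing
     * pseries-5.1 command lines that never set the option keep working */
    ms->nvdimms_state->is_enabled = true;
}

static void spapr_nvdimm_pre_plug(MachineState *ms, DeviceState *dev,
                                  Error **errp)
{
    if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM) &&
        !ms->nvdimms_state->is_enabled) {
        /* only reached when the user passed -machine ...,nvdimm=off */
        error_setg(errp, "nvdimm device found but 'nvdimm=off' was set");
    }
}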
The fix is now merged upstream via my latest pull request; moving to POST for RHEL-AV-8.4.
According to this comment, https://bugzilla.redhat.com/show_bug.cgi?id=1718461#c20, we need this fix for RHEL 8.3 too, thanks.
Subsequent comments suggest that's not necessary after all.
Re-tested the bug; the original issue is no longer reproducible on qemu-kvm-5.2.0-0.module+el8.4.0+8855+a9e237a9, thanks.
Based on comment 6 and the latest test results with qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a.ppc64le, moving this bug to VERIFIED, thanks a lot.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:2098