Description of problem: When trying to attach a volume to a fresh RHEL 8.2 PPC VM, the following error is seen in the nova logs:
~~~
Failed to attach volume at mountpoint: /dev/sdc: libvirtError: Requested operation is not valid: Domain already contains a disk with that address
~~~
Version running on the compute nodes: libvirt-4.5.0-23.el7_7.5.x86_64
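For anyone triaging this, a minimal diagnostic sketch (my own, not from the report) that lists the SCSI addresses already assigned in the running domain, to see which controller/unit the failing hotplug collides with. It assumes the libvirt Python bindings are installed; the guest name "instance-00000001" is a hypothetical placeholder.

~~~
# List each disk's target device and assigned address in a running domain.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000001")  # hypothetical guest name

root = ET.fromstring(dom.XMLDesc())
for disk in root.findall("./devices/disk"):
    target = disk.find("target")
    address = disk.find("address")
    if target is not None and address is not None:
        # e.g. "sda {'type': 'drive', 'controller': '0', 'bus': '0',
        #       'target': '0', 'unit': '0'}"
        print(target.get("dev"), dict(address.attrib))

conn.close()
~~~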
I'm confused. The title says this is a ppc VM, but then an x86 libvirt package is listed as the version. The comments say we're dealing with RHEL 8.2, but it's filed against RHEL 7.7. Can we get some more clarity on the situation here:

1. What's the OpenStack version in use?
2. What's the RHEL version on the hosts?
3. What type of machines are the compute hosts?
4. What OS is in the guests?
5. What are the package versions for the more relevant components:
   - host kernel
   - guest kernel
   - Nova
   - qemu-kvm

virtio-scsi should definitely work with ppc now, although it's possible that if one of the components is an old version it might lack support. The initial error suggests that Nova might be instructing libvirt to connect two disks at the same address, which sounds like a Nova or configuration bug, rather than a libvirt or QEMU one.
Hi David, sorry, the libvirt version running on the compute node is libvirt-4.5.0-23.el7_7.5.ppc64le. I am just gathering the other information for you.
1. What's the OpenStack version in use?
   OSP13z11
2. What's the RHEL version on the hosts?
   Red Hat Enterprise Linux Server release 7.7 (Maipo)
3. What type of machines are the compute hosts, i.e. brand and model?
   IBM Power 822L
4. What OS is in the guests?
   RHEL 8.2 PPC, but the issue is the same for RHEL 7.6, RHEL 8.2 and Fedora 28
5. What are the package versions for the more relevant components:
   - host kernel: Linux compute-dev-822l-0 3.10.0-1062.12.1.el7.ppc64le #1 SMP Thu Dec 12 11:47:54 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux
   - guest kernel: different on RHEL 7.6, RHEL 8.2 and Fedora 28
   - Nova: openstack-nova-compute:13.0-129
   - qemu-kvm: libvirt-daemon-driver-qemu-4.5.0-23.el7_7.5.ppc64le, qemu-kvm-rhev-2.12.0-33.el7_7.4.ppc64le, qemu-kvm-common-rhev-2.12.0-33.el7_7.4.ppc64le

Let me know if you need anything else.
Hi David, thank you for stepping in. Please review the info in comment 9. Is there anything else you need to know? Thanks.
It looks like a known problem: https://access.redhat.com/solutions/4356171 Could you check?
(In reply to Jaroslav Suchanek from comment #10)
> Hi David, thank you for stepping in. Please review the info in comment 9.
> Is there anything else you need to know? Thanks.

Can someone from the libvirt team check if the problem described here is the same as the one described in https://access.redhat.com/solutions/4356171 ?
(In reply to Laurent Vivier from comment #12)
> (In reply to Jaroslav Suchanek from comment #10)
> > Hi David, thank you for stepping in. Please review the info in comment 9.
> > Is there anything else you need to know? Thanks.
>
> Can someone from the libvirt team check if the problem described here is
> the same as the one described in https://access.redhat.com/solutions/4356171 ?

It doesn't look like it. That page says the problem was fixed in libvirt-4.5.0-23.el7_7.3, and comment #9 shows the customer already has 4.5.0-23.el7_7.5.ppc64le, so they will have the fix. In addition, the bug on that page hits when the VM has more than 6 disks and a disk is attached at unit=7; this VM only has 2 disks and is requesting sdc, which should get unit=2.

I think we'd need to see the XML that Nova is using to hotplug the disk, to identify whether the info it provides to libvirt is correct.
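For context, a hotplug request of the kind Nova issues would look roughly like the sketch below: libvirt rejects the attach if another disk already occupies the controller/unit given in the <address> element, which is the error from the report. The disk XML, volume path, and domain name here are illustrative assumptions, not the customer's actual values; the real XML from the compute node is what we need to see.

~~~
# Sketch of a live disk hotplug with an explicit SCSI address.
import libvirt

DISK_XML = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-id/example-volume'/>
  <target dev='sdc' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
"""  # hypothetical volume path; sdc would normally map to unit=2

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000001")  # hypothetical guest name
try:
    dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
except libvirt.libvirtError as err:
    # If unit=2 is already taken, this raises "Requested operation is
    # not valid: Domain already contains a disk with that address".
    print(err)
conn.close()
~~~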
------- Comment From chavez.com 2020-06-29 15:40 EDT -------
> Comment #9 shows the customer already has 4.5.0-23.el7_7.5.ppc64le, so they will have the fix.

So can you confirm this bug is for a problem reported by an external customer, please? Thanks.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (openstack-nova bug fix advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4393