Description Carlos Mestre González
2015-11-04 13:25:12 UTC
Description of problem:
Adding a new disk to a VM causes the existing disk's device name to change (more in the reproduce steps)
Version-Release number of selected component (if applicable):
rhevm-3.6.0.2-0.1.el6.noarch
libvirt-daemon-driver-nodedev-1.2.17-13.el7.ppc64le
libvirt-daemon-driver-storage-1.2.17-13.el7.ppc64le
libvirt-client-1.2.17-13.el7.ppc64le
libvirt-daemon-driver-nwfilter-1.2.17-13.el7.ppc64le
libvirt-daemon-driver-interface-1.2.17-13.el7.ppc64le
libvirt-daemon-driver-qemu-1.2.17-13.el7.ppc64le
libvirt-daemon-1.2.17-13.el7.ppc64le
libvirt-daemon-driver-secret-1.2.17-13.el7.ppc64le
libvirt-daemon-kvm-1.2.17-13.el7.ppc64le
libvirt-daemon-config-nwfilter-1.2.17-13.el7.ppc64le
libvirt-python-1.2.17-2.el7.ppc64le
libvirt-daemon-driver-network-1.2.17-13.el7.ppc64le
libvirt-lock-sanlock-1.2.17-13.el7.ppc64le
qemu-kvm-rhev-2.3.0-31.el7.ppc64le
qemu-kvm-tools-rhev-2.3.0-31.el7.ppc64le
qemu-kvm-common-rhev-2.3.0-31.el7.ppc64le
ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch
qemu-img-rhev-2.3.0-31.el7.ppc64le
vdsm-python-4.17.10.1-0.el7ev.noarch
vdsm-yajsonrpc-4.17.10.1-0.el7ev.noarch
vdsm-4.17.10.1-0.el7ev.noarch
vdsm-jsonrpc-4.17.10.1-0.el7ev.noarch
vdsm-cli-4.17.10.1-0.el7ev.noarch
vdsm-xmlrpc-4.17.10.1-0.el7ev.noarch
vdsm-infra-4.17.10.1-0.el7ev.noarch
vdsm-hook-ethtool-options-4.17.10.1-0.el7ev.noarch
How reproducible:
100%
Steps to Reproduce:
1. Create a VM with a bootable disk and an OS. Start it.
2. The VM's disk is seen from the OS as /dev/vda (as expected).
3. Shut down the VM and add a new disk. Start it again.
4. The first (bootable) disk is now /dev/vdb and the new disk is /dev/vda (see the sketch after the expected results for a quick way to confirm this from inside the guest).
Expected results:
Disk device names should be preserved, as happens on the x86 platform.
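A minimal sketch, assuming the guest kernel's virtio-blk driver exposes the passed-through serial in sysfs, that lists each virtio disk together with its serial so the swap is visible regardless of which /dev/vdX the disk landed on:

#!/usr/bin/env python
# Sketch: print each virtio disk the guest sees together with the serial the
# hypervisor passed through (assumes virtio-blk exposes it under /sys/block).
import glob
import os

for path in sorted(glob.glob("/sys/block/vd*")):
    dev = os.path.basename(path)                    # e.g. vda, vdb
    serial_file = os.path.join(path, "serial")
    if os.path.exists(serial_file):
        with open(serial_file) as f:
            serial = f.read().strip() or "(empty)"
    else:
        serial = "(no serial attribute)"
    print("/dev/%s  serial=%s" % (dev, serial))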
Adding to storage component for the moment.
Hi Carlos,
please provide any relevant log as Yaniv requested.
Generally, referencing a device by its logical name isn't a good idea because it's not deterministic. We pass the disk ID as the serial, and you should use that; please take a further look at the discussion at
https://bugzilla.redhat.com/show_bug.cgi?id=1063597#c18
Unless there is another action item here, it seems like this one can be closed. We shouldn't rely on the device names.
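As a minimal sketch of that approach (the disk ID below is a hypothetical placeholder; take the real one from the disk's properties in the UI or the REST API), the current /dev/vdX node can be resolved from the serial exposed under /dev/disk/by-id:

#!/usr/bin/env python
# Sketch: resolve the current /dev/vdX node from the disk ID that is passed
# as the virtio serial, instead of hard-coding a device name.
import os

DISK_ID = "0c2a7d0e-1111-2222-3333-444455556666"    # hypothetical placeholder
BY_ID = "/dev/disk/by-id"

def device_for_disk_id(disk_id):
    """Return the block device whose virtio serial matches the disk ID."""
    if not os.path.isdir(BY_ID):
        return None
    for entry in os.listdir(BY_ID):
        if not entry.startswith("virtio-"):
            continue
        serial = entry[len("virtio-"):]
        # The guest may see a truncated serial, so accept a prefix match.
        if serial and disk_id.startswith(serial):
            return os.path.realpath(os.path.join(BY_ID, entry))
    return None

print(device_for_disk_id(DISK_ID))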
Comment 4 Carlos Mestre González
2015-11-10 16:22:03 UTC
Hi Liron,
Thanks for your response.
Yes, we're relying on the device name in some cases, since on x86_64 the device names "are reliable": at least in all our testing the boot device always kept /dev/vda, no matter how many disks were added later on.
On PPC it is consistent that, after the steps provided in the description, the new disk always takes /dev/vda and the boot disk changes to /dev/vdb. So I'm thinking this is a difference in how each platform's libvirt version handles this.
I'm adding the logs here.
Comment 5 Carlos Mestre González
2015-11-10 16:24:00 UTC
Created attachment 1092351: logs
General logs: engine log, vdsm log, the VM's OS messages, and the qemu log.
The VM name is test_get_device_name; it starts execution at 18:00:00 (11:00 in the vdsm logs).
Comment 6 Carlos Mestre González
2015-11-11 13:43:35 UTC
According to the logs, the old disk is still passed as virtio-disk0 with bootindex=1 and the new disk is virtio-disk1 without a bootindex, so libvirt is doing the right thing here.
According to the logs from the guest OS, QEMU is doing the right thing as well, because the boot loader finds the kernel and dracut on the correct disk and starts booting from it.
And there's no bug in the guest OS configuration either, since it happily finishes booting off the right disk.
So, as stated in comment 3, don't rely on device names; use filesystem labels, UUIDs, or something similar to identify individual disks. The fact that it is reliable on one configuration does not mean it will work reliably on another.
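A minimal sketch of that advice, assuming the filesystem UUID was recorded once (e.g. with blkid) while the mapping was known to be correct; the UUID below is a hypothetical placeholder:

#!/usr/bin/env python
# Sketch: identify a disk by its filesystem UUID (as reported by blkid)
# rather than by /dev/vdX, which can change between boots.
import os

FS_UUID = "d3adb33f-89ab-4cde-8f01-23456789abcd"    # hypothetical placeholder
BY_UUID = "/dev/disk/by-uuid"

def device_for_fs_uuid(fs_uuid):
    """Return whatever device node the filesystem currently sits on, or None."""
    link = os.path.join(BY_UUID, fs_uuid)
    return os.path.realpath(link) if os.path.lexists(link) else None

print(device_for_fs_uuid(FS_UUID))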