Bug 818505
Summary: | xen: fix drive naming [rhel-6.2.z] | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 6 | Reporter: | RHEL Program Management <pm-rhel> |
Component: | kernel | Assignee: | Frantisek Hrbata <fhrbata> |
Status: | CLOSED ERRATA | QA Contact: | Red Hat Kernel QE team <kernel-qe> |
Severity: | medium | Docs Contact: | |
Priority: | high | ||
Version: | 6.3 | CC: | agrimm, barumuga, branto, dhoward, drjones, jgreguske, kevin, kzhang, leiwang, lersek, mmcgrath, mrezanin, pasteur, pbonzini, pm-eus, qguan, qwan, sforsber, whayutin |
Target Milestone: | rc | Keywords: | EC2, ZStream |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | kernel-2.6.32-220.20.1.el6 | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2012-06-18 13:34:54 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 729586 | ||
Bug Blocks: |
Description
RHEL Program Management
2012-05-03 08:51:30 UTC
How is this getting "fixed"? /dev/xvde -> /dev/xvda, or /dev/xvde -> /dev/sda?

It is critical that this change be tested in EC2 with the existing 6.2 AMIs. Changing the device names with a kernel update may cause the systems to be unbootable, since /etc/fstab is looking for /dev/xvda.

(In reply to comment #5)
> It is critical this change be tested in EC2 with the existing 6.2 AMIs.
> Changing the device names with a kernel update may cause the systems to be
> unbootable since /etc/fstab is looking for /dev/xvda.

This change has already been made in RHEL 6.3. We should direct any inquiries to Laszlo Ersek, the assigned engineer on that BZ: https://bugzilla.redhat.com/show_bug.cgi?id=729586

Setting Needinfo.

"sda" is the identifier used in the VM config file, that is, the file under /etc/xen/GUEST in dom0 (= host). xvda and xvde are device nodes in the domU (= VM, guest); an illustrative config fragment is sketched after this thread.

domU ver | devnode in VM conf | devnode in guest
---|---|---
6.0 | sda | xvda
6.1 | sda | xvde
6.2 GA | sda | xvde
6.2.z (this BZ) and 6.3 | sda | xvde (compatible w/ 6.1 & 6.2 GA; default), or xvda (compatible w/ 6.0; guest modparam xen_blkfront.sda_is_xvda=1)

Graceful guest upgrade:

(1) First make sure the xen_blkfront.sda_is_xvda=1 modparam is in effect in the domU (while it runs the 6.0.z kernel), for example via /etc/modprobe.d/some-config-file (see the modprobe.d sketch below). Should the 6.0.z kernel be rebooted in this state, its xen_blkfront driver will simply ignore the modparam.

(2) Upgrade the guest kernel to 6.2.z (having the backported patch). The first time that kernel boots, the modparam will already be in effect: the initrd is built for the first time when the 6.2.z kernel is installed, and by then /etc/modprobe.d/some-config-file should already have the modparam in place (step 1).

Alternatively, if a guest is installed from scratch, make sure that the first time its initrd is built, it gets xen_blkfront.sda_is_xvda=1.

I've never completely understood the AMI management for EC2, but IIRC the guest kernels are completely managed by Amazon, not the customers. This means customers can't just do a 'yum update kernel' and then hose themselves with this issue. Instead, the kernel update has to be coordinated with Amazon, and thus the kernel command line can be added by Amazon when the update is performed, if needed. I believe we discussed all of this with Amazon at the time Laszlo wrote this patch; we had to, because at that time we had to choose which way to go for the default.

Now, if I'm wrong and the customer can do 'yum update kernel', then IMHO they should fix their /etc/fstab to use UUIDs instead of hardcoded device names, rather than adding kernel command line parameters (see the fstab sketch below). If they switch to UUIDs, they don't have to worry about drive renaming at all, but that's just my 2 cents...

Before pv-grub was widely used in EC2, kernels were managed separately by Amazon and OS vendors like us. Today, however, customers can do 'yum update kernel' just like on real hardware. We'll need to make sure the UUID approach works in EC2.

I seem to recall the UUIDs changed either with each guest or with each reboot, which made using them infeasible.
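To make the sda-vs-xvda distinction above concrete, here is a minimal, hypothetical fragment of a dom0 VM config file in the /etc/xen/GUEST style; the guest name and backing device path are made up for illustration and are not taken from this bug.

```
# Hypothetical /etc/xen/GUEST fragment in dom0 (names and paths are examples).
# "sda" below is the identifier in the VM config; inside the guest the same
# disk appears as /dev/xvde (6.1/6.2 default) or /dev/xvda (6.0, or 6.2.z/6.3
# with xen_blkfront.sda_is_xvda=1).
name = "GUEST"
disk = [ 'phy:/dev/VolGroup00/guest-root,sda,w' ]
```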
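A minimal sketch of step (1), assuming the option is carried via a modprobe.d file: the file name xen-blkfront.conf is hypothetical (the comment above only says /etc/modprobe.d/some-config-file), and the dracut command assumes a RHEL 6 guest whose initramfs already exists and needs rebuilding; the kernel version string is the Fixed In Version of this bug plus an assumed x86_64 arch.

```
# Hypothetical file name; the comments above call it
# /etc/modprobe.d/some-config-file.
cat > /etc/modprobe.d/xen-blkfront.conf <<'EOF'
# Keep the 6.0-style /dev/xvda naming with the 6.2.z/6.3 xen_blkfront driver
# (the 6.0.z driver simply ignores this unknown parameter).
options xen_blkfront sda_is_xvda=1
EOF

# Only needed if the 6.2.z kernel was installed *before* the file above
# existed: rebuild that kernel's initramfs so the option is present at boot.
# Adjust the version/arch to whatever "rpm -q kernel" actually reports.
dracut -f /boot/initramfs-2.6.32-220.20.1.el6.x86_64.img 2.6.32-220.20.1.el6.x86_64
```

The same parameter can instead be put on the guest kernel command line as xen_blkfront.sda_is_xvda=1, which is the form Amazon could add when the update is coordinated, as discussed above.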
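And a small sketch of the UUID-based /etc/fstab approach suggested in the later comments, for guests that should keep booting regardless of whether the disk shows up as xvda or xvde; the device name and UUID below are placeholders, not values from this bug.

```
# Look up the filesystem UUID of the root device (example device name).
blkid /dev/xvde1
#   /dev/xvde1: UUID="0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9" TYPE="ext4"

# /etc/fstab line using the UUID instead of a hardcoded device node, so the
# same entry works whether the kernel exposes the disk as xvda or xvde:
#   UUID=0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9  /  ext4  defaults  1 1
```

Whether such UUIDs stay stable across EC2 instances and reboots is exactly the open question raised in the last comment above, so this would need verifying in EC2 before relying on it.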
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2012-0743.html