Description of problem:
The RHEV-H installation process boots from CD/DVD and presents the user with a series of fibrechannel LUNs on which to either wipe or install RHEV-H. The LUNs present themselves as generic /dev/sdNN devices, so picking the wrong LUN from the list could be disastrous. Imagine accidentally wiping a storage domain and installing a copy of RHEV-H over the top of it: that could destroy a logical data center and shut down thousands of users.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Boot a new bare metal host from RHEV-H media.
2. Pick the wrong /dev/sdNN device and install RHEV-H on it.
I don't like to think about the possible actual results.
RHEV-H should present a warning, or otherwise ensure it never installs on the wrong LUN.
This applies to fibrechannel data centers where firmware presents LUNs with no higher level software getting involved. It is especially dangerous in boot from SAN scenarios.
It is already possible to address specific disks during auto-installation using the serial numbers.
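As a hedged sketch of what that looks like: ovirt-node-based RHEV-H auto-installs accept a storage_init boot parameter, which can point at a persistent /dev/disk/by-id path rather than an unstable /dev/sdNN name. The parameter name follows the ovirt-node convention and the serial shown is hypothetical, not taken from this case:

```
# Hypothetical boot line fragment: pin the auto-install target to one
# specific LUN via its persistent by-id path (serial is made up here).
storage_init=/dev/disk/by-id/scsi-3600508b4000156d700012000000b0000
```

Addressing the disk this way removes the dependence on enumeration order, which is exactly what makes /dev/sdNN names dangerous on multipathed fibrechannel hosts.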
Could you provide the output of
$ find /dev/disk/
from a host with the described storage setup?
With that output we can tell if the disks expose enough information to be identifiable on RHEV-H.
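For reference, a sketch of the kind of listing that command produces; the exact symlink names depend entirely on the HBA, the array, and the host's udev rules, and the identifiers shown in the comments are hypothetical:

```shell
# List every persistent-identifier symlink udev has created for the
# host's disks; sort for readability. Errors (e.g. missing directory)
# are suppressed so the command degrades gracefully.
find /dev/disk/ -type l 2>/dev/null | sort

# Typical entries for a fibrechannel LUN (made-up identifiers):
#   /dev/disk/by-id/scsi-3600508b4000156d700012000000b0000
#   /dev/disk/by-id/wwn-0x600508b4000156d700012000000b0000
#   /dev/disk/by-path/pci-0000:04:00.0-fc-0x5006016346e017bc-lun-0
```

If the by-id and by-path entries are present for the fibrechannel LUNs, the disks expose enough stable identity to be addressed safely.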
re: comment 1 - I put the question to the customer in the support case.
If rhevh.next installs similarly to RHEL 7, with the graphical display of existing partitions and so forth, that feels like a great step forward. I'll ask the customer if that will work.
RHEV-H next (planned for RHEV 4.0) will be using Anaconda for installation.
Anaconda provides mechanisms to precisely identify LUNs (e.g. through serial numbers/WWIDs).
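For instance, a hedged kickstart sketch of how Anaconda can be restricted to a single LUN by WWID; the WWID below is hypothetical, and the exact device path should be taken from the host's own /dev/disk/by-id listing:

```
# Hypothetical kickstart fragment: only the named LUN is visible to
# Anaconda, so no other disk can be partitioned or wiped.
ignoredisk --only-use=disk/by-id/wwn-0x600508b4000156d700012000000b0000
clearpart --all --initlabel --drives=disk/by-id/wwn-0x600508b4000156d700012000000b0000
```

With ignoredisk --only-use in place, an accidental install over a storage domain on another LUN is no longer possible from the installer's point of view.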
Does the featureset provided by Anaconda meet the requirements expressed here?
It should. I asked the customer earlier to try a test RHEL 7 or Fedora install to get a feel for it. I'll ask again.
- Greg Scott
Thanks, moving this bug ON_QA for now.
According to comments 5 and 6, Anaconda provides mechanisms to precisely identify LUNs.
So the bug is fixed. Changing bug status to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.