Description of problem:

I've installed an HP CN1100R FlexFabric and StoreFabric adapter into the system hp-dl160gen8-03.khw.lab.eng.bos.redhat.com and configured the adapter so it can boot from a SAN device. The SAN LUN is on an EMC VNX5300 attached via FCoE. When I do a manual installation I can select the disk using the FCoE option in the installer, which presents an mpath device. When I attempt an install through Beaker, the installation does not find the FCoE LUN and fails with "No Disk Found".

Version-Release number of selected component (if applicable):
RHEL 7.2

How reproducible:
This can currently be seen using the configuration on hp-dl160gen8-03.khw.lab.eng.bos.redhat.com.

Steps to Reproduce:
1. Manually install RHEL 7.2 and during the install select disk -> fcoe -> add disk.
2. Complete the install to the disk and reboot.
3. Run a Beaker job to install RHEL 7.2 on the system and watch the serial console. The console will show the following:

Installation

1) [x] Language settings                 2) [x] Timezone settings
       (English (United States))                (America/New_York timezone)
3) [x] Installation source               4) [x] Software selection
       (NFS server bigpapi.bos.redhat.com)      (Custom software selected)
5) [!] Installation Destination          6) [x] Kdump
       (No disks selected)                      (Kdump is enabled)
7) [x] Network configuration             8) [ ] User creation
       (Wired (eno1) connected)                 (No user will be created)

Not enough space in file systems for the current software selection.
An additional 2861.02 MiB is needed.

Please make your choice from above ['q' to quit | 'b' to begin installation | 'r' to refresh]:

Actual results:
Automated install of RHEL 7.2 with an FCoE target fails.

Expected results:
During the install, Anaconda should be able to find the FCoE target and allow the automated installation of the OS to complete.

Additional info:
A link to the failed job has been added below:
https://beaker.engineering.redhat.com/jobs/1508203
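To narrow down whether the LUN is visible at all during the failing Beaker install, it may help to switch to the shell on the installing system while the hub above is displayed and query the FCoE and multipath state. A minimal sketch, assuming fcoe-utils and device-mapper-multipath are available in the installer environment:

  # list the FCoE instances the initiator has brought up
  fcoeadm -i

  # list discovered FCoE targets and the LUNs behind them
  fcoeadm -t

  # show whether the LUN was assembled into a multipath map
  multipath -ll

If fcoeadm -i comes back empty, the interfaces were never brought up for FCoE, which would be consistent with anaconda reporting no disks.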
From console.log:

The following problem occurred on line 41 of the kickstart file:
Disk "disk/by-id/dm-uuid-mpath-36006016039302b00e1e00f2fea7ce411" given in ignoredisk command does not exist. Is the path correct?

In the Beaker kickstart file I couldn't find the fcoe command to activate FCoE devices, although I'm not sure whether it is necessary. Could you check this? The fcoe kickstart command is described here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/sect-kickstart-syntax.html#sect-kickstart-commands
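For reference, the documented form of the command on that page is fcoe --nic=<device> with optional --dcb and --autovlan flags, so the generated kickstart would presumably need something like the following before storage scanning (the interface names match this host; on any other system they are an assumption):

  # enable FCoE on both FlexFabric ports; --dcb turns on Data Center
  # Bridging and --autovlan discovers the FCoE VLAN automatically
  fcoe --nic=ens1f0 --dcb --autovlan
  fcoe --nic=ens1f1 --dcb --autovlan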
Below is the current kickstart entry that I added. I took the device path from a local install that allowed me to access this drive. I've also tried the same Kickstart Metadata string with the device changed to:

ignoredisk --only-use=disk/by-id/scsi-36006016039302b00e1e00f2fea7ce411

Kickstart Metadata:

grubport=0x03f8
fcoe --nic=ens1f1 --autovlan --dcb
fcoe --nic=ens1f0 --autovlan --dcb
ignoredisk --only-use=disk/by-id/dm-uuid-mpath-36006016039302b00e1e00f2fea7ce411

This is a list of devices from /dev/disk/by-id/ taken from the running system:

> dm-name-mpatha
> dm-name-mpatha1
> dm-name-mpatha2
> dm-name-rhel_hp--dl160gen8--03-home
> dm-name-rhel_hp--dl160gen8--03-root
> dm-name-rhel_hp--dl160gen8--03-swap
> dm-uuid-LVM-p9OOimFBhHMBh9ZjZHjcVYFE2TUnDIuka6pmnQEGiJ9yzp3jwyFiME3o9C8xiALD
> dm-uuid-LVM-p9OOimFBhHMBh9ZjZHjcVYFE2TUnDIukMCoufGNqev17deKVPChLHrQgVnCf44op
> dm-uuid-LVM-p9OOimFBhHMBh9ZjZHjcVYFE2TUnDIuko4BkgaHV92QXxwTHyf3yICjj6DVfbNZ3
> dm-uuid-mpath-36006016039302b00e1e00f2fea7ce411
> dm-uuid-part1-mpath-36006016039302b00e1e00f2fea7ce411
> dm-uuid-part2-mpath-36006016039302b00e1e00f2fea7ce411
> lvm-pv-uuid-wP3fh8-pucm-1Ri9-2McD-EiLl-k4fq-eiuGJw
> scsi-36006016039302b00e1e00f2fea7ce411
> wwn-0x6006016039302b00e1e00f2fea7ce411
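One caveat on which alias to reference: the dm-uuid-mpath-* name in the list above only exists after multipath has assembled the device, whereas the scsi-* and wwn-* names appear as soon as a single SCSI path to the LUN is up. A quick hedged check from a shell during the failing install (the WWID below is this host's LUN):

  # which by-id aliases currently resolve for the LUN; if only scsi-*/wwn-*
  # are present, the multipath map had not been created yet
  ls /dev/disk/by-id/ | grep 36006016039302b00e1e00f2fea7ce411

  # confirm the multipath map itself exists
  multipath -ll 36006016039302b00e1e00f2fea7ce411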
Bruno looked at this and opened the following bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1378714
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.