Description of problem:
When a virtual disk attached to a spapr-vscsi bus has a channel id in the range 1~3 instead of 0, SLOF does not scan the disk, i.e. the disk is not listed in SLOF after you press F12 in the guest.

Version-Release number of selected component (if applicable):
Host kernel: 4.18.0-57.el8.ppc64le
SLOF: SLOF-20171214-4.gitfa98132.module+el8+2179+85112f94.noarch
qemu-kvm: qemu-kvm-3.1.0-2.module+el8+2606+2c716ad7.ppc64le

How reproducible:
100%

Steps to Reproduce:
1. Boot up a guest with a virtual disk with channel id 1 on a spapr-vscsi bus:
-blockdev node-name=disk1,file.driver=file,driver=qcow2,file.filename=/home/hd1,cache.no-flush=off,cache.direct=off \
-device scsi-hd,drive=disk1,id=image1,scsi-id=0,lun=0,channel=1,werror=stop,rerror=stop,bootindex=1 \
-boot menu=on,strict=off,order=cdn,once=c \
2. When the guest starts to boot, press F12 and check whether the scsi disk is listed in SLOF.
3. Repeat steps 1-2 with channel ids 2, 3 and 0 respectively.

Actual results:
In step 3, when the channel id is 1, 2 or 3 instead of 0, the virtual disk is not scanned by SLOF.

Expected results:
When the channel id is 1, 2 or 3, the virtual disk should also be scanned by SLOF.

Additional info:
This is in fact the issue reported in https://bugzilla.redhat.com/show_bug.cgi?id=1655649#c24
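Step 3 above repeats the boot for each channel id. A small helper like the following can generate the matching -blockdev/-device argument pairs for channels 0~3; it is only a sketch, and the node names, device ids and /home/hdN image paths are illustrative, not taken from the original report:

```python
def vscsi_disk_args(channel: int) -> list[str]:
    """Build qemu-kvm arguments for one scsi-hd on the given channel
    of a spapr-vscsi bus (node names and file paths are made up)."""
    node = f"disk{channel}"
    return [
        "-blockdev",
        f"node-name={node},file.driver=file,driver=qcow2,"
        f"file.filename=/home/hd{channel},"
        f"cache.no-flush=off,cache.direct=off",
        "-device",
        f"scsi-hd,drive={node},id=image{channel},scsi-id=0,lun=0,"
        f"channel={channel},werror=stop,rerror=stop",
    ]

# Print the argument pairs for every channel id the bug covers.
for ch in range(4):
    print(" ".join(vscsi_disk_args(ch)))
```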
(In reply to Gu Nini from comment #0)
> Description of problem:
> When the virtual disk attached on a spapr-vscsi bus is with a channel id
> loaded to the scope of 1~3 instead of 0, then SLOF won't scan the disk, i.e.
> you can't see the disk in SLOF once you press F12 in the guest.
>
> Version-Release number of selected component (if applicable):
> Host kernel: 4.18.0-57.el8.ppc64le
> SLOF: SLOF-20171214-4.gitfa98132.module+el8+2179+85112f94.noarch
> qemu-kvm: qemu-kvm-3.1.0-2.module+el8+2606+2c716ad7.ppc64le

The guest kernel is: 4.18.0-51.el8.ppc64le
Bug reproduction:

Host:
4.18.0-57.el8.ppc64le
qemu-kvm-3.1.0-3.module+el8+2614+d714d2bb.ppc64le
SLOF-20171214-5.gitfa98132.module+el8+2618+c5e2b86b.noarch

Steps:
Boot the guest with the qemu cli:
-device spapr-vscsi,id=scsi2 \
-blockdev driver=file,cache.direct=on,cache.no-flush=off,filename=/home/xianwang/rhel80-ppc64le-virtio.qcow2,node-name=drive_scsi1 \
-blockdev driver=qcow2,node-name=drive_scsi11,file=drive_scsi1 \
-device scsi-hd,drive=drive_scsi11,id=scse-disk1,bus=scsi2.0,channel=0,scsi-id=0,lun=0,bootindex=0 \
-blockdev driver=file,cache.direct=on,cache.no-flush=off,filename=/home/xianwang/RHEL-8.0-20181220.1-ppc64le-dvd1.iso,node-name=drive_cd2,read-only=on \
-blockdev driver=raw,node-name=drive_cd22,file=drive_cd2,read-only=on \
-device scsi-cd,id=cd1,drive=drive_cd22,bus=scsi2.0,channel=1,scsi-id=0,lun=1,bootindex=1 \
-device virtio-net-pci,mac=9a:7b:7c:7d:7e:72,id=id9HRc5V,vectors=4,netdev=idjlQN53,bus=pci.0,addr=0xa \
-netdev tap,id=idjlQN53,vhost=on,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
-boot order=cdn,once=n,menu=on,strict=off \

Then press F12 at the early stage of boot.

Result (the scsi-cd on channel 1 is missing):
Select boot device (or press '0' to abort):
1) disk : /vdevice/v-scsi@71000001/disk@8000000000000000
2) net : /pci@800000020000000/ethernet@a

After changing the channel of the scsi-cd device to 0, the result is:
Select boot device (or press '0' to abort):
1) cdrom : /vdevice/v-scsi@71000001/disk@8001000000000000
2) disk : /vdevice/v-scsi@71000001/disk@8000000000000000
3) net : /pci@800000020000000/ethernet@a
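The disk@8000000000000000 / disk@8001000000000000 node names above are the 64-bit SRP LUNs for the devices, which appear to follow SAM logical unit addressing (top two bits 0b10, then a 6-bit target id, a 3-bit bus/channel and a 5-bit lun, all packed into the most significant 16 bits). A minimal sketch of that packing, as my reading of the encoding rather than code taken from QEMU or SLOF:

```python
def srp_lun(channel: int, scsi_id: int, lun: int) -> int:
    """Pack channel/scsi-id/lun into a 64-bit SRP LUN using SAM
    logical unit addressing: 0b10 in the top two bits, then a
    6-bit target id, a 3-bit bus (channel) and a 5-bit lun,
    shifted into the most significant 16 bits."""
    word = 0x8000 | (scsi_id << 8) | (channel << 5) | lun
    return word << 48

# The two device paths listed by SLOF in the second menu above:
print(f"disk@{srp_lun(0, 0, 0):016x}")  # scsi-hd: channel=0, scsi-id=0, lun=0
print(f"disk@{srp_lun(0, 0, 1):016x}")  # scsi-cd: channel=0, scsi-id=0, lun=1
```

Under this packing, the scsi-cd at channel=1, lun=1 would sit at disk@8021000000000000, the node that never shows up in the first menu.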
Patch has been accepted upstream: https://github.com/aik/SLOF/commit/8ae76e0f117d56deb9f8a4f3ed4c36359bb08d4a
Fix included in SLOF-20180702-3.git9b7ab2f.module+el8+2717+98011079
Verified the bug on the following sw versions:
Host kernel: 4.18.0-62.el8.ppc64le
qemu-kvm-3.1.0-7.module+el8+2715+f4b84bed.ppc64le
SLOF-20180702-3.git9b7ab2f.module+el8+2717+98011079.noarch

SLOF can now scan scsi disks with channel id 1, 2 or 3. Will change the bug to VERIFIED status once it enters the ON_QA stage.
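As a quick sanity check during verification, the F12 menu text can be scanned for one disk@ entry per configured channel. The sketch below models the menu text on the output in the reproduction comment, and the channel extraction assumes the SAM logical-unit-addressing layout (channel in bits 53..55 of the 64-bit LUN), which is my reading of the device paths rather than documented SLOF behavior:

```python
import re

def channels_seen(slof_menu: str) -> set[int]:
    """Extract the channel field (bits 53..55 of the 64-bit SRP LUN)
    from every disk@<lun> entry in SLOF's boot menu text."""
    luns = [int(m, 16) for m in re.findall(r"disk@([0-9a-f]{16})", slof_menu)]
    return {(lun >> 53) & 0x7 for lun in luns}

# Hypothetical menu with one disk on channel 1 and one on channel 0.
menu = """\
1) disk : /vdevice/v-scsi@71000001/disk@8020000000000000
2) disk : /vdevice/v-scsi@71000001/disk@8000000000000000
"""
print(channels_seen(menu))  # expect channels {0, 1}
```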
According to comment 12, changing the bug to VERIFIED status.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:1293