Description of problem: When using iSCSI as a storage domain, the LUN number is not presented to the user when they need to choose a LUN.

Current output:

[1]     3600140532a000d4a41b43c6b2fc57252   5GiB    LIO-ORG scsi_disk1_serv
        status: free, paths: 1 active
[2]     360014055d7b864b91c2480ba54a87c81   5GiB    LIO-ORG scsi_disk4_serv
        status: free, paths: 1 active
[3]     3600140588ad6c56fa8c4dfbbae3fd0ea   5GiB    LIO-ORG scsi_disk5_serv
        status: free, paths: 1 active
[4]     36001405ab745c879fb84b5984d37f4f7   5GiB    LIO-ORG scsi_disk2_serv
        status: free, paths: 1 active
[5]     36001405fe0e5a5999814f9aa914b2023   5GiB    LIO-ORG scsi_disk3_serv
        status: free, paths: 1 active

It would be nice to add the LUN number to this output as well, to help the user identify the LUNs more easily.
New output:

The following luns have been found on the requested target:
[1]     LUN0    36001405ab745c879fb84b5984d37f4f7   5GiB    LIO-ORG scsi_disk2_serv
        status: free, paths: 1 active
[2]     LUN1    3600140532a000d4a41b43c6b2fc57252   5GiB    LIO-ORG scsi_disk1_serv
        status: free, paths: 1 active
[3]     LUN2    36001405fe0e5a5999814f9aa914b2023   5GiB    LIO-ORG scsi_disk3_serv
        status: free, paths: 1 active
[4]     LUN3    360014055d7b864b91c2480ba54a87c81   5GiB    LIO-ORG scsi_disk4_serv
        status: free, paths: 1 active
[5]     LUN4    3600140588ad6c56fa8c4dfbbae3fd0ea   5GiB    LIO-ORG scsi_disk5_serv
        status: free, paths: 1 active
With bug 1348225 solved too, this might be redundant - I am not a storage/iSCSI expert, no idea. It might be that the running index number is always the LUN number plus one. Still, there is no harm in having both. You can try to play games - e.g. remove a LUN you created in the middle (say number 3 above) and see how it affects the order.
True, for now... but if we decide in the future that the luns need to be sorted differently, an index counter will cause a bug. IMO, this way is much safer.
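The concern above can be sketched in a few lines of Python (this is an illustrative sketch, not the actual ovirt-hosted-engine-setup code; the field names and the format_lun_menu helper are hypothetical). The bracketed index is derived from display position, while the LUN label is carried in each device record, so re-sorting the list can never mislabel a device:

```python
# Hypothetical sketch: each device record carries its own LUN number,
# so the displayed "LUNn" label survives any re-sorting. Field names
# are illustrative, not the real ovirt-hosted-engine-setup structures.

def format_lun_menu(devices, sort_key=None):
    """Return menu lines like '[1] LUN0 <guid> 5GiB ...'.

    The bracketed index is only the selection position; the LUNn label
    comes from the record itself and is stable across orderings.
    """
    if sort_key is not None:
        devices = sorted(devices, key=sort_key)
    lines = []
    for i, dev in enumerate(devices, start=1):
        lines.append(
            "[{0}] LUN{1} {2} {3} {4} {5} status: {6}, paths: {7} active".format(
                i, dev["lun"], dev["guid"], dev["size"],
                dev["vendor"], dev["product"], dev["status"], dev["paths"],
            )
        )
    return lines


devices = [
    {"lun": 0, "guid": "36001405ab745c879fb84b5984d37f4f7", "size": "5GiB",
     "vendor": "LIO-ORG", "product": "scsi_disk2_serv", "status": "free", "paths": 1},
    {"lun": 1, "guid": "3600140532a000d4a41b43c6b2fc57252", "size": "5GiB",
     "vendor": "LIO-ORG", "product": "scsi_disk1_serv", "status": "free", "paths": 1},
]

# Sorted by GUID instead of LUN number: index [1] now points at LUN1,
# but the LUN label still identifies the device correctly.
for line in format_lun_menu(devices, sort_key=lambda d: d["guid"]):
    print(line)
```

With this layout, changing the sort order only permutes the bracketed indices; a hard-coded assumption that index n means LUN n-1 would silently break instead.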
I've created LUNs on the target in this order: 80GB, 81GB, 82GB, 83GB. I received this order during deployment:

[ INFO  ] Connecting to the storage server
          The following luns have been found on the requested target:
          [1]     LUN1    3514f0c5a516015ce   80GiB   XtremIO XtremApp
                  status: free, paths: 1 active
          [2]     LUN2    3514f0c5a516015cf   81GiB   XtremIO XtremApp
                  status: free, paths: 1 active
          [3]     LUN3    3514f0c5a516015cd   82GiB   XtremIO XtremApp
                  status: free, paths: 1 active
          [4]     LUN4    3514f0c5a516015cc   83GiB   XtremIO XtremApp
                  status: free, paths: 1 active
          Please select the destination LUN (1, 2, 3, 4) [1]:

Moving to verified, as the order appears as expected.

Works for me on these components:
ovirt-hosted-engine-ha-2.2.5-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.10-1.el7ev.noarch
rhvm-appliance-4.2-20180202.0.el7.noarch
Linux 3.10.0-693.19.1.el7.x86_64 #1 SMP Thu Feb 1 12:34:44 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
This bugzilla is included in the oVirt 4.2.1 release, published on Feb 12th 2018. Since the problem described in this bug report should be resolved in the oVirt 4.2.1 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.