Created attachment 1387674 [details]
failed1.png

Description of problem:
Retrieval of iSCSI targets fails while deploying HE via the cockpit-based otopi deployment, and deploying HE with iSCSI storage fails.

From the cockpit:
The requested device is not listed by VDSM
Failed to execute stage 'Environment customization': Cannot access LUN
Hosted Engine deployment failed

Version-Release number of selected component (if applicable):
cockpit-ws-157-1.el7.x86_64
cockpit-bridge-157-1.el7.x86_64
cockpit-storaged-157-1.el7.noarch
cockpit-dashboard-157-1.el7.x86_64
cockpit-157-1.el7.x86_64
cockpit-ovirt-dashboard-0.11.6-0.1.el7ev.noarch
cockpit-system-157-1.el7.noarch
ovirt-hosted-engine-setup-2.2.8-2.el7ev.noarch
ovirt-hosted-engine-ha-2.2.4-1.el7ev.noarch
rhvm-appliance-4.2-20180125.0.el7.noarch
rhvh-4.2.1.2-0.20180126.0+1

How reproducible:
100%

Steps to Reproduce:
1. Clean install the latest RHVH 4.2.1 with ks (rhvh-4.2.1.2-0.20180126.0+1)
2. Deploy HE via cockpit with iSCSI storage, using the otopi-based deployment

Actual results:
After step 2, from the cockpit:
1) Retrieval of iSCSI targets failed.
2) The requested device is not listed by VDSM
   Failed to execute stage 'Environment customization': Cannot access LUN
   Hosted Engine deployment failed

Expected results:
HE deploys successfully via cockpit with iSCSI storage using the otopi-based deployment.

Additional info:
Deploying HE via CLI with the same iSCSI storage (otopi-based deployment) succeeds.
---------------------------------------------------------------------------------------------------------------
[root@dell-per515-02 ~]# hosted-engine --deploy --noansible
[ INFO ] Stage: Initializing
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
During customization use CTRL-D to abort.
Continuing will configure this host for serving as hypervisor and create a VM where you have to install the engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
It has been detected that this program is executed through an SSH connection without using screen.
Continuing with the installation may lead to broken installation if the network connection fails.
It is highly recommended to abort the installation and run it inside a screen session using command "screen".
Do you want to continue anyway?
(Yes, No)[No]: Yes
[ INFO ] Hardware supports virtualization
Configuration files: []
Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180129165632-18rbz7.log
Version: otopi-1.7.6 (otopi-1.7.6-1.el7ev)
[ INFO ] Detecting available oVirt engine appliances
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization

--== STORAGE CONFIGURATION ==--

Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: iscsi
Please specify the iSCSI portal IP address: 10.66.8.158
Please specify the iSCSI portal port [3260]:
Please specify the iSCSI portal user: yzhao
Please specify the iSCSI portal password:
The following targets have been found:
[1] iqn.iscsi-2017.com
    TPGT: 1, portals: 10.66.8.158:3260
[2] iqn.iscsi-2018.com
    TPGT: 1, portals: 10.66.8.158:3260
Please select a target (1, 2) [1]:
[ INFO ] Connecting to the storage server
The following luns have been found on the requested target:
[1] LUN1
    360000000000000000e00000000010001 60GiB IET VIRTUAL-DISK
    status: free, paths: 1 active
Please select the destination LUN (1) [1]:
[ INFO ] Connecting to the storage server

--== HOST NETWORK CONFIGURATION ==--

Please indicate a pingable gateway IP address [10.73.75.254]:
iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
Please indicate a nic to set ovirtmgmt bridge on: (em2, p1p1, em1, p2p1, p2p2, p3p2, p3p1) [em1]:

--== VM CONFIGURATION ==--

The following appliance have been found on your system:
[1] - The RHV-M Appliance image (OVA) - 4.2-20180125.0.el7
[2] - Directly select an OVA file
Please select an appliance (1, 2) [1]:
[ INFO ] Verifying its sha1sum
[ INFO ] Checking OVF archive content (could take a few minutes depending on archive size)
[ INFO ] Checking OVF XML content (could take a few minutes depending on archive size)
[ ERROR ] Not enough space in the temporary directory [/var/tmp]
Please specify path to a temporary directory with at least 3 GB [/var/tmp]: /var/log/temp
Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:
[ INFO ] Detecting host timezone.
Would you like to use cloud-init to customize the appliance on the first boot (Yes, No)[Yes]?
Would you like to generate on-fly a cloud-init ISO image (of no-cloud type) or do you have an existing one (Generate, Existing)[Generate]?
Please provide the FQDN you would like to use for the engine appliance.
Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine.
Engine VM FQDN: (leave it empty to skip): []: rhevh-hostedengine-vm-04.lab.eng.pek2.redhat.com
Please provide the domain name you would like to use for the engine appliance.
Engine VM domain: [lab.eng.pek2.redhat.com]
Automatically execute engine-setup on the engine appliance on first boot (Yes, No)[Yes]?
Automatically restart the engine VM as a monitored service after engine-setup (Yes, No)[Yes]?
Enter root password that will be used for the engine appliance (leave it empty to skip):
Confirm appliance root password:
Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
Do you want to enable ssh access for the root user (yes, no, without-password) [yes]:
Please specify the size of the VM disk in GB: [50]:
Please specify the memory size of the VM in MB (Defaults to appliance OVF value): [16384]:
The following CPU types are supported by this host:
- model_Opteron_G5: AMD Opteron G5
- model_Opteron_G4: AMD Opteron G4
- model_Opteron_G3: AMD Opteron G3
- model_Opteron_G2: AMD Opteron G2
- model_Opteron_G1: AMD Opteron G1
Please specify the CPU type to be used by the VM [model_Opteron_G5]:
Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]:
You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:06:a0:b8]: 52:54:00:5e:8e:c7
How should the engine VM network be configured (DHCP, Static)[DHCP]?
Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
Note: ensuring that this host could resolve the engine VM hostname is still up to you (Yes, No)[No] yes

--== HOSTED ENGINE CONFIGURATION ==--

Please provide the name of the SMTP server through which we will send notifications [localhost]:
Please provide the TCP port number of the SMTP server [25]:
Please provide the email address from which notifications will be sent [root@localhost]:
Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
Enter engine admin password:
Confirm engine admin password:
[ INFO ] Stage: Setup validation

--== CONFIGURATION PREVIEW ==--

Bridge interface : em1
Engine FQDN : rhevh-hostedengine-vm-04.lab.eng.pek2.redhat.com
Bridge name : ovirtmgmt
Host address : dell-per515-02.lab.eng.pek2.redhat.com
SSH daemon port : 22
Firewall manager : iptables
Gateway address : 10.73.75.254
Storage Domain type : iscsi
LUN ID : 360000000000000000e00000000010001
Image size GB : 50
iSCSI Portal IP Address : 10.66.8.158
iSCSI Target Name : iqn.iscsi-2017.com
iSCSI Portal port : 3260
Host ID : 1
iSCSI Target Portal Group Tag : 1
iSCSI portal login user : yzhao
Memory size MB : 16384
Console type : vnc
Number of CPUs : 4
MAC address : 52:54:00:5e:8e:c7
OVF archive (for disk boot) : /usr/share/ovirt-engine-appliance/rhvm-appliance-4.2-20180125.0.el7.ova
Appliance version : 4.2-20180125.0.el7
Restart engine VM after engine-setup: True
Engine VM timezone : Asia/Shanghai
CPU Type : model_Opteron_G5

Please confirm installation settings (Yes, No)[Yes]:
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Configuring the management bridge
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO ] Image for 'hosted-engine.lockspace' created successfully
[ INFO ] Creating Image for 'hosted-engine.metadata' ...
[ INFO ] Image for 'hosted-engine.metadata' created successfully
[ INFO ] Creating VM Image
[ INFO ] Extracting disk image from OVF archive (could take a few minutes depending on archive size)
[ INFO ] Validating pre-allocated volume size
[ INFO ] Uploading volume to data domain (could take a few minutes depending on archive size)
[ INFO ] Image successfully imported from OVF
[ INFO ] Destroying Storage Pool
[ INFO ] Start monitoring domain
[ INFO ] Configuring VM
[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Creating VM
You can now connect to the VM with the following command:
hosted-engine --console
You can also graphically connect to the VM from your system with the following command:
remote-viewer vnc://dell-per515-02.lab.eng.pek2.redhat.com:5900
Use temporary password "1281pbkm" to connect to vnc console.
Please ensure that your Guest OS is properly configured to support serial console according to your distro documentation.
Follow http://www.ovirt.org/Serial_Console_Setup#I_need_to_access_the_console_the_old_way for more info.
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
[ INFO ] Running engine-setup on the appliance
|- [ INFO ] Stage: Initializing
|- [ INFO ] Stage: Environment setup
|- Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-wsp.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/root/ovirt-engine-answers', '/root/heanswers.conf']
|- Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20180129171503-udm66v.log
|- Version: otopi-1.7.6 (otopi-1.7.6-1.el7ev)
|- [ INFO ] Stage: Environment packages setup
|- [ INFO ] Stage: Programs detection
|- [ INFO ] Stage: Environment setup
|- [ INFO ] Stage: Environment customization
|-
|- --== PRODUCT OPTIONS ==--
|-
|- Configure ovirt-provider-ovn (Yes, No) [Yes]:
|- Configure Image I/O Proxy on this host? (Yes, No) [Yes]:
|-
|- * Please note * : Data Warehouse is required for the engine.
|- If you choose to not configure it on this host, you have to configure
|- it on a remote host, and then configure the engine on this host so
|- that it can access the database of the remote Data Warehouse host.
|- Configure Data Warehouse on this host (Yes, No) [Yes]:
|-
|- --== PACKAGES ==--
|-
|-
|- --== NETWORK CONFIGURATION ==--
|-
|- [ INFO ] firewalld will be configured as firewall manager.
|-
|- --== DATABASE CONFIGURATION ==--
|-
|- Where is the DWH database located? (Local, Remote) [Local]:
|- Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications.
|- Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually?
(Automatic, Manual) [Automatic]:
|-
|- --== OVIRT ENGINE CONFIGURATION ==--
|-
|- Use default credentials (admin@internal) for ovirt-provider-ovn (Yes, No) [Yes]:
|-
|- --== STORAGE CONFIGURATION ==--
|-
|-
|- --== PKI CONFIGURATION ==--
|-
|-
|- --== APACHE CONFIGURATION ==--
|-
|-
|- --== SYSTEM CONFIGURATION ==--
|-
|-
|- --== MISC CONFIGURATION ==--
|-
|- Please choose Data Warehouse sampling scale:
|- (1) Basic
|- (2) Full
|- (1, 2)[1]:
|-
|- --== END OF CONFIGURATION ==--
|-
|- [ INFO ] Stage: Setup validation
|-
|- --== CONFIGURATION PREVIEW ==--
|-
|- Application mode : both
|- Default SAN wipe after delete : False
|- Firewall manager : firewalld
|- Update Firewall : True
|- Host FQDN : rhevh-hostedengine-vm-04.lab.eng.pek2.redhat.com
|- Configure local Engine database : True
|- Set application as default page : True
|- Configure Apache SSL : True
|- Engine database secured connection : False
|- Engine database user name : engine
|- Engine database name : engine
|- Engine database host : localhost
|- Engine database port : 5432
|- Engine database host name validation : False
|- Engine installation : True
|- PKI organization : lab.eng.pek2.redhat.com
|- Set up ovirt-provider-ovn : True
|- Configure WebSocket Proxy : True
|- DWH installation : True
|- DWH database host : localhost
|- DWH database port : 5432
|- Configure local DWH database : True
|- Configure Image I/O Proxy : True
|- Configure VMConsole Proxy : True
|- [ INFO ] Stage: Transaction setup
|- [ INFO ] Stopping engine service
|- [ INFO ] Stopping ovirt-fence-kdump-listener service
|- [ INFO ] Stopping dwh service
|- [ INFO ] Stopping Image I/O Proxy service
|- [ INFO ] Stopping vmconsole-proxy service
|- [ INFO ] Stopping websocket-proxy service
|- [ INFO ] Stage: Misc configuration
|- [ INFO ] Stage: Package installation
|- [ INFO ] Stage: Misc configuration
|- [ INFO ] Upgrading CA
|- [ INFO ] Initializing PostgreSQL
|- [ INFO ] Creating PostgreSQL 'engine' database
|- [ INFO ] Configuring PostgreSQL
|- [ INFO ] Creating PostgreSQL 'ovirt_engine_history' database
|- [ INFO ] Configuring PostgreSQL
|- [ INFO ] Creating CA
|- [ INFO ] Creating/refreshing DWH database schema
|- [ INFO ] Configuring Image I/O Proxy
|- [ INFO ] Setting up ovirt-vmconsole proxy helper PKI artifacts
|- [ INFO ] Setting up ovirt-vmconsole SSH PKI artifacts
|- [ INFO ] Configuring WebSocket Proxy
|- [ INFO ] Creating/refreshing Engine database schema
|- [ INFO ] Creating/refreshing Engine 'internal' domain database schema
|- [ INFO ] Adding default OVN provider to database
|- [ INFO ] Adding OVN provider secret to database
|- [ INFO ] Setting a password for internal user admin
|- [ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
|- [ INFO ] Stage: Transaction commit
|- [ INFO ] Stage: Closing up
|- [ INFO ] Starting engine service
|- [ INFO ] Starting dwh service
|- [ INFO ] Restarting ovirt-vmconsole proxy service
|-
|- --== SUMMARY ==--
|-
|- [ INFO ] Restarting httpd
|- Please use the user 'admin@internal' and password specified in order to login
|- Web access is enabled at:
|- http://rhevh-hostedengine-vm-04.lab.eng.pek2.redhat.com:80/ovirt-engine
|- https://rhevh-hostedengine-vm-04.lab.eng.pek2.redhat.com:443/ovirt-engine
|- Internal CA EF:64:D9:92:4D:35:39:D5:84:0F:DA:01:61:3C:4C:BF:43:D2:14:9C
|- SSH fingerprint: SHA256:QEpIKzT26E5b1ntVvCuABN3B5JIVj/ky+QHwulprjxE
|-
|- --== END OF SUMMARY ==--
|-
|- [ INFO ] Stage: Clean up
|- Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20180129171503-udm66v.log
|- [ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180129171958-setup.conf'
|- [ INFO ] Stage: Pre-termination
|- [ INFO ] Stage: Termination
|- [ INFO ] Execution of setup completed successfully
|- HE_APPLIANCE_ENGINE_SETUP_SUCCESS
[ INFO ] Engine-setup successfully completed
[ INFO ] Engine is still unreachable
[ INFO ] Engine is still not reachable, waiting...
[ INFO ] Engine replied: DB Up!Welcome to Health Status!
[ INFO ] Acquiring internal CA cert from the engine
[ INFO ] The following CA certificate is going to be used, please immediately interrupt if not correct:
[ INFO ] Issuer: C=US, O=lab.eng.pek2.redhat.com, CN=rhevh-hostedengine-vm-04.lab.eng.pek2.redhat.com.87973, Subject: C=US, O=lab.eng.pek2.redhat.com, CN=rhevh-hostedengine-vm-04.lab.eng.pek2.redhat.com.87973, Fingerprint (SHA-1): EF64D9924D3539D5840FDA01613C4CBF43D2149C
[ INFO ] Connecting to Engine
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] The VDSM Host is now operational
[ INFO ] Saving hosted-engine configuration on the shared storage domain
[ INFO ] Shutting down the engine VM
[ INFO ] Enabling and starting HA services
[ INFO ] Waiting for engine to start...
[ INFO ] Still waiting for engine to start...
[ INFO ] Still waiting for engine to start...
[ INFO ] Engine is up
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180129172701.conf'
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Hosted Engine successfully deployed
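For reference, since the CLI flow succeeds against the same portal, the target retrieval itself can be sanity-checked outside of cockpit/otopi. A minimal sketch, assuming iscsiadm (iscsi-initiator-utils) is installed and no CHAP is required for discovery:

import subprocess

def discover_targets(portal_ip, portal_port=3260):
    # Each SendTargets output line looks like:
    # '10.66.8.158:3260,1 iqn.iscsi-2017.com'
    out = subprocess.check_output(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets",
         "-p", "%s:%s" % (portal_ip, portal_port)],
        universal_newlines=True,
    )
    return [line.split()[1] for line in out.splitlines() if line.strip()]

print(discover_targets("10.66.8.158"))
# Expected against the portal above: ['iqn.iscsi-2017.com', 'iqn.iscsi-2018.com']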
Created attachment 1387675 [details] cockpit_failed.png
*** Bug 1541029 has been marked as a duplicate of this bug. ***
Per Simone, this went into 4.2.1
Still failing in cockpit-ovirt-0.11.9-0.1

2018-02-02 13:22:25,653+0100 DEBUG otopi.context context.dumpEnvironment:833 ENV OVEHOSTED_STORAGE/iSCSIPortalIPAddress=str:'192.168.1.125'
2018-02-02 13:22:25,653+0100 DEBUG otopi.context context.dumpEnvironment:833 ENV OVEHOSTED_STORAGE/iSCSIPortalPassword=str:''
2018-02-02 13:22:25,653+0100 DEBUG otopi.context context.dumpEnvironment:833 ENV OVEHOSTED_STORAGE/iSCSIPortalPort=str:'3260'
2018-02-02 13:22:25,653+0100 DEBUG otopi.context context.dumpEnvironment:833 ENV OVEHOSTED_STORAGE/iSCSIPortalUser=str:''
2018-02-02 13:22:25,653+0100 DEBUG otopi.context context.dumpEnvironment:833 ENV OVEHOSTED_STORAGE/iSCSITargetName=str:''

2018-02-02 13:27:00,111+0100 DEBUG otopi.plugins.gr_he_setup.storage.blockd blockd._customization:696 target: , tpgt: 1
2018-02-02 13:27:00,111+0100 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/blockd.py", line 699, in _customization
    ip_port_list = valid_targets_dict[target][tpgt]
KeyError: ''
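What the traceback boils down to, as a minimal sketch (the dict shape is inferred from the 'target: , tpgt: 1' debug line and the discovery output in the description, not taken from the blockd.py source):

# OVEHOSTED_STORAGE/iSCSITargetName arrives as an empty string from cockpit,
# and blockd._customization indexes the discovered-targets dict with it.
valid_targets_dict = {
    'iqn.iscsi-2017.com': {1: ['10.66.8.158:3260']},
    'iqn.iscsi-2018.com': {1: ['10.66.8.158:3260']},
}

target = ''   # empty value passed through from the cockpit wizard
tpgt = 1

ip_port_list = valid_targets_dict[target][tpgt]   # raises KeyError: ''

Treating '' as "not answered" (so otopi falls back to its interactive target selection) would avoid the crash; that is one plausible shape of a fix, not necessarily the actual patch.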
It still shows None instead of an empty field, but you can continue. The None value will be fixed as part of https://bugzilla.redhat.com/show_bug.cgi?id=1541848
Can deploy HE with iSCSI via cockpit.

Test version:
cockpit-ws-157-1.el7.x86_64
cockpit-bridge-157-1.el7.x86_64
cockpit-storaged-157-1.el7.noarch
cockpit-dashboard-157-1.el7.x86_64
cockpit-157-1.el7.x86_64
cockpit-ovirt-dashboard-0.11.10-0.1.el7ev.noarch
cockpit-system-157-1.el7.noarch
ovirt-hosted-engine-setup-2.2.9-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.4-1.el7ev.noarch
rhvh-4.2.1.2-0.20180202.0+1
rhvm-appliance-4.2-20180202.0.el7.noarch

So, changing the bug's status to VERIFIED!
Moving back to MODIFIED status because a new patch was imported.
Created attachment 1391788 [details] scenario1_1
Created attachment 1391789 [details] scenario1_2
Created attachment 1391790 [details] scenario1_1_failed
Created attachment 1391791 [details] scenario1_2_failed
Created attachment 1391804 [details] scenario2
Created attachment 1391805 [details] scenario2_1
Update:
I have tested some scenarios with the latest version (cockpit-ovirt-dashboard-0.11.11-0.1.el7ev.noarch).

1. Fill in the full iSCSI configuration on the storage page (user/password, target name, LUN):
https://bugzilla.redhat.com/attachment.cgi?id=1391788
https://bugzilla.redhat.com/attachment.cgi?id=1391789
Then connect to the iSCSI target (scenario1_1 needs the user/password, scenario1_2 does not).

Result:
The requested device is not listed by VDSM
Failed to execute stage 'Environment customization': Cannot access LUN
Hosted Engine deployment failed
https://bugzilla.redhat.com/attachment.cgi?id=1391790
https://bugzilla.redhat.com/attachment.cgi?id=1391791

2. Fill in only the iSCSI target IP and port (leave user/password, target name, and LUN empty):
https://bugzilla.redhat.com/attachment.cgi?id=1391804
https://bugzilla.redhat.com/attachment.cgi?id=1391805

Result: all iSCSI targets are listed; after selecting the target to use, the deployment process starts.

So, changing the bug's status to ASSIGNED.
(In reply to Yihui Zhao from comment #15)
> Update:
> I have tested some scenarios with the latest version
> (cockpit-ovirt-dashboard-0.11.11-0.1.el7ev.noarch)
>
> 1. Finish the iscsi configuration on the storage page(user/password, target
> name, Lun)
> https://bugzilla.redhat.com/attachment.cgi?id=1391788

In the destination LUN field you have to enter the LUN uuid, and I don't think that 1 is a reasonable uuid.
Can you please retry leaving that field empty in order to let otopi propose a selection menu for you?
(In reply to Simone Tiraboschi from comment #17)
> In destination LUN field you have the enter the LUN uuid and I don't think
> that 1 is a reasonable uuid.
> Can you please retry leaving that field empty in order to let otopi propose
> a selection menu for you?

I opened a new specific bug on LUN uuid validation.
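For reference, a hypothetical sketch of what a client-side check in the spirit of that validation bug might look like, assuming the field expects the 33-character hex multipath WWID shown in the transcript (360000000000000000e00000000010001) rather than a menu ordinal:

import re

# Hypothetical validator (illustration only, not the actual patch): reject
# entries like '1', which are menu ordinals rather than LUN uuids, so the
# user either enters a real WWID or leaves the field empty and lets otopi
# offer a selection menu.
LUN_WWID_RE = re.compile(r'^[0-9a-f]{33}$')

def looks_like_lun_uuid(value):
    return bool(LUN_WWID_RE.match(value.strip().lower()))

assert looks_like_lun_uuid('360000000000000000e00000000010001')
assert not looks_like_lun_uuid('1')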
(In reply to Simone Tiraboschi from comment #17)
> (In reply to Yihui Zhao from comment #15)
> > Update:
> > I have tested some scenarios with the latest version
> > (cockpit-ovirt-dashboard-0.11.11-0.1.el7ev.noarch)
> >
> > 1. Finish the iscsi configuration on the storage page(user/password, target
> > name, Lun)
> > https://bugzilla.redhat.com/attachment.cgi?id=1391788
>
> In destination LUN field you have the enter the LUN uuid and I don't think
> that 1 is a reasonable uuid.
> Can you please retry leaving that field empty in order to let otopi propose
> a selection menu for you?

Leaving the destination LUN field empty, HE deploys successfully.

See comment 15:
2. Fill in only the iSCSI target IP and port (leave user/password, target name, and LUN empty):
https://bugzilla.redhat.com/attachment.cgi?id=1391804
https://bugzilla.redhat.com/attachment.cgi?id=1391805

Result: all iSCSI targets are listed; after selecting the target to use, the deployment process starts.
Hey Yihui -

Can we move this back to VERIFIED, then?

Let's document this for beta and release, with improved discovery values in 4.2.2.
I'm going to rework this for 4.2.2.

The iSCSI discovery flow should allow the following:

(Optional) username, password
(Required) target address

This is to be followed by two buttons (a rough sketch of what they might drive is below):

<Test> (to verify username/password works)
<Discover> (get a list of targets and populate a dropdown)

There is not enough time to complete this for 4.2.1, and it's not feasible to adequately resolve this by getting an actual listing of LUNs prior to 4.2.1, which leaves two options:

* Completely disable all of these fields for now, leaving them blank, and let otopi succeed.
* Document that users should input the IQN, which would be a reasonable assumption anyway.

In the interest of shipping 4.2.1, I'd advocate for the latter. Yaniv?
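A minimal sketch of the backend the two buttons could call. All names here are hypothetical, and the real wizard code is JavaScript; Python is used only for consistency with the rest of this thread, and CHAP discovery auth (which needs extra iscsiadm record settings) is elided:

import subprocess

def _sendtargets(portal):
    # Shared helper: run a SendTargets discovery against 'ip:port'.
    return subprocess.check_output(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        universal_newlines=True,
    )

def test_portal(portal):
    # <Test>: succeed iff the discovery itself works.
    try:
        _sendtargets(portal)
        return True
    except (subprocess.CalledProcessError, OSError):
        return False

def discover(portal):
    # <Discover>: return the IQNs to populate the dropdown.
    lines = _sendtargets(portal).splitlines()
    return sorted({line.split()[1] for line in lines if line.strip()})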
(In reply to Ryan Barry from comment #20)
> Hey Yihui -
>
> Can we move this back to VERIFIED, then?
>
> Let's document this for beta, and release, with improved discovery values in
> 4.2.2

Ryan, Simone filed a new bug to track the issue. Given that the release day is coming, I think we should just document that users should, for example, only input the target address for now.

The new bug to track the "cockpit validation":
https://bugzilla.redhat.com/show_bug.cgi?id=1542426

If we agree to close this one that way, I will change its status to VERIFIED when it moves to ON_QA.
The new flow is definitely the ansible-based one, and the otopi one is just a fallback.
In my opinion it doesn't make much sense to develop something new on the cockpit side specific to the vintage otopi flow (not even for 4.2.2).
I'd suggest simply hiding the iSCSI-related fields on the cockpit side, letting otopi take over, and focusing on a proper design of the iSCSI dialog (with multipath support) for the ansible-based flow in 4.2.2.
I agree, but the code for the wizard is mostly shared, so a 'proper' implementation of iSCSI discovery will affect both.
(In reply to Ryan Barry from comment #24)
> I agree, but the code for the wizard is mostly shared, so a 'proper'
> implementation of iSCSI discovery will affect both.

This is an issue by itself: in the new ansible flow we have a specific playbook that performs the iSCSI discovery using the REST API, with no need to directly deal with iscsiadm and multipath tools.
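For reference, that REST-API-based discovery can be exercised directly with the oVirt Python SDK (ovirtsdk4). A sketch with placeholder engine URL, credentials, and host name; it assumes the host is already registered in the engine:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]
host_service = hosts_service.host_service(host.id)

# Ask VDSM on that host to run the discovery -- no iscsiadm/multipath
# handling on the client side.
targets = host_service.iscsi_discover(
    iscsi=types.IscsiDetails(address='10.66.8.158', port=3260),
)
print(targets)

connection.close()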
This is exactly what's being used by IscsiUtil.js on the storage page (both flows heavily use Ansible to discover facts).

My hope is to make this more interactive from the wizard, not to use iscsiadm.
According to comment 22, changing the bug's status to VERIFIED. For the "cockpit validation" issue, see bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1542426
This bugzilla is included in oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.