Bug 1514466

Summary: Configuring the host from cockpit gets stuck if ssh public key authentication is not configured
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: SATHEESARAN <sasundar>
Component: rhhi
Assignee: Sahina Bose <sabose>
Status: CLOSED DUPLICATE
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: medium
Version: rhhi-1.1
CC: rhs-bugs
Target Milestone: ---
Keywords: Tracking
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1514490 (view as bug list)
Environment:
Last Closed: 2018-11-20 09:24:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1514490, 1651516
Bug Blocks:

Attachments:
Screenshot that shows the cockpit UI (flags: none)

Description SATHEESARAN 2017-11-17 14:20:58 UTC
Description of problem:
-----------------------
If ssh public key authentication is not configured and deployment is initiated, there is no clear error indication and the flow hangs until the user cancels and restarts the deployment.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gdeploy-2.0.2-19.el7rhgs
ansible-2.4.0.1-1
cockpit-ovirt-dashboard-0.10.9-1.el7ev.noarch

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Skip the step of copying the ssh public key to the other hosts in the cluster
2. Start the deployment

Actual results:
---------------
The cockpit window is stuck on the deployment page.
There is no option to 'redeploy' other than cancelling the installation.

Expected results:
-----------------
1. An error should be shown in the cockpit UI when a host cannot be reached
2. It should be possible to redeploy using the existing conf file
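For context, step 1 of the reproducer skips the usual key-distribution step, and the expected behaviour above amounts to a pre-flight connectivity check. Below is a minimal sketch of both, assuming root access and the host addresses from the gdeploy log in comment 2; the `check_ssh` helper is hypothetical and not part of gdeploy or cockpit-ovirt:

```shell
#!/bin/sh
# The key-distribution step that was skipped in this bug (run once,
# interactively, before starting the deployment):
#   for h in 10.70.36.73 10.70.36.74 10.70.36.75; do ssh-copy-id root@"$h"; done

# check_ssh HOST: succeed only if passwordless (BatchMode) ssh to HOST works.
# BatchMode=yes makes ssh fail instead of prompting for a password, which is
# exactly the condition that left the cockpit UI stuck.
check_ssh() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 \
        -o StrictHostKeyChecking=no "root@$1" true 2>/dev/null
}

# Pre-flight check over all hosts passed on the command line.
for h in "$@"; do
    if check_ssh "$h"; then
        echo "OK: passwordless ssh to $h"
    else
        echo "FAIL: no passwordless ssh to $h" >&2
    fi
done
```

Running such a check before invoking gdeploy would let the UI report the unreachable host immediately instead of hanging in the playbook run.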

Comment 1 SATHEESARAN 2017-11-17 14:21:45 UTC
Created attachment 1354186 [details]
Screenshot that shows the cockpit UI

Comment 2 SATHEESARAN 2017-11-17 14:25:15 UTC
Error message as seen from gdeploy logs:

<snip>


[2017-11-17 18:25:27] INFO gdeploy[186]: **** gdeploy run started with options ['--version'] ****
[2017-11-17 18:25:27] INFO gdeploy[187]: For complete CLI log, set log_path in /etc/ansible/ansible.cfg
[2017-11-17 18:39:28] INFO gdeploy[185]: 


[2017-11-17 18:39:28] INFO gdeploy[186]: **** gdeploy run started with options ['-c', '/tmp/gdeployConfig.conf'] ****
[2017-11-17 18:39:28] INFO gdeploy[187]: For complete CLI log, set log_path in /etc/ansible/ansible.cfg
[2017-11-17 18:39:28] INFO gdeploy[197]: gdeploy hosts: ['10.70.36.73', '10.70.36.74', '10.70.36.75']
[2017-11-17 18:39:28] INFO gdeploy[170]: Playbook directory for the run: /usr/share/gdeploy/playbooks
[2017-11-17 18:39:28] INFO helpers.py[104]: Copied files from /usr/share/gdeploy/playbooks to /tmp/tmprmF1YV
[2017-11-17 18:39:28] INFO core_function_caller.py[40]: No tuning profiles specified, ignoring
[2017-11-17 18:39:28] INFO call_features.py[146]: Parsing json file: /usr/lib/python2.7/site-packages/gdeployfeatures/script/script.json
[2017-11-17 18:39:28] INFO helpers.py[376]: Invoking playbook  /tmp/tmprmF1YV/run-script.yml
[2017-11-17 18:46:52] INFO gdeploy[185]: 


</snip>

Comment 3 Sahina Bose 2018-11-20 09:24:27 UTC

*** This bug has been marked as a duplicate of bug 1649485 ***