Description of problem:

Before the cockpit-ceph-installer was created, when using Ansible directly we instructed the customer to create a regular user and set up passwordless sudo for it on all cluster nodes. [1] We had them set up SSH keys and distribute the keys to all nodes. [2]

By default cockpit-ceph-installer uses root, but it has support for using a regular user with passwordless sudo. [3] However, cockpit-ceph-installer still tries to use its own generated SSH keys. If you add a node on the Hosts page you get this error:

----
SSH Authentication Error
You need to copy the ssh public key from this host to jb-ceph4-osd3, and ensure the user 'admin' is configured for passwordless SUDO. e.g.
sudo ssh-copy-id -f -i /usr/share/ansible-runner-service/env/ssh_key.pub admin@jb-ceph4-osd3
----

This RFE is a request for cockpit-ceph-installer, when running from sudo, to check whether the user has SSH keys in ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub, and if so, to use those.*

1) https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/installation_guide_for_red_hat_enterprise_linux/index#creating-an-ansible-user-with-sudo-access-install
2) https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/installation_guide_for_red_hat_enterprise_linux/index#enabling-passwordless-ssh-for-ansible
3) See "SUDO support" here: https://github.com/red-hat-storage/cockpit-ceph-installer

* It should probably check for all the default named keys (from man ssh-keygen):
~/.ssh/id_dsa
~/.ssh/id_ecdsa
~/.ssh/id_ed25519
~/.ssh/id_rsa
~/.ssh/id_dsa.pub
~/.ssh/id_ecdsa.pub
~/.ssh/id_ed25519.pub
~/.ssh/id_rsa.pub

Version-Release number of selected component (if applicable):
cockpit-ceph-installer-0.9-7.el8cp.noarch
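The requested key-detection logic could be sketched roughly as below. This is a hypothetical illustration, not the actual cockpit-ceph-installer code: the function name pick_ssh_key is made up, the default key names come from man ssh-keygen as listed above, and the fallback path is the installer-generated key from the error message.

```shell
#!/bin/sh
# Hypothetical sketch of the RFE: given an ~/.ssh directory, return the
# first default-named private key that exists together with its .pub file;
# otherwise fall back to the key generated by ansible-runner-service.
pick_ssh_key() {
    sshdir="$1"
    for name in id_dsa id_ecdsa id_ed25519 id_rsa; do
        if [ -r "$sshdir/$name" ] && [ -r "$sshdir/$name.pub" ]; then
            # Found a complete default key pair; use it.
            echo "$sshdir/$name"
            return 0
        fi
    done
    # No user key pair found: keep the installer's own generated key.
    echo "/usr/share/ansible-runner-service/env/ssh_key"
}

# Example invocation against the invoking user's ~/.ssh directory.
pick_ssh_key "$HOME/.ssh"
```

When run via sudo, a real implementation would also need to resolve the invoking user's home directory (e.g. from SUDO_USER) rather than root's, since $HOME under sudo may point at /root.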
fix available in 1.1 release
Verified using:
cockpit-ceph-installer-1.1-0.el7cp.noarch
ansible-2.8.12-1.el7ae.noarch
ceph-ansible-4.0.24-1.el7cp.noarch
Hi Paul,
While testing I discovered this was only applicable to the customer-created user, but the root user still needed the Cockpit Ceph Installer SSH key copied to all nodes in the cluster.

Re-opening the BZ. I think this fix should also be applicable to the root user. @John Brier let me know your opinion.
(In reply to Ameena Suhani S H from comment #10)
> Hi Paul,
> While testing I discovered this was only applicable to the customer-created
> user but the root user still needed to configure Cockpit Ceph Installer SSH
> key to all nodes in the cluster.
>
> Re-opening the Bz. I think this fix should also be applicable to the root
> user. @John Brier let me know your opinion.

I'm okay with it only being for the non-root customer-created user. You don't need to run Ansible as root.
(In reply to John Brier from comment #11)
> (In reply to Ameena Suhani S H from comment #10)
> > Hi Paul,
> > While testing I discovered this was only applicable to the customer-created
> > user but the root user still needed to configure Cockpit Ceph Installer SSH
> > key to all nodes in the cluster.
> >
> > Re-opening the Bz. I think this fix should also be applicable to the root
> > user. @John Brier let me know your opinion.
>
> I'm okay with it only being for the non-root customer created user. You
> don't need to run Ansible as root.

Based on @John Brier's input, moving to "Verified" state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:3003