Description of problem:

- Going through the valid configuration changes that can be implemented using the MCO [1], SSH keys are one of the configurations that can be changed by modifying the worker and master MachineConfigs.
- However, the SSH public key is normally provided at the beginning of the installation, so that the nodes have the public key in the ~core/.ssh/authorized_keys file on the respective nodes.
- If an SSH public key is given at installation time, it is then still possible to add/remove SSH public keys as per [2]. We noticed this works when the '99-worker-ssh' and '99-master-ssh' MachineConfigs exist. Checked in a lab environment: neither of these MachineConfigs is created if no SSH public key is provided at the beginning of the installation (IPI on AWS).
- We could not proceed with the solution in [3] since no SSH public keys were provided at installation time. For now, we are working around the issue by running `oc debug node/<nodename>` and then running the required commands. Is this a limitation, or is there a better way to handle this?

[1] - https://github.com/openshift/machine-config-operator#applying-configuration-changes-to-the-cluster
[2] - https://github.com/openshift/machine-config-operator/blob/master/docs/Update-SSHKeys.md
[3] - https://access.redhat.com/solutions/4073041

Version-Release number of selected component (if applicable):
OCP 4.1 using IPI on AWS

Steps to Reproduce:
1. Install a new cluster using IPI on AWS and do not provide an SSH public key
2. Try to add a new SSH public key using the MCO

Actual results:
1. Not able to add/remove SSH keys post-installation if no SSH public key was provided at the beginning of the installation

Expected results:
1. A working way to add a new SSH key for the core user on the nodes if the user later decides to access the nodes for troubleshooting.
2. We can update the docs accordingly.
Abhinav, while I test this out, was this the intended behavior? Should we always create an SSH MC pair even if no SSH key is provided at installation?
You can just create the SSH MC though, so here's how:

$ cat 99-worker-ssh.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-ssh
spec:
  config:
    ignition:
      version: 2.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-rsa XXXXXXXXXX

$ oc create -f 99-worker-ssh.yaml

The same goes for the master pool. Afterwards, deploy the bastion and it will work.

Closing as not a bug, but I'm taking the action to improve our docs in the MCO repo (https://github.com/openshift/machine-config-operator/pull/1078).
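For completeness, the master-pool counterpart would look like the following. This is a sketch mirroring the worker MC above; the only assumed differences are the role label and the name '99-master-ssh', and the key value is still a placeholder:

```yaml
# Hypothetical master-pool counterpart of the worker MC above;
# only the role label and the metadata name change.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-ssh
spec:
  config:
    ignition:
      version: 2.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-rsa XXXXXXXXXX   # placeholder key, as in the worker example
```

Apply it the same way, e.g. `oc create -f 99-master-ssh.yaml`.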
Can you provide the output of the following after you create the SSH key MCs:

$ oc describe machineconfigpool master
$ oc describe machineconfigpool worker

Are you waiting for the pools to roll out the changes?
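For reference, the signal to look for in that output is the pool's Updated condition. The snippet below is a sketch: the JSON is a hand-made sample of a pool's status conditions (an assumption, not real cluster output). On a live cluster, the equivalent one-liner would be `oc get machineconfigpool worker -o jsonpath='{.status.conditions[?(@.type=="Updated")].status}'`.

```shell
# Hand-made sample of a MachineConfigPool's status conditions (assumption);
# a fully rolled-out pool reports Updated=True and Updating=False.
status='{"conditions":[{"type":"Updated","status":"True"},{"type":"Updating","status":"False"},{"type":"Degraded","status":"False"}]}'

# Extract the Updated condition, as the jsonpath query above would on a live cluster.
updated=$(printf '%s' "$status" | python3 -c 'import json, sys
conds = json.load(sys.stdin)["conditions"]
print(next(c["status"] for c in conds if c["type"] == "Updated"))')
echo "Updated=$updated"   # prints Updated=True
```

Until that condition reports True, the SSH key has not yet landed on all nodes in the pool, which would explain a key that "does not work" right after creating the MC.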
Thanks Ravi for triple-checking this. We have a PR in flight to update our own documentation here: https://github.com/openshift/machine-config-operator/pull/1078 - any input would be highly appreciated. Closing this now.