Description of problem:
Cannot add an SSH key to an existing VM.

Version-Release number of selected component (if applicable):
Virtualization 4.12.1

How reproducible:
Always

Steps to Reproduce:
1. Create a VM using the GUI (create RHEL or Fedora).
2. Do not specify an SSH public key when customizing the VM.
3. After the VM is created, go to the Scripts tab under the VM details.
4. Edit the Authorized SSH Key.
5. Edit the secret and upload an SSH public key, or attach an existing secret.
6. Save the key.
7. Restart or stop/start the VM using the GUI (do not issue a shutdown or restart command in the OS).

Actual results:
The VM's YAML gets updated with the secret information:
~~~
spec:
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            configDrive: {}
          source:
            secret:
              secretName: rhel9-test-ssh-key-anbr6q
~~~
But the authorized_keys file does not get updated. The virt-launcher pod does get recreated during the restart process.

Expected results:
The user's .ssh/authorized_keys file gets updated.

Additional info:
Adding the SSH public key using the Scripts tab works during the initial creation of the VM. Section "4.3.1.11. Scripts tab" of the following document makes it sound like adding the key after the VM's creation is possible:
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/virtualization/index#ui-virtualmachine-details-scripts_virt-web-console-overview

Also, adding a key in the GUI mentions that a restart of the VM is required for the key to be added. It shows the following message after making changes:
"Warning alert: Restart required to apply changes. The changes you have made require this VirtualMachine to be restarted."

After saving the changes, a banner at the top of the VM's page shows the following:
"Warning alert: Pending Changes. The following areas have pending changes that will be applied when this VirtualMachine is restarted.
- Scripts > Authorized SSH Key"

All of this makes it look possible to add the keys after the VM's creation.
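For reference, the secret the GUI creates on upload is an ordinary Kubernetes Secret holding the public key, which the accessCredentials stanza above references by name. A minimal sketch of what it may look like; the data field name and key contents are assumptions, not taken from the cluster:
~~~
apiVersion: v1
kind: Secret
metadata:
  name: rhel9-test-ssh-key-anbr6q   # name referenced by spec.template.spec.accessCredentials
type: Opaque
stringData:
  key: ssh-rsa AAAAB3NzaC1yc2E... user@host   # the SSH public key contents (illustrative)
~~~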
I could reproduce the issue, but I don't see any difference between adding the secret before the VM is created and adding it afterwards. Tal, could you ask someone to look at the issue? It doesn't look like a UI issue to me.
The observed behavior is the expected behavior for cloud-init applying the config from a configDrive. The management of the authorized key outside the VM is tracked in https://issues.redhat.com/browse/CNV-6036. I propose changing the text of the warning to "Warning alert: The authorized SSH key is only applied by cloud-init during the first boot of a VM" until https://issues.redhat.com/browse/CNV-6036 is resolved, at which point the text can be revised again.
https://bugzilla.redhat.com/show_bug.cgi?id=2151826 has the same cause: editing the cloud-init configuration of an existing VM won't work.
@dholler is there a way to know if a VM has already had its first start and the cloud-init configuration applied? If we can get this information, we can disable the editing. I would completely remove the pending change / restart required alerts and add an alert on top of the cloud-init section instead. @yfrimanm
@dvossel is there a way to know if a VM has already had its first start and the cloud-init configuration applied? Maybe status.created or status.printableStatus?
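For context, both fields mentioned here live on the VirtualMachine status. A trimmed example of the relevant stanza, with illustrative values:
~~~
status:
  created: true            # true while the backing VMI object exists in the cluster
  printableStatus: Running # human-readable VM state shown by clients
~~~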
@dholler Thanks, Dominik. status.created seems great. But I have another question for you: do we plan to change this behavior in the future? Will we add the possibility to attach SSH keys after creation, or is this not planned because of technical limitations?
> Do we plan to change this behavior in the future?

Yes, we plan to improve it.

> Will we add the possibility to attach SSH keys after creation, or is this not planned because of technical limitations?

This is tracked in https://issues.redhat.com/browse/CNV-6036.
> @dvossel is there a way to know if a VM has already had its first start and the cloud-init configuration applied? Maybe status.created or status.printableStatus?

It's currently not possible to tell by looking at the VM/VMI API whether or not cloud-init has already run. This information is specific to the guest OS. It's technically possible for a VM to run once without cloud-init executing, so any heuristic that tries to detect whether cloud-init was applied based on previous VM starts would be inaccurate.

Using the guest agent dynamic SSH key injection feature [1] is the only reliable way I'm aware of to add SSH keys after creation.

1. http://kubevirt.io/user-guide/virtual_machines/accessing_virtual_machines/#dynamic-ssh-public-key-injection-via-qemu-guest-agent
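For illustration, a minimal sketch of what that feature looks like in the VM spec, based on the linked user guide; the secret name and guest user name here are assumptions:
~~~
spec:
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          source:
            secret:
              secretName: my-pub-key   # assumed secret holding the SSH public key
          propagationMethod:
            qemuGuestAgent:            # injects keys at runtime via the QEMU guest agent
              users:
              - cloud-user             # assumed guest user to receive the key
~~~
Unlike the configDrive method from the original report, this propagation method requires the QEMU guest agent to be running in the guest, but it can update authorized keys after the first boot.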
@gouyang at this point, I would just add an alert message under the Scripts tab title explaining the situation. After the new implementation, we'll remove the message and that's it. We'll also snooze the cloud-init message 'restart for pending changes'.
(In reply to Ugo Palatucci from comment #9)
> @gouyang at this point, I would just add an alert message under
> the Scripts tab title explaining the situation.
> After the new implementation, we'll remove the message and that's it.
> We'll also snooze the cloud-init message 'restart for pending changes'.

Sounds good to me.
Verified on kubevirt-console-plugin-rhel9:v4.14.0-1195; it now shows a warning: "Cloud-init and SSH key configurations will be applied to the VirtualMachine only at the first boot."
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:6817
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days