Bug 1932638
| Summary: | Removing ssh keys MC does not remove the key from authorized_keys | ||
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | OpenShift BugZilla Robot <openshift-bugzilla-robot> |
| Component: | Machine Config Operator | Assignee: | Kirsten Garrison <kgarriso> |
| Status: | CLOSED ERRATA | QA Contact: | Michael Nguyen <mnguyen> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 4.5 | CC: | mbetti, mkrejci, zhsun |
| Target Milestone: | --- | Keywords: | Security |
| Target Release: | 4.7.z | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-06-15 09:26:44 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1885186 | ||
| Bug Blocks: | |||
Verified on 4.7.0-0.nightly-2021-06-01-194227. The fix here does not actually allow deletion of all SSH keys or of the core user. Instead of silently failing (where the user would believe the key was deleted when in fact it was not), it enforces that the core user must exist and must retain at least one key.
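The enforced rule (the passwd section may only contain the core user, and that user must keep at least one SSH key) can be sketched roughly as the following Go check. This is an illustrative reimplementation for this bug report, not the actual MCO source; the `PasswdUser` type and `verifyUsers` function are simplified stand-ins.

```go
package main

import (
	"errors"
	"fmt"
)

// PasswdUser is a simplified stand-in for the Ignition passwd user type.
type PasswdUser struct {
	Name              string
	SSHAuthorizedKeys []string
}

// verifyUsers mirrors the reconcilability rule described above: the only
// supported passwd user is "core", and it must keep one or more SSH keys.
// Any other change is reported as unreconcilable instead of silently ignored.
func verifyUsers(users []PasswdUser) error {
	for _, u := range users {
		if u.Name != "core" {
			return errors.New("ignition passwd user section contains unsupported changes: user must be core")
		}
		if len(u.SSHAuthorizedKeys) == 0 {
			return errors.New("ignition passwd user section contains unsupported changes: user must be core and have 1 or more sshKeys: unreconcilable")
		}
	}
	return nil
}

func main() {
	// A core user whose key list was emptied (as in the oc edit below)
	// is rejected, degrading the pool rather than dropping the key silently.
	err := verifyUsers([]PasswdUser{{Name: "core", SSHAuthorizedKeys: nil}})
	fmt.Println(err)
}
```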
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-06-01-194227   True        False         43m     Cluster version is 4.7.0-0.nightly-2021-06-01-194227
$ oc get mc
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
00-worker                                          3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
01-master-container-runtime                        3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
01-master-kubelet                                  3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
01-worker-container-runtime                        3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
01-worker-kubelet                                  3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
99-master-generated-registries                     3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
99-master-ssh                                                                                 3.2.0             53m
99-worker-generated-registries                     3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
99-worker-ssh                                                                                 3.2.0             53m
rendered-master-a34cb1469a464bf99881ae4189345b0a   3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
rendered-worker-0253c70b27d1e02f40c8494bbecf8952   3c1fc49624d0a9edbbd4ac20223afbdbd4b5ccf4   3.2.0             39m
$ oc edit mc/99-worker-ssh
machineconfig.machineconfiguration.openshift.io/99-worker-ssh edited
$ oc get mc/99-worker-ssh -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  creationTimestamp: "2021-06-02T19:52:00Z"
  generation: 2
  labels:
    machineconfiguration.openshift.io/role: worker
  managedFields:
  - apiVersion: machineconfiguration.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:machineconfiguration.openshift.io/role: {}
      f:spec:
        .: {}
        f:config:
          .: {}
          f:ignition:
            .: {}
            f:version: {}
          f:passwd: {}
        f:extensions: {}
        f:fips: {}
        f:kernelArguments: {}
        f:kernelType: {}
        f:osImageURL: {}
    manager: cluster-bootstrap
    operation: Update
    time: "2021-06-02T19:52:00Z"
  - apiVersion: machineconfiguration.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:config:
          f:passwd:
            f:users: {}
    manager: oc
    operation: Update
    time: "2021-06-02T20:46:23Z"
  name: 99-worker-ssh
  resourceVersion: "35064"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigs/99-worker-ssh
  uid: 1ccc837f-423c-4641-8a17-401469e00cc9
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys: []
  extensions: null
  fips: false
  kernelArguments: null
  kernelType: ""
  osImageURL: ""
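For contrast, a 99-worker-ssh MachineConfig that rotates a key by replacing it, rather than emptying the list, stays reconcilable. A minimal sketch follows; the key value is a placeholder, not a key from this cluster:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-ssh
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        # Placeholder value; supply the real replacement public key here.
        - ssh-ed25519 AAAAexamplereplacementkey
```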
$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-a34cb1469a464bf99881ae4189345b0a   True      False      False      3              3                   3                     0                      42m
worker   rendered-worker-0253c70b27d1e02f40c8494bbecf8952   False     True       False      3              0                   0                     0                      42m
$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-a34cb1469a464bf99881ae4189345b0a   True      False      False      3              3                   3                     0                      43m
worker   rendered-worker-0253c70b27d1e02f40c8494bbecf8952   False     True       True       3              0                   0                     1                      43m
$ oc get mcp/worker -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2021-06-02T20:03:45Z"
  generation: 4
  labels:
    machineconfiguration.openshift.io/mco-built-in: ""
    pools.operator.machineconfiguration.openshift.io/worker: ""
  managedFields:
  - apiVersion: machineconfiguration.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:machineconfiguration.openshift.io/mco-built-in: {}
          f:pools.operator.machineconfiguration.openshift.io/worker: {}
      f:spec:
        .: {}
        f:configuration: {}
        f:machineConfigSelector:
          .: {}
          f:matchLabels:
            .: {}
            f:machineconfiguration.openshift.io/role: {}
        f:nodeSelector:
          .: {}
          f:matchLabels:
            .: {}
            f:node-role.kubernetes.io/worker: {}
        f:paused: {}
    manager: machine-config-operator
    operation: Update
    time: "2021-06-02T20:03:45Z"
  - apiVersion: machineconfiguration.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:configuration:
          f:name: {}
          f:source: {}
      f:status:
        .: {}
        f:conditions: {}
        f:configuration:
          .: {}
          f:name: {}
          f:source: {}
        f:degradedMachineCount: {}
        f:machineCount: {}
        f:observedGeneration: {}
        f:readyMachineCount: {}
        f:unavailableMachineCount: {}
        f:updatedMachineCount: {}
    manager: machine-config-controller
    operation: Update
    time: "2021-06-02T20:06:00Z"
  name: worker
  resourceVersion: "35138"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker
  uid: d797a61e-185a-46f0-85e7-1b217b3d4cdb
spec:
  configuration:
    name: rendered-worker-4a83941a05734295687c80e1ab0c3967
    source:
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 00-worker
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 01-worker-container-runtime
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 01-worker-kubelet
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 99-worker-generated-registries
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 99-worker-ssh
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
  paused: false
status:
  conditions:
  - lastTransitionTime: "2021-06-02T20:06:05Z"
    message: ""
    reason: ""
    status: "False"
    type: RenderDegraded
  - lastTransitionTime: "2021-06-02T20:46:33Z"
    message: ""
    reason: ""
    status: "False"
    type: Updated
  - lastTransitionTime: "2021-06-02T20:46:33Z"
    message: All nodes are updating to rendered-worker-4a83941a05734295687c80e1ab0c3967
    reason: ""
    status: "True"
    type: Updating
  - lastTransitionTime: "2021-06-02T20:46:38Z"
    message: 'Node ip-10-0-189-132.us-west-2.compute.internal is reporting: "can''t
      reconcile config rendered-worker-0253c70b27d1e02f40c8494bbecf8952 with rendered-worker-4a83941a05734295687c80e1ab0c3967:
      ignition passwd user section contains unsupported changes: user must be core
      and have 1 or more sshKeys: unreconcilable"'
    reason: 1 nodes are reporting degraded status on sync
    status: "True"
    type: NodeDegraded
  - lastTransitionTime: "2021-06-02T20:46:38Z"
    message: ""
    reason: ""
    status: "True"
    type: Degraded
  configuration:
    name: rendered-worker-0253c70b27d1e02f40c8494bbecf8952
    source:
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 00-worker
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 01-worker-container-runtime
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 01-worker-kubelet
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 99-worker-generated-registries
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 99-worker-ssh
  degradedMachineCount: 1
  machineCount: 3
  observedGeneration: 4
  readyMachineCount: 0
  unavailableMachineCount: 0
  updatedMachineCount: 0
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.16 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2286
Failed to verify: after deleting 99-worker-ssh, mcp worker is degraded.
# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.1     True        False         21m     Cluster version is 4.7.1
# oc debug node/ip-10-0-214-42.us-east-2.compute.internal
Starting pod/ip-10-0-214-42us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.214.42
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# cat /home/core/.ssh/authorized_keys
s21s21s21s21s21s21s21s21s21s21s21s21s21
# oc delete machineconfig 99-worker-ssh
machineconfig.machineconfiguration.openshift.io "99-worker-ssh" deleted
# oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-92e7d8bd8c6fce916cc2ff99d3d975bf   True      False      False      3              3                   3                     0                      3h24m
worker   rendered-worker-0597fe0c563d2fbf33f736f96403dc3f   False     True       True       3              0                   0                     1                      3h24m
# oc get mcp worker -o yaml
  - lastTransitionTime: "2021-03-08T04:32:59Z"
    message: 'Node ip-10-0-151-59.us-east-2.compute.internal is reporting: "can''t
      reconcile config rendered-worker-0597fe0c563d2fbf33f736f96403dc3f with rendered-worker-f57d57f3e1ba42cb7e33844e388dc321:
      ignition passwd user section contains unsupported changes: user core may not
      be deleted: unreconcilable"'
    reason: 1 nodes are reporting degraded status on sync
    status: "True"
    type: NodeDegraded