Bug 2069891
| Summary: | Rule “ocp4-kubelet-enable-streaming-connections” is not set up correctly | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Vincent Shen <wenshen> |
| Component: | Compliance Operator | Assignee: | Vincent Shen <wenshen> |
| Status: | CLOSED ERRATA | QA Contact: | xiyuan |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.10 | CC: | clasohm, lbragsta, mrogers, suprs, xiyuan |
| Target Milestone: | --- | Flags: | xiyuan: needinfo- |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Cause: Incorrect variable comparison. Consequence: Inaccurate scan results (false positives) for ocp4-kubelet-enable-streaming-connections. Fix: Consume an updated version of the Compliance Operator and content. Result: Accurate scan results when setting streamingConnectionIdleTimeout. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-07-14 12:40:58 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
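The “incorrect variable comparison” named in the Doc Text can produce false results when timeout values are compared as raw strings: the kubelet serializes `streamingConnectionIdleTimeout: "5m"` into `/etc/kubernetes/kubelet.conf` as `"5m0s"` (see the scenario 2 output below), so `"5m" != "5m0s"` textually even though the durations are equal. The following Python sketch illustrates a normalization-based comparison; `parse_duration` and `timeouts_match` are illustrative helpers, not the operator's actual code:

```python
import re

# Seconds per unit for Go-style duration strings such as "5m", "5m0s", "1h30m".
_UNIT_SECONDS = {"h": 3600, "m": 60, "s": 1}

def parse_duration(value: str) -> int:
    """Return the total seconds represented by a Go-style duration string."""
    parts = re.findall(r"(\d+)([hms])", value)
    # Reject strings that aren't entirely made of number+unit pairs.
    if not parts or "".join(n + u for n, u in parts) != value:
        raise ValueError(f"unrecognized duration: {value!r}")
    return sum(int(n) * _UNIT_SECONDS[u] for n, u in parts)

def timeouts_match(expected: str, actual: str) -> bool:
    """Compare two timeouts by normalized duration, not by raw string."""
    return parse_duration(expected) == parse_duration(actual)

# A naive string comparison would flag "5m" vs "5m0s" as a mismatch;
# comparing normalized durations does not.
assert timeouts_match("5m", "5m0s")    # scenario 2 below: equal values
assert not timeouts_match("5m", "5h")  # scenario 3 below: different values
```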
Description
Vincent Shen
2022-03-30 00:42:36 UTC
Hi Vincent,
Generally it looks good. Just one question for scenario 1: when the variable value is set but streamingConnectionIdleTimeout in the kubeletconfig is unset, should the result be PASS? Thanks.
Verified with CO v0.1.53-2 and 4.11.0-rc.1
#############scenario 1. variable value set, streamingConnectionIdleTimeout in kubeletconfig unset
$ oc apply -f -<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: mod-node
spec:
  title: My modified profile
  description: test
  enableRules:
    - name: ocp4-kubelet-enable-streaming-connections
      rationale: platform
  setValues:
    - name: ocp4-var-streaming-connection-timeouts
      rationale: test
      value: 5m
EOF
E0708 23:31:14.761953 31838 request.go:964] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
tailoredprofile.compliance.openshift.io/mod-node created
$ oc apply -f -<<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: test
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: TailoredProfile
    name: mod-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
EOF
E0708 23:34:15.784945 32042 request.go:964] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
scansettingbinding.compliance.openshift.io/test created
$ oc get suite -w
NAME PHASE RESULT
test RUNNING NOT-AVAILABLE
test RUNNING NOT-AVAILABLE
test AGGREGATING NOT-AVAILABLE
test AGGREGATING NOT-AVAILABLE
test DONE COMPLIANT
test DONE COMPLIANT
^C$ oc get ccr
NAME STATUS SEVERITY
mod-node-master-kubelet-enable-streaming-connections PASS medium
mod-node-worker-kubelet-enable-streaming-connections PASS medium
$ oc get rule ocp4-kubelet-enable-streaming-connections -o=jsonpath={.instructions}
Run the following command on the kubelet node(s):
$ sudo grep streamingConnectionIdleTimeout /etc/kubernetes/kubelet.conf
The output should return .
$ for i in `oc get node -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host grep streamingConnectionIdleTimeout /etc/kubernetes/kubelet.conf; done
Starting pod/ip-10-0-131-126us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
error: non-zero exit code from debug container
Starting pod/ip-10-0-167-112us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
error: non-zero exit code from debug container
Starting pod/ip-10-0-205-97us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
error: non-zero exit code from debug container
##############scenario 2. values in variable and kubeletconfig are equal:
$ oc label mcp worker cis-hardening=true
machineconfigpool.machineconfiguration.openshift.io/worker labeled
$ oc label mcp master cis-hardening=true
machineconfigpool.machineconfiguration.openshift.io/master labeled
$ oc apply -f -<<EOF
> apiVersion: machineconfiguration.openshift.io/v1
> kind: KubeletConfig
> metadata:
>   name: myconfig
> spec:
>   machineConfigPoolSelector:
>     matchLabels:
>       cis-hardening: "true"
>   kubeletConfig:
>     streamingConnectionIdleTimeout: "5m"
> EOF
E0708 23:39:57.629269 32624 request.go:964] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
kubeletconfig.machineconfiguration.openshift.io/myconfig created
$ oc get mcp -w
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-2ff0b98b7db67620f8f7583b95cea1d2 False True False 3 1 1 0 45m
worker rendered-worker-4c2edea0f9a6cfb6669898de1d40f884 False True False 3 2 2 0 45m
master rendered-master-2ff0b98b7db67620f8f7583b95cea1d2 False True False 3 1 2 0 46m
worker rendered-worker-4c2edea0f9a6cfb6669898de1d40f884 False True False 3 2 3 0 46m
master rendered-master-2ff0b98b7db67620f8f7583b95cea1d2 False True False 3 2 2 0 47m
worker rendered-worker-f897e6825fd16b39682346e0a4e894a4 True False False 3 3 3 0 47m
master rendered-master-2ff0b98b7db67620f8f7583b95cea1d2 False True False 3 2 2 0 47m
master rendered-master-4bba382da90f69e1d8376de2636bdae2 True False False 3 3 3 0 51m
...
master rendered-master-4bba382da90f69e1d8376de2636bdae2 True False False 3 3 3 0 57m
worker rendered-worker-f897e6825fd16b39682346e0a4e894a4 True False False 3 3 3 0 57m
$ oc compliance rerun-now scansettingbinding test
Rerunning scans from 'test': mod-node-master, mod-node-worker
Re-running scan 'openshift-compliance/mod-node-master'
Re-running scan 'openshift-compliance/mod-node-worker'
$ oc get suite -w
NAME PHASE RESULT
test LAUNCHING NOT-AVAILABLE
test RUNNING NOT-AVAILABLE
test PENDING NOT-AVAILABLE
test LAUNCHING NOT-AVAILABLE
test RUNNING NOT-AVAILABLE
test RUNNING NOT-AVAILABLE
test AGGREGATING NOT-AVAILABLE
test AGGREGATING NOT-AVAILABLE
test DONE COMPLIANT
test DONE COMPLIANT
^C$ oc get ccr
NAME STATUS SEVERITY
mod-node-master-kubelet-enable-streaming-connections PASS medium
mod-node-worker-kubelet-enable-streaming-connections PASS medium
$ for i in `oc get node -l node-role.kubernetes.io/worker= --no-headers | awk '{print $1}'`;do oc debug node/$i -- chroot /host grep streamingConnectionIdleTimeout /etc/kubernetes/kubelet.conf; done
Starting pod/ip-10-0-131-126us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
"streamingConnectionIdleTimeout": "5m0s",
Removing debug pod ...
Starting pod/ip-10-0-167-112us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
"streamingConnectionIdleTimeout": "5m0s",
Removing debug pod ...
Starting pod/ip-10-0-205-97us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
"streamingConnectionIdleTimeout": "5m0s",
Removing debug pod ...
##########scenario 3. values in variable and kubeletconfig differ:
oc apply -f -<<EOF
> apiVersion: machineconfiguration.openshift.io/v1
> kind: KubeletConfig
> metadata:
>   name: myconfig
> spec:
>   machineConfigPoolSelector:
>     matchLabels:
>       cis-hardening: "true"
>   kubeletConfig:
>     streamingConnectionIdleTimeout: "5m"
> ^C
[xiyuan@MiWiFi-RA69-srv func]$ oc apply -f -<<EOF
> apiVersion: machineconfiguration.openshift.io/v1
> kind: KubeletConfig
> metadata:
>   name: myconfig
> spec:
>   machineConfigPoolSelector:
>     matchLabels:
>       cis-hardening: "true"
>   kubeletConfig:
>     streamingConnectionIdleTimeout: "5h"
> EOF
E0709 00:04:05.453487 1955 request.go:964] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Warning: resource kubeletconfigs/myconfig is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
kubeletconfig.machineconfiguration.openshift.io/myconfig configured
$ oc get mcp -w
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-4bba382da90f69e1d8376de2636bdae2 False True False 3 1 1 0 69m
worker rendered-worker-f897e6825fd16b39682346e0a4e894a4 False True False 3 2 2 0 69m
master rendered-master-4bba382da90f69e1d8376de2636bdae2 False True False 3 1 2 0 71m
master rendered-master-4bba382da90f69e1d8376de2636bdae2 False True False 3 2 2 0 71m
worker rendered-worker-f897e6825fd16b39682346e0a4e894a4 False True False 3 2 3 0 72m
worker rendered-worker-a876710b4ca833ff015e97c3cd4f717d True False False 3 3 3 0 72m
master rendered-master-88d0dd22a44651e2071aedeae4ecd412 True False False 3 3 3 0 75m
^C$ oc compliance rerun-now scansettingbinding test
Rerunning scans from 'test': mod-node-master, mod-node-worker
Re-running scan 'openshift-compliance/mod-node-master'
Re-running scan 'openshift-compliance/mod-node-worker'
$ oc get suite -w
NAME PHASE RESULT
test RUNNING NOT-AVAILABLE
test RUNNING NOT-AVAILABLE
test AGGREGATING NOT-AVAILABLE
test AGGREGATING NOT-AVAILABLE
test DONE NON-COMPLIANT
test DONE NON-COMPLIANT
^C$ oc get ccr
NAME STATUS SEVERITY
mod-node-master-kubelet-enable-streaming-connections FAIL medium
mod-node-worker-kubelet-enable-streaming-connections FAIL medium
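The three verification runs above can be summarized as follows; the scenario 1 result (PASS with the variable set but streamingConnectionIdleTimeout unset in the kubeletconfig) is the behavior questioned earlier. This is a plain summary of the observed statuses, not rule logic:

```python
# Observed ComplianceCheckResult statuses for the rule
# ocp4-kubelet-enable-streaming-connections across the three scenarios above.
# The variable ocp4-var-streaming-connection-timeouts was "5m" throughout;
# note the kubelet writes "5m" to /etc/kubernetes/kubelet.conf as "5m0s".
scenarios = [
    # (scenario, streamingConnectionIdleTimeout in kubelet.conf, status)
    ("1: variable set, kubeletconfig unset", None,   "PASS"),
    ("2: values equal",                      "5m0s", "PASS"),
    ("3: values differ",                     "5h",   "FAIL"),
]

for description, configured, status in scenarios:
    value = configured if configured is not None else "<unset>"
    print(f"scenario {description}: timeout={value} -> {status}")
```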
For scenario 2 and scenario 3 in https://bugzilla.redhat.com/show_bug.cgi?id=2069891#c6, verification passed and the rule works as expected. For scenario 1 in https://bugzilla.redhat.com/show_bug.cgi?id=2069891#c6, a new bug, https://bugzilla.redhat.com/show_bug.cgi?id=2105878, was created to track it.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Compliance Operator bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5537