Bug 1885223

Summary: Sync with upstream (fix panicking cluster-capacity binary)
Product: OpenShift Container Platform
Reporter: Jan Chaloupka <jchaloup>
Component: kube-scheduler
Assignee: Jan Chaloupka <jchaloup>
Status: CLOSED ERRATA
QA Contact: RamaKasturi <knarra>
Severity: urgent
Priority: urgent
Version: 4.6
CC: aos-bugs, mfojtik
Target Release: 4.7.0
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Last Closed: 2021-02-24 15:23:10 UTC
Type: Bug
Bug Blocks: 1885232

Description Jan Chaloupka 2020-10-05 13:21:12 UTC
The last master HEAD panics:

```
./cluster-capacity --kubeconfig ~/.kube/config  --podspec examples/pod.yaml
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x16daa79]

goroutine 1 [running]:
sigs.k8s.io/cluster-capacity/vendor/k8s.io/component-base/logs.(*Options).Get(...)
/go/src/sigs.k8s.io/cluster-capacity/cmd/hypercc/main.go:42 +0x2f
```
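
The trace points at a nil pointer dereference inside a pointer-receiver method (`logs.(*Options).Get`), i.e. main.go calls `Get()` on an `*Options` value that was never initialized. The snippet below is not the cluster-capacity code; it is only a minimal, self-contained Go sketch of that failure mode with hypothetical names (the actual fix tracked here is syncing the vendored code with upstream, per the summary).

```go
// Minimal illustration of the failure mode in the trace above (NOT the
// cluster-capacity code): a pointer-receiver method that reads a field
// panics with SIGSEGV when called on a nil receiver.
package main

import "fmt"

// Options is a hypothetical stand-in for a flags/options struct such as
// the one in k8s.io/component-base/logs.
type Options struct {
	LogFormat string
}

// Get dereferences o, so calling it on a nil *Options crashes here.
func (o *Options) Get() string {
	return o.LogFormat
}

func main() {
	var opts *Options // never initialized, e.g. an init path was skipped

	// panic: runtime error: invalid memory address or nil pointer dereference
	fmt.Println(opts.Get())

	// Constructing the value first, e.g. opts = &Options{LogFormat: "text"},
	// avoids the panic.
}
```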

Comment 2 RamaKasturi 2020-10-15 12:40:35 UTC
Verified with the latest upstream master against the 4.7 server below; I do not see the panic described above.

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-10-15-011122]$ ./oc version
Client Version: 4.7.0-0.nightly-2020-10-15-011122
Server Version: 4.7.0-0.nightly-2020-10-15-011122
Kubernetes Version: v1.19.0+1110e21


Below are the steps followed to verify the bug:
==============================================
1) git clone https://github.com/kubernetes-sigs/cluster-capacity
2) cd cluster-capacity
3) make build
go build -o hypercc sigs.k8s.io/cluster-capacity/cmd/hypercc
ln -sf hypercc cluster-capacity
ln -sf hypercc genpod
4) Run command ./cluster-capacity --kubeconfig /home/knarra/Downloads/kubeconfig_47 --podspec=examples/pod.yaml
[knarra@knarra cluster-capacity]$ ./cluster-capacity --kubeconfig /home/knarra/Downloads/kubeconfig_47 --podspec=examples/pod.yaml 
I1015 18:03:39.848230   17988 registry.go:173] Registering SelectorSpread plugin
I1015 18:03:39.848272   17988 registry.go:173] Registering SelectorSpread plugin
16
5) Run the same command with the --verbose flag
[knarra@knarra cluster-capacity]$ ./cluster-capacity --kubeconfig /home/knarra/Downloads/kubeconfig_47 --podspec=examples/pod.yaml --verbose
I1015 18:03:56.767065   18022 registry.go:173] Registering SelectorSpread plugin
I1015 18:03:56.767091   18022 registry.go:173] Registering SelectorSpread plugin
small-pod pod requirements:
	- CPU: 150m
	- Memory: 100Mi

The cluster can schedule 16 instance(s) of the pod small-pod.

Termination reason: Unschedulable: 0/6 nodes are available: 3 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.

Pod distribution among nodes:
small-pod
	- ip-10-0-137-24.us-east-2.compute.internal: 7 instance(s)
	- ip-10-0-222-86.us-east-2.compute.internal: 5 instance(s)
	- ip-10-0-182-124.us-east-2.compute.internal: 4 instance(s)
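
For context on the numbers above: cluster-capacity runs the real scheduler against a simulated view of the cluster, so the 16-instance figure comes from actual scheduling decisions. The sketch below is only a back-of-the-envelope approximation of why CPU is the binding resource for a pod requesting 150m CPU and 100Mi memory; every node value in it is hypothetical (chosen so the arithmetic mirrors the 7/5/4 split reported), not data from the cluster used in this verification.

```go
// Rough approximation of the estimate printed above (NOT how the binary
// computes it -- cluster-capacity schedules pod copies with the real
// kube-scheduler until one becomes unschedulable). All free-resource
// numbers below are hypothetical.
package main

import "fmt"

type node struct {
	name          string
	freeCPUMilli  int64 // allocatable CPU minus existing requests, millicores
	freeMemoryMiB int64 // allocatable memory minus existing requests, MiB
}

func main() {
	// "small-pod" requests, taken from the verbose output above.
	const podCPUMilli, podMemMiB int64 = 150, 100

	// Hypothetical worker nodes; masters are excluded by their taint.
	workers := []node{
		{"worker-a", 1050, 6000},
		{"worker-b", 800, 6000},
		{"worker-c", 700, 6000},
	}

	var total int64
	for _, n := range workers {
		byCPU := n.freeCPUMilli / podCPUMilli // instances that fit by CPU
		byMem := n.freeMemoryMiB / podMemMiB  // instances that fit by memory
		fit := byCPU
		if byMem < fit {
			fit = byMem // the scarcer resource bounds the count
		}
		fmt.Printf("%s: %d instance(s)\n", n.name, fit)
		total += fit
	}
	fmt.Printf("approximate total: %d instance(s)\n", total)
}
```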

Based on the above, moving the bug to the verified state.

Comment 5 errata-xmlrpc 2021-02-24 15:23:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633