Bug 1695180 - [rebase] kube-scheduler needs to be secure
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: ravig
QA Contact: Sunil Choudhary
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-04-02 15:59 UTC by David Eads
Modified: 2019-06-04 10:46 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:46:50 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 0 None None None 2019-06-04 10:46:57 UTC

Description David Eads 2019-04-02 15:59:52 UTC
The kube-scheduler has never served securely; we only noticed during the rebase.  It should be secured.

Comment 1 Seth Jennings 2019-04-03 14:46:45 UTC
Ravi, you might send this over to Stephan if he is the one doing the work on it.

Comment 8 Seth Jennings 2019-04-23 14:29:11 UTC
Let's ignore that /healthz is running on both the secure and insecure ports.  The important thing is that the secure port is available now and only the /healthz endpoint remains on the insecure port.

I would move this to VERIFIED, but I'm setting it back to ON_QA in case there are any other questions.
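The split described above (the secure port serving everything, the insecure port kept only for /healthz) can be spot-checked from a master node. A minimal sketch, assuming the ports seen in the netstat output below (10251 insecure HTTP, 10259 secure HTTPS) and local access; the `scheduler_url` helper is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: build the URL for a kube-scheduler endpoint on this node.
# 10259 is the secure (HTTPS) serving port; 10251 is the legacy insecure (HTTP) port.
scheduler_url() {
  port=$1; path=$2
  if [ "$port" = 10259 ]; then scheme=https; else scheme=http; fi
  printf '%s://127.0.0.1:%s%s\n' "$scheme" "$port" "$path"
}

scheduler_url 10259 /healthz   # https://127.0.0.1:10259/healthz
scheduler_url 10251 /healthz   # http://127.0.0.1:10251/healthz
```

From there, `curl -ks` against the secure URL and plain `curl -s` against the insecure one should both answer for /healthz, while other endpoints should only answer on 10259.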

Comment 9 Sunil Choudhary 2019-04-26 04:18:30 UTC
Thanks for explaining. I do see it listening on both 10251 and 10259.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-0.nightly-2019-04-25-002910   True        False         21h     Cluster version is 4.1.0-0.nightly-2019-04-25-002910


#  netstat -lnptu | grep -e PID -e 10250 -e 10251 -e 10259
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp6       0      0 :::10250                :::*                    LISTEN      1017/hyperkube      
tcp6       0      0 :::10251                :::*                    LISTEN      24711/hyperkube     
tcp6       0      0 :::10259                :::*                    LISTEN      24711/hyperkube 

# ps alxwww | grep -e PID -e 1017 -e 24711 -e 10259
F   UID    PID   PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
4     0   1017      1  20   0 2000220 204948 -    Ssl  ?         85:33 /usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --rotate-certificates --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --allow-privileged --node-labels=node-role.kubernetes.io/master,node.openshift.io/os_version=4.1,node.openshift.io/os_id=rhcos --minimum-container-ttl-duration=6m0s --client-ca-file=/etc/kubernetes/ca.crt --cloud-provider=aws --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --anonymous-auth=false --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
4     0  24711  24698  20   0 1074216 120120 -    Ssl  ?         15:50 hyperkube kube-scheduler --config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml --cert-dir=/var/run/kubernetes --port=0 --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig --authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig --feature-gates=ExperimentalCriticalPodAnnotation=true,LocalStorageCapacityIsolation=false,RotateKubeletServerCertificate=true,SupportPodPidsLimit=true -v=2 --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key
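The ps output above can also be checked mechanically for the flags that indicate secure serving. A minimal sketch; the `cmdline` value here is a trimmed stand-in for the full scheduler command line, not the complete output:

```shell
#!/bin/sh
# Trimmed stand-in for the kube-scheduler command line captured above.
cmdline='hyperkube kube-scheduler --port=0 --authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key'

# Flags that indicate the scheduler is serving over TLS with authn/authz wired up.
for flag in --tls-cert-file --tls-private-key-file --authentication-kubeconfig; do
  case " $cmdline" in
    *" $flag="*) echo "$flag: present" ;;
    *)           echo "$flag: MISSING" ;;
  esac
done
```

All three flags are present in the command line from Comment 9, so each should report `present`.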

Comment 11 errata-xmlrpc 2019-06-04 10:46:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

