Bug 1839694

Summary: [vSphere] Add ability to read port from provider config
Product: OpenShift Container Platform
Reporter: Alexander Demicev <ademicev>
Component: Cloud Compute
Assignee: Alexander Demicev <ademicev>
Cloud Compute sub component: Other Providers
QA Contact: Milind Yadav <miyadav>
Status: CLOSED ERRATA
Docs Contact:
Severity: medium
Priority: unspecified
CC: agarcial
Version: 4.5
Target Milestone: ---
Target Release: 4.5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-07-13 17:41:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Alexander Demicev 2020-05-25 09:15:49 UTC
Description of problem:

We should have the ability to read the port used for reaching vCenter from the provider config.

https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/k8s-secret.html#create-a-k8s-secret

https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html
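
For illustration, a minimal Go sketch of what reading the port from the INI-style provider config could look like. This is not the actual machine-api implementation; the struct layout, field names, and the gcfg-based parsing are assumptions modeled on the upstream vSphere cloud provider.

package main

import (
	"fmt"

	gcfg "gopkg.in/gcfg.v1"
)

// providerConfig mirrors only the fields relevant here; the real
// config struct carries many more options.
type providerConfig struct {
	Global struct {
		SecretName      string `gcfg:"secret-name"`
		SecretNamespace string `gcfg:"secret-namespace"`
		InsecureFlag    bool   `gcfg:"insecure-flag"`
		Port            string `gcfg:"port"`
	}
}

func main() {
	// Same shape as the cloud-provider-config ConfigMap content.
	data := `[Global]
secret-name = "vsphere-creds"
secret-namespace = "kube-system"
insecure-flag = "1"
port = "443"
`
	var cfg providerConfig
	if err := gcfg.ReadStringInto(&cfg, data); err != nil {
		panic(err)
	}
	port := cfg.Global.Port
	if port == "" {
		port = "443" // assumed fallback to the default vCenter port when unset
	}
	fmt.Println("vCenter port:", port)
}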


Steps for QE:

1. oc edit cm cloud-provider-config -n openshift-config 
2. Add the port under the [Global] section:
[Global]
port = "443"
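
A quick way to confirm the edit was persisted (an assumed extra check, not part of the original steps):

oc get cm cloud-provider-config -n openshift-config -o yaml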

Comment 5 Milind Yadav 2020-06-01 09:28:33 UTC
Cluster version is 4.5.0-0.nightly-2020-06-01-043833

Steps:
1. Edited the ConfigMap successfully (oc edit cm cloud-provider-config -n openshift-config):
.
.
.
apiVersion: v1
data:
  config: |
    [Global]
    secret-name = "vsphere-creds"
    secret-namespace = "kube-system"
    insecure-flag = "1"
    port = "443"

.
.
Step 2. Created a new machineset and scaled it:
[miyadav@miyadav ManualRun]$ oc get machineset
NAME                            DESIRED   CURRENT   READY   AVAILABLE   AGE
miyadav-0601-jgfsb-worker       2         2         2       2           54m
miyadav-0601-jgfsb-worker-new   1         1         1       1           7m30s
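
For reference, the new machineset was presumably created by cloning an existing one; the exact commands are not in the report, so the following is only an assumed approach:

oc get machineset miyadav-0601-jgfsb-worker -n openshift-machine-api -o yaml > new-ms.yaml
# edit metadata.name (and drop status/uid fields), then:
oc apply -f new-ms.yaml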

[miyadav@miyadav ManualRun]$ oc get machines -o wide
NAME                                  PHASE     TYPE   REGION   ZONE   AGE     NODE                                  PROVIDERID                                       STATE
miyadav-0601-jgfsb-master-0           Running                          54m     miyadav-0601-jgfsb-master-0           vsphere://420b9520-1c09-fbbc-bfde-215172b6b6b3   poweredOn
miyadav-0601-jgfsb-master-1           Running                          54m     miyadav-0601-jgfsb-master-1           vsphere://420b18d9-23e8-5712-00de-53564336f755   poweredOn
miyadav-0601-jgfsb-master-2           Running                          54m     miyadav-0601-jgfsb-master-2           vsphere://420bddcd-7a29-0d86-4ebc-432f02346a14   poweredOn
miyadav-0601-jgfsb-worker-9cw9m       Running                          46m     miyadav-0601-jgfsb-worker-9cw9m       vsphere://420be8f0-ef7a-6078-90f1-7d0021f96202   poweredOn
miyadav-0601-jgfsb-worker-new-2wdpn   Running                          7m37s   miyadav-0601-jgfsb-worker-new-2wdpn   vsphere://420be039-0fd8-8d48-ebae-631a486fdba4   poweredOn
miyadav-0601-jgfsb-worker-tjqct       Running                          46m     miyadav-0601-jgfsb-worker-tjqct       vsphere://420b48ce-178c-eef0-3ac2-be325f7d3a69   poweredOn

Actual & Expected: Machine scaling works fine with or without the port config (creating and deleting machines and machinesets looks good).


Additional Info:
I observed the following, please have a look -

[miyadav@miyadav ManualRun]$  oc get pods
NAME                                           READY   STATUS    RESTARTS   AGE
cluster-autoscaler-operator-6cddc8fc88-n7xv5   2/2     Running   0          2m45s
machine-api-controllers-845c987f7b-lm2dc       4/4     Running   0          2m45s
machine-api-operator-6d66bcbffc-9t44h          2/2     Running   0          2m45s


Scaled down the machineset:
[miyadav@miyadav ManualRun]$ oc scale machineset miyadav-0601-jgfsb-worker-new --replicas=1
machineset.machine.openshift.io/miyadav-0601-jgfsb-worker-new scaled

All pods in the openshift-machine-api namespace get recreated:
[miyadav@miyadav ManualRun]$ oc get pods
NAME                                           READY   STATUS              RESTARTS   AGE
cluster-autoscaler-operator-6cddc8fc88-hpcjz   0/2     ContainerCreating   0          8s
cluster-autoscaler-operator-6cddc8fc88-n7xv5   0/2     Terminating         0          4m23s
machine-api-controllers-845c987f7b-kzst7       0/4     ContainerCreating   0          7s
machine-api-controllers-845c987f7b-lm2dc       0/4     Terminating         0          4m23s
machine-api-operator-6d66bcbffc-9t44h          0/2     Terminating         0          4m23s
machine-api-operator-6d66bcbffc-rlgf9          0/2     ContainerCreating   0          8s

This doesn't happen when port 443 is not set in the provider config (scale up and down succeed and the same pods remain); it happens only when we scale down with port 443 set in the provider config.
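
One way to correlate the pod recreation with activity in the namespace (an assumed diagnostic step, not run as part of this verification):

oc get events -n openshift-machine-api --sort-by=.lastTimestamp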

Whenever this happens we get the logs below:
.
.
.
I0601 09:12:01.846947       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0601 09:12:01.847502       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
error: unexpected EOF
.
.

Comment 6 Milind Yadav 2020-06-01 12:34:36 UTC
After checking with Alex, these are internal k8s errors and not related to machine-api. Moving this to VERIFIED.

Comment 7 errata-xmlrpc 2020-07-13 17:41:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409