Bug 1839694 - [vSphere] Add ability to read port from provider config
Summary: [vSphere] Add ability to read port from provider config
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Alexander Demicev
QA Contact: Milind Yadav
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-25 09:15 UTC by Alexander Demicev
Modified: 2020-07-13 17:41 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:41:29 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift machine-api-operator pull 596 (closed): Bug 1839694: [vSphere] Add ability to read port from provider config (last updated 2020-06-24 03:19:43 UTC)
Red Hat Product Errata RHBA-2020:2409 (last updated 2020-07-13 17:41:45 UTC)

Description Alexander Demicev 2020-05-25 09:15:49 UTC
Description of problem:

We should have the ability to read the port used for reaching vCenter from the provider config (a rough sketch of the idea follows the links below).

https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/k8s-secret.html#create-a-k8s-secret

https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html
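
For illustration, here is a minimal Go sketch of the idea (this is not the code from the machine-api-operator pull request linked in the Links section above; the file and function names are made up for this example). It reads the port key from the [Global] section of the INI-style provider config and falls back to 443 when the key is absent:

// portfromconfig.go - hypothetical sketch, not the actual machine-api-operator code.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// portFromProviderConfig scans an INI-style provider config and returns the
// value of "port" under [Global], or "443" when the key is not set.
func portFromProviderConfig(config string) string {
	port := "443" // assumed default when no port is configured
	inGlobal := false

	scanner := bufio.NewScanner(strings.NewReader(config))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "[") {
			// Track whether we are inside the [Global] section.
			inGlobal = strings.EqualFold(line, "[Global]")
			continue
		}
		if !inGlobal {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) == 2 && strings.TrimSpace(parts[0]) == "port" {
			port = strings.Trim(strings.TrimSpace(parts[1]), `"`)
		}
	}
	return port
}

func main() {
	cfg := `[Global]
secret-name = "vsphere-creds"
secret-namespace = "kube-system"
insecure-flag = "1"
port = "443"
`
	fmt.Println(portFromProviderConfig(cfg)) // prints: 443
}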


Steps for QE:

1. oc edit cm cloud-provider-config -n openshift-config 
2. Add port under the [Global] section:
[Global]
port = "443"
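
To confirm the edit, the ConfigMap can be read back (a standard oc command, not part of the original steps):

oc get cm cloud-provider-config -n openshift-config -o yaml

The config key under data should now contain port = "443" in the [Global] section, as captured in comment 5 below.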

Comment 5 Milind Yadav 2020-06-01 09:28:33 UTC
Cluster version is 4.5.0-0.nightly-2020-06-01-043833

Steps:
1. Edited the cm successfully (oc edit cm cloud-provider-config -n openshift-config):
.
.
.
apiVersion: v1
data:
  config: |
    [Global]
    secret-name = "vsphere-creds"
    secret-namespace = "kube-system"
    insecure-flag = "1"
    port = "443"

.
.
Step 2. Created a new machineset and scaled it (one possible recipe is sketched after the output below).
[miyadav@miyadav ManualRun]$ oc get machineset
NAME                            DESIRED   CURRENT   READY   AVAILABLE   AGE
miyadav-0601-jgfsb-worker       2         2         2       2           54m
miyadav-0601-jgfsb-worker-new   1         1         1       1           7m30s

[miyadav@miyadav ManualRun]$ oc get machines -o wide
NAME                                  PHASE     TYPE   REGION   ZONE   AGE     NODE                                  PROVIDERID                                       STATE
miyadav-0601-jgfsb-master-0           Running                          54m     miyadav-0601-jgfsb-master-0           vsphere://420b9520-1c09-fbbc-bfde-215172b6b6b3   poweredOn
miyadav-0601-jgfsb-master-1           Running                          54m     miyadav-0601-jgfsb-master-1           vsphere://420b18d9-23e8-5712-00de-53564336f755   poweredOn
miyadav-0601-jgfsb-master-2           Running                          54m     miyadav-0601-jgfsb-master-2           vsphere://420bddcd-7a29-0d86-4ebc-432f02346a14   poweredOn
miyadav-0601-jgfsb-worker-9cw9m       Running                          46m     miyadav-0601-jgfsb-worker-9cw9m       vsphere://420be8f0-ef7a-6078-90f1-7d0021f96202   poweredOn
miyadav-0601-jgfsb-worker-new-2wdpn   Running                          7m37s   miyadav-0601-jgfsb-worker-new-2wdpn   vsphere://420be039-0fd8-8d48-ebae-631a486fdba4   poweredOn
miyadav-0601-jgfsb-worker-tjqct       Running                          46m     miyadav-0601-jgfsb-worker-tjqct       vsphere://420b48ce-178c-eef0-3ac2-be325f7d3a69   poweredOn
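
For reference, one possible recipe for step 2 (a hedged sketch; the exact machineset YAML used for this run is not included in this report) is to copy an existing machineset, rename it, and scale it:

oc get machineset miyadav-0601-jgfsb-worker -n openshift-machine-api -o yaml > worker-new.yaml
(edit metadata.name and the matching spec.selector / template labels to the new name, and drop status/uid/resourceVersion)
oc apply -f worker-new.yaml
oc scale machineset miyadav-0601-jgfsb-worker-new -n openshift-machine-api --replicas=1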

Actual & Expected: Machine scaling works fine with or without the port config (creating and deleting machines and machinesets looks good).


Additional Info:
I observed the following, please have a look:

[miyadav@miyadav ManualRun]$  oc get pods
NAME                                           READY   STATUS    RESTARTS   AGE
cluster-autoscaler-operator-6cddc8fc88-n7xv5   2/2     Running   0          2m45s
machine-api-controllers-845c987f7b-lm2dc       4/4     Running   0          2m45s
machine-api-operator-6d66bcbffc-9t44h          2/2     Running   0          2m45s


Scaled down the machineset:
[miyadav@miyadav ManualRun]$ oc scale machineset miyadav-0601-jgfsb-worker-new --replicas=1
machineset.machine.openshift.io/miyadav-0601-jgfsb-worker-new scaled

All pods in the openshift-machine-api namespace get recreated:
[miyadav@miyadav ManualRun]$ oc get pods
NAME                                           READY   STATUS              RESTARTS   AGE
cluster-autoscaler-operator-6cddc8fc88-hpcjz   0/2     ContainerCreating   0          8s
cluster-autoscaler-operator-6cddc8fc88-n7xv5   0/2     Terminating         0          4m23s
machine-api-controllers-845c987f7b-kzst7       0/4     ContainerCreating   0          7s
machine-api-controllers-845c987f7b-lm2dc       0/4     Terminating         0          4m23s
machine-api-operator-6d66bcbffc-9t44h          0/2     Terminating         0          4m23s
machine-api-operator-6d66bcbffc-rlgf9          0/2     ContainerCreating   0          8s

This doesn't happen when we do not have port 443 in the provider config (scale up and down happen and the same pods remain); it happens only when we scale down with port 443 present in the provider config. A possible way to dig further is noted after the logs below.

Whenever this happens we get the logs below:
.
.
.
I0601 09:12:01.846947       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0601 09:12:01.847502       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
error: unexpected EOF
.
.
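
If this needs further investigation, a possible starting point (suggestion only; these were not run as part of this verification) would be the namespace events and pod details, e.g.:

oc get events -n openshift-machine-api --sort-by=.lastTimestamp
oc describe pod <one-of-the-recreated-pods> -n openshift-machine-api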

Comment 6 Milind Yadav 2020-06-01 12:34:36 UTC
After checking with Alex, these are internal Kubernetes errors and not related to machine-api; moving this bug to VERIFIED.

Comment 7 errata-xmlrpc 2020-07-13 17:41:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

