Bug 1733955 - The kubefed downstream image fails to create and update KubeFedConfig
Summary: The kubefed downstream image fails to create and update KubeFedConfig
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Federation
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Paul Morie
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-29 10:20 UTC by Qin Ping
Modified: 2019-10-16 06:34 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:33:51 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2922 None None None 2019-10-16 06:34:05 UTC

Description Qin Ping 2019-07-29 10:20:35 UTC
Description of problem:
The kubefed downstream image fails to create and update KubeFedConfig

Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-07-28-222114   True        False         3h58m   Cluster version is 4.2.0-0.nightly-2019-07-28-222114

KubeFed controller-manager version: version.Info{Version:"v0.1.0-rc4", GitCommit:"2dbec10d6bef12ebcd21744ccb50eb4e8cfcfeaa", GitTreeState:"clean", BuildDate:"2019-07-20T00:00:44Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

How reproducible:
100%

Steps to Reproduce:
1. Push the manifests (kubefed-operator release-4.2) to the piqin app registry
2. Create an OperatorSource instance to test the downstream image
$ cat operatorsource.yaml 
apiVersion: operators.coreos.com/v1
kind: OperatorSource
metadata:
  name: kubefed
  namespace: openshift-marketplace
spec:
  authorizationToken:
    secretName: kubefed
  displayName: kubefed downstream Operators
  endpoint: https://quay.io/cnr
  publisher: kubefed
  registryNamespace: piqin
  type: appregistry
3. Create a Subscription instance to install a namespace-scoped kubefed-operator
$ cat sub.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubefed-operator
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: kubefed-operator
  source: kubefed
  sourceNamespace: openshift-marketplace
  startingCSV: kubefed-operator.v0.1.0
4. Create a KubeFedWebHook instance to install the kubefed webhook
$ oc get kubefedwebhook kubefedwebhook-resource -oyaml
apiVersion: operator.kubefed.io/v1alpha1
kind: KubeFedWebHook
metadata:
  creationTimestamp: "2019-07-29T07:50:27Z"
  generation: 1
  name: kubefedwebhook-resource
  namespace: federation-system
  resourceVersion: "87155"
  selfLink: /apis/operator.kubefed.io/v1alpha1/namespaces/federation-system/kubefedwebhooks/kubefedwebhook-resource
  uid: 87802d5d-b1d5-11e9-b09b-0ef3c9acac02
spec:
  scope: Cluster
status:
  version: 0.1.0

5. Create a KubeFed instance to install the kubefed controller manager
$ oc get kubefed kubefed-resource -oyaml
apiVersion: operator.kubefed.io/v1alpha1
kind: KubeFed
metadata:
  creationTimestamp: "2019-07-29T07:51:15Z"
  generation: 1
  name: kubefed-resource
  namespace: federation-system
  resourceVersion: "87401"
  selfLink: /apis/operator.kubefed.io/v1alpha1/namespaces/federation-system/kubefeds/kubefed-resource
  uid: a48b9a5c-b1d5-11e9-b09b-0ef3c9acac02
spec:
  scope: Namespaced
status:
  version: 0.1.0

6. Update the kubefed webhook controller to use the downstream image
deployment.kubefed-admission-webhook.spec.template.spec.containers[0].command = 
                      [
                            "/root/webhook",
                            "--secure-port=8443",
                            "--audit-log-path=-",
                            "--tls-cert-file=/var/serving-cert/tls.crt",
                            "--tls-private-key-file=/var/serving-cert/tls.key",
                            "--v=8"
                        ],

deployment.kubefed-admission-webhook.spec.template.spec.containers[0].image = image-registry.openshift-image-registry.svc:5000/openshift/ose-kubefed:v4.2.0 (identical to the image in the Brew registry)
7. Update the kubefed controller manager to use the downstream image
deployment.kubefed-controller-manager.spec.template.spec.containers[0].command = 

                        [
                            "/root/controller-manager"
                        ],
deployment.kubefed-controller-manager.spec.template.spec.containers[0].image = image-registry.openshift-image-registry.svc:5000/openshift/ose-kubefed:v4.2.0 (identical to the image in the Brew registry)
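
The deployment edits in steps 6 and 7 can be applied with `oc patch` against a live cluster; the following sketch (illustrative only — the deployment name and the federation-system namespace match the resources above, but this exact invocation was not taken from the report) shows the step 7 change:

```shell
# Point the controller-manager deployment at the downstream image and the
# /root/controller-manager entrypoint (step 7); step 6 is analogous for the
# kubefed-admission-webhook deployment with its own command array.
oc -n federation-system patch deployment kubefed-controller-manager --type=json -p '[
  {"op": "replace", "path": "/spec/template/spec/containers/0/image",
   "value": "image-registry.openshift-image-registry.svc:5000/openshift/ose-kubefed:v4.2.0"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/command",
   "value": ["/root/controller-manager"]}
]'
```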

Actual results:
The kubefed webhook controller runs successfully with the downstream image.
The kubefed controller manager fails to start.

$ oc get pod
NAME                                          READY   STATUS             RESTARTS   AGE
kubefed-admission-webhook-66d75947c7-vbbcp    1/1     Running            0          49m
kubefed-controller-manager-6959b74698-4lcsb   0/1     CrashLoopBackOff   2          35s
kubefed-controller-manager-6959b74698-lbq8x   0/1     CrashLoopBackOff   2          30s
kubefed-controller-manager-84d756db66-5krz8   1/1     Running            7          58m
kubefed-operator-869757c688-wxf2b             1/1     Running            0          150m


Expected results:

The kubefed controller manager starts successfully with the downstream image.

Additional info:

$ oc logs kubefed-controller-manager-6959b74698-4lcsb
KubeFed controller-manager version: version.Info{Version:"", GitCommit:"", GitTreeState:"clean", BuildDate:"2019-07-28T19:31:05Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"linux/amd64"}
I0729 10:19:47.473541       1 controller-manager.go:365] FLAG: --alsologtostderr="false"
I0729 10:19:47.473633       1 controller-manager.go:365] FLAG: --help="false"
I0729 10:19:47.473638       1 controller-manager.go:365] FLAG: --kubeconfig=""
I0729 10:19:47.473642       1 controller-manager.go:365] FLAG: --kubefed-config=""
I0729 10:19:47.473645       1 controller-manager.go:365] FLAG: --kubefed-namespace="federation-system"
I0729 10:19:47.473648       1 controller-manager.go:365] FLAG: --log-backtrace-at=":0"
I0729 10:19:47.473653       1 controller-manager.go:365] FLAG: --log-dir=""
I0729 10:19:47.473656       1 controller-manager.go:365] FLAG: --log-file=""
I0729 10:19:47.473658       1 controller-manager.go:365] FLAG: --log-flush-frequency="5s"
I0729 10:19:47.473664       1 controller-manager.go:365] FLAG: --log_backtrace_at=":0"
I0729 10:19:47.473667       1 controller-manager.go:365] FLAG: --log_dir=""
I0729 10:19:47.473670       1 controller-manager.go:365] FLAG: --log_file=""
I0729 10:19:47.473673       1 controller-manager.go:365] FLAG: --logtostderr="true"
I0729 10:19:47.473675       1 controller-manager.go:365] FLAG: --master=""
I0729 10:19:47.473678       1 controller-manager.go:365] FLAG: --skip-headers="false"
I0729 10:19:47.473681       1 controller-manager.go:365] FLAG: --skip_headers="false"
I0729 10:19:47.473683       1 controller-manager.go:365] FLAG: --stderrthreshold="0"
I0729 10:19:47.473686       1 controller-manager.go:365] FLAG: --v="5"
I0729 10:19:47.473689       1 controller-manager.go:365] FLAG: --version="false"
I0729 10:19:47.473691       1 controller-manager.go:365] FLAG: --vmodule=""
W0729 10:19:47.473711       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0729 10:19:47.876591       1 controller-manager.go:206] Setting Options with KubeFedConfig "federation-system/kubefed"
F0729 10:19:48.097017       1 controller-manager.go:304] Error updating KubeFedConfig "federation-system/kubefed": admission webhook "kubefedconfigs.core.kubefed.k8s.io" denied the request: [spec.clusterHealthCheck.period: Required value, spec.clusterHealthCheck.timeout: Required value]
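
The fatal error shows the kubefedconfigs.core.kubefed.k8s.io admission webhook rejecting the controller manager's update because spec.clusterHealthCheck.period and spec.clusterHealthCheck.timeout are unset. For illustration only (the duration values below are hypothetical placeholders, not values from this report), a KubeFedConfig that would pass this validation carries explicit values for both fields:

```yaml
# Hypothetical KubeFedConfig fragment with the two fields the webhook
# requires populated; values are example placeholders.
apiVersion: core.kubefed.k8s.io/v1beta1
kind: KubeFedConfig
metadata:
  name: kubefed
  namespace: federation-system
spec:
  scope: Namespaced
  clusterHealthCheck:
    period: 10s
    timeout: 3s
```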

Comment 2 Qin Ping 2019-08-07 07:57:28 UTC
Verified with image: quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908061126-ose-kubefed

Comment 3 errata-xmlrpc 2019-10-16 06:33:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

