Bug 1594171
Summary: | Registry not working when deploying OCP with Kuryr | |
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Luis Tomas Bolivar <ltomasbo> |
Component: | Installer | Assignee: | Luis Tomas Bolivar <ltomasbo> |
Status: | CLOSED ERRATA | QA Contact: | Jon Uriarte <juriarte> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 3.10.0 | CC: | aos-bugs, asegurap, jokerman, juriarte, mmccomas, tsedovic |
Target Milestone: | --- | ||
Target Release: | 3.10.z | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
After the change that removed the proxy and DNS configuration by default, in favor of expecting SDN plugins to run containerized, SkyDNS was not enabled when running Kuryr. As a result, it was not possible to push images to the registry, because host names could not be resolved. The solution is to use an approach similar to the one in the openshift-sdn role, but enabling just SkyDNS:
exec openshift start network --enable=dns --config=/etc/origin/node/node-config.yaml --kubeconfig=/tmp/kubeconfig --loglevel=${DEBUG_LOGLEVEL:-2}
This enables successful pushes of new images to the registry.
|
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | 2019-01-10 09:27:09 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Luis Tomas Bolivar
2018-06-22 09:59:21 UTC
After some investigation (thanks to Antoni) we saw that on a Kuryr deployment there is no process listening on 127.0.0.1 port 53 on the VM nodes, while on an openshift-sdn based deployment there is:

```
tcp    0    0 127.0.0.1:53    0.0.0.0:*    LISTEN    17516/openshift

root    17516 17500  0 103481 61720  0 10:21 ?    00:00:25 openshift start network --config=/etc/origin/node/node-config.yaml --kubeconfig=/tmp/kubeconfig --loglevel=2
```

It seems that after the change that removed the proxy and DNS by default, by expecting SDN plugins to run containerized, we no longer need to disable the proxy (it is already disabled), but we forgot to enable the DNS. The solution seems to be to do something similar to what is done for the openshift-sdn role (https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_sdn/files/sdn.yaml#L105), but enabling just SkyDNS:

```
exec openshift start network --enable=dns --config=/etc/origin/node/node-config.yaml --kubeconfig=/tmp/kubeconfig --loglevel=${DEBUG_LOGLEVEL:-2}
```

Should be in openshift-ansible-3.10.28-1.

Verified in openshift-ansible-3.10.50-1.git.0.96a93c5.el7.noarch.

Verification steps:
1. Deploy OCP 3.10 on OSP 13, with Kuryr enabled:

```
$ oc get pods --all-namespaces -o wide
NAMESPACE        NAME                                                READY  STATUS   RESTARTS  AGE  IP             NODE
default          docker-registry-1-j9q8p                             1/1    Running  0         2h   10.11.0.11     infra-node-0.openshift.example.com
default          registry-console-1-hqrx4                            1/1    Running  0         2h   10.11.0.3      master-0.openshift.example.com
default          router-1-rpjg7                                      1/1    Running  0         2h   192.168.99.5   infra-node-0.openshift.example.com
kube-system      master-api-master-0.openshift.example.com           1/1    Running  0         2h   192.168.99.15  master-0.openshift.example.com
kube-system      master-controllers-master-0.openshift.example.com   1/1    Running  1         2h   192.168.99.15  master-0.openshift.example.com
kube-system      master-etcd-master-0.openshift.example.com          1/1    Running  1         2h   192.168.99.15  master-0.openshift.example.com
openshift-infra  kuryr-cni-ds-9xs42                                  2/2    Running  0         2h   192.168.99.5   infra-node-0.openshift.example.com
openshift-infra  kuryr-cni-ds-k9b6c                                  2/2    Running  0         2h   192.168.99.10  app-node-0.openshift.example.com
openshift-infra  kuryr-cni-ds-nw82s                                  2/2    Running  0         2h   192.168.99.15  master-0.openshift.example.com
openshift-infra  kuryr-cni-ds-znwrt                                  2/2    Running  0         2h   192.168.99.4   app-node-1.openshift.example.com
openshift-infra  kuryr-controller-59fc7f478b-dvvvm                   1/1    Running  0         2h   192.168.99.4   app-node-1.openshift.example.com
openshift-node   sync-fpmst                                          1/1    Running  0         2h   192.168.99.15  master-0.openshift.example.com
openshift-node   sync-qzzvp                                          1/1    Running  0         2h   192.168.99.5   infra-node-0.openshift.example.com
openshift-node   sync-s7xzt                                          1/1    Running  0         2h   192.168.99.4   app-node-1.openshift.example.com
openshift-node   sync-zmqbh                                          1/1    Running  0         2h   192.168.99.10  app-node-0.openshift.example.com

$ oc get all
NAME                           READY  STATUS   RESTARTS  AGE
pod/docker-registry-1-j9q8p    1/1    Running  0         2h
pod/registry-console-1-hqrx4   1/1    Running  0         2h
pod/router-1-rpjg7             1/1    Running  0         2h

NAME                                       DESIRED  CURRENT  READY  AGE
replicationcontroller/docker-registry-1    1        1        1      2h
replicationcontroller/registry-console-1   1        1        1      2h
replicationcontroller/router-1             1        1        1      2h

NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                  AGE
service/docker-registry    ClusterIP  172.30.155.8   <none>       5000/TCP                 2h
service/kubernetes         ClusterIP  172.30.0.1     <none>       443/TCP,53/UDP,53/TCP    2h
service/registry-console   ClusterIP  172.30.217.70  <none>       9000/TCP                 2h
service/router             ClusterIP  172.30.152.34  <none>       80/TCP,443/TCP,1936/TCP  2h

NAME                                                 REVISION  DESIRED  CURRENT  TRIGGERED BY
deploymentconfig.apps.openshift.io/docker-registry   1         1        1        config
deploymentconfig.apps.openshift.io/registry-console  1         1        1        config
deploymentconfig.apps.openshift.io/router            1         1        1        config

NAME                                       HOST/PORT                                            PATH  SERVICES          PORT   TERMINATION  WILDCARD
route.route.openshift.io/docker-registry   docker-registry-default.apps.openshift.example.com         docker-registry   <all>  passthrough  None
route.route.openshift.io/registry-console  registry-console-default.apps.openshift.example.com        registry-console  <all>  passthrough  None
```

2. Create a new project and deploy the sample application, which will try to push a new image to the registry:

```
$ oc new-project test
$ oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
```

3. Check the build process:

```
$ oc logs -f bc/ruby-ex
Cloning "https://github.com/openshift/ruby-ex.git" ...
    Commit: fa07571e8bbaa408126c4a197980076d90c1bc47 (Merge pull request #22 from jankleinert/readme-updates)
    Author: Ben Parees <bparees.github.com>
    Date:   Fri Sep 7 15:23:15 2018 -0400
---> Installing application source ...
---> Building your Ruby application from source ...
...
Installing puma 3.10.0
Installing rack 2.0.3
Using bundler 1.7.8
Your bundle is complete!
Gems in the groups development and test were not installed.
It was installed into ./bundle
---> Cleaning up unused ruby gems ...
Pushing image docker-registry.default.svc:5000/test/ruby-ex:latest ...
```
```
Pushed 0/10 layers, 13% complete
Pushed 1/10 layers, 19% complete
Pushed 2/10 layers, 36% complete
Pushed 3/10 layers, 41% complete
Pushed 4/10 layers, 46% complete
Pushed 5/10 layers, 55% complete
Pushed 6/10 layers, 66% complete
Pushed 7/10 layers, 74% complete
Pushed 8/10 layers, 82% complete
Pushed 9/10 layers, 100% complete
Pushed 10/10 layers, 100% complete
Push successful
```

The image is pushed successfully.

```
$ oc get all -o wide
NAME                  READY  STATUS     RESTARTS  AGE  IP           NODE
pod/ruby-ex-1-6g2qs   1/1    Running    0         24m  10.11.0.28   app-node-1.openshift.example.com
pod/ruby-ex-1-build   0/1    Completed  0         29m  10.11.0.7    app-node-0.openshift.example.com

NAME                              DESIRED  CURRENT  READY  AGE  CONTAINERS  IMAGES                                                                                                                 SELECTOR
replicationcontroller/ruby-ex-1   1        1        1      24m  ruby-ex     docker-registry.default.svc:5000/test/ruby-ex@sha256:2e3ac075e9975fbc9128fe16975da030653f05e05650ffa6f3b93fea03975145  app=ruby-ex,deployment=ruby-ex-1,deploymentconfig=ruby-ex

NAME              TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE  SELECTOR
service/ruby-ex   ClusterIP  172.30.172.155  <none>       8080/TCP  29m  app=ruby-ex,deploymentconfig=ruby-ex

NAME                                         REVISION  DESIRED  CURRENT  TRIGGERED BY
deploymentconfig.apps.openshift.io/ruby-ex   1         1        1        config,image(ruby-ex:latest)

NAME                                     TYPE    FROM  LATEST
buildconfig.build.openshift.io/ruby-ex   Source  Git   1

NAME                                 TYPE    FROM         STATUS    STARTED         DURATION
build.build.openshift.io/ruby-ex-1   Source  Git@fa07571  Complete  29 minutes ago  4m48s

NAME                                             DOCKER REPO                                            TAGS    UPDATED
imagestream.image.openshift.io/ruby-22-centos7   docker-registry.default.svc:5000/test/ruby-22-centos7  latest  29 minutes ago
imagestream.image.openshift.io/ruby-ex           docker-registry.default.svc:5000/test/ruby-ex          latest  24 minutes ago
```
4. Check the deployed app and image:

```
$ oc get pods -o wide
NAME              READY  STATUS     RESTARTS  AGE  IP           NODE
ruby-ex-1-6g2qs   1/1    Running    0         35m  10.11.0.28   app-node-1.openshift.example.com
ruby-ex-1-build   0/1    Completed  0         40m  10.11.0.7    app-node-0.openshift.example.com
```

In app-node-1.openshift.example.com:

```
$ sudo docker images
REPOSITORY                                                                  TAG       IMAGE ID      CREATED         SIZE
docker-registry.default.svc:5000/test/ruby-ex                               <none>    8a417147c48a  19 minutes ago  568 MB
registry.reg-aws.openshift.com:443/openshift3/ose-node                      v3.10     ccaabbeb169b  3 days ago      1.27 GB
registry.reg-aws.openshift.com:443/openshift3/ose-pod                       v3.10     ac24c586c79b  3 days ago      214 MB
registry.reg-aws.openshift.com:443/openshift3/ose-pod                       v3.10.50  ac24c586c79b  3 days ago      214 MB
docker-registry.engineering.redhat.com/rhosp13/openstack-kuryr-cni          latest    200e053f01d8  3 weeks ago     388 MB
docker-registry.engineering.redhat.com/rhosp13/openstack-kuryr-controller   latest    95371e0317f5  3 weeks ago     354 MB
```

5. Delete the project:

```
$ oc delete project test
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0026
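As a quick aside on the diagnosis above: the symptom boils down to whether anything on the node listens on 127.0.0.1:53. A minimal sketch of such a check (the captured `ss`-style listener line and the `status_msg` variable are illustrative, not part of the original report; on a live node you would pipe `ss -lntp` directly instead of using sample data):

```shell
# Hypothetical one-line capture of `ss -lntp` from a healthy (openshift-sdn)
# node, mirroring the netstat listing quoted in the description.
listeners='LISTEN  0  128  127.0.0.1:53  0.0.0.0:*  users:(("openshift",pid=17516,fd=12))'

# On a node hit by this bug the 127.0.0.1:53 entry is absent entirely, so
# in-cluster names such as docker-registry.default.svc never resolve and
# image pushes to the registry fail.
if printf '%s\n' "$listeners" | grep -q '127\.0\.0\.1:53'; then
    status_msg="SkyDNS listener present"
else
    status_msg="SkyDNS listener missing: registry pushes will fail to resolve"
fi
echo "$status_msg"
```

With the fix applied (openshift started with `--enable=dns`), the check reports the listener as present, matching the successful push in the verification steps.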