So I have an internal docker registry that I use for disconnected deployments. This registry has a self-signed cert, but that cert is in the PKI store on every OpenShift node. In 3.9.x I am able to 'oc import-image' without having to '--insecure'; in 3.10.x it fails with an x509 error:

[root@master01 ~]# oc import-image repo.home.nicknach.net/rhel7.5 --confirm -n openshift
The import completed with errors.

Name:              rhel7.5
Namespace:         openshift
Created:           Less than a second ago
Labels:            <none>
Annotations:       openshift.io/image.dockerRepositoryCheck=2018-06-19T15:34:02Z
Docker Pull Spec:  docker-registry.default.svc:5000/openshift/rhel7.5
Image Lookup:      local=false
Unique Images:     0
Tags:              1

latest
  tagged from repo.home.nicknach.net/rhel7.5

  ! error: Import failed (InternalError): Internal error occurred: Get https://repo.home.nicknach.net/v2/: x509: certificate signed by unknown authority
      Less than a second ago

error: tag latest failed: Internal error occurred: Get https://repo.home.nicknach.net/v2/: x509: certificate signed by unknown authority

It looks like OpenShift is not using my system-level PKI store to import these images.
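For reference, the failure above is ordinary CA-chain verification: a cert signed by a private CA fails unless the verifier is told about that CA. A throwaway openssl sketch (all names illustrative, nothing cluster-specific) reproduces it locally:

```shell
# Create a throwaway CA, then a server cert signed by it.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test-ca" -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=repo.example.test" \
  -keyout "$tmp/srv.key" -out "$tmp/srv.csr" 2>/dev/null
openssl x509 -req -in "$tmp/srv.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -out "$tmp/srv.crt" -days 1 2>/dev/null

# Without the CA, verification fails (the x509 "unknown authority" case):
openssl verify "$tmp/srv.crt" || echo "unknown authority, as expected"

# With the CA supplied, verification succeeds:
openssl verify -CAfile "$tmp/ca.crt" "$tmp/srv.crt"
```

The apiserver pod in 3.10 is in the "without the CA" situation until the cert is made visible inside the pod.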
Also, I tried setting openshift_additional_ca=/etc/pki/ca-trust/source/anchors/repo.home.nicknach.net.crt, but that didn't work either.
As of 3.10, OpenShift runs the image import process inside the apiserver pod rather than directly on your host, so it no longer has access to your host certs. To use your host certs, you will need to mount them into the apiserver pod at the appropriate location. You can add a hostPath mount to the apiserver pod by editing apiserver.yaml, which is the pod definition.

If you used cluster up, you can find this file at: openshift.local.clusterup/static-pod-manifests/apiserver.yaml
If you did an ansible install, the file will be in: /etc/origin/node/pods

(We are working on improving this story, but it will always require some amount of manual configuration from 3.10 onward.)
Ok, thanks for that explanation. So if I add something like this to /etc/origin/node/pods/apiserver.yaml:

  - hostPath:
      path: /etc/pki/ca-trust/source/anchors
    name: certs

it should work, provided my .crt is in that dir on all nodes?
You'll also need to add the mount to the "volumeMounts" section:

  volumeMounts:
  - mountPath: /etc/pki/ca-trust/source/anchors
    name: certs

Note that you're the first to try this; I haven't had a chance to try it myself yet, but I have no reason to think it won't work. And you don't need the cert on all nodes, just the master node(s) (the ones where the apiserver pod is going to run).
btw this is being discussed in https://github.com/openshift/origin/issues/20022
I added the hostPath and volumeMounts settings to /etc/origin/node/pods/apiserver.yaml on all nodes and it still doesn't work; I'm still getting the x509 errors. Attached is my full apiserver.yaml. I'll keep an eye on the upstream issue. Thanks! -Nick
Created attachment 1453030 [details] apiserver.yaml
I assume you restarted the apiserver service after making the change? (you can exec into the pod to confirm the content is mounted correctly)
I bounced all 9 nodes just to be sure. Looking at the pod, I don't see my cert file in there anywhere.

[root@master01 ~]# oc rsh apiserver-9cjxs
sh-4.2# ls /etc/pki/ca-trust/source/anchors/
sh-4.2#
David, any thoughts on why it appears the certs are not getting mounted into the api server pod given the apiserver.yaml in comment 7? What's the right way to get the pod recreated after modifying the apiserver.yaml?
Created attachment 1453259 [details] yikes
Working with Ben, we were able to get a work-around for this issue. We need to add two things to /etc/origin/node/pods/apiserver.yaml on all three masters:

  - mountPath: /etc/pki
    name: certs

and

  - hostPath:
      path: /etc/pki
    name: certs

Once the changes are picked up, I am able to 'oc import-image' with my self-signed repo cert. You then have to re-import all the ImageStreams. Thanks!! -Nick
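Assembled into context, the two stanzas from the work-around land in apiserver.yaml roughly like this (a sketch: the container name and any surrounding fields are illustrative placeholders, and only the "certs" entries are the additions):

```yaml
spec:
  containers:
  - name: api            # placeholder; keep whatever name the pod already uses
    volumeMounts:
    - mountPath: /etc/pki
      name: certs
  volumes:
  - hostPath:
      path: /etc/pki
    name: certs
```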
I figured out that you can get ahead of this issue (and not have to re-import imagestreams) if you watch for the deployment of the apiserver.yaml file and quickly edit or replace it before the playbook gets to the imagestream import step. While executing the deploy_cluster playbook, in a separate window, run:

  watch -n3 cat /etc/origin/node/pods/apiserver.yaml

Wait until you see that the file exists, then quickly make the edits that Ben and I figured out above (on all masters). If you do it in time, the import-image commands run by the playbook will succeed and you won't have to re-import anything later. -Nick
So I'm also getting x509 errors when attempting to pull from the registry, in build pods:

  pulling image error : unknown: unable to pull manifest from repo.home.nicknach.net/rhscl/python-36-rhel7:latest: Get https://repo.home.nicknach.net/v2/: x509: certificate signed by unknown authority
  error: build error: unable to get docker-registry.default.svc:5000/openshift/python@sha256:faa41271420f7e198e887f74cdcdad89474dc13acde54a227bb5fbe40fe1ead7

Any idea how I could get this cert into build pods as well? -Nick
I think the issue here is actually that your registry can't pull the image (it's pulling it on behalf of the build). you need to mount your certs into the registry pod, which you can do by adding a volume+mount to the registry deploymentconfig resource.
I did this:

  oc patch dc docker-registry -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","volumeMounts":[{"mountPath":"/etc/pki","name":"certs"}]}],"volumes":[{"hostPath":{"path":"/etc/pki","type":"Directory"},"name":"certs"}]}}}}'

and then adjusted the scc:

  oc adm policy add-scc-to-user hostaccess -z registry

I am now able to see the cert in the docker-registry pod. But I'm still getting the same x509 error.
Actually, I take that back; I'm getting a different error now:

  dial tcp: lookup docker-registry.default.svc on 192.168.0.254:53: no such host
  error: build error: unable to get docker-registry.default.svc:5000/openshift/python@sha256:faa41271420f7e198e887f74cdcdad89474dc13acde54a227bb5fbe40fe1ead7

192.168.0.254 is my upstream DNS server. Hmm.
related: https://bugzilla.redhat.com/show_bug.cgi?id=1594485
FYI, here is how I added the CAs to the docker-registry pod via configmap:

1) oc create configmap mycerts --from-file=cert.pem=/etc/pki/tls/cert.pem -n default

2) oc edit dc/docker-registry -n default

Edit volumeMounts to add the certs mount:

        volumeMounts:
        - mountPath: /registry
          name: registry-storage
        - mountPath: /etc/pki/tls
          name: certs

Edit volumes to add the configMap volume:

        volumes:
        - hostPath:
            path: /tmp/clusterup/openshift.local.clusterup/openshift.local.pv/registry
            type: ""
          name: registry-storage
        - configMap:
            defaultMode: 420
            name: mycerts
          name: certs

Note that I am completely overwriting the cert.pem file, so you need to provide a cert.pem that contains all your CAs, not just your additional ones.
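That overwrite caveat is the easy part to get wrong. A runnable sketch of building such a combined bundle (the PEM contents here are placeholders; a real system bundle comes from /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem):

```shell
tmp=$(mktemp -d)
# Placeholders standing in for the real system bundle and the extra CA:
printf -- '-----BEGIN CERTIFICATE-----\n(system bundle)\n-----END CERTIFICATE-----\n' > "$tmp/system-bundle.pem"
printf -- '-----BEGIN CERTIFICATE-----\n(extra CA)\n-----END CERTIFICATE-----\n' > "$tmp/extra-ca.pem"

# Concatenate so the mounted cert.pem still contains the system CAs
# plus the additional one, rather than only the additional one:
cat "$tmp/system-bundle.pem" "$tmp/extra-ca.pem" > "$tmp/cert.pem"
grep -c 'BEGIN CERTIFICATE' "$tmp/cert.pem"   # counts both certs
```

The resulting cert.pem is what you'd feed to 'oc create configmap mycerts --from-file=cert.pem=...'.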
When can we get this in by default? I just need 'oc import-image' to work during the deployment so that I don't get x509 errors on all of my image streams. As it is now, I have to quickly swap out the apiserver.yaml while it deploys as a work-around.
You don't have to do it during startup; you can just do it afterwards and then run 'oc import-image' after the apiserver restarts, which will force a re-import. Or delete and recreate the imagestreams. We aren't going to be allowed to mount the host path into the pod by default; I've already proposed that to the powers that be.
what's the easiest way to re-import them all?
Delete the old ones:

  oc delete is --all -n openshift

Recreate them by following the manual install steps:
https://docs.openshift.org/latest/install_config/imagestreams_templates.html#creating-image-streams-for-openshift-images

Either that, or script something that gets all the imagestreams from the openshift namespace and iterates, invoking "oc import-image $NAME --all" on each imagestream name.
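A sketch of that scripted option, as a dry run so it is runnable anywhere: the static name list stands in for querying the cluster, and the names are purely illustrative.

```shell
# On a real cluster the name list would come from something like:
#   oc get is -n openshift -o jsonpath='{.items[*].metadata.name}'
cmds=$(for name in ruby python nodejs; do
  echo "oc import-image $name --all -n openshift"
done)

# Review the generated commands; on a real cluster, pipe them to sh instead.
echo "$cmds"
```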
https://github.com/openshift/origin/pull/20120 adds an "AdditionalTrustedCA" field to the master config's imagePolicyConfig. You can point this field at a PEM file that is accessible to the apiserver, and it will load the additional CAs from that file. These CAs will be used during image import. (Since the apiserver runs inside a pod, you will still need to mount the PEM file into the apiserver pod at the location you indicate in the master config.)
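As a sketch, the corresponding master-config.yaml fragment would look like this (the CA filename is illustrative; the path must exist inside the apiserver pod, e.g. under the already-mounted /etc/origin/master):

```yaml
imagePolicyConfig:
  internalRegistryHostname: docker-registry.default.svc:5000
  # Path as seen from inside the apiserver pod; /etc/origin/master is
  # already mounted from the master host. Filename is an assumption.
  additionalTrustedCA: /etc/origin/master/registry-ca.pem
```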
docs: https://github.com/openshift/openshift-docs/pull/10740
QE also hit the same issue against the stage registry. According to the above discussion, users have to modify the master-config file and manually mount the CA pem into the master static pod. Why not have the installer mount the /etc/pki hostPath when creating the master api/controller static pod files? That would be cleaner and easier. I see the node system container mounts the /etc/pki hostPath by default, so why doesn't the master static pod?

Description of problem:
The master static pod does not mount /etc/pki as a hostPath volume, which leads to image stream import failing with an x509 error against the stage registry.

Version-Release number of the following components:
openshift-ansible-3.10.21-1.git.0.6446011.el7.noarch

How reproducible:
Always, against the stage registry.

Steps to Reproduce:
1. Download the stage registry CA file and install it onto the system:
   # cp regsitry-ca.crt /etc/pki/ca-trust/source/anchors/regsitry-ca.crt
2. Trigger an installation against the stage registry:
   openshift_examples_modify_imagestreams=true
   oreg_url=registry.access.stage.redhat.com/openshift3/ose-${component}:${version}
3. After installation, check the image stream.

Actual results:
# oc get is ruby -o yaml -n openshift
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  annotations:
    openshift.io/display-name: Ruby
    openshift.io/image.dockerRepositoryCheck: 2018-07-26T08:31:01Z
  creationTimestamp: 2018-07-26T08:31:00Z
  generation: 2
  name: ruby
  namespace: openshift
  resourceVersion: "1236"
  selfLink: /apis/image.openshift.io/v1/namespaces/openshift/imagestreams/ruby
  uid: 3a07df5d-90ae-11e8-afd0-fa163ef75882
spec:
  lookupPolicy:
    local: false
  tags:
  - annotations:
      description: Build and run Ruby 2.0 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/tree/master/2.0/README.md.
      iconClass: icon-ruby
      openshift.io/display-name: Ruby 2.0
      openshift.io/provider-display-name: Red Hat, Inc.
      sampleRepo: https://github.com/openshift/ruby-ex.git
      supports: ruby:2.0,ruby
      tags: hidden,builder,ruby
      version: "2.0"
    from:
      kind: DockerImage
      name: registry.access.stage.redhat.com/openshift3/ruby-20-rhel7:latest
    generation: 2
    importPolicy: {}
    name: "2.0"
    referencePolicy:
      type: Local
  - annotations:
      description: Build and run Ruby 2.2 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/tree/master/2.2/README.md.
      iconClass: icon-ruby
      openshift.io/display-name: Ruby 2.2
      openshift.io/provider-display-name: Red Hat, Inc.
      sampleRepo: https://github.com/openshift/ruby-ex.git
      supports: ruby:2.2,ruby
      tags: hidden,builder,ruby
      version: "2.2"
    from:
      kind: DockerImage
      name: registry.access.stage.redhat.com/rhscl/ruby-22-rhel7:latest
    generation: 2
    importPolicy: {}
    name: "2.2"
    referencePolicy:
      type: Local
  - annotations:
      description: Build and run Ruby 2.3 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.3/README.md.
      iconClass: icon-ruby
      openshift.io/display-name: Ruby 2.3
      openshift.io/provider-display-name: Red Hat, Inc.
      sampleRepo: https://github.com/openshift/ruby-ex.git
      supports: ruby:2.3,ruby
      tags: builder,ruby
      version: "2.3"
    from:
      kind: DockerImage
      name: registry.access.stage.redhat.com/rhscl/ruby-23-rhel7:latest
    generation: 2
    importPolicy: {}
    name: "2.3"
    referencePolicy:
      type: Local
  - annotations:
      description: Build and run Ruby 2.4 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.4/README.md.
      iconClass: icon-ruby
      openshift.io/display-name: Ruby 2.4
      openshift.io/provider-display-name: Red Hat, Inc.
      sampleRepo: https://github.com/openshift/ruby-ex.git
      supports: ruby:2.4,ruby
      tags: builder,ruby
      version: "2.4"
    from:
      kind: DockerImage
      name: registry.access.stage.redhat.com/rhscl/ruby-24-rhel7:latest
    generation: 2
    importPolicy: {}
    name: "2.4"
    referencePolicy:
      type: Local
  - annotations:
      description: Build and run Ruby 2.5 applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/blob/master/2.5/README.md.
      iconClass: icon-ruby
      openshift.io/display-name: Ruby 2.5
      openshift.io/provider-display-name: Red Hat, Inc.
      sampleRepo: https://github.com/sclorg/ruby-ex.git
      supports: ruby:2.5,ruby
      tags: builder,ruby
      version: "2.5"
    from:
      kind: DockerImage
      name: registry.access.stage.redhat.com/rhscl/ruby-25-rhel7:latest
    generation: 2
    importPolicy: {}
    name: "2.5"
    referencePolicy:
      type: Source
  - annotations:
      description: |-
        Build and run Ruby applications on RHEL 7. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-ruby-container/tree/master/2.3/README.md.

        WARNING: By selecting this tag, your application will automatically update to use the latest version of Ruby available on OpenShift, including major versions updates.
      iconClass: icon-ruby
      openshift.io/display-name: Ruby (Latest)
      openshift.io/provider-display-name: Red Hat, Inc.
      sampleRepo: https://github.com/openshift/ruby-ex.git
      supports: ruby
      tags: builder,ruby
    from:
      kind: ImageStreamTag
      name: "2.5"
    generation: 1
    importPolicy: {}
    name: latest
    referencePolicy:
      type: Local
status:
  dockerImageRepository: docker-registry.default.svc:5000/openshift/ruby
  tags:
  - conditions:
    - generation: 2
      lastTransitionTime: 2018-07-26T08:31:01Z
      message: 'Internal error occurred: Get https://registry.access.stage.redhat.com/v2/: x509: certificate signed by unknown authority'
      reason: InternalError
      status: "False"
      type: ImportSuccess
    items: null
    tag: "2.0"
  - conditions:
    - generation: 2
      lastTransitionTime: 2018-07-26T08:31:01Z
      message: 'Internal error occurred: Get https://registry.access.stage.redhat.com/v2/: x509: certificate signed by unknown authority'
      reason: InternalError
      status: "False"
      type: ImportSuccess
    items: null
    tag: "2.2"
  - conditions:
    - generation: 2
      lastTransitionTime: 2018-07-26T08:31:01Z
      message: 'Internal error occurred: Get https://registry.access.stage.redhat.com/v2/: x509: certificate signed by unknown authority'
      reason: InternalError
      status: "False"
      type: ImportSuccess
    items: null
    tag: "2.3"
  - conditions:
    - generation: 2
      lastTransitionTime: 2018-07-26T08:31:01Z
      message: 'Internal error occurred: Get https://registry.access.stage.redhat.com/v2/: x509: certificate signed by unknown authority'
      reason: InternalError
      status: "False"
      type: ImportSuccess
    items: null
    tag: "2.4"
  - conditions:
    - generation: 2
      lastTransitionTime: 2018-07-26T08:31:01Z
      message: 'Internal error occurred: Get https://registry.access.stage.redhat.com/v2/: x509: certificate signed by unknown authority'
      reason: InternalError
      status: "False"
      type: ImportSuccess
    items: null
    tag: "2.5"

Expected results:
/etc/pki should be mounted in the master static pod, so that the image stream can be imported successfully.
Additional info:
The node system container already mounts the host's /etc/pki:

# cat /var/lib/containers/atomic/atomic-openshift-node.0/config.json | grep pki -A 5 -B 2
    {
      "type": "bind",
      "source": "/etc/pki",
      "destination": "/etc/pki",
      "options": [
        "bind",
        "ro"
      ]
    },

The master/controller static pod does not define an /etc/pki volume:

# cat controller.yaml
<--snip-->
    volumeMounts:
    - mountPath: /etc/origin/master/
      name: master-config
    - mountPath: /etc/origin/cloudprovider/
      name: master-cloud-provider
    - mountPath: /etc/containers/registries.d/
      name: signature-import
    - mountPath: /etc/origin/kubelet-plugins
      mountPropagation: HostToContainer
      name: kubelet-plugins
<--snip-->
  volumes:
  - hostPath:
      path: /etc/origin/master/
    name: master-config
  - hostPath:
      path: /etc/origin/cloudprovider
    name: master-cloud-provider
  - hostPath:
      path: /etc/containers/registries.d
    name: signature-import
  - hostPath:
      path: /etc/origin/kubelet-plugins
    name: kubelet-plugins

After mounting /etc/pki as a hostPath into the master static pod, the image stream is imported successfully:

<--snip-->
    volumeMounts:
    - mountPath: /etc/origin/master/
      name: master-config
    - mountPath: /etc/origin/cloudprovider/
      name: master-cloud-provider
    - mountPath: /etc/containers/registries.d/
      name: signature-import
    - mountPath: /etc/origin/kubelet-plugins
      mountPropagation: HostToContainer
      name: kubelet-plugins
    - mountPath: /etc/pki/
      name: pki-data
<--snip-->
  volumes:
  - hostPath:
      path: /etc/origin/master/
    name: master-config
  - hostPath:
      path: /etc/origin/cloudprovider
    name: master-cloud-provider
  - hostPath:
      path: /etc/containers/registries.d
    name: signature-import
  - hostPath:
      path: /etc/origin/kubelet-plugins
    name: kubelet-plugins
  - hostPath:
      path: /etc/pki
    name: pki-data
<--snip-->
> According the above discussion, user have to modify master-config file, and manually mount CA pem to master static pod, why not ask installer to mount /etc/pki hostPath when creating master api/controller static pod json file, that would be more clean and easier, I saw node system container would mount /etc/pki hostPath by defult, why master static do not.

We are not allowed to add additional mounts; the intent is to remove, not add, dependencies on the host filesystem. That is why we had to go with this approach (the intent being that you'd put your CAs into a configmap and mount the configmap, not mount the host filesystem).
Is it possible to add an ansible variable that would do this by default? So we could add the path to any additional CAs required for install in the ansible hosts file and then let the installer create the necessary ConfigMaps?
(In reply to Nicholas Nachefski from comment #33)
> Is it possible to add an ansible variable that would do this by default? So
> we could add the path to any additional CAs required for install in the
> ansible hosts file and then let the installer create the necessary
> ConfigMaps?

Agreed; I think most users would prefer the automated way over having to run manual post-install actions. At the least, an automated way would help QE a lot.
I had a discussion with another engineer, and configmaps are actually not viable, since the apiserver has to be up before a configmap can be mounted, so it's a chicken-and-egg problem. That leaves only a hostPath volume. Since the apiserver already mounts "/etc/origin/master/" from the host, you can simply put your certs in that path. It gets mounted as "/etc/origin/master" inside the apiserver pod, so you can specify your additional CA certs path in the master-config as something like "/etc/origin/master/mycerts/foo.pem" and place the file at that same path on your master host.
After working around the image import issue, we hit the same issue as comment 16. The following is QE's workaround for the docker pull x509 issue.

1. Add the registry sa to the privileged scc:

   oc adm policy add-scc-to-user privileged -z registry

2. Mount the hostPath /etc/pki to the container's /etc/pki:

   volumeMounts:
   - mountPath: /etc/pki
     name: pki-data
   volumes:
   - hostPath:
       path: /etc/pki
       type: Directory
     name: pki-data

3. Run docker-registry with privileged=true:

   securityContext:
     privileged: true

The docker-registry dc yaml is as below:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  creationTimestamp: 2018-07-26T18:28:38Z
  generation: 11
  labels:
    docker-registry: default
  name: docker-registry
  namespace: default
  resourceVersion: "101368"
  selfLink: /apis/apps.openshift.io/v1/namespaces/default/deploymentconfigs/docker-registry
  uid: b6eaf986-9101-11e8-9a08-fa163e57801c
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    docker-registry: default
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        docker-registry: default
    spec:
      containers:
      - env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_HTTP_NET
          value: tcp
        - name: REGISTRY_HTTP_SECRET
          value: fW0qftVMgp0/NaBCyBOV6VQ0EppcPOYkkOXRqC4qJ9I=
        - name: REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
          value: "false"
        - name: REGISTRY_OPENSHIFT_SERVER_ADDR
          value: docker-registry.default.svc:5000
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: /etc/secrets/registry.crt
        - name: REGISTRY_HTTP_TLS_KEY
          value: /etc/secrets/registry.key
        image: registry.access.stage.redhat.com/openshift3/ose-docker-registry:v3.10
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 5000
            scheme: HTTPS
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: registry
        ports:
        - containerPort: 5000
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 5000
            scheme: HTTPS
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /registry
          name: registry-storage
        - mountPath: /etc/secrets
          name: registry-certificates
        - mountPath: /etc/pki
          name: pki-data
      dnsPolicy: ClusterFirst
      nodeSelector:
        node-role.kubernetes.io/infra: "true"
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: registry
      serviceAccountName: registry
      terminationGracePeriodSeconds: 30
      volumes:
      - name: registry-storage
        persistentVolumeClaim:
          claimName: regpv-claim
      - name: registry-certificates
        secret:
          defaultMode: 420
          secretName: registry-certificates
      - hostPath:
          path: /etc/pki
          type: Directory
        name: pki-data
  test: false
  triggers:
  - type: ConfigChange
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-07-27T06:35:11Z
    lastUpdateTime: 2018-07-27T06:35:11Z
    message: Deployment config has minimum availability.
    status: "True"
    type: Available
  - lastTransitionTime: 2018-07-27T07:06:31Z
    lastUpdateTime: 2018-07-27T07:06:33Z
    message: replication controller "docker-registry-11" successfully rolled out
    reason: NewReplicationControllerAvailable
    status: "True"
    type: Progressing
  details:
    causes:
    - type: ConfigChange
    message: config change
  latestVersion: 11
  observedGeneration: 11
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1
I also tried the workaround in comment #23; no luck, the x509 error still reproduces.

1) # oc create configmap mycerts --from-file=cert.pem=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem -n default

2) oc edit dc/docker-registry -n default

Edit volumeMounts to add the certs mount:

        volumeMounts:
        - mountPath: /registry
          name: registry-storage
        - mountPath: /etc/pki/tls
          name: certs

Edit volumes to add the configMap volume:

        volumes:
        - hostPath:
            path: /tmp/clusterup/openshift.local.clusterup/openshift.local.pv/registry
            type: ""
          name: registry-storage
        - configMap:
            defaultMode: 420
            name: mycerts
          name: certs
Can you confirm the cert.pem is mounted into your registry pod? Did the registry restart/deploy since you added the mount? Does that tla-ca-bundle.pem contain a certificate that's valid for the registry you are trying to reach? (have you run update-ca-trust on your host?)
Ben, I used externalRegistryHostname and additionalTrustedCA to try to verify this issue.

Step 1: Generate a server certificate:

  $ oc adm ca create-server-cert --signer-cert=/etc/origin/master/ca.crt --signer-key=/etc/origin/master/ca.key --signer-serial=/etc/origin/master/ca.serial.txt --hostnames='docker-registry-default.apps.0830-ih9.qe.rhcloud.com,docker-registry.default.svc.cluster.local,docker-registry.default.svc,172.30.140.7' --cert=/etc/secrets/registry.crt --key=/etc/secrets/registry.key

Step 2: Copy the generated cert to the mounted dir:

  $ cp /etc/secrets/registry.crt /etc/origin/master/registry_external.crt

Step 3: Add the externalRegistryHostname and additionalTrustedCA fields:

  imagePolicyConfig:
    externalRegistryHostname: docker-registry-default.apps.0830-ih9.qe.rhcloud.com:5000
    #internalRegistryHostname: docker-registry.default.svc:5000
    additionalTrustedCA: /etc/origin/master/registry_external.crt

Step 4: Restart master api and controllers.

Step 5: Trigger a build; the build still fails to push the image with an x509 error.

  $ oc logs -f master-api-ip-172-18-11-184.ec2.internal -n kube-system | grep -i "image import"
  I0830 09:01:37.684483       1 master.go:43] Image import using additional CA path: /etc/origin/master/registry_external.crt

Anything else I need to do?
XiuJuan: I think you attempted to verify the wrong part of the process. What needs to be verified here is imagestream import from a registry that uses a self-signed cert. To do that:

- create a registry using a self-signed cert (as you did)
- push an image to it
- attempt to import that image from that registry using an imagestream; this should fail to import due to an x509 error
- configure the additionalTrustedCA (as you did in your steps, including the restart of the api server)
- reimport the imagestream; it should succeed now
Ben, thanks. I took a wrong path since my external registry had something wrong. Below are the correct steps:

1. Get the self-signed cert for the registry.access.stage.redhat.com registry, and put the cert in /etc/origin/master/.

2. Attempt to import an image from that registry using an imagestream; this fails due to an x509 error:

   $ oc tag registry.access.stage.redhat.com/rhscl/ruby-25-rhel7:latest myruby:2.5
   $ oc get is myruby -o yaml
     message: 'Internal error occurred: Get https://registry.access.stage.redhat.com/v2/: x509: certificate signed by unknown authority'
     reason: InternalError
     status: "False"

3. Configure additionalTrustedCA in the master config file, then restart the api server:

   imagePolicyConfig:
     internalRegistryHostname: docker-registry.default.svc:5000
     additionalTrustedCA: /etc/origin/master/stage_registry_ca.crt

4. Reimport the imagestream; it succeeds now.

5. Create a build with the imported imagestream; pulling the image succeeds in the build pod, and the build goes to complete.

openshift v3.11.0-0.25.0
kubernetes v1.11.0+d4cacc0
After adding additionalTrustedCA, pullthrough policy also works well:

  $ oc tag registry.access.stage.redhat.com/rhscl/ruby-24-rhel7:latest myruby3:2.4 --reference-policy=local
(In reply to Ben Parees from comment #38)
> Can you confirm the cert.pem is mounted into your registry pod? Did the
> registry restart/deploy since you added the mount?

Sorry for being late to reply. Yes, I can confirm cert.pem is mounted into the registry pod:

[root@preserve-jialiustg2-master-etcd-nfs-1 ~]# ll /etc/pki/tls/cert.pem
lrwxrwxrwx. 1 root root 49 Sep 16 17:26 /etc/pki/tls/cert.pem -> /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
[root@preserve-jialiustg2-master-etcd-nfs-1 ~]# oc rsh docker-registry-7-hzjsf
sh-4.2$ ls /etc/pki/tls/cert.pem
/etc/pki/tls/cert.pem
sh-4.2$ ls /etc/pki/tls/cert.pem -l
lrwxrwxrwx. 1 root root 15 Sep 17 09:34 /etc/pki/tls/cert.pem -> ..data/cert.pem
sh-4.2$ ls /etc/pki/tls/..
../  ..2018_09_17_09_34_40.003170790/  ..data/
sh-4.2$ ls /etc/pki/tls/
..2018_09_17_09_34_40.003170790/  ..data/  cert.pem

# oc describe po mysql-1-lqnmx -n jialiu
<--snip-->
Events:
  Type     Reason          Age                From                                 Message
  ----     ------          ----               ----                                 -------
  Normal   Scheduled       35s                default-scheduler                    Successfully assigned mysql-1-lqnmx to preserve-jialiustg2-node-1
  Normal   SandboxChanged  32s                kubelet, preserve-jialiustg2-node-1  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling         18s (x2 over 33s)  kubelet, preserve-jialiustg2-node-1  pulling image "docker-registry.default.svc:5000/openshift/mysql@sha256:40ab84bf25dc1766bb7f4cb7a554acf4a011761ee3241964c5bc0153c450ab14"
  Warning  Failed          17s (x2 over 33s)  kubelet, preserve-jialiustg2-node-1  Failed to pull image "docker-registry.default.svc:5000/openshift/mysql@sha256:40ab84bf25dc1766bb7f4cb7a554acf4a011761ee3241964c5bc0153c450ab14": rpc error: code = Unknown desc = unknown: unable to pull manifest from registry.access.stage.redhat.com/rhscl/mysql-57-rhel7:latest: Get https://registry.access.stage.redhat.com/v2/: x509: certificate signed by unknown authority
  Warning  Failed          17s (x2 over 33s)  kubelet, preserve-jialiustg2-node-1  Error: ErrImagePull
  Normal   BackOff         6s (x3 over 31s)   kubelet, preserve-jialiustg2-node-1  Back-off pulling image "docker-registry.default.svc:5000/openshift/mysql@sha256:40ab84bf25dc1766bb7f4cb7a554acf4a011761ee3241964c5bc0153c450ab14"
  Warning  Failed          6s (x3 over 31s)   kubelet, preserve-jialiustg2-node-1  Error: ImagePullBackOff

> Does that tls-ca-bundle.pem contain a certificate that's valid for the
> registry you are trying to reach? (have you run update-ca-trust on your
> host?)

Yes, I can docker pull registry.access.stage.redhat.com/rhscl/mysql-57-rhel7:latest successfully. Once I mount the whole host dir /etc/pki into docker-registry, the issue disappears.
Per the docs at https://docs.okd.io/latest/install_config/registry/extended_registry_configuration.html#middleware-repository-pullthrough it looks like /etc/pki/tls/certs/ is where the content needs to be mounted for the registry to pick it up. Please try mounting your certificates there.
(In reply to Ben Parees from comment #57)
> Per the docs at
> https://docs.okd.io/latest/install_config/registry/
> extended_registry_configuration.html#middleware-repository-pullthrough
>
> it looks like /etc/pki/tls/certs/ is where the content needs to be mounted
> for the registry to pick it up. Please try mounting your certificates there.

Still no luck; it seems mounting the whole /etc/pki dir into the registry pod is QE's final choice.
> Still no chance, seem like mount the whole /etc/pki dir for registry pod is QE's final choice. I guess do that then, I don't have time to investigate this further right now.
Even after adding additionalTrustedCA: /etc/origin/master/ca-bundle.crt to my master-config, I cannot import images from an https site signed with a cert inside that bundle. My guess is it's not using those certificates but the "system cert store" inside the container, which does not include the cert.
> Even after adding additionalTrustedCA: /etc/origin/master/ca-bundle.crt to my master-config I can not import images from a https site signed with a cert inside that bundle. how did you mount that path into your api server pod?
(In reply to Ben Parees from comment #61)
> > Even after adding additionalTrustedCA: /etc/origin/master/ca-bundle.crt to my master-config I can not import images from a https site signed with a cert inside that bundle.
>
> how did you mount that path into your api server pod?

Running 3.10 containerized, /etc/origin/master gets mapped from the host, i.e. I didn't mount it; it's already there.
Please open a new bug, as this functionality has been verified. Gather verbose debug logs from the api server startup so we can confirm it is picking up the CA configuration, and include that with your new bug.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2652