Bug 1542302

Summary: [Free-stg] Failed to pull image with error: certificate is not valid for 'docker-registry.default.svc'
Product: OpenShift Online
Reporter: ge liu <geliu>
Component: Image Registry
Assignee: Scott Dodson <sdodson>
Status: CLOSED DUPLICATE
QA Contact: Dongbo Yan <dyan>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.x
CC: aos-bugs, bparees, mfojtik, yufchang
Target Milestone: ---
Keywords: OnlineStarter
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-02-06 13:44:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description ge liu 2018-02-06 03:06:41 UTC
Description of problem:

The deployment fails with the error message:  Failed to pull image "docker-registry.default.svc:5000/lgp/origin-ruby-sample@sha256:b2b66940052bb4005ae4bf36e07232d9002e492f507ae45008d21e04fa678318": rpc error: code = 2 desc = Get https://docker-registry.default.svc:5000/v2/: x509: certificate is valid for docker-registry-default.1b7d.free-stg.openshiftapps.com, docker-registry.default.svc.cluster.local, 172.30.44.xx, not docker-registry.default.svc

Please see the detailed reproduction steps below.

Server https://api.free-stg.openshift.com:443
openshift v3.7.23
kubernetes v1.7.6+a08f5eeb62
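
The subject alternative names on the certificate the registry actually serves can be confirmed from any node that can reach the service. This is a diagnostic sketch, not part of the original report; it assumes the service name and port from the error above:

# echo | openssl s_client -connect docker-registry.default.svc:5000 -servername docker-registry.default.svc 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If the output lists only docker-registry-default.1b7d.free-stg.openshiftapps.com, docker-registry.default.svc.cluster.local, and the service IP, it matches the x509 error: the short service name docker-registry.default.svc is missing from the certificate's SANs.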


How reproducible:
Always

Steps to Reproduce:
1. # oc process -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/deployment/OCP-11384/application-template-stibuild.json| oc create -f -
service "frontend" created
route "route-edge" created
imagestream "origin-ruby-sample" created
imagestream "ruby-22-centos7" created
buildconfig "ruby-sample-build" created
deploymentconfig "frontend" created
service "database" created
deploymentconfig "database" created

2.# oc get pods
NAME                        READY     STATUS         RESTARTS   AGE
database-1-deploy           0/1       Error          0          1m
frontend-1-deploy           1/1       Running        0          1m
frontend-1-hook-pre         0/1       ErrImagePull   0          1m
ruby-sample-build-1-build   0/1       Completed      0          1m

3. # oc describe pods frontend-1-hook-pre
Name:         frontend-1-hook-pre
Namespace:    lgp
Node:         ip-172-31-76-2xx.us-east-2.compute.internal/172.31.76.2xx
Start Time:   Tue, 06 Feb 2018 10:06:39 +0800
Labels:       openshift.io/deployer-pod-for.name=frontend-1
              openshift.io/deployer-pod.type=hook-pre
Annotations:  kubernetes.io/limit-ranger=LimitRanger plugin set: cpu, memory request for container lifecycle; cpu, memory limit for container lifecycle
              openshift.io/deployment.name=frontend-1
              openshift.io/scc=restricted
Status:       Pending
IP:           
Containers:
  lifecycle:
    Container ID:  
    Image:         docker-registry.default.svc:5000/lgp/origin-ruby-sample@sha256:b2b66940052bb4005ae4bf36e07232d9002e492f507ae45008d21e04fa678318
    Image ID:      
    Port:          <none>
    Command:
      /bin/true
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:     50m
      memory:  256Mi
    Environment:
      MYSQL_DATABASE:                  root
      CUSTOM_VAR1:                     custom_value1
      MYSQL_USER:                      <set to the key 'mysql-user' in secret 'dbsecret'>      Optional: false
      MYSQL_PASSWORD:                  <set to the key 'mysql-password' in secret 'dbsecret'>  Optional: false
      OPENSHIFT_DEPLOYMENT_NAME:       frontend-1
      OPENSHIFT_DEPLOYMENT_NAMESPACE:  lgp
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sq87t (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  default-token-sq87t:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sq87t
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  type=compute
Tolerations:     <none>
Events:
  Type     Reason                 Age               From                                                  Message
  ----     ------                 ----              ----                                                  -------
  Normal   Scheduled              1m                default-scheduler                                     Successfully assigned frontend-1-hook-pre to ip-172-31-76-21x.us-east-2.compute.internal
  Normal   SuccessfulMountVolume  1m                kubelet, ip-172-31-76-21x.us-east-2.compute.internal  MountVolume.SetUp succeeded for volume "default-token-sq87t"
  Normal   Pulling                1m                kubelet, ip-172-31-76-21x.us-east-2.compute.internal  pulling image "docker-registry.default.svc:5000/lgp/origin-ruby-sample@sha256:b2b66940052bb4005ae4bf36e07232d9002e492f507ae45008d21e04fa678318"
  Warning  Failed                 1m                kubelet, ip-172-31-76-218.us-east-2.compute.internal  Failed to pull image "docker-registry.default.svc:5000/lgp/origin-ruby-sample@sha256:b2b66940052bb4005ae4bf36e07232d9002e492f507ae45008d21e04fa678318": rpc error: code = 2 desc = Get https://docker-registry.default.svc:5000/v2/: x509: certificate is valid for docker-registry-default.1b7d.free-stg.openshiftapps.com, docker-registry.default.svc.cluster.local, 172.30.44.19x, not docker-registry.default.svc
  Warning  Failed                 1m                kubelet, ip-172-31-76-21x.us-east-2.compute.internal  Error: ErrImagePull
  Normal   SandboxChanged         11s (x4 over 1m)  kubelet, ip-172-31-76-21x.us-east-2.compute.internal  Pod sandbox changed, it will be killed and re-created.


Actual results:
The deployment fails with the error message shown above.
Expected results:
The deployment succeeds.
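
For context only (the actual resolution is tracked in the duplicate, bug 1527787, which covers regenerating the registry certificate during installation): a minimal sketch of regenerating a registry serving certificate that also covers the short service name, assuming the default CA paths on a 3.7 master. The hostname list is taken from the error above; the redacted IP is kept as it appears in the report. The resulting certificate and key would then need to replace the registry's serving-certificate secret.

# oc adm ca create-server-cert \
    --signer-cert=/etc/origin/master/ca.crt \
    --signer-key=/etc/origin/master/ca.key \
    --signer-serial=/etc/origin/master/ca.serial.txt \
    --hostnames='docker-registry.default.svc,docker-registry.default.svc.cluster.local,docker-registry-default.1b7d.free-stg.openshiftapps.com,172.30.44.19x' \
    --cert=/etc/origin/master/registry.crt \
    --key=/etc/origin/master/registry.key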

Comment 2 Michal Fojtik 2018-02-06 13:08:49 UTC
/cc Ben

Comment 4 Scott Dodson 2018-02-06 13:44:54 UTC

*** This bug has been marked as a duplicate of bug 1527787 ***