Bug 1310052 - ImagePullSecrets do not work well; error "Back-off pulling image"
Status: CLOSED WONTFIX
Product: OpenShift Container Platform
Classification: Red Hat
Component: RFE
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Ben Parees
QA Contact: Johnny Liu
Depends On:
Blocks:
Reported: 2016-02-19 05:34 EST by weiwei jiang
Modified: 2016-10-30 18:54 EDT
CC List: 8 users

Doc Type: Bug Fix
Last Closed: 2016-07-19 15:04:18 EDT
Type: Bug

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Bugzilla 1292962 None None None 2016-07-11 13:51 EDT
Red Hat Bugzilla 1324213 None None None 2016-07-11 13:51 EDT

Description weiwei jiang 2016-02-19 05:34:49 EST
Description of problem:
When trying to create an app from a private image with the correct pull secret, the pod fails with "Back-off pulling image".

Cannot reproduce this on devenv-rhel7_3467.

Version-Release number of selected component (if applicable):
openshift v3.1.1.903

How reproducible:
always

Steps to Reproduce:
1. Create a dockercfg-type secret for the private Docker Hub image
oc secrets new-dockercfg hub  --docker-username=wjiang --docker-password=xxxxxx --docker-email=wjiang@redhat.com
2. Add the secret as a pull secret for serviceaccount default
oc secrets add serviceaccount/default secrets/hub -n wjiang --for=pull
3. Create a new app from the private image.
oc new-app wjiang/node:latest

Actual results:
Got "Back-off pulling image" when the deployment was generated.

Expected results:
The ImagePullSecret should work and the private image should be pulled.

Additional info:
Comment 1 Ben Parees 2016-02-19 10:25:23 EST
Can you provide a json dump of the deployment config that was created by new-app?
Comment 2 weiwei jiang 2016-02-22 02:48:21 EST
# oc get dc node -o yaml 
apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2016-02-22T07:46:50Z
  labels:
    app: node
  name: node
  namespace: wjiang
  resourceVersion: "13855"
  selfLink: /oapi/v1/namespaces/wjiang/deploymentconfigs/node
  uid: 6ef5d9ee-d938-11e5-9cf1-fa163e544e12
spec:
  replicas: 1
  selector:
    app: node
    deploymentconfig: node
  strategy:
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
      creationTimestamp: null
      labels:
        app: node
        deploymentconfig: node
    spec:
      containers:
      - image: wjiang/node:latest
        imagePullPolicy: Always
        name: node
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
  triggers:
  - type: ConfigChange
status:
  details:
    causes:
    - type: ConfigChange
  latestVersion: 1
Comment 3 Dan Mace 2016-02-22 09:00:12 EST
The deployer pods use the 'default/deployer' service account for execution, not 'default:default'. Try adding your secret to the 'default/deployer' SA and see if the secret is automatically mounted.
Comment 4 weiwei jiang 2016-02-23 02:12:35 EST
(In reply to Dan Mace from comment #3)
> The deployer pods use the 'default/deployer' service account for execution,
> not 'default:default'. Try adding your secret to the 'default/deployer' SA
> and see if the secret is automatically mounted.

I don't think so:
1. I tried the same steps on devenv-rhel7_3509 and could not reproduce this.
2. Normal users have no permission to operate in the default namespace, and private image pulling is designed for normal users.

I am guessing this may be blocked by this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1309195
Comment 5 Dan Mace 2016-02-23 09:17:22 EST
If the container image you specify in a pod spec needs a secret to pull, you must specify the secrets in pod.spec.imagePullSecrets. This is as designed.

Maybe you're asking for the automatic addition of any SecretTypeDockercfg or SecretTypeDockerConfigJson secrets to the pod spec when generating a deploymentConfig via new-app? That sounds like a feature request for new-app. The deployment system shouldn't be responsible for automatically adding pull secrets to your deploymentConfig.


In the meantime you could simply edit the deploymentConfig to add the imagePullSecrets to the pod spec.
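As a sketch of that workaround, the pod template of the DeploymentConfig from comment 2 could be edited to reference the `hub` secret (only the relevant fields are shown):

```yaml
# Sketch: add imagePullSecrets to the pod template of the DC shown in
# comment 2 (only the relevant spec.template.spec fields are shown).
spec:
  template:
    spec:
      containers:
      - image: wjiang/node:latest
        imagePullPolicy: Always
        name: node
      imagePullSecrets:
      - name: hub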
Comment 6 Ben Parees 2016-02-23 11:25:58 EST
It seems like there are two potential RFEs out of this:

1) if the pod service account has a Dockercfg or DockerConfigJson secret, it should be added as an imagePullSecret to the pod as part of admission (how are multiple secrets handled?)

2) new-app could take a --pull-secret argument which it would use to properly add a pullSecret to the generated DeploymentConfig podtemplate
Comment 7 Ben Parees 2016-02-23 11:29:21 EST
I've created https://trello.com/c/iatchS02/861-argument-to-set-secrets-in-new-app for (2).

Dan Mace, handing back to you for what, if anything, you want to do about (1).

Once you create a trello card (or reject the idea) I think this can be closed as upstream.
Comment 8 weiwei jiang 2016-04-06 03:18:17 EDT
This can now be reproduced on dev-preview-int.
Comment 9 weiwei jiang 2016-04-20 06:26:20 EDT
Checked with
# openshift version 
openshift v3.2.0.17
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

for both `oc secrets new hub .dockercfg=.docker/config.json --type=kubernetes.io/dockercfg` and `oc secrets new-dockercfg hub --docker-username=wjiang --docker-password=xxxxxx --docker-email=wjiang@redhat.com`

Neither works.
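For context, both commands above should produce a secret of the same type; a sanitized sketch of the expected object (the base64 data value is elided):

```yaml
# Sketch of the secret either command is expected to create; the data
# value is a base64-encoded registry auth map (elided here).
apiVersion: v1
kind: Secret
metadata:
  name: hub
  namespace: wjiang
type: kubernetes.io/dockercfg
data:
  .dockercfg: <base64-encoded auth map>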
Comment 10 weiwei jiang 2016-04-20 06:32:58 EDT
And pod.spec.imagePullSecrets contains the secrets.

# oc get pod node-3-cx0aa -o yaml 
<-------------snip----------->
spec:
<-------------snip----------->
  imagePullSecrets:
  - name: default-dockercfg-qpmo4
  - name: hub
<---------------snip--------------->
Comment 11 Jan Chaloupka 2016-04-20 07:09:44 EDT
weiwei, Dan Mace, what would be the resolution? From the conversation it seems to me that from this point on this is an RFE, not a bug.

Or is there something that needs to be fixed?
Comment 12 weiwei jiang 2016-04-21 21:57:50 EDT
Checked again with both the latest OSE 3.2 and dev-preview-int; I can reproduce this issue, so it should be a bug.
Comment 13 Andy Goldstein 2016-04-21 22:03:30 EDT
Could you please provide the exact reproduction steps, including any json/yaml for pods/deployment configs/etc? It would also be helpful if you could include the pull secret (feel free to sanitize the data, but I'm interested in seeing the keys). Also please make sure you run the node at log level 4, so we can see the node's logs with the info about pull secrets. Thanks!
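As a point of comparison for those keys, the payload stored under the secret's `.dockercfg` key can be reconstructed locally. This is a sketch using the placeholder credentials from the reproduction steps; the Docker Hub registry URL is an assumption:

```shell
# Build the same auth map that `oc secrets new-dockercfg` stores,
# using the placeholder credentials from the reproduction steps.
auth=$(printf 'wjiang:xxxxxx' | base64)
cat > dockercfg.json <<EOF
{"https://index.docker.io/v1/": {"username": "wjiang", "password": "xxxxxx", "email": "wjiang@redhat.com", "auth": "${auth}"}}
EOF
# The secret's data[".dockercfg"] field holds this file, base64-encoded:
base64 -w0 dockercfg.json
```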
Comment 14 weiwei jiang 2016-04-21 22:35:01 EDT
Reproduce steps:
1. oc new-project wjiang
2. oc secrets new-dockercfg hub --docker-username=wjiang --docker-password=xxxxxx --docker-email=wjiang@redhat.com
3. oc secrets add sa/default secret/hub --for=pull
4. oc new-app wjiang/node:latest --name=node

And the node log(loglevel=5):
Apr 21 22:18:44 openshift-229.lab.sjc.redhat.com atomic-openshift-node[10999]: I0421 22:18:44.503608   10999 manager.go:1784] Got container changes for pod "node-1-rcrrf_wjiang(3be2b955-0830-11e6-9a36-fa163e0784e5)": {StartInfraContainer:false InfraChanged:false InfraContainerId:bf6c6c99c2558757c62684c1394bb32e6c7b7a0ca88dd1361f322212d4262b26 ContainersToStart:map[0:Container {Name:node Image:wjiang/node@sha256:78aa9f9f8e314449cfe79b4e4fdb820208ead3bca2d010d470a7ca7d1755b917 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:8080 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-4sn2k ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:Always SecurityContext:0xc20a3527e0 Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.] ContainersToKeep:map[bf6c6c99c2558757c62684c1394bb32e6c7b7a0ca88dd1361f322212d4262b26:-1]}
Apr 21 22:18:44 openshift-229.lab.sjc.redhat.com atomic-openshift-node[10999]: I0421 22:18:44.503660   10999 manager.go:1921] Creating container &{Name:node Image:wjiang/node@sha256:78aa9f9f8e314449cfe79b4e4fdb820208ead3bca2d010d470a7ca7d1755b917 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:8080 Protocol:TCP HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-4sn2k ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:Always SecurityContext:0xc20a3527e0 Stdin:false StdinOnce:false TTY:false} in pod node-1-rcrrf_wjiang(3be2b955-0830-11e6-9a36-fa163e0784e5)
Apr 21 22:18:44 openshift-229.lab.sjc.redhat.com atomic-openshift-node[10999]: E0421 22:18:44.506409   10999 pod_workers.go:138] Error syncing pod 3be2b955-0830-11e6-9a36-fa163e0784e5, skipping: failed to "StartContainer" for "node" with ImagePullBackOff: "Back-off pulling image \"wjiang/node@sha256:78aa9f9f8e314449cfe79b4e4fdb820208ead3bca2d010d470a7ca7d1755b917\""
Apr 21 22:18:44 openshift-229.lab.sjc.redhat.com atomic-openshift-node[10999]: I0421 22:18:44.506507   10999 server.go:606] Event(api.ObjectReference{Kind:"Pod", Namespace:"wjiang", Name:"node-1-rcrrf", UID:"3be2b955-0830-11e6-9a36-fa163e0784e5", APIVersion:"v1", ResourceVersion:"81222", FieldPath:"spec.containers{node}"}): type: 'Normal' reason: 'BackOff' Back-off pulling image "wjiang/node@sha256:78aa9f9f8e314449cfe79b4e4fdb820208ead3bca2d010d470a7ca7d1755b917"
Apr 21 22:18:44 openshift-229.lab.sjc.redhat.com atomic-openshift-node[10999]: I0421 22:18:44.506577   10999 server.go:606] Event(api.ObjectReference{Kind:"Pod", Namespace:"wjiang", Name:"node-1-rcrrf", UID:"3be2b955-0830-11e6-9a36-fa163e0784e5", APIVersion:"v1", ResourceVersion:"81222", FieldPath:""}): type: 'Warning' reason: 'FailedSync' Error syncing pod, skipping: failed to "StartContainer" for "node" with ImagePullBackOff: "Back-off pulling image \"wjiang/node@sha256:78aa9f9f8e314449cfe79b4e4fdb820208ead3bca2d010d470a7ca7d1755b917\""



I can reproduce this with the same steps on dev-preview-int, but cannot reproduce it on devenv-rhel7_4008.
Comment 15 Dan Mace 2016-04-22 08:57:40 EDT
I think there's some confusion because this bug was established as being an RFE and never got re-componentized to prevent further testing pending the resolution of the new Trello cards[1]. Pull secrets aren't automatically assigned to the DC created by new-app. Is this still the same case as originally reported? Please provide the deploymentConfig YAML that results from the `new-app` command. The steps provided don't work for me in dev-preview-int[2].

My current impression is that we shouldn't be testing this behavior at the moment. I'm changing the component to RFE.

[1]https://bugzilla.redhat.com/show_bug.cgi?id=1310052#c7
[2] (error: only a partial match was found for "wjiang/node:latest": "openshift/node:latest").
Comment 19 Ben Parees 2016-07-19 15:04:18 EDT
We are not considering adding the ability to specify push/pull secrets as part of new-app invocation right now; it's not a common use case.
