Bug 1270187 - [platformmanagement_public_518]The deployment hook can't inherit the volume
[platformmanagement_public_518]The deployment hook can't inherit the volume
Status: CLOSED CURRENTRELEASE
Product: OpenShift Origin
Classification: Red Hat
Component: Deployments
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Dan Mace
zhou ying
Depends On:
Blocks:
 
Reported: 2015-10-09 04:52 EDT by zhou ying
Modified: 2015-11-23 16:16 EST
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-23 16:16:37 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description zhou ying 2015-10-09 04:52:19 EDT
Description of problem:
When a volume is specified in the deploymentconfig and the pre-hook declares that it uses that volume, checking the pre-hook pod shows the volume cannot be found.

Version-Release number of selected component (if applicable):
devenv_fedora_2444
oc v1.0.6-328-gdf1f19e
kubernetes v1.1.0-alpha.1-653-g86b4e77

How reproducible:
Always

Steps to Reproduce:
1. Create a dc whose hook specifies a hostPath volume;
2. Check the pre-hook pod:
   `oc get pod/hooks-2-prehook -o yaml`

Actual results:
The hostPath volume cannot be found on the pre-hook pod, but after the deployment the application pod does inherit the hostPath volume:
oc get pod hooks-1-prehook -o yaml
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-vqgs8
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ip-172-18-8-165
  imagePullSecrets:
  - name: default-dockercfg-xe5k0
  nodeName: ip-172-18-8-165
  restartPolicy: OnFailure
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-vqgs8
    secret:
      secretName: default-token-vqgs8
oc get pod hooks-1-yn0q2 -o yaml
    volumeMounts:
    - mountPath: /opt1h
      name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-vqgs8
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ip-172-18-8-165
  imagePullSecrets:
  - name: default-dockercfg-xe5k0
  nodeName: ip-172-18-8-165
  restartPolicy: Always
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - hostPath:
      path: /usr
    name: data
  - name: default-token-vqgs8
    secret:
      secretName: default-token-vqgs8

Expected results:
The hook pod should inherit the volume defined in the dc, just as the application pod does.

Additional info:
Other volume types (PVC, emptyDir) also cannot be inherited by the hook pod.
The JSON file used to create the dc:
{
    "kind": "DeploymentConfig",
    "apiVersion": "v1",
    "metadata": {
        "name": "hooks",
        "creationTimestamp": null,
        "labels": {
            "name": "mysql"
        }
    },
    "spec": {
        "strategy": {
            "type": "Recreate",
            "recreateParams": {
                "pre": {
                    "failurePolicy": "Retry",
                    "execNewPod": {
                        "command": [
                             "/bin/bash",
                             "-c",
                             "/usr/bin/sleep 200"
                        ],
                        "env": [
                            {
                                "name": "VAR",
                                "value": "pre-deployment"
                            }
                        ],
                        "volumes": ["data"],
                        "containerName": "mysql-55-centos7"
                    }
                },
                "post": {
                    "failurePolicy": "Ignore",
                    "execNewPod": {
                        "command": [
                            "/bin/false"
                        ],
                        "env": [
                            {
                                "name": "VAR",
                                "value": "post-deployment"
                            }
                        ],
                        "containerName": "mysql-55-centos7"
                    }
                }
            },
            "resources": {}
        },
        "triggers": [
            {
                "type": "ConfigChange"
            }
        ],
        "replicas": 1,
        "selector": {
            "name": "mysql"
        },
        "template": {
            "metadata": {
                "creationTimestamp": null,
                "labels": {
                    "name": "mysql"
                }
            },
            "spec": {
                "volumes": [
                    {
                        "name": "data",
                        "hostPath": {
                            "path": "/usr"
                        }
                    }
                ],
                "containers": [
                    {
                        "name": "mysql-55-centos7",
                        "image": "openshift/mysql-55-centos7:latest",
                        "ports": [
                            {
                                "containerPort": 3306,
                                "protocol": "TCP"
                            }
                        ],
                        "env": [
                            {
                                "name": "MYSQL_USER",
                                "value": "user8Y2"
                            },
                            {
                                "name": "MYSQL_PASSWORD",
                                "value": "Plqe5Wev"
                            },
                            {
                                "name": "MYSQL_DATABASE",
                                "value": "root"
                            }
                        ],
                        "resources": {},
                        "volumeMounts": [
                            {
                                "name": "data",
                                "mountPath": "/opt1h"
                            }
                        ],
                        "terminationMessagePath": "/dev/termination-log",
                        "imagePullPolicy": "Always",
                        "securityContext": {
                            "capabilities": {},
                            "privileged": false
                        }
                    }
                ],
                "restartPolicy": "Always",
                "dnsPolicy": "ClusterFirst"
            }
        }
    },
    "status": {}
}
Comment 1 Dan Mace 2015-10-13 14:46:41 EDT
Using the JSON you pasted, the deployment hook pod on my machine successfully inherited the "data" volume from the template. Can you please paste your deployer pod logs? I did have to enable host mounted volumes generally before creating my namespace, otherwise hook pod creation fails because the host volume isn't permitted.

I'm a little confused about your first example, which shows the "data" volume being inherited by the "hooks-1-prehook" pod.
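For reference, the "enable host mounted volumes" step mentioned above is controlled by the security context constraints. A minimal sketch of the relevant setting, assuming the OpenShift 3.x SCC field `allowHostDirVolumePlugin` (this snippet is illustrative, not taken from the reporter's cluster):

```yaml
# Illustrative SCC excerpt: hostPath volumes are admitted only when
# allowHostDirVolumePlugin is true on the SCC that admits the pod.
apiVersion: v1
kind: SecurityContextConstraints
metadata:
  name: restricted
allowHostDirVolumePlugin: true
```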
Comment 2 zhou ying 2015-10-20 02:13:36 EDT
pod Info:
http://pastebin.test.redhat.com/321038


[root@ip-172-18-10-105 amd64]# oc logs -f hooks-1-deploy
I1020 06:08:14.162627       1 deployer.go:195] Deploying zhouy/hooks-1 for the first time (replicas: 1)
I1020 06:08:14.172572       1 lifecycle.go:79] Created lifecycle pod hooks-1-prehook for deployment zhouy/hooks-1
I1020 06:08:14.172619       1 lifecycle.go:86] Waiting for hook pod zhouy/hooks-1-prehook to complete
I1020 06:10:04.558334       1 recreate.go:95] Pre hook finished
I1020 06:10:04.558595       1 recreate.go:126] Scaling zhouy/hooks-1 to 1
I1020 06:10:06.675422       1 lifecycle.go:79] Created lifecycle pod hooks-1-posthook for deployment zhouy/hooks-1
I1020 06:10:06.675470       1 lifecycle.go:86] Waiting for hook pod zhouy/hooks-1-posthook to complete
I1020 06:10:16.940384       1 lifecycle.go:49] Hook failed, ignoring: 
I1020 06:10:16.940410       1 recreate.go:140] Post hook finished
I1020 06:10:16.940419       1 recreate.go:144] Deployment hooks-1 successfully made active
Comment 3 Dan Mace 2015-10-21 09:31:20 EDT
Can you report what version of the openshift/origin-deployer docker image is being used for these tests? We need to verify that it's a recent one containing the following commit:

https://github.com/openshift/origin/commit/b4011ff730247b6e8182d9f530a0c0df546bcd5c
Comment 4 zhou ying 2015-10-21 22:34:43 EDT
On latest instance:devenv_rhel7_2515
[root@ip-172-18-0-45 amd64]# oc version
oc v1.0.6-823-g23eaf25
kubernetes v1.2.0-alpha.1-1107-g4c8e6f4

[root@ip-172-18-0-45 amd64]# docker images|grep deployer
openshift/origin-deployer                    23eaf25             d5bdec61a73b        3 hours ago         446.2 MB
openshift/origin-deployer                    latest              d5bdec61a73b        3 hours ago         446.2 MB
docker.io/openshift/origin-deployer          v1.0.6              3738f952ffbe        5 weeks ago         421.1 MB
Comment 5 Dan Mace 2015-10-23 13:22:30 EDT
I need to know which version of the deployer image is being used in the test server. I see which images you have available (1.0.6 might predate the volume support). Only your origin server log will say which version of the image is actually in use. There should be an entry like:


I1023 13:22:11.554540   12964 start_master.go:393] Using images from "openshift/origin-<component>:latest"
Comment 6 zhou ying 2015-10-25 23:32:57 EDT
When OpenShift is started with the parameter '--latest-images=true', the deployer logs:
 Using images from "openshift/origin-<component>:latest"

Otherwise, it logs:
Using images from "openshift/origin-<component>:v1.0.6"


So I'll retest this with the latest images.
Comment 7 zhou ying 2015-10-26 01:57:42 EDT
When using the latest images, I always hit this error:
[root@ip-172-18-5-12 amd64]# oc get pods
NAME             READY     STATUS    RESTARTS   AGE
hooks-1-deploy   0/1       Error     0          2m
[root@ip-172-18-5-12 amd64]# oc logs hooks-1-deploy
F1026 05:51:38.787390       1 deployer.go:65] couldn't get deployment zhouy/hooks-1: Get https://172.18.5.12:8443/api/v1/namespaces/zhouy/replicationcontrollers/hooks-1: dial tcp 172.18.5.12:8443: no route to host
Comment 8 zhou ying 2015-10-26 02:31:48 EDT
When using the latest deployer image, I can see the volume info, but the volumeMounts entry cannot be found.
[root@ip-172-18-5-51 amd64]# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
hooks-1-deploy    1/1       Running   0          6s
hooks-1-prehook   1/1       Running   0          5s
[root@ip-172-18-5-51 amd64]# oc get pod hooks-1-prehook -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/deployment.name: hooks-1
    openshift.io/scc: super-zhouy
  creationTimestamp: 2015-10-26T06:28:47Z
  labels:
    openshift.io/deployer-pod-for.name: hooks-1
  name: hooks-1-prehook
  namespace: zhouy
  resourceVersion: "385"
  selfLink: /api/v1/namespaces/zhouy/pods/hooks-1-prehook
  uid: d0a1225b-7baa-11e5-a949-0e20fb73c14f
spec:
  activeDeadlineSeconds: 21600
  containers:
  - command:
    - /bin/bash
    - -c
    - /usr/bin/sleep 200
    env:
    - name: MYSQL_USER
      value: user8Y2
    - name: MYSQL_PASSWORD
      value: Plqe5Wev
    - name: MYSQL_DATABASE
      value: root
    - name: VAR
      value: pre-deployment
    - name: OPENSHIFT_DEPLOYMENT_NAME
      value: hooks-1
    - name: OPENSHIFT_DEPLOYMENT_NAMESPACE
      value: zhouy
    image: openshift/mysql-55-centos7:latest
    imagePullPolicy: Always
    name: lifecycle
    resources: {}
    securityContext:
      privileged: false
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-1orov
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ip-172-18-5-51.ec2.internal
  imagePullSecrets:
  - name: default-dockercfg-3pher
  nodeName: ip-172-18-5-51.ec2.internal
  restartPolicy: OnFailure
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - hostPath:
      path: /usr
    name: data
  - name: default-token-1orov
    secret:
      secretName: default-token-1orov
Comment 9 Dan Mace 2015-10-26 10:05:49 EDT
Using the latest deployer image, your pod spec has the correct volume, which is the correct behavior. The container volumeMounts are outside the scope of the deployment system and are handled by the kubelet/kubernetes (I believe).

As to your connectivity issues, I can't say without a lot more information. Are you using a multi-node cluster? It might be better to use IRC or email to diagnose the overall test cluster setup. Can you verify the behavior with a simple all-in-one setup?
Comment 10 zhou ying 2015-10-27 01:01:41 EDT
Regarding the connectivity issue: I tested on an all-in-one instance, but I can't reproduce it today. I'll keep an eye on it in subsequent testing.
Comment 11 Paul Weil 2015-10-29 09:15:15 EDT
It looks like the original issue is resolved, based on Dan's comments. If there is a mounting issue, we should get the storage or upstream team involved so we can close this out.

Zhou/Dan, is that accurate?
Comment 12 Dan Mace 2015-10-29 09:24:09 EDT
Paul, that's my perception of the current state of affairs. There's an email chain regarding this that I've CC'd Mark Turansky on to investigate the storage side.
Comment 13 Mark Turansky 2015-11-03 09:19:40 EST
What is the "mounting issue" referred to?  I don't see any mounting error in this comment thread.

"The container volumeMounts are outside the scope of the deployment system and are handled by the kubelet/kubernetes"

This is not correct. The pod spec must contain the volumeMounts.
Comment 14 Mark Turansky 2015-11-03 10:39:56 EST
Clarification: podSpec.Container has volumeMounts.
Comment 15 Dan Mace 2015-11-03 10:46:24 EST
Thanks for the clarification, Mark. It looks like we have a gap in the deployments implementation. We copy the Volumes from the target container for the hook pod, but not the container's VolumeMounts.
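Based on Dan's description, the missing piece can be sketched in Go. The types below are simplified stand-ins for the Kubernetes API structs, and `copyHookVolumeMounts` is a hypothetical helper, not the actual origin deployer code: it keeps only those mounts from the target container whose volume names the hook's "volumes" list requests, which is what the hook pod builder would need to do in addition to copying the pod template's Volumes.

```go
package main

import "fmt"

// Minimal stand-ins for the relevant Kubernetes API types.
type VolumeMount struct {
	Name      string
	MountPath string
}

type Container struct {
	Name         string
	VolumeMounts []VolumeMount
}

// copyHookVolumeMounts returns the target container's mounts whose volume
// name appears in the hook's requested volume list, mirroring the copy the
// deployer would make when building the hook pod's container spec.
func copyHookVolumeMounts(target Container, hookVolumes []string) []VolumeMount {
	allowed := map[string]bool{}
	for _, v := range hookVolumes {
		allowed[v] = true
	}
	var mounts []VolumeMount
	for _, m := range target.VolumeMounts {
		if allowed[m.Name] {
			mounts = append(mounts, m)
		}
	}
	return mounts
}

func main() {
	target := Container{
		Name: "mysql-55-centos7",
		VolumeMounts: []VolumeMount{
			{Name: "data", MountPath: "/opt1h"},
			{Name: "other", MountPath: "/tmp/other"},
		},
	}
	// Only the "data" mount survives, matching execNewPod.volumes: ["data"].
	fmt.Println(copyHookVolumeMounts(target, []string{"data"}))
}
```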
Comment 17 zhou ying 2015-11-05 00:53:48 EST
Confirmed; the bug has been fixed.
openshift v1.0.7-287-g60781e3
kubernetes v1.2.0-alpha.1-1107-g4c8e6f4
etcd 2.1.2

[root@ip-172-18-3-247 amd64]# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
hooks-1-deploy    1/1       Running   0          3s
hooks-1-prehook   1/1       Running   0          2s
[root@ip-172-18-3-247 amd64]# oc rsh hooks-1-prehook
bash-4.2$ ls
bin  etc   lib	  lost+found  mnt  opt1h  root	sbin  sys  usr
dev  home  lib64  media       opt  proc   run	srv   tmp  var
bash-4.2$ ls /opt1h
github.com  golang.org


[root@ip-172-18-3-247 amd64]# oc get pod hooks-1-prehook -o yaml
    volumeMounts:
    - mountPath: /opt1h
      name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-8fn0o
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ip-172-18-3-247.ec2.internal
  imagePullSecrets:
  - name: default-dockercfg-t2zpk
  nodeName: ip-172-18-3-247.ec2.internal
  restartPolicy: OnFailure
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - hostPath:
      path: /data/src
    name: data
  - name: default-token-8fn0o
    secret:
      secretName: default-token-8fn0o
