Description of problem:
After we add a readinessProbe to the mongodb template, the mongodb pod cannot start up after we update the admin password using the oc env command.

Version-Release number of selected component (if applicable):
openshift v3.1.1.911
kubernetes v1.2.0-alpha.7-703-gbc4550d
etcd 2.2.5

How reproducible:

Steps to Reproduce:
1. Create a project
2. oc new-app mongodb-ephemeral --param=MONGODB_ADMIN_PASSWORD=admin
3. oc env dc/mongodb -e MONGODB_ADMIN_PASSWORD=newadmin

Actual results:
The pod cannot become ready; the readinessProbe still uses the old password "admin" to check whether mongodb is ready.

Expected results:
The pod should become ready.

Additional info:
[vagrant@ose ~]$ oc get dc -o json
{
  "kind": "List",
  "apiVersion": "v1",
  "metadata": {},
  "items": [
    {
      "kind": "DeploymentConfig",
      "apiVersion": "v1",
      "metadata": {
        "name": "mongodb",
        "namespace": "haowang",
        "selfLink": "/oapi/v1/namespaces/haowang/deploymentconfigs/mongodb",
        "uid": "11dbb56b-e501-11e5-a697-fa163ed32d7c",
        "resourceVersion": "29214",
        "creationTimestamp": "2016-03-08T07:40:46Z",
        "labels": {
          "template": "mongodb-ephemeral-template"
        },
        "annotations": {
          "openshift.io/generated-by": "OpenShiftNewApp"
        }
      },
      "spec": {
        "strategy": {
          "type": "Recreate",
          "recreateParams": {
            "timeoutSeconds": 600
          },
          "resources": {}
        },
        "triggers": [
          {
            "type": "ImageChange",
            "imageChangeParams": {
              "containerNames": [
                "mongodb"
              ],
              "from": {
                "kind": "ImageStreamTag",
                "namespace": "openshift",
                "name": "mongodb:latest"
              },
              "lastTriggeredImage": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhscl/mongodb-26-rhel7:latest"
            }
          },
          {
            "type": "ConfigChange"
          }
        ],
        "replicas": 1,
        "test": false,
        "selector": {
          "name": "mongodb"
        },
        "template": {
          "metadata": {
            "creationTimestamp": null,
            "labels": {
              "name": "mongodb"
            },
            "annotations": {
              "openshift.io/generated-by": "OpenShiftNewApp"
            }
          },
          "spec": {
            "volumes": [
              {
                "name": "mongodb-data",
                "emptyDir": {}
              }
            ],
            "containers": [
              {
                "name": "mongodb",
                "image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhscl/mongodb-26-rhel7:latest",
                "ports": [
                  {
                    "containerPort": 27017,
                    "protocol": "TCP"
                  }
                ],
                "env": [
                  {
                    "name": "MONGODB_USER",
                    "value": "userVFK"
                  },
                  {
                    "name": "MONGODB_PASSWORD",
                    "value": "auVDR0M7gUKjHIoP"
                  },
                  {
                    "name": "MONGODB_DATABASE",
                    "value": "sampledb"
                  },
                  {
                    "name": "MONGODB_ADMIN_PASSWORD",
                    "value": "newadmin"
                  }
                ],
                "resources": {
                  "limits": {
                    "memory": "512Mi"
                  }
                },
                "volumeMounts": [
                  {
                    "name": "mongodb-data",
                    "mountPath": "/var/lib/mongodb/data"
                  }
                ],
                "livenessProbe": {
                  "tcpSocket": {
                    "port": 27017
                  },
                  "initialDelaySeconds": 30,
                  "timeoutSeconds": 1,
                  "periodSeconds": 10,
                  "successThreshold": 1,
                  "failureThreshold": 3
                },
                "readinessProbe": {
                  "exec": {
                    "command": [
                      "/bin/sh",
                      "-i",
                      "-c",
                      "mongostat --host 127.0.0.1 -u admin -p admin -n 1 --noheaders"
                    ]
                  },
                  "initialDelaySeconds": 3,
                  "timeoutSeconds": 1,
                  "periodSeconds": 10,
                  "successThreshold": 1,
                  "failureThreshold": 3
                },
                "terminationMessagePath": "/dev/termination-log",
                "imagePullPolicy": "IfNotPresent",
                "securityContext": {
                  "capabilities": {},
                  "privileged": false
                }
              }
            ],
            "restartPolicy": "Always",
            "terminationGracePeriodSeconds": 30,
            "dnsPolicy": "ClusterFirst",
            "securityContext": {}
          }
        }
      },
      "status": {
        "latestVersion": 2,
        "details": {
          "causes": [
            {
              "type": "ConfigChange"
            }
          ]
        }
      }
    }
  ]
}
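Note the readinessProbe command above contains the literal string `-p admin`: the template parameter was substituted into the probe command when the app was created, so a later `oc env` update changes only the container env, not the probe string. A minimal shell sketch of that failure mode (variable and string names are illustrative, not taken from the template source):

```shell
# At template-processing time the parameter value is expanded into the
# probe command string, freezing the literal old password there:
MONGODB_ADMIN_PASSWORD=admin
probe="mongostat --host 127.0.0.1 -u admin -p ${MONGODB_ADMIN_PASSWORD} -n 1 --noheaders"

# Later, `oc env dc/mongodb -e MONGODB_ADMIN_PASSWORD=newadmin` changes the
# environment value, but the already-expanded probe string is unchanged:
MONGODB_ADMIN_PASSWORD=newadmin
echo "$probe"
# -> mongostat --host 127.0.0.1 -u admin -p admin -n 1 --noheaders
```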
I think we should write a script to do the liveness/readiness check, using the environment variables inside the container, not the values defined in the dc.
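A minimal sketch of that idea (assuming MONGODB_ADMIN_PASSWORD is exported in the container's environment): keep the variable reference unexpanded in the probe command, so /bin/sh resolves it from the container env each time the probe runs:

```shell
# The single quotes keep $MONGODB_ADMIN_PASSWORD from being substituted
# ahead of time; /bin/sh expands it at probe execution, picking up the
# current in-container value:
export MONGODB_ADMIN_PASSWORD=newadmin
sh -c 'echo mongostat --host 127.0.0.1 -u admin -p "$MONGODB_ADMIN_PASSWORD" -n 1 --noheaders'
# -> mongostat --host 127.0.0.1 -u admin -p newadmin -n 1 --noheaders
```

(`echo` stands in for actually invoking mongostat, to show which password the command would receive.)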
Commit pushed to master at https://github.com/openshift/origin
https://github.com/openshift/origin/commit/b3980c05ef4e74cea6973dd5a7ec00beb721a54b
Bug 1315595: Use in-container env vars for liveness/readiness probes
Verified, the templates work well, but Brenton needs to sync the templates to openshift-ansible.
Hi Scott, please help update the templates. Thanks.
Updated content for the installer. I've updated both the v1.1 and v1.2 content; I assume these changes are backwards compatible. If they're not, can someone please speak up? https://github.com/openshift/openshift-ansible/pull/1451
Installer updates have been merged.
Hi Michal, I am not sure whether the templates are backwards compatible; could you please have a look?
Verified with:
[root@openshift-106 ~]# openshift version
openshift v3.2.0.4
kubernetes v1.2.0-origin-41-g91d3e75
etcd 2.2.5
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2016:1064