Description of problem:
After upgrading OpenShift from 3.2 to 3.3.1, the Cassandra and Heapster pods fail. Output of 'oc get pods':

# oc get pods
NAME                         READY     STATUS              RESTARTS   AGE
hawkular-cassandra-1-7p8i3   0/1       CrashLoopBackOff    44         3h
hawkular-metrics-yc0o3       0/1       Running             34         3h
heapster-su6jw               0/1       CrashLoopBackOff    29         3h
metrics-deployer-mdyur       0/1       Error               0          3h
recycler-for-pv048           0/1       ContainerCreating   0          1m

Cassandra fails to start up (full logs attached):

ERROR 12:04:15 Exception encountered during startup: Unknown listen_address 'hawkular-cassandra-1-7p8i3

Version-Release number of selected component (if applicable):
OCP 3.3

# oc version
oc v3.3.1.7
kubernetes v1.3.0+52492b4
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://itsrv1528.esrv.local:8443
openshift v3.3.1.7
kubernetes v1.3.0+52492b4

# rpm -qa '*openshift*'
openshift-ansible-3.3.54-1.git.0.61a1dee.el7.noarch
openshift-ansible-lookup-plugins-3.3.54-1.git.0.61a1dee.el7.noarch
tuned-profiles-atomic-openshift-node-3.3.1.7-1.git.0.0988966.el7.x86_64
openshift-ansible-filter-plugins-3.3.54-1.git.0.61a1dee.el7.noarch
openshift-ansible-playbooks-3.3.54-1.git.0.61a1dee.el7.noarch
atomic-openshift-3.3.1.7-1.git.0.0988966.el7.x86_64
atomic-openshift-master-3.3.1.7-1.git.0.0988966.el7.x86_64
openshift-ansible-docs-3.3.54-1.git.0.61a1dee.el7.noarch
openshift-ansible-roles-3.3.54-1.git.0.61a1dee.el7.noarch
atomic-openshift-clients-3.3.1.7-1.git.0.0988966.el7.x86_64
atomic-openshift-sdn-ovs-3.3.1.7-1.git.0.0988966.el7.x86_64
openshift-ansible-callback-plugins-3.3.54-1.git.0.61a1dee.el7.noarch
atomic-openshift-node-3.3.1.7-1.git.0.0988966.el7.x86_64
atomic-openshift-utils-3.3.54-1.git.0.61a1dee.el7.noarch

How reproducible:
On the customer side

Steps to Reproduce:
1. Upgrade from 3.2 to 3.3.1

Actual results:
After the upgrade, the Cassandra and Heapster pods fail (CrashLoopBackOff).

Expected results:
The Cassandra and Heapster pods come up healthy after the upgrade.

Additional info:
> @liggit: do you know of any security-related changes a customer could make that would cause access to /etc/hosts to be denied?

Nothing I'm aware of. /etc/hosts is mounted into pods... could this be a mount propagation issue with a different docker version?
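For context on why a missing or unreadable /etc/hosts would produce exactly this error: the pod's own hostname (here the pod name, hawkular-cassandra-1-7p8i3) is normally resolvable only through the /etc/hosts entry the kubelet mounts into the container, and Cassandra refuses to start when its listen_address does not resolve. A minimal sketch of that failure mode (the file contents and IP below are hypothetical, not taken from the customer's environment):

```python
def resolve_from_hosts(hostname, hosts_text):
    """Resolve a hostname using only /etc/hosts-style lines.

    Mimics a lookup that depends on the mounted /etc/hosts: returns the
    matching IP, or None when the entry for the pod's name is missing.
    """
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        fields = line.split()
        ip, names = fields[0], fields[1:]
        if hostname in names:
            return ip
    return None


# Healthy pod: kubelet-managed /etc/hosts contains the pod's own name.
good_hosts = "127.0.0.1 localhost\n10.1.2.3 hawkular-cassandra-1-7p8i3\n"
# Broken pod: only default entries are present (e.g. mount failed).
bad_hosts = "127.0.0.1 localhost\n"

print(resolve_from_hosts("hawkular-cassandra-1-7p8i3", good_hosts))  # 10.1.2.3
print(resolve_from_hosts("hawkular-cassandra-1-7p8i3", bad_hosts))   # None
```

When the lookup returns nothing, Cassandra's startup check has no address to bind and aborts with "Unknown listen_address", matching the attached log.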
Did you ever figure out whether this was a docker version problem, as requested in https://bugzilla.redhat.com/show_bug.cgi?id=1404282#c6 ?
Closing this issue as insufficient_data. If the requested information is provided, this issue can be reopened.