Bug 1664497
| Summary: | ES pod isn't upgraded when the image tag changed in CLO env vars. | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qiaoling Tang <qitang> |
| Component: | Logging | Assignee: | ewolinet |
| Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.1.0 | CC: | aos-bugs, ewolinet, jcantril, qitang, rmeggins |
| Target Milestone: | --- | | |
| Target Release: | 4.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | undefined | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-04 10:41:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Qiaoling Tang, 2019-01-09 02:19:46 UTC)
I was able to recreate this. To resolve it, I needed to remove the `paused: True` field from the Deployment to cause it to roll out the pod with the updated image value.

The ES pod isn't upgraded; in addition, the image tag in the ES deployment doesn't change after changing the CLO env vars, although the ES image tag in the `elasticsearch` CR does change.

```
$ oc exec cluster-logging-operator-5666d54945-hc99z env | grep IMAGE
ELASTICSEARCH_IMAGE=docker.io/openshift/origin-logging-elasticsearch5:v4.0
FLUENTD_IMAGE=docker.io/openshift/origin-logging-fluentd:v4.0
KIBANA_IMAGE=docker.io/openshift/origin-logging-kibana5:v4.0
CURATOR_IMAGE=docker.io/openshift/origin-logging-curator5:v4.0
OAUTH_PROXY_IMAGE=docker.io/openshift/oauth-proxy:v1.1.0
RSYSLOG_IMAGE=docker.io/viaq/rsyslog:8.38.0

$ oc get pod elasticsearch-clientdatamaster-0-1-84d764899d-mjvrt -o yaml | grep image:
    image: docker.io/openshift/origin-logging-elasticsearch5:latest
    image: docker.io/openshift/origin-logging-elasticsearch5:latest

$ oc get deploy elasticsearch-clientdatamaster-0-1 -o yaml | grep image
    image: docker.io/openshift/origin-logging-elasticsearch5:latest
    imagePullPolicy: IfNotPresent

$ oc get elasticsearch -o yaml | grep image
    image: docker.io/openshift/origin-logging-elasticsearch5:v4.0

$ oc get pod
NAME                                                  READY   STATUS    RESTARTS   AGE
cluster-logging-operator-5666d54945-hc99z             1/1     Running   0          3m
elasticsearch-clientdatamaster-0-1-84d764899d-mjvrt   1/1     Running   0          8m
elasticsearch-operator-86599f8849-pvpj5               1/1     Running   0          9m
fluentd-5d8nx                                         1/1     Running   0          2m
fluentd-7658g                                         1/1     Running   0          2m
fluentd-8jq5b                                         1/1     Running   0          2m
fluentd-cwmnq                                         1/1     Running   0          2m
fluentd-ndzch                                         1/1     Running   0          2m
fluentd-ng7n6                                         1/1     Running   0          2m
kibana-797c89966d-7xq6b                               2/2     Running   0          2m

$ oc logs elasticsearch-operator-86599f8849-pvpj5
time="2019-01-22T03:04:16Z" level=info msg="Go Version: go1.10.3"
time="2019-01-22T03:04:16Z" level=info msg="Go OS/Arch: linux/amd64"
time="2019-01-22T03:04:16Z" level=info msg="operator-sdk Version: 0.0.7"
time="2019-01-22T03:04:16Z" level=info msg="Metrics service elasticsearch-operator created"
time="2019-01-22T03:04:16Z" level=info msg="Watching logging.openshift.io/v1alpha1, Elasticsearch, openshift-logging, 5000000000"
time="2019-01-22T03:04:48Z" level=info msg="Constructing new resource elasticsearch-clientdatamaster-0-1"
time="2019-01-22T03:04:53Z" level=info msg="Updating node resource to be paused again elasticsearch-clientdatamaster-0-1"
time="2019-01-22T03:10:45Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:10:49Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:10:54Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:10:58Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:11:03Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:11:08Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:11:12Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:11:17Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:11:21Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:11:26Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:11:31Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
----snip----
time="2019-01-22T03:24:19Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
time="2019-01-22T03:24:23Z" level=warning msg="Cluster Rolling Restart requested but cluster isn't ready."
```
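The failure mode reported above, a Deployment left with `spec.paused` set so that pod-template updates never roll out, can be modeled without a cluster. The sketch below is plain Python with illustrative names, not the operator's actual code: while the paused flag is present, a reconcile pass leaves the running pod on its old image; removing the flag lets the new image roll out.

```python
# Minimal model of Deployment rollout behavior with spec.paused set.
# All names here are illustrative; this is not operator or Kubernetes code.

def reconcile(deployment: dict, running_pod_image: str) -> str:
    """Return the image the pod runs after one reconcile pass."""
    if deployment.get("paused"):
        # A paused Deployment records the new template but creates no
        # new ReplicaSet, so the running pod keeps its old image.
        return running_pod_image
    # Unpaused: the updated template rolls out.
    return deployment["template_image"]

deploy = {"paused": True,
          "template_image": "docker.io/openshift/origin-logging-elasticsearch5:v4.0"}
pod_image = "docker.io/openshift/origin-logging-elasticsearch5:latest"

print(reconcile(deploy, pod_image))   # still :latest, matching the report

deploy.pop("paused")                  # the fix: remove the paused field
print(reconcile(deploy, pod_image))   # now :v4.0 rolls out
```

This mirrors the observed state: the `elasticsearch` CR already carried the `:v4.0` tag while the paused Deployment and its pod stayed on `:latest`.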
```
$ oc get pod elasticsearch-operator-86599f8849-pvpj5 -o yaml | grep image
    image: openshift/origin-elasticsearch-operator:latest
    imagePullPolicy: IfNotPresent
  imagePullSecrets:
    image: docker.io/openshift/origin-elasticsearch-operator:latest
    imageID: docker.io/openshift/origin-elasticsearch-operator@sha256:28138a39f8b3db638fc44eff0b43713cfa24f1e0373f1fc7858dd3deae7a53fa
```

It was SingleRedundancy. I tried setting the redundancy policy to ZeroRedundancy with nodeCount=1 and to FullRedundancy with nodeCount=3; all of the tests passed. Thanks for your correction. Moving this bug to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
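For the redundancy settings exercised during verification: in the Elasticsearch CR, `redundancyPolicy` determines the per-index replica count relative to `nodeCount`. The sketch below encodes that mapping for the three policies mentioned in this report, based on the documented policy semantics; confirm the exact behavior against your operator version.

```python
# Hedged sketch: mapping from redundancyPolicy to Elasticsearch index
# replica counts, per the documented semantics of the three policies
# exercised in this bug report. Not the operator's actual code.

def replicas_for(policy: str, node_count: int) -> int:
    if policy == "ZeroRedundancy":
        return 0                 # primaries only; data lost if a node fails
    if policy == "SingleRedundancy":
        return 1                 # one replica per primary shard
    if policy == "FullRedundancy":
        return node_count - 1    # every node holds a copy of every shard
    raise ValueError(f"unknown policy: {policy}")

# The two configurations verified in this report:
print(replicas_for("ZeroRedundancy", 1))   # 0
print(replicas_for("FullRedundancy", 3))   # 2
```

Note that SingleRedundancy with a single node leaves replica shards unassignable (the cluster stays yellow), which is consistent with the repeated "cluster isn't ready" warnings in the operator log above.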