Created attachment 1220798 [details] migrate_log
You should not be using MODE=migrate for this; that mode was only to add project UUIDs to indices. MODE=upgrade handles the logic to move to the current index pattern.

In your upgrade pod logs I see that the migration to the new pattern was skipped. Were there any log entries in Elasticsearch prior to upgrading?

Can you rerun this test and do the following:

1) Install with 3.2.0

2) Check that data was populated with the old index (note: operations will not be migrated) in ES logs/Kibana -- the host path may not reflect this due to how we 'migrate'.

3) Upgrade to 3.4

4) Observe that this is NOT seen in the upgrade pod: "No matching indexes found - skipping update_for_common_data_model"

5) Verify in Kibana
(In reply to ewolinet from comment #3)
> You should not be using MODE=migrate for this, that mode was only to add
> project UUIDs to indices. MODE=upgrade handles the logic to move to the
> current index pattern.
>
> In your upgrade pod logs I see that the migration to the new pattern was
> skipped, were there any log entries in Elasticsearch prior to upgrading?
>
> Can you rerun this test but do the following:
> 1) Install with 3.2.0
>
> 2) Check that data was populated with old index (note, operations will not
> be migrated) in ES logs/Kibana -- host path may not reflect this due to how
> we 'migrate'.
>
> 3) Upgrade to 3.4
>
> 4) Observe in upgrade pod that this isn't seen "No matching indexes found -
> skipping update_for_common_data_model"
>
> 5) Verify in Kibana

Yes, scenarios 1)-5) above are exactly what I did. Today I double-checked on the env in comment #2, and the issue reproduced.

The bug title was changed to reflect that the real problem is in upgrade mode instead of migrate mode; thanks for the info.
I also tested the scenario where a user project already exists at the 3.2.0 level; the upgrade pod failed with this error:

$ oc get po
NAME                          READY     STATUS             RESTARTS   AGE
logging-curator-1-lea6r       0/1       Error              3          13m
logging-deployer-6olir        0/1       Completed          0          51m
logging-deployer-nzdn6        0/1       Error              0          15m
logging-es-h26vke78-7-91rlq   0/1       CrashLoopBackOff   7          13m
logging-fluentd-p75ai         1/1       Running            0          13m

Unable to find log message from cluster.service from pod logging-es-h26vke78-7-91rlq within 300 seconds
++ cluster_service='oc logs logging-es-h26vke78-7-91rlq | grep '\''\[cluster\.service[[:space:]]*\]'\'' not found within 300 seconds'
++ echo 'Unable to find log message from cluster.service from pod logging-es-h26vke78-7-91rlq within 300 seconds'

And the index migration can't be performed in the ES pod:

java.lang.IllegalStateException: unable to upgrade the mappings for the index [user-project.2d886d3f-abcb-11e6-aeff-fa163e8aa368.2016.11.16], reason: [Field name [kubernetes_labels_openshift.io/build.name] cannot contain '.']
        at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:308)
        at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:116)
        at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:228)
        at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:87)

Detailed upgrade logs and ES pod logs are attached; the test env is in comment #2:
upgrade_log_when_user_project_exist
es_log_unable_to_upgrade_mapping
Created attachment 1221054 [details] es_log_unable_to_upgrade_mapping
Created attachment 1221055 [details] upgrade_log_when_user_project_exist
12122014 buildContainer (noarch) completed successfully

koji_builds:
  https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=524995
repositories:
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:rhaos-3.4-rhel-7-docker-candidate-20161117165122
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.4.0-10
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.4.0
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:latest
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:v3.4
Tested with these latest logging images on brew, with tag 3.4.0:

openshift3/logging-elasticsearch   6716a0ad8b2b
openshift3/logging-deployer        acad3da7b4ad
openshift3/logging-fluentd         2cb15a5ae51e
openshift3/logging-auth-proxy      ec334b0c2669
openshift3/logging-kibana          7fc9916eea4d
openshift3/logging-curator         9af78fc06248

Encountered this line in the upgrade pod log:

No matching indexes found - skipping update_for_common_data_model

And after the upgrade, the old-format index for user-project still exists, so the index migration did not actually complete successfully even though the upgrade pod ended successfully:

$ oc get po
NAME                          READY     STATUS      RESTARTS   AGE
logging-curator-1-bt71k       1/1       Running     0          15m
logging-deployer-jj0lr        0/1       Completed   0          18m
logging-deployer-s9zg2        0/1       Completed   0          1h
logging-es-5n2klra5-6-dc4ax   1/1       Running     0          15m
logging-fluentd-z1hbt         1/1       Running     0          15m
logging-kibana-2-nhq4g        2/2       Running     0          15m

$ oc exec logging-curator-1-bt71k -- curator --host logging-es --use_ssl --certificate /etc/curator/keys/ca --client-cert /etc/curator/keys/cert --client-key /etc/curator/keys/key --loglevel INFO show indices --all-indices
2016-11-18 05:37:08,071 INFO      Job starting: show indices
2016-11-18 05:37:08,072 INFO      Attempting to verify SSL certificate.
2016-11-18 05:37:08,265 INFO      Matching all indices. Ignoring flags other than --exclude.
2016-11-18 05:37:08,265 INFO      Action show will be performed on the following indices: [u'.kibana', u'.kibana.91938315022b77cf223d212e426080092f1aafcf', u'.operations.2016.11.18', u'.searchguard.logging-es-5n2klra5-3-falzn', u'.searchguard.logging-es-5n2klra5-6-dc4ax', u'install-test.3434eaef-ac62-11e6-8d0a-fa163e89df0c.2016.11.18', u'logging.2b664cc6-ad70-11e6-8d0a-fa163e89df0c.2016.11.18', u'project.install-test.3434eaef-ac62-11e6-8d0a-fa163e89df0c.2016.11.18', u'project.logging.2b664cc6-ad70-11e6-8d0a-fa163e89df0c.2016.11.18', u'user-project.c162eb1c-ad75-11e6-8d0a-fa163e89df0c.2016.11.18']
2016-11-18 05:37:08,266 INFO      Matching indices:
.kibana
.kibana.91938315022b77cf223d212e426080092f1aafcf
.operations.2016.11.18
.searchguard.logging-es-5n2klra5-3-falzn
.searchguard.logging-es-5n2klra5-6-dc4ax
install-test.3434eaef-ac62-11e6-8d0a-fa163e89df0c.2016.11.18
logging.2b664cc6-ad70-11e6-8d0a-fa163e89df0c.2016.11.18
project.install-test.3434eaef-ac62-11e6-8d0a-fa163e89df0c.2016.11.18
project.logging.2b664cc6-ad70-11e6-8d0a-fa163e89df0c.2016.11.18
user-project.c162eb1c-ad75-11e6-8d0a-fa163e89df0c.2016.11.18

Also, the 3.2.0-level log entries for user-project that were saved in the hostPath PV disappeared after upgrading; I get "No results found" when searching for them on the 3.4.0 Kibana UI (the 3.2.0-level Kibana was able to show them).

New upgrade pod logs were attached: upgrade_log_20161118
Created attachment 1221813 [details] upgrade_log_20161118
Xia,

Do we still see the same error in Elasticsearch where it is unable to upgrade the mappings for user indices?

This is independent of the upgrade mode index migration.
This is the way it should work:

1) Old indexes are not touched - you should still be able to view them with curl at ES, or with Kibana. For example, if you had 3.3 indices for projects "foo" and "bar", and the ".operations" index, you should still be able to query "foo.*", "bar.*", and ".operations.*" after upgrading to 3.4.

2) Once you upgrade, new logs for projects "foo" and "bar" will be in indices matching "project.foo.*" and "project.bar.*". There is no change for ".operations.*".

3) The upgrade creates an alias for older projects. For example, it will create an alias which allows searches for "project.foo.*" to return data from new "project.foo.*" indices as well as older "foo.*" indices. This lets you view both old and new data using the single "project.foo.*" index pattern.

Is this what you see?
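The alias step 3) describes can be sketched as a plain request against the Elasticsearch _aliases API. The project name and UUID below are hypothetical placeholders; the real deployer derives them from the _cat/indices listing and authenticates with the admin certs:

```shell
# Build the _aliases request body mapping an old-format index pattern
# ("<proj>.<uuid>.*") to the new "project." alias. "foo" and the UUID
# are made-up stand-ins.
proj=foo
uuid=0000aaaa-1111-2222-3333-444455556666
body="{\"actions\":[{\"add\":{\"index\":\"${proj}.${uuid}.*\",\"alias\":\"project.${proj}.${uuid}.*\"}}]}"
echo "$body"

# Against a live cluster this would be POSTed with the admin certs, e.g.:
# curl -s --cacert admin-ca.crt --cert admin-cert.crt --key admin-key.key \
#      -XPOST -d "$body" https://logging-es:9200/_aliases
```

Because the alias name contains a wildcard pattern that Kibana treats as its index pattern, one Kibana search over "project.foo.*" covers both the aliased old "foo.*" indices and the new ones.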
(In reply to ewolinet from comment #15)
> Xia,
>
> Do we still see the same error in Elasticsearch where it is unable to
> upgrade the mappings for user indices?
>
> This is independent of the upgrade mode index migration.

The issue about being unable to upgrade mappings was fixed. The remaining problem is that I see this line in the upgrade log (my apologies, I should have emphasized this when I originally mentioned it in comment #12):

No matching indexes found - skipping update_for_common_data_model
Created attachment 1222336 [details] No result found for 3.2.0 level index post upgrade to 3.4.0
(In reply to Rich Megginson from comment #16)
> This is the way it should work:
>
> 1) old indexes are not touched - you should still be able to view them with
> curl at ES, or with kibana
> For example, if you had in 3.3 indices for projects "foo" and "bar", and the
> ".operations", you should still be able to query "foo.*" and "bar.*" and
> ".operations.*" after upgrading to 3.4.
>
> 2) once you upgrade, new logs for projects "foo" and "bar" will be in
> indices matching "project.foo.*" and "project.bar.*". There is no change
> for ".operations.*".
>
> 3) upgrade creates an alias for older projects. For example, it will create
> an alias which will allow searches for "project.foo.*" to return data from
> new "project.foo.*" indices as well as older "foo.*" indices. This allows
> you to view both old and new data using the single "project.foo.*" index
> pattern.
>
> Is this what you see?

1) I'm upgrading from 3.2.0, not 3.3.1, to 3.4.0.

2) After the upgrade, I can see the alias for the older 3.2.0 projects when curling ES:

# oc exec logging-es-5n2klra5-6-r4m8m -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/*user-project*/_search | python -mjson.tool | more
{
    "_shards": {
        "failed": 0,
        "successful": 9,
        "total": 9
    },
    "hits": {
        "hits": [
            {
                "_id": "AVh4tP8wPIdnhYKj-5VB",
                "_index": "project.user-project.c162eb1c-ad75-11e6-8d0a-fa163e89df0c.2016.11.18",
                "_score": 1.0,
                "_source": {
...
}

3) After the upgrade, the data I get in step 2) is not present on the Kibana UI. Screenshot attached.

4) From the screenshot, we can also see that all the older 3.2.0 indices are still in the old index format.

5) Because of 3) and 4), Kibana dropped log entries for the older 3.2.0 indices after the upgrade, which sounds like an issue.
Xia,

I see in your screenshot that the time range to retrieve data is only the last 15 minutes (the Kibana default). Can you confirm that changing that time range allows you to see your old log records?
Thank you for the reminder, Eric. After changing the time range in Kibana, the 3.2.0-level log entries are now shown in the 3.4.0-level Kibana. My apologies for not noticing this previously.

Could you please help double-confirm whether this line is expected in the upgrade log? If yes, please feel free to transfer back to ON_QA for closure. Thanks!

No matching indexes found - skipping update_for_common_data_model
I found some problems with the upgrade common data model script -
https://github.com/openshift/origin-aggregated-logging/pull/289

After upgrading to 3.4, but before running Kibana, do this to confirm that you can view both old and new indices:

oc exec logging-es-46228ioa-3-zu174 -- curl --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key -XGET 'https://localhost:9200/_cat/indices'
Commit pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/1fef1bc7c9ac81d9ca3b341c399b139710a3681a
Bug 1395170 - Logging upgrade mode: kibana can't present log entries for the older 3.2.0 indices after upgrade
https://bugzilla.redhat.com/show_bug.cgi?id=1395170

Fix some bugs in the upgrade common data model script.
To ssh://rmeggins.redhat.com/rpms/logging-deployment-docker
   fc5ffc6..14ee75a  rhaos-3.4-rhel-7 -> rhaos-3.4-rhel-7

koji_builds:
  https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=525838
repositories:
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:rhaos-3.4-rhel-7-docker-candidate-20161122193239
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4.0.28-5
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4.0.28
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:latest
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:3.4.0
(In reply to Rich Megginson from comment #22)
> I found some problems with the upgrade common data model script -
> https://github.com/openshift/origin-aggregated-logging/pull/289
>
> After upgrading to 3.4, but before running kibana, do this to confirm that
> you can view both old and new indices:
>
> oc exec logging-es-46228ioa-3-zu174 -- curl --cacert
> /etc/elasticsearch/secret/admin-ca --cert
> /etc/elasticsearch/secret/admin-cert --key
> /etc/elasticsearch/secret/admin-key -XGET
> 'https://localhost:9200/_cat/indices'

Here is the output, but please note that it was actually gathered after running the Kibana UI:

$ oc exec logging-es-iai5xdha-6-bhbno -- curl --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key -XGET 'https://localhost:9200/_cat/indices'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
green  open project.install-test.fb3cae5f-b150-11e6-ac02-fa163e5c6618.2016.11.23 1 0    180 0 284.5kb 284.5kb
green  open .kibana                                                              1 0      1 0   3.1kb   3.1kb
green  open project.logging.c7c08a9d-b15e-11e6-ac02-fa163e5c6618.2016.11.23      1 0   2794 0 847.6kb 847.6kb
green  open .searchguard.logging-es-iai5xdha-6-bhbno                             1 0      4 0  34.1kb  34.1kb
yellow open .kibana.f7724d98466ed7391e970202dc54a6460046aadb                     5 1      8 1  38.8kb  38.8kb
yellow open .searchguard.logging-es-iai5xdha-3-2m9t9                             5 1      1 1   6.7kb   6.7kb
yellow open install-test.fb3cae5f-b150-11e6-ac02-fa163e5c6618.2016.11.23         5 1    242 0 123.2kb 123.2kb
yellow open logging.c7c08a9d-b15e-11e6-ac02-fa163e5c6618.2016.11.23              5 1    141 0   118kb   118kb
yellow open user-project.90c7c38f-b160-11e6-ac02-fa163e5c6618.2016.11.23         5 1     42 0  51.5kb  51.5kb
yellow open .operations.2016.11.23                                               5 1 101901 0    51mb    51mb
100  1110  100  1110    0     0   8664      0 --:--:-- --:--:-- --:--:--  8671
(In reply to Rich Megginson from comment #22)

The upgrade completed successfully, and I can see the 3.2.0 log entries in Kibana post upgrade. But I still see these lines in my upgrade log (attaching the log with the name Upgrade_log_Nov23):

+ update_for_common_data_model
++ oc get pods -l component=es -o 'jsonpath={.items[?(@.status.phase == "Running")].metadata.name}'
+ [[ -z logging-es-iai5xdha-6-bhbno ]]
++ get_list_of_proj_uuid_indices
++ wc -l
++ curl -s --cacert /etc/deploy/scratch/admin-ca.crt --key /etc/deploy/scratch/admin-key.key --cert /etc/deploy/scratch/admin-cert.crt https://logging-es:9200/_cat/indices
++ awk -v 'daterx=[.]20[0-9]{2}[.][0-1]?[0-9][.][0-9]{1,2}$' '$3 !~ "^[.]" && $3 !~ "^project." && $3 ~ daterx {print gensub(daterx, "", 1, $3)}'
++ sort -u
No matching indexes found - skipping update_for_common_data_model
+ count=0
+ '[' 0 -eq 0 ']'
+ echo No matching indexes found - skipping update_for_common_data_model
+ return 0
+ upgrade_notify
+ set +x

Assigning it back to double-check with dev whether this is expected. Even from the perspective of an end user I didn't see any impact, and my test actually passed. Please feel free to transfer back for closure after confirmation, thanks.
Created attachment 1223132 [details] Upgrade_log_Nov23
+ echo No matching indexes found - skipping update_for_common_data_model

This means we still have a bug. I do not understand what's wrong. If I take the output of the script and run it manually, it works. I wonder if it is some LANG setting, or a different version of awk?
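For reference, the index-name extraction the script performs can be exercised standalone against a fabricated _cat/indices sample (the index names below are made up). Note that the awk-version question is on point in one respect: the script's gensub() only exists in GNU awk, so this sketch substitutes the POSIX sub() and spells out the {2}/{1,2} intervals for portability:

```shell
# Fabricated _cat/indices lines: health status index pri rep docs del size pri.size
cat > /tmp/indices.sample <<'EOF'
green open .operations.2016.11.23 5 1 100 0 1mb 1mb
green open project.logging.aaaa-1111.2016.11.23 1 0 10 0 1kb 1kb
yellow open user-project.bbbb-2222.2016.11.23 5 1 42 0 51kb 51kb
EOF

# Same filter as the deployer: skip dot-indices and already-migrated
# "project." indices, then strip the trailing ".YYYY.MM.DD" date suffix.
result=$(awk -v 'daterx=[.]20[0-9][0-9][.][0-1]?[0-9][.][0-9][0-9]?$' \
    '$3 !~ "^[.]" && $3 !~ "^project." && $3 ~ daterx {sub(daterx, "", $3); print $3}' \
    /tmp/indices.sample | sort -u)
echo "$result"
```

On this sample only the old-format user-project index survives the filter, which is exactly the set of "<proj>.<uuid>" prefixes the alias-creation step needs.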
(In reply to Rich Megginson from comment #28)
> + echo No matching indexes found - skipping update_for_common_data_model
>
> This means we still have a bug. I do not understand what's wrong. If I
> take the output of the script and run it manually, it works. I wonder if it
> is some LANG setting? Different version of awk?

Hi Rich,

FYI, here are the locale and awk version on my working machine (a Fedora 22 desktop) where I used the oc client to do the upgrade:

$ awk --version
GNU Awk 4.1.1, API: 1.1
Copyright (C) 1989, 1991-2014 Free Software Foundation.

$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=zh_CN.UTF-8
LC_TIME=zh_CN.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=zh_CN.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=zh_CN.UTF-8
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT=zh_CN.UTF-8
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

Please let me know if there is anything else I can assist with.

Thanks,
Xia
I am able to reproduce. I think the problem is that we cannot use the admin cert/key if it does not yet exist, and curl will silently fail :-(
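The "silently fail" part can be sketched in two pieces: curl exits non-zero when its --cert file is missing, but since the script pipes curl into awk/sort, the pipeline's status is that of the last command and the failure is swallowed; checking the files up front (or enabling pipefail) makes it loud. Paths below use a temp dir as a stand-in for the deployer's /etc/deploy/scratch:

```shell
# Hypothetical scratch dir standing in for /etc/deploy/scratch in the deployer pod
scratch=$(mktemp -d)
CA=$scratch/admin-ca.crt
CERT=$scratch/admin-cert.crt
KEY=$scratch/admin-key.key

# 1) Guard: detect missing cert/key files before ever calling curl.
certs_ok=true
for f in "$CA" "$CERT" "$KEY"; do
    [ -f "$f" ] || certs_ok=false   # the fix regenerates the certs at this point
done
echo "certs_ok=$certs_ok"

# 2) Why the failure was silent: a pipeline's exit status is the LAST command's.
false | wc -l > /dev/null            # stand-in for: curl -s ... | awk ... | sort -u
status_without_pipefail=$?
set -o pipefail
false | wc -l > /dev/null
status_with_pipefail=$?
set +o pipefail
echo "without=$status_without_pipefail with=$status_with_pipefail"
rmdir "$scratch"
```

This matches the behavior seen in the upgrade log: with the certs absent, curl produced no index listing, the count came out 0, and the script concluded "No matching indexes found".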
So how is it possible that you had the admin cert/key? That is, before doing

> 5. Migrate index by re-running deployer with MODE=migrate:

did the following command return a value?

oc get secrets -o 'jsonpath={.items[?(@.data.admin-cert)].metadata.name}'

If so, how is this possible? The upgrade script assumes that if it is not present, uuid_migrate needs to be run:

function getDeploymentVersion() {
  # base this on what isn't installed
  # Check for the admin cert
  if [[ -z "$(oc get secrets -o jsonpath='{.items[?(@.data.admin-cert)].metadata.name}')" ]]; then
    echo 0
    return
  fi
...

function upgrade_logging() {
  installedVersion=$(getDeploymentVersion)
  # VERSIONS
  # 0 -- initial EFK
  # 1 -- add admin cert
...
  for version in $(seq $installedVersion $LOGGING_VERSION); do
    case "${version}" in
      0)
        migrate=true
        ;;
...
  if [[ $installedVersion -ne $LOGGING_VERSION ]]; then
    if [[ -n "$migrate" ]]; then
      uuid_migrate
    fi

It is the uuid_migrate function that creates the admin cert and sets the cert/key files needed to use curl later. How is it possible that the admin cert existed? I can fix the upgrade script, but I want to know how this happened in the first place.
Created attachment 1226102 [details] 3.2.0 deployer pod log
(In reply to Rich Megginson from comment #31)
> So how is it possible that you had the admin cert/key? That is, before doing
>
> > 5. Migrate index by re-running deployer with MODE=migrate:
>
> Did the following command return a value?
>
> oc get secrets -o 'jsonpath={.items[?(@.data.admin-cert)].metadata.name}'

Here is the output on the 3.2.0-level logging deployment (deployed with "$ oc secrets new logging-deployer nothing=/dev/null"):

$ oc get secrets -o 'jsonpath={.items[?(@.data.admin-cert)].metadata.name}'
logging-elasticsearch
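Putting comment #31 and this output together: the 3.2.0 install had already populated admin-cert in the logging-elasticsearch secret, so getDeploymentVersion never returns 0, the "0" case never sets migrate, and uuid_migrate (which writes the cert/key files curl needs) is skipped. A toy sketch of that decision path, with the secret name hard-coded to mimic this environment:

```shell
# Stand-in for: oc get secrets -o 'jsonpath={.items[?(@.data.admin-cert)].metadata.name}'
# In this 3.2.0 environment it returned "logging-elasticsearch" (non-empty).
admin_cert_secret="logging-elasticsearch"

if [ -z "$admin_cert_secret" ]; then
    installedVersion=0    # no admin cert yet: version 0, uuid_migrate would run
else
    installedVersion=1    # admin cert present: version detection skips step 0
fi

migrate=""
[ "$installedVersion" -eq 0 ] && migrate=true

if [ -n "$migrate" ]; then
    echo "uuid_migrate runs: admin cert/key files get written for curl"
else
    echo "uuid_migrate skipped: curl later fails for lack of cert/key files"
fi
```

This is why the eventual fix (comment below on commit 6acba6a) recreates the cert/key files on demand instead of relying on uuid_migrate having run.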
submitted PR upstream: https://github.com/openshift/origin-aggregated-logging/pull/296
Commit pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/6acba6a8f45f198ac8b27fa0ff2056e51757a17b
Bug 1395170 - Logging upgrade mode: Upgrade pod log states "No matching indexes found - skipping update_for_common_data_model"
https://bugzilla.redhat.com/show_bug.cgi?id=1395170

If the admin-cert exists, upgrade will skip the uuid_migrate step which sets up the cert/key needed to use curl for the common data model upgrade code. The fix is to call those shell functions as needed if the variables and files do not exist.

This also allows test-upgrade.sh to be run standalone, outside of the context of logging.sh, and tests specifically for the existing admin-cert case by skipping the removeAdminCert step in the test.

Also changes the tests so that they clean up old indices created for testing.
Verified with the latest images on the ops registry; the issue has been fixed:

openshift3/logging-deployer        c74b066ec917
openshift3/logging-fluentd         7b11a29c82c1
openshift3/logging-elasticsearch   6716a0ad8b2b
openshift3/logging-auth-proxy      ec334b0c2669
openshift3/logging-kibana          7fc9916eea4d
openshift3/logging-curator         9af78fc06248

# openshift version
openshift v3.4.0.32+d349492
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Test result:

1. This line no longer exists in the upgrade pod log:

No matching indexes found - skipping update_for_common_data_model

2. Log snip about "update_for_common_data_model":

+ initialize_es_vars
+ OPS_PROJECTS=("default" "openshift" "openshift-infra" "kube-system")
+ CA=/etc/deploy/scratch/admin-ca.crt
+ KEY=/etc/deploy/scratch/admin-key.key
+ CERT=/etc/deploy/scratch/admin-cert.crt
+ PROJ_PREFIX=project.
+ es_host=logging-es
+ es_port=9200
+ '[' '!' -f /etc/deploy/scratch/admin-cert.crt ']'
+ recreate_admin_certs
++ echo hxB
++ grep x
+ usingx=hxB
+ [[ -n hxB ]]
+ set +x
+ PROJ_PREFIX=project.
+ CA=/etc/deploy/scratch/admin-ca.crt
+ KEY=/etc/deploy/scratch/admin-key.key
+ CERT=/etc/deploy/scratch/admin-cert.crt
+ es_host=logging-es
+ es_port=9200
+ update_for_common_data_model
++ oc get pods -l component=es -o 'jsonpath={.items[?(@.status.phase == "Running")].metadata.name}'
+ [[ -z logging-es-6ktwdglf-6-b3h3d ]]
++ wc -l
++ get_list_of_proj_uuid_indices
++ set -o pipefail
++ awk -v 'daterx=[.]20[0-9]{2}[.][0-1]?[0-9][.][0-9]{1,2}$' '$3 !~ "^[.]" && $3 !~ "^project." && $3 ~ daterx {print gensub(daterx, "", 1, $3)}'
++ curl -s --cacert /etc/deploy/scratch/admin-ca.crt --key /etc/deploy/scratch/admin-key.key --cert /etc/deploy/scratch/admin-cert.crt https://logging-es:9200/_cat/indices
++ sort -u
++ rc=0
++ set +o pipefail
++ return 0
+ count=3
+ '[' 3 -eq 0 ']'
+ echo Creating aliases for 3 index patterns . . .
Creating aliases for 3 index patterns . . .
+ curl -s --cacert /etc/deploy/scratch/admin-ca.crt --key /etc/deploy/scratch/admin-key.key --cert /etc/deploy/scratch/admin-cert.crt -XPOST -d @- https://logging-es:9200/_aliases
+ echo '{"actions":['
+ IFS=.
+ read proj uuid rest
+ get_list_of_proj_uuid_indices
+ set -o pipefail
+ curl -s --cacert /etc/deploy/scratch/admin-ca.crt --key /etc/deploy/scratch/admin-key.key --cert /etc/deploy/scratch/admin-cert.crt https://logging-es:9200/_cat/indices
+ sort -u
+ awk -v 'daterx=[.]20[0-9]{2}[.][0-1]?[0-9][.][0-9]{1,2}$' '$3 !~ "^[.]" && $3 !~ "^project." && $3 ~ daterx {print gensub(daterx, "", 1, $3)}'
+ echo '{"add":{"index":"install-test.58e09465-b842-11e6-9f05-42010af00006.*","alias":"project.install-test.58e09465-b842-11e6-9f05-42010af00006.*"}}'
+ comma=,
+ IFS=.
+ read proj uuid rest
+ echo ',{"add":{"index":"logging.308efdac-b865-11e6-9f05-42010af00006.*","alias":"project.logging.308efdac-b865-11e6-9f05-42010af00006.*"}}'
+ comma=,
+ IFS=.
+ read proj uuid rest
+ echo ',{"add":{"index":"logging.cd3c3086-b86c-11e6-9f05-42010af00006.*","alias":"project.logging.cd3c3086-b86c-11e6-9f05-42010af00006.*"}}'
+ comma=,
+ IFS=.
+ read proj uuid rest
+ rc=0
+ set +o pipefail
+ return 0
+ echo ']}'
+ upgrade_notify
+ set +x
{"acknowledged":true}

=================================

3. The Kibana UI is able to present the 3.2.0-level old log entries (saved in the PV) with the old-format index pattern on its UI.
This bug was fixed in the latest OCP 3.4.0, which has already been released.