Bug 1395170 - Logging upgrade mode: Upgrade pod log states "No matching indexes found - skipping update_for_common_data_model"
Summary: Logging upgrade mode: Upgrade pod log states "No matching indexes found - skipping update_for_common_data_model"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Rich Megginson
QA Contact: Xia Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-15 10:35 UTC by Xia Zhao
Modified: 2017-03-08 18:43 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
This was a pre-release issue, no doc update required.
Clone Of:
Environment:
Last Closed: 2017-02-16 21:02:58 UTC
Target Upstream Version:
Embargoed:


Attachments
migrate_log (14.59 KB, text/plain), 2016-11-15 10:35 UTC, Xia Zhao
es_log_unable_to_upgrade_mapping (18.41 KB, text/plain), 2016-11-16 07:48 UTC, Xia Zhao
upgrade_log_when_user_project_exist (179.94 KB, text/plain), 2016-11-16 07:48 UTC, Xia Zhao
upgrade_log_20161118 (135.30 KB, text/plain), 2016-11-18 10:50 UTC, Xia Zhao
No result found for 3.2.0 level index post upgrade to 3.4.0 (124.66 KB, image/png), 2016-11-21 09:10 UTC, Xia Zhao
Upgrade_log_Nov23 (131.44 KB, text/plain), 2016-11-23 10:24 UTC, Xia Zhao
3.2.0 deployer pod log (22.81 KB, text/plain), 2016-11-30 03:30 UTC, Xia Zhao

Comment 1 Xia Zhao 2016-11-15 10:35:40 UTC
Created attachment 1220798 [details]
migrate_log

Comment 3 ewolinet 2016-11-15 14:05:42 UTC
You should not be using MODE=migrate for this, that mode was only to add project UUIDs to indices. MODE=upgrade handles the logic to move to the current index pattern.

In your upgrade pod logs I see that the migration to the new pattern was skipped, were there any log entries in Elasticsearch prior to upgrading?

Can you rerun this test but do the following:
1) Install with 3.2.0

2) Check that data was populated with old index (note, operations will not be migrated) in ES logs/Kibana -- host path may not reflect this due to how we 'migrate'.

3) Upgrade to 3.4

4) Observe in upgrade pod that this isn't seen "No matching indexes found - skipping update_for_common_data_model"

5) Verify in Kibana
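
For step 2), one way to confirm that old-format data is present directly in ES is a query like the following (a sketch only; the ES pod name and project name are placeholders, and the cert paths are the ones used elsewhere in this bug, so adjust them if the 3.2.0 image differs):

oc exec <es-pod> -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key 'https://logging-es:9200/<project>.*/_count'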

Comment 4 Xia Zhao 2016-11-16 06:32:53 UTC
(In reply to ewolinet from comment #3)
> You should not be using MODE=migrate for this, that mode was only to add
> project UUIDs to indices. MODE=upgrade handles the logic to move to the
> current index pattern.
> 
> In your upgrade pod logs I see that the migration to the new pattern was
> skipped, were there any log entries in Elasticsearch prior to upgrading?
> 
> Can you rerun this test but do the following:
> 1) Install with 3.2.0
> 
> 2) Check that data was populated with old index (note, operations will not
> be migrated) in ES logs/Kibana -- host path may not reflect this due to how
> we 'migrate'.
> 
> 3) Upgrade to 3.4
> 
> 4) Observe in upgrade pod that this isn't seen "No matching indexes found -
> skipping update_for_common_data_model"
> 
> 5) Verify in Kibana

Yes, the above steps 1)-5) are exactly what I did. Today I double-checked on the env in comment #2, and the issue reproduced. The bug title was changed to reflect the real problem being in upgrade mode instead of migrate mode; thanks for the info.

Comment 5 Xia Zhao 2016-11-16 07:46:46 UTC
I also tested the scenario where a user project exists at the 3.2.0 level; the upgrade pod failed with this error:
]$ oc get po
NAME                          READY     STATUS             RESTARTS   AGE
logging-curator-1-lea6r       0/1       Error              3          13m
logging-deployer-6olir        0/1       Completed          0          51m

logging-deployer-nzdn6        0/1       Error              0          15m
logging-es-h26vke78-7-91rlq   0/1       CrashLoopBackOff   7          13m
logging-fluentd-p75ai         1/1       Running            0          13m

Unable to find log message from cluster.service from pod logging-es-h26vke78-7-91rlq within 300 seconds
++ cluster_service='oc logs logging-es-h26vke78-7-91rlq | grep '\''\[cluster\.service[[:space:]]*\]'\'' not found within 300 seconds'
++ echo 'Unable to find log message from cluster.service from pod logging-es-h26vke78-7-91rlq within 300 seconds'

And index migration could not be performed in the ES pod:
java.lang.IllegalStateException: unable to upgrade the mappings for the index [user-project.2d886d3f-abcb-11e6-aeff-fa163e8aa368.2016.11.16], reason: [Field name [kubernetes_labels_openshift.io/build.name] cannot contain '.']
	at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkMappingsCompatibility(MetaDataIndexUpgradeService.java:308)
	at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:116)
	at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:228)
	at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:87)

Detailed upgrade logs and ES pod logs are attached; the test env is in comment #2:
upgrade_log_when_user_project_exist
es_log_unable_to_upgrade_mapping

Comment 6 Xia Zhao 2016-11-16 07:48:27 UTC
Created attachment 1221054 [details]
es_log_unable_to_upgrade_mapping

Comment 7 Xia Zhao 2016-11-16 07:48:57 UTC
Created attachment 1221055 [details]
upgrade_log_when_user_project_exist

Comment 9 Rich Megginson 2016-11-16 18:07:40 UTC
test of github linking

Comment 11 ewolinet 2016-11-17 22:04:06 UTC
12122014 buildContainer (noarch) completed successfully
koji_builds:
  https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=524995
repositories:
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:rhaos-3.4-rhel-7-docker-candidate-20161117165122
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.4.0-10
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.4.0
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:latest
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:v3.4

Comment 12 Xia Zhao 2016-11-18 10:50:27 UTC
Tested with these latest logging images on brew, with tag 3.4.0:
openshift3/logging-elasticsearch    6716a0ad8b2b
openshift3/logging-deployer    acad3da7b4ad
openshift3/logging-fluentd    2cb15a5ae51e
openshift3/logging-auth-proxy    ec334b0c2669
openshift3/logging-kibana    7fc9916eea4d
openshift3/logging-curator    9af78fc06248


Encountered this line in upgrade pod log:
No matching indexes found - skipping update_for_common_data_model

And after the upgrade, the old-format index for user-project still exists, so the index migration did not actually complete successfully even though the upgrade pod ended successfully:

$ oc get po
NAME                          READY     STATUS      RESTARTS   AGE
logging-curator-1-bt71k       1/1       Running     0          15m
logging-deployer-jj0lr        0/1       Completed   0          18m
logging-deployer-s9zg2        0/1       Completed   0          1h
logging-es-5n2klra5-6-dc4ax   1/1       Running     0          15m
logging-fluentd-z1hbt         1/1       Running     0          15m
logging-kibana-2-nhq4g        2/2       Running     0          15m

$ oc  exec logging-curator-1-bt71k -- curator --host logging-es --use_ssl  --certificate /etc/curator/keys/ca --client-cert /etc/curator/keys/cert  --client-key /etc/curator/keys/key --loglevel INFO show indices  --all-indices
2016-11-18 05:37:08,071 INFO      Job starting: show indices
2016-11-18 05:37:08,072 INFO      Attempting to verify SSL certificate.
2016-11-18 05:37:08,265 INFO      Matching all indices. Ignoring flags other than --exclude.
2016-11-18 05:37:08,265 INFO      Action show will be performed on the following indices: [u'.kibana', u'.kibana.91938315022b77cf223d212e426080092f1aafcf', u'.operations.2016.11.18', u'.searchguard.logging-es-5n2klra5-3-falzn', u'.searchguard.logging-es-5n2klra5-6-dc4ax', u'install-test.3434eaef-ac62-11e6-8d0a-fa163e89df0c.2016.11.18', u'logging.2b664cc6-ad70-11e6-8d0a-fa163e89df0c.2016.11.18', u'project.install-test.3434eaef-ac62-11e6-8d0a-fa163e89df0c.2016.11.18', u'project.logging.2b664cc6-ad70-11e6-8d0a-fa163e89df0c.2016.11.18', u'user-project.c162eb1c-ad75-11e6-8d0a-fa163e89df0c.2016.11.18']
2016-11-18 05:37:08,266 INFO      Matching indices:
.kibana
.kibana.91938315022b77cf223d212e426080092f1aafcf
.operations.2016.11.18
.searchguard.logging-es-5n2klra5-3-falzn
.searchguard.logging-es-5n2klra5-6-dc4ax
install-test.3434eaef-ac62-11e6-8d0a-fa163e89df0c.2016.11.18
logging.2b664cc6-ad70-11e6-8d0a-fa163e89df0c.2016.11.18
project.install-test.3434eaef-ac62-11e6-8d0a-fa163e89df0c.2016.11.18
project.logging.2b664cc6-ad70-11e6-8d0a-fa163e89df0c.2016.11.18
user-project.c162eb1c-ad75-11e6-8d0a-fa163e89df0c.2016.11.18

Also, the 3.2.0-level log entries for user-project that were saved in the hostPath PV disappeared after upgrading; I get "No results found" when searching for them on the 3.4.0 Kibana UI. (The 3.2.0-level Kibana was able to show them.)

New upgrade pod logs are attached: upgrade_log_20161118

Comment 13 Xia Zhao 2016-11-18 10:50:58 UTC
Created attachment 1221813 [details]
upgrade_log_20161118

Comment 15 ewolinet 2016-11-18 15:00:02 UTC
Xia,

Do we still see the same error in Elasticsearch where it is unable to upgrade the mappings for user indices?

This is independent of the upgrade mode index migration.

Comment 16 Rich Megginson 2016-11-18 16:58:21 UTC
This is the way it should work:

1) old indexes are not touched - you should still be able to view them with curl at ES, or with kibana
For example, if you had in 3.3 indices for projects "foo" and "bar", and the ".operations", you should still be able to query "foo.*" and "bar.*" and ".operations.*" after upgrading to 3.4.

2) once you upgrade, new logs for projects "foo" and "bar" will be in indices matching "project.foo.*" and "project.bar.*".  There is no change for ".operations.*".

3) upgrade creates an alias for older projects.  For example, it will create an alias which will allow searches for "project.foo.*" to return data from new "project.foo.*" indices as well as older "foo.*" indices.  This allows you to view both old and new data using the single "project.foo.*" index pattern.
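
For illustration, the alias is roughly equivalent to a manual _aliases call like this (the project name "foo", the <uuid> placeholder, and the cert/key variables are illustrative, not taken from this environment):

curl -s --cacert "$CA" --cert "$CERT" --key "$KEY" -XPOST https://logging-es:9200/_aliases \
  -d '{"actions":[{"add":{"index":"foo.<uuid>.*","alias":"project.foo.<uuid>.*"}}]}'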

Is this what you see?

Comment 17 Xia Zhao 2016-11-21 09:00:11 UTC
(In reply to ewolinet from comment #15)
> Xia,
> 
> Do we still see the same error in Elasticsearch where it is unable to
> upgrade the mappings for user indices?
> 
> This is independent of the upgrade mode index migration.

The issue about being unable to upgrade mappings was fixed; the remaining problem is that I still see this line in the upgrade log (my apologies, I should have emphasized this when I originally mentioned it in comment #12):
No matching indexes found - skipping update_for_common_data_model

Comment 18 Xia Zhao 2016-11-21 09:10:04 UTC
Created attachment 1222336 [details]
No result found for 3.2.0 level index post upgrade to 3.4.0

Comment 19 Xia Zhao 2016-11-21 09:26:32 UTC
(In reply to Rich Megginson from comment #16)
> This is the way it should work:
> 
> 1) old indexes are not touched - you should still be able to view them with
> curl at ES, or with kibana
> For example, if you had in 3.3 indices for projects "foo" and "bar", and the
> ".operations", you should still be able to query "foo.*" and "bar.*" and
> ".operations.*" after upgrading to 3.4.
> 
> 2) once you upgrade, new logs for projects "foo" and "bar" will be in
> indices matching "project.foo.*" and "project.bar.*".  There is no change
> for ".operations.*".
> 
> 3) upgrade creates an alias for older projects.  For example, it will create
> an alias which will allow searches for "project.foo.*" to return data from
> new "project.foo.*" indices as well as older "foo.*" indices.  This allows
> you to view both old and new data using the single "project.foo.*" index
> pattern.
> 
> Is this what you see?

1) I'm upgrading to 3.4.0 from 3.2.0, not from 3.3.1.

2) After the upgrade, I can see the alias for the older 3.2.0 projects when querying ES with curl:
# oc exec logging-es-5n2klra5-6-r4m8m -- curl -s -k --cert  /etc/elasticsearch/secret/admin-cert --key  /etc/elasticsearch/secret/admin-key https://logging-es:9200/*user-project*/_search | python -mjson.tool | more
{
    "_shards": {
        "failed": 0,
        "successful": 9,
        "total": 9
    },
    "hits": {
        "hits": [
            {
                "_id": "AVh4tP8wPIdnhYKj-5VB",
                "_index": "project.user-project.c162eb1c-ad75-11e6-8d0a-fa163e89df0c.2016.11.18",
                "_score": 1.0,
                "_source": {
...
}

3) After the upgrade, the data I get in step 2) is not present on the Kibana UI. The screenshot is attached.

4) From the screenshot, we can also see that all the older 3.2.0 indices are still in the old index format.

5) Because of 3) and 4), Kibana dropped log entries for the older 3.2.0 indices after the upgrade, which sounds like an issue.

Comment 20 ewolinet 2016-11-21 14:45:02 UTC
Xia,

I see in your screenshot that the time range to retrieve data is only for the last 15 minutes (Kibana default). Can you confirm that changing that time allows you to see your old log records?

Comment 21 Xia Zhao 2016-11-22 00:43:49 UTC
Thank you for the reminder, Eric. After changing the time range in Kibana, the 3.2.0-level log entries are now shown in the 3.4.0-level Kibana. My apologies for not noticing this previously.

Could you please help confirm whether this line is expected in the upgrade log? If yes, please feel free to move this back to ON_QA for closure. Thanks!

No matching indexes found - skipping update_for_common_data_model

Comment 22 Rich Megginson 2016-11-22 17:37:34 UTC
I found some problems with the upgrade common data model script - https://github.com/openshift/origin-aggregated-logging/pull/289


After upgrading to 3.4, but before running kibana, do this to confirm that you can view both old and new indices:

oc exec logging-es-46228ioa-3-zu174 -- curl --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key -XGET 'https://localhost:9200/_cat/indices'
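
For reference, the aliases the upgrade creates can be listed the same way; something like the following (the pod name is a placeholder, and _cat/aliases is a standard Elasticsearch endpoint):

oc exec <es-pod> -- curl --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key -XGET 'https://localhost:9200/_cat/aliases'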

Comment 23 openshift-github-bot 2016-11-22 23:20:57 UTC
Commit pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/1fef1bc7c9ac81d9ca3b341c399b139710a3681a
Bug 1395170 - Logging upgrade mode: kibana can't present log entries for the older 3.2.0 indices after upgrade

https://bugzilla.redhat.com/show_bug.cgi?id=1395170
Fix some bugs in the upgrade common data model script.

Comment 24 Rich Megginson 2016-11-23 00:46:16 UTC
To ssh://rmeggins.redhat.com/rpms/logging-deployment-docker
   fc5ffc6..14ee75a  rhaos-3.4-rhel-7 -> rhaos-3.4-rhel-7

koji_builds:
  https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=525838
repositories:
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:rhaos-3.4-rhel-7-docker-candidate-20161122193239
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4.0.28-5
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4.0.28
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:latest
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:3.4.0

Comment 25 Xia Zhao 2016-11-23 10:10:34 UTC
(In reply to Rich Megginson from comment #22)
> I found some problems with the upgrade common data model script -
> https://github.com/openshift/origin-aggregated-logging/pull/289
> 
> 
> After upgrading to 3.4, but before running kibana, do this to confirm that
> you can view both old and new indices:
> 
> oc exec logging-es-46228ioa-3-zu174 -- curl --cacert
> /etc/elasticsearch/secret/admin-ca --cert
> /etc/elasticsearch/secret/admin-cert --key
> /etc/elasticsearch/secret/admin-key -XGET
> 'https://localhost:9200/_cat/indices'

Here is the output, but please note that this was actually captured after running the Kibana UI:

$ oc exec logging-es-iai5xdha-6-bhbno -- curl --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key -XGET 'https://localhost:9200/_cat/indices'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
green  open project.install-test.fb3cae5f-b150-11e6-ac02-fa163e5c6618.2016.11.23 1 0    180 0 284.5kb 284.5kb 
green  open .kibana                                                              1 0      1 0   3.1kb   3.1kb 
green  open project.logging.c7c08a9d-b15e-11e6-ac02-fa163e5c6618.2016.11.23      1 0   2794 0 847.6kb 847.6kb 
green  open .searchguard.logging-es-iai5xdha-6-bhbno                             1 0      4 0  34.1kb  34.1kb 
yellow open .kibana.f7724d98466ed7391e970202dc54a6460046aadb                     5 1      8 1  38.8kb  38.8kb 
yellow open .searchguard.logging-es-iai5xdha-3-2m9t9                             5 1      1 1   6.7kb   6.7kb 
yellow open install-test.fb3cae5f-b150-11e6-ac02-fa163e5c6618.2016.11.23         5 1    242 0 123.2kb 123.2kb 
yellow open logging.c7c08a9d-b15e-11e6-ac02-fa163e5c6618.2016.11.23              5 1    141 0   118kb   118kb 
yellow open user-project.90c7c38f-b160-11e6-ac02-fa163e5c6618.2016.11.23         5 1     42 0  51.5kb  51.5kb 
yellow open .operations.2016.11.23                                               5 1 101901 0    51mb    51mb 
100  1110  100  1110    0     0   8664      0 --:--:-- --:--:-- --:--:--  8671

Comment 26 Xia Zhao 2016-11-23 10:23:24 UTC
(In reply to Rich Megginson from comment #22)
The upgrade completed successfully, and I can see the 3.2.0 log entries in Kibana post upgrade. But I still see these lines in my upgrade log (attaching the log as Upgrade_log_Nov23):

+ update_for_common_data_model
++ oc get pods -l component=es -o 'jsonpath={.items[?(@.status.phase == "Running")].metadata.name}'
+ [[ -z logging-es-iai5xdha-6-bhbno ]]
++ get_list_of_proj_uuid_indices
++ wc -l
++ curl -s --cacert /etc/deploy/scratch/admin-ca.crt --key /etc/deploy/scratch/admin-key.key --cert /etc/deploy/scratch/admin-cert.crt https://logging-es:9200/_cat/indices
++ awk -v 'daterx=[.]20[0-9]{2}[.][0-1]?[0-9][.][0-9]{1,2}$' '$3 !~ "^[.]" && $3 !~ "^project." && $3 ~ daterx {print gensub(daterx, "", 1, $3)}'
++ sort -u
No matching indexes found - skipping update_for_common_data_model
+ count=0
+ '[' 0 -eq 0 ']'
+ echo No matching indexes found - skipping update_for_common_data_model
+ return 0
+ upgrade_notify
+ set +x

Assigning it back to double-check with dev whether this is expected; from an end user's perspective I didn't see any impact, and my test actually passed.

Please feel free to transfer back for closure after confirmation, thanks.

Comment 27 Xia Zhao 2016-11-23 10:24:25 UTC
Created attachment 1223132 [details]
Upgrade_log_Nov23

Comment 28 Rich Megginson 2016-11-23 18:01:08 UTC
+ echo No matching indexes found - skipping update_for_common_data_model

This means we still have a bug.  I do not understand what's wrong.  If I take the output of the script and run it manually, it works.  I wonder if it is some LANG setting?  Different version of awk?
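
For reference, this is roughly what the filter is expected to produce when run manually against _cat/indices-style output (index names shortened here for readability; only non-dot, non-"project." indices with a trailing date should survive, with the date stripped):

$ printf '%s\n' \
    'green open project.logging.c7c08a9d.2016.11.23 1 0 2794 0 847kb 847kb' \
    'yellow open user-project.90c7c38f.2016.11.23 5 1 42 0 51kb 51kb' \
    'yellow open .operations.2016.11.23 5 1 101901 0 51mb 51mb' | \
  awk -v 'daterx=[.]20[0-9]{2}[.][0-1]?[0-9][.][0-9]{1,2}$' '$3 !~ "^[.]" && $3 !~ "^project." && $3 ~ daterx {print gensub(daterx, "", 1, $3)}'
user-project.90c7c38f

Note that gensub() is a GNU awk extension, so a non-gawk awk would be one way for the filter itself to fail.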

Comment 29 Xia Zhao 2016-11-24 05:38:08 UTC
(In reply to Rich Megginson from comment #28)
> + echo No matching indexes found - skipping update_for_common_data_model
> 
> This means we still have a bug.  I do not understand what's wrong.  If I
> take the output of the script and run it manually, it works.  I wonder if it
> is some LANG setting?  Different version of awk?

Hi Rich,

FYI, here are the locale and awk version on my working machine (Fedora 22 desktop) where I used the oc client to do the upgrade:

$ awk --version
GNU Awk 4.1.1, API: 1.1
Copyright (C) 1989, 1991-2014 Free Software Foundation.

$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=zh_CN.UTF-8
LC_TIME=zh_CN.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=zh_CN.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=zh_CN.UTF-8
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT=zh_CN.UTF-8
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

Please let me know if there is anything else I can assist with.

Thanks,
Xia

Comment 30 Rich Megginson 2016-11-30 02:03:09 UTC
I am able to reproduce.  I think the problem is that we cannot use the admin cert/key if it does not yet exist, and curl will silently fail :-(
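
This is easy to see in isolation: with -s, curl suppresses its error messages, so pointing it at cert/key files that do not exist produces no output at all, and the downstream index count becomes 0 (the paths below are deliberately nonexistent, for illustration):

$ curl -s --cacert /no/such/ca.crt --cert /no/such/cert.crt --key /no/such/key.key https://logging-es:9200/_cat/indices | wc -l
0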

Comment 31 Rich Megginson 2016-11-30 02:29:25 UTC
So how is it possible that you had the admin cert/key?  That is, before doing

> 5.Migrate index by re-running deployer with MODE=migrate:

Did the following command return a value?

oc get secrets -o 'jsonpath={.items[?(@.data.admin-cert)].metadata.name}'

If so, how is this possible?  The upgrade script assumes that if it is not present, uuid_migrate needs to be run:

function getDeploymentVersion() {
  #base this on what isn't installed

  # Check for the admin cert
  if [[ -z "$(oc get secrets -o jsonpath='{.items[?(@.data.admin-cert)].metadata.name}')" ]]; then
    echo 0
    return
  fi
...

function upgrade_logging() {

  installedVersion=$(getDeploymentVersion)
  # VERSIONS
  # 0 -- initial EFK
  # 1 -- add admin cert
...
    for version in $(seq $installedVersion $LOGGING_VERSION); do
      case "${version}" in
        0)
          migrate=true
          ;;
...
  if [[ $installedVersion -ne $LOGGING_VERSION ]]; then
    if [[ -n "$migrate" ]]; then
      uuid_migrate
    fi


It is the uuid_migrate function that creates the admin cert and sets the cert/key files needed to use curl later.

How is it possible that the admin cert existed?  I can fix the upgrade script, but I want to know how this happened in the first place.

Comment 32 Xia Zhao 2016-11-30 03:30:37 UTC
Created attachment 1226102 [details]
3.2.0 deployer pod log

Comment 33 Xia Zhao 2016-11-30 03:33:25 UTC
(In reply to Rich Megginson from comment #31)
> So how is it possible that you had the admin cert/key?  That is, before doing
> 
> > 5.Migrate index by re-running deployer with MODE=migrate:
> 
> Did the following command return a value?
> 
> oc get secrets -o 'jsonpath={.items[?(@.data.admin-cert)].metadata.name}'

Here is the output on the 3.2.0-level logging deployment (deployed with "$ oc secrets new logging-deployer nothing=/dev/null"):

$ oc get secrets -o 'jsonpath={.items[?(@.data.admin-cert)].metadata.name}'
logging-elasticsearch

Comment 35 Rich Megginson 2016-12-01 14:59:55 UTC
submitted PR upstream: https://github.com/openshift/origin-aggregated-logging/pull/296

Comment 36 openshift-github-bot 2016-12-01 16:57:15 UTC
Commit pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/6acba6a8f45f198ac8b27fa0ff2056e51757a17b
Bug 1395170 - Logging upgrade mode: Upgrade pod log states "No matching indexes found - skipping update_for_common_data_model"

https://bugzilla.redhat.com/show_bug.cgi?id=1395170
If the admin-cert exists, upgrade will skip the uuid_migrate step
which sets up the cert/key needed to use curl for the common data
model upgrade code.  The fix is to call those shell functions as
needed if the variables and files do not exist.
This also allows test-upgrade.sh to be run standalone, outside of
the context of logging.sh, and tests specifically for the existing
admin-cert case by skipping the removeAdminCert step in the test.
Also changes the tests so that they clean up old indices created
for testing.
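
In effect, the upgrade path now guards the common data model step along these lines (a minimal sketch, not the literal deployer code; the function names match the ones visible in the verified log in comment 38):

initialize_es_vars              # sets the CA/KEY/CERT paths under /etc/deploy/scratch, es_host, es_port
if [[ ! -f "$CERT" ]]; then
  recreate_admin_certs          # regenerate the scratch admin cert/key so the curl calls can authenticate
fi
update_for_common_data_model    # now has working credentials even when uuid_migrate was skipped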

Comment 38 Xia Zhao 2016-12-02 09:53:16 UTC
Verified with the latest images on the ops registry; the issue has been fixed:

openshift3/logging-deployer    c74b066ec917
openshift3/logging-fluentd    7b11a29c82c1
openshift3/logging-elasticsearch    6716a0ad8b2b
openshift3/logging-auth-proxy    ec334b0c2669
openshift3/logging-kibana    7fc9916eea4d
openshift3/logging-curator    9af78fc06248

# openshift version
openshift v3.4.0.32+d349492
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Test result:

1. This line no longer exists in the upgrade pod log:
No matching indexes found - skipping update_for_common_data_model

2. Log snippet around "update_for_common_data_model":
+ initialize_es_vars
+ OPS_PROJECTS=("default" "openshift" "openshift-infra" "kube-system")
+ CA=/etc/deploy/scratch/admin-ca.crt
+ KEY=/etc/deploy/scratch/admin-key.key
+ CERT=/etc/deploy/scratch/admin-cert.crt
+ PROJ_PREFIX=project.
+ es_host=logging-es
+ es_port=9200
+ '[' '!' -f /etc/deploy/scratch/admin-cert.crt ']'
+ recreate_admin_certs
++ echo hxB
++ grep x
+ usingx=hxB
+ [[ -n hxB ]]
+ set +x
+ PROJ_PREFIX=project.
+ CA=/etc/deploy/scratch/admin-ca.crt
+ KEY=/etc/deploy/scratch/admin-key.key
+ CERT=/etc/deploy/scratch/admin-cert.crt
+ es_host=logging-es
+ es_port=9200
+ update_for_common_data_model
++ oc get pods -l component=es -o 'jsonpath={.items[?(@.status.phase == "Running")].metadata.name}'
+ [[ -z logging-es-6ktwdglf-6-b3h3d ]]
++ wc -l
++ get_list_of_proj_uuid_indices
++ set -o pipefail
++ awk -v 'daterx=[.]20[0-9]{2}[.][0-1]?[0-9][.][0-9]{1,2}$' '$3 !~ "^[.]" && $3 !~ "^project." && $3 ~ daterx {print gensub(daterx, "", 1, $3)}'
++ curl -s --cacert /etc/deploy/scratch/admin-ca.crt --key /etc/deploy/scratch/admin-key.key --cert /etc/deploy/scratch/admin-cert.crt https://logging-es:9200/_cat/indices
++ sort -u
++ rc=0
++ set +o pipefail
++ return 0
+ count=3
+ '[' 3 -eq 0 ']'
+ echo Creating aliases for 3 index patterns . . .
Creating aliases for 3 index patterns . . .
+ curl -s --cacert /etc/deploy/scratch/admin-ca.crt --key /etc/deploy/scratch/admin-key.key --cert /etc/deploy/scratch/admin-cert.crt -XPOST -d @- https://logging-es:9200/_aliases
+ echo '{"actions":['
+ IFS=.
+ read proj uuid rest
+ get_list_of_proj_uuid_indices
+ set -o pipefail
+ curl -s --cacert /etc/deploy/scratch/admin-ca.crt --key /etc/deploy/scratch/admin-key.key --cert /etc/deploy/scratch/admin-cert.crt https://logging-es:9200/_cat/indices
+ sort -u
+ awk -v 'daterx=[.]20[0-9]{2}[.][0-1]?[0-9][.][0-9]{1,2}$' '$3 !~ "^[.]" && $3 !~ "^project." && $3 ~ daterx {print gensub(daterx, "", 1, $3)}'
+ echo '{"add":{"index":"install-test.58e09465-b842-11e6-9f05-42010af00006.*","alias":"project.install-test.58e09465-b842-11e6-9f05-42010af00006.*"}}'
+ comma=,
+ IFS=.
+ read proj uuid rest
+ echo ',{"add":{"index":"logging.308efdac-b865-11e6-9f05-42010af00006.*","alias":"project.logging.308efdac-b865-11e6-9f05-42010af00006.*"}}'
+ comma=,
+ IFS=.
+ read proj uuid rest
+ echo ',{"add":{"index":"logging.cd3c3086-b86c-11e6-9f05-42010af00006.*","alias":"project.logging.cd3c3086-b86c-11e6-9f05-42010af00006.*"}}'
+ comma=,
+ IFS=.
+ read proj uuid rest
+ rc=0
+ set +o pipefail
+ return 0
+ echo ']}'
+ upgrade_notify
+ set +x
{"acknowledged":true}
=================================


3. The Kibana UI is able to present the old 3.2.0-level log entries (saved in the PV) under the old-format index pattern.

Comment 39 Troy Dawson 2017-02-16 21:02:58 UTC
This bug was fixed in the latest OCP 3.4.0, which has already been released.

