Bug 1429827 - searchguard index needs manual recreation
Summary: searchguard index needs manual recreation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.4.z
Assignee: Jeff Cantrill
QA Contact: Xia Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-03-07 08:57 UTC by Ruben Romero Montes
Modified: 2020-05-14 15:43 UTC (History)

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-04-04 14:28:32 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
configmap (3.56 KB, text/plain)
2017-03-07 08:59 UTC, Ruben Romero Montes
logging-es.log (2.95 MB, text/plain)
2017-03-07 09:00 UTC, Ruben Romero Montes
logging-es cluster health (484 bytes, text/plain)
2017-03-07 09:00 UTC, Ruben Romero Montes


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:0865 0 normal SHIPPED_LIVE OpenShift Container Platform 3.4.1.12, 3.3.1.17-4, and 3.2.1.30 bug fix update 2017-04-04 18:27:43 UTC

Description Ruben Romero Montes 2017-03-07 08:57:06 UTC
Description of problem:
The per-pod searchguard index is created on pod start but is not properly initialized and contains no data. Once the index is manually closed and deleted, it is recreated and works properly.

Error found in the logs: "Exception encountered when seeding initial ACL"

Version-Release number of selected component (if applicable):
    Image:      registry.access.redhat.com/openshift3/logging-curator:3.4.0
    Image:      registry.access.redhat.com/openshift3/logging-elasticsearch:3.4.0
    Image:      registry.access.redhat.com/openshift3/logging-kibana:3.4.0
    Image:      registry.access.redhat.com/openshift3/logging-auth-proxy:3.4.0

How reproducible:
Always, in the customer environment, after every restart

Steps to Reproduce:
1. Scale down elasticsearch pod
2. Scale up elasticsearch pod (new pod name logging-es-e2lqi0vi-2-3ft5s)
3. Wait some reasonable time just in case (even days)
4. Check indices
green   open   .searchguard.logging-es-e2lqi0vi-2-iqvqr		1   0    4      0     51.9kb    51.9kb
green   open   .searchguard.logging-es-e2lqi0vi-2-srbs8		1   0    0      0       159b      159b
green   open   .searchguard.logging-es-e2lqi0vi-2-3ft5s		1   0    0      0       159b      159b
5. Close and delete the index
$ curl --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca -XPOST 'https://localhost:9200/.searchguard.logging-es-e2lqi0vi-2-3ft5s/_close'
$ curl --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca -XDELETE 'https://localhost:9200/.searchguard.logging-es-e2lqi0vi-2-3ft5s'
6. New index is created
green   open   .searchguard.logging-es-e2lqi0vi-2-3ft5s		1   0    4      0       0b      0b
7. New index starts containing data
green   open   .searchguard.logging-es-e2lqi0vi-2-3ft5s		1   0    4      0       24.9kb      24.9kb
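The manual close/delete workaround shown in the steps above can be sketched as a small script. The cert paths and ES URL come from the report; the function name, `DRY_RUN` switch, and the example index name are illustrative additions, not part of the product.

```shell
#!/bin/sh
# Sketch of the manual workaround: close and delete the stale per-pod
# .searchguard index so Elasticsearch recreates and reseeds it.
ES_URL="${ES_URL:-https://localhost:9200}"
CERT_ARGS="--key /etc/elasticsearch/secret/admin-key \
--cert /etc/elasticsearch/secret/admin-cert \
--cacert /etc/elasticsearch/secret/admin-ca"

recreate_searchguard_index() {
  index="$1"    # e.g. .searchguard.logging-es-e2lqi0vi-2-3ft5s
  for step in "POST ${index}/_close" "DELETE ${index}"; do
    verb="${step%% *}"
    path="${step#* }"
    cmd="curl $CERT_ARGS -X$verb '$ES_URL/$path'"
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "$cmd"      # print the command instead of running it
    else
      eval "$cmd"
    fi
  done
}

# Dry-run by default; set DRY_RUN=0 inside the es pod to actually run it.
recreate_searchguard_index ".searchguard.example-es-pod"
```

Run inside the Elasticsearch pod, where the admin cert secret is mounted; the dry run lets the commands be reviewed before touching a live cluster.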

Actual results:
Index requires manual intervention and is not properly initialized on pod restart

Expected results:
We would expect the searchguard index to be initialized automatically.

Comment 1 Ruben Romero Montes 2017-03-07 08:59:06 UTC
Created attachment 1260716 [details]
configmap

Comment 2 Ruben Romero Montes 2017-03-07 09:00:31 UTC
Created attachment 1260717 [details]
logging-es.log

Comment 3 Ruben Romero Montes 2017-03-07 09:00:57 UTC
Created attachment 1260718 [details]
logging-es cluster health

Comment 4 Jeff Cantrill 2017-03-07 16:25:55 UTC
This is a duplicate that will be fixed with the release of 3.5.  The workaround is to deploy the logging pod again with something like 'oc rollout latest dc/$ES_DC_NAME'
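The redeploy workaround from this comment can be sketched as follows. The dc name is a placeholder to be replaced with the actual Elasticsearch deploymentconfig; `oc rollout latest` is the command the comment names, and building it as a string first is an illustrative safety step.

```shell
#!/bin/sh
# Sketch of the redeploy workaround: trigger a new rollout of the
# Elasticsearch deploymentconfig. ES_DC_NAME is a placeholder.
ES_DC_NAME="${ES_DC_NAME:-logging-es-example}"

rollout_cmd() {
  # Build the command as a string so it can be reviewed before running.
  echo "oc rollout latest dc/$1"
}

cmd=$(rollout_cmd "$ES_DC_NAME")
echo "$cmd"
# eval "$cmd"    # uncomment to actually trigger the redeploy
```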

*** This bug has been marked as a duplicate of bug 1416210 ***

Comment 5 Ruben Romero Montes 2017-03-10 14:03:17 UTC
@jcantril the customer is still having the same problem after trying this

Comment 9 Xia Zhao 2017-03-23 07:07:41 UTC
Verified with 3.4.1 logging images on brew registry:
openshift3/logging-elasticsearch    246537fe4546
openshift3/logging-deployer    0eeabd69aa6d
openshift3/logging-auth-proxy    d85303b2c262
openshift3/logging-kibana    03900b0b9416
openshift3/logging-fluentd    e4b97776c79b
openshift3/logging-curator    091de35492d6

The elasticsearch searchguard index contained data from the very start of the es pod:

$ curl --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca -XGET 'https://localhost:9200/_cat/indices?v'
...
green  open   .searchguard.logging-es-5cbksa2h-1-dcrm8                               1   0          5            0     28.3kb         28.3kb

As a regression test, scaled the es pod down and then back up; the recreated index for the #2 pod also contained data:

green  open   .searchguard.logging-es-j2kxv4at-2-m03c5                               1   0          5            0     28.3kb         28.3kb

Set to verified

Comment 11 errata-xmlrpc 2017-04-04 14:28:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0865

