Description of problem:
After applying the latest patching of the OCP cluster, Elasticsearch goes to red.
The logs are exactly the same as in this Bugzilla: 1626957.
Here we have version 3.5 with the same behavior.
Will connect to localhost:9300 ... done
2019-01-03 09:15:47 INFO SearchGuardSSLPlugin:84 - Search Guard 2 plugin not available
2019-01-03 09:15:47 INFO SearchGuardPlugin:58 - Clustername: elasticsearch
2019-01-03 09:15:47 INFO SearchGuardPlugin:70 - Node [null] is a transportClient: true/tribeNode: false/tribeNodeClient: false
2019-01-03 09:15:47 INFO plugins:180 - [Sara Grey] modules , plugins [search-guard-ssl, search-guard2], sites 
2019-01-03 09:15:47 INFO DefaultSearchGuardKeyStore:423 - Open SSL not available (this is not an error, we simply fallback to built-in JDK SSL) because of java.lang.ClassNo
2019-01-03 09:15:47 INFO DefaultSearchGuardKeyStore:173 - Config directory is /usr/share/java/elasticsearch/config/, from there the key- and truststore files are resolved r
2019-01-03 09:15:47 INFO DefaultSearchGuardKeyStore:142 - sslTransportClientProvider:JDK with ciphers [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_C
2019-01-03 09:15:47 INFO DefaultSearchGuardKeyStore:144 - sslTransportServerProvider:JDK with ciphers [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_C
2019-01-03 09:15:47 INFO DefaultSearchGuardKeyStore:146 - sslHTTPProvider:null with ciphers 
2019-01-03 09:15:47 INFO DefaultSearchGuardKeyStore:148 - sslTransport protocols [TLSv1.2, TLSv1.1]
2019-01-03 09:15:47 INFO DefaultSearchGuardKeyStore:149 - sslHTTP protocols [TLSv1.2, TLSv1.1]
2019-01-03 09:15:48 INFO transport:99 - [Sara Grey] Using [com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] as transport, overridden by [search-guard-s
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Version-Release number of selected component (if applicable):
After patching OCP cluster
ES status: red
ES status: green (without any action after processing the upgrade and applying patches)
> I suggested the customer delete the red searchguard indexes, but there is still no answer:
> If you list red indexes of searchguard, do you have any?
> curl --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca https://localhost:9200/_cat/indices -s | grep red
> You can delete them:
> curl --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca https://localhost:9200/.searchguard.logging-es<some_identifier> -X DELETE
I'm not sure why you advised them to remove any indices, since the fact that they are 'red' is not an indication that they are 'bad' or in an error state. The color of an index indicates the state of replication of the shards associated with it; that's it. The first thing I would advise is to attempt to reseed all the searchguard indices. This must be performed for each Elasticsearch pod:
oc exec $espod -- es_seed_acl
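A minimal sketch of the per-pod reseed loop, assuming the logging namespace and the `component=es` pod label below (both are assumptions; adjust them to your cluster):

```shell
#!/bin/sh
# Run es_seed_acl on every Elasticsearch pod passed in, since the reseed
# must be performed on each pod individually.
reseed_all() {
  # $@: pod names, e.g. from:
  #   oc get pods -n openshift-logging -l component=es -o name
  for espod in "$@"; do
    oc exec "$espod" -- es_seed_acl
  done
}
```

Typical invocation: `reseed_all $(oc get pods -n openshift-logging -l component=es -o name)`.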
We then need to figure out why the cluster is in the red state by looking at which indices are red. This is most easily done by rsh'ing into one of the ES pods:
oc rsh $espod
This may give us a clue from which we can further determine future action.
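A small sketch of that triage step: query `_cat/indices` (with the same admin certs used in the curl commands earlier in this report) and keep only the red entries. The health column is the first field of `_cat/indices` output, and the index name is the third.

```shell
#!/bin/sh
# Filter _cat/indices output (on stdin) down to the names of red indices.
list_red() {
  awk '$1 == "red" { print $3 }'
}
# Inside the pod (oc rsh $espod), something like:
#   curl -s --key /etc/elasticsearch/secret/admin-key \
#        --cert /etc/elasticsearch/secret/admin-cert \
#        --cacert /etc/elasticsearch/secret/admin-ca \
#        https://localhost:9200/_cat/indices | list_red
```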
Lowering the priority, as this cluster is older than N-2, where N is 3.11.
At the time this Bugzilla was created it looked like a bug, but in the end it was a configuration issue: the indices were stored on an NFS file system, which is not intended for storing the Elasticsearch index database.