Created attachment 1296146 [details]
inventory file used for logging deployment

Description of problem:
Specified openshift_logging_es_number_of_replicas=2 in the logging 3.6.0 deployment inventory, but only 1 ES pod exists, and the number of nodes/data nodes is still 1:

Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: logging-es
Clusterstate: GREEN
Number of nodes: 1
Number of data nodes: 1

Also hit this exception when seeding the .kibana index:

[2017-07-11 09:33:55,079][WARN ][rest.suppressed ] path: /.kibana/config/4.6.4, params: {index=.kibana, op_type=create, id=4.6.4, type=config}
RemoteTransportException[[logging-es-data-master-u09phwje][10.129.0.25:9300][indices:data/write/index[p]]]; nested: UnavailableShardsException[[.kibana][0] Not enough active copies to meet write consistency of [QUORUM] (have 1, needed 2). Timeout: [1m], request: [index {[.kibana][config][4.6.4], source[{"buildNum":10229}]}]];
Caused by: UnavailableShardsException[[.kibana][0] Not enough active copies to meet write consistency of [QUORUM] (have 1, needed 2). Timeout: [1m], request: [index {[.kibana][config][4.6.4], source[{"buildNum":10229}]}]]

Version-Release number of selected component (if applicable):
openshift3/logging-auth-proxy 4cf6b1d60d2b
openshift3/logging-kibana 4563b27eac07
openshift3/logging-elasticsearch 8809f390a819
openshift3/logging-fluentd a2ea005ef4f6
openshift3/logging-curator ea1887b8e441

# rpm -qa | grep ansible
openshift-ansible-roles-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-callback-plugins-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-filter-plugins-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-playbooks-3.6.140-1.git.0.4a02427.el7.noarch
ansible-2.3.1.0-3.el7.noarch
openshift-ansible-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-lookup-plugins-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-docs-3.6.140-1.git.0.4a02427.el7.noarch

# openshift version
openshift v3.6.140
kubernetes v1.6.1+5115d708d7
etcd 3.2.1

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging 3.6.0, specifying openshift_logging_es_number_of_replicas=2 in the inventory file

Actual results:
Elasticsearch does not scale up; only 1 ES pod is deployed

Expected results:
Elasticsearch scales up to the requested cluster size

Additional info:
ES log attached
Inventory file attached
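The "have 1, needed 2" figures in the UnavailableShardsException follow from the QUORUM write-consistency rule in this Elasticsearch version: a write needs a quorum of the shard copy set (1 primary + number_of_replicas). A minimal sketch of that arithmetic (the helper name is hypothetical, not an Elasticsearch API):

```python
def write_quorum(number_of_replicas: int) -> int:
    """Quorum over the shard copy set: 1 primary + number_of_replicas."""
    copies = 1 + number_of_replicas
    return copies // 2 + 1

# With openshift_logging_es_number_of_replicas=2 the copy set is 3,
# so writes need 2 active copies -- but a single-node cluster has only 1,
# hence "Not enough active copies ... (have 1, needed 2)".
needed = write_quorum(2)  # 2
have = 1                  # only one ES data node is running
```

This is why the .kibana seeding fails on a one-node cluster: the replica setting raises the quorum above the number of nodes available to host copies.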
Created attachment 1296148 [details] ES log
Created attachment 1296153 [details] ansible log
Unless this is a regression, I don't think we should consider this a 3.6 blocker.
The parameter you are using does not control the number of ES pods. openshift_logging_es_number_of_replicas is specific to the ES indices and changes the number of shard replicas per index. To run more than one ES pod, set openshift_logging_es_cluster_size instead.
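To illustrate the distinction, an inventory would look something like this (a sketch; the values are illustrative, not a recommendation):

```ini
[OSEv3:vars]
# Number of Elasticsearch pods (cluster nodes) to deploy:
openshift_logging_es_cluster_size=2

# Number of shard replicas per index *inside* the ES cluster --
# this does NOT change the pod count:
openshift_logging_es_number_of_replicas=1
```

Note that number_of_replicas should not exceed cluster_size - 1, or writes with QUORUM consistency can fail as seen in the original report.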
Oh, sorry for my mistake, and thanks for pointing this out.

Redeployed the 3.6.0 logging stack with openshift_logging_es_cluster_size=2 set in the inventory file, and this time 2 es-master pods are running, even with https://bugzilla.redhat.com/show_bug.cgi?id=1469918 observed.

Setting this bz to VERIFIED since the deployment parameter openshift_logging_es_cluster_size works fine.

Ansible version tested with:
# rpm -qa | grep ansible
openshift-ansible-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-roles-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-docs-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-callback-plugins-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-filter-plugins-3.6.140-1.git.0.4a02427.el7.noarch
openshift-ansible-playbooks-3.6.140-1.git.0.4a02427.el7.noarch
ansible-2.2.3.0-1.el7.noarch
openshift-ansible-lookup-plugins-3.6.140-1.git.0.4a02427.el7.noarch

# openshift version
openshift v3.6.140
kubernetes v1.6.1+5115d708d7
etcd 3.2.1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188