Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1846458

Summary: Procedure to Expand ElasticSearch storage
Product: OpenShift Container Platform
Component: Documentation
Version: 4.3.z
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Status: CLOSED CURRENTRELEASE
Keywords: Reopened
Reporter: Anshul Verma <ansverma>
Assignee: Bob Furu <bfuru>
QA Contact: Wei Duan <wduan>
Docs Contact: Vikram Goyal <vigoyal>
CC: aos-bugs, bfuru, hgomes, jokerman, mfuruta, piqin, rh-container, vgoyal, vigoyal, wduan
Type: Bug
Last Closed: 2020-12-10 18:33:14 UTC

Description Anshul Verma 2020-06-11 15:43:16 UTC
Description of problem:

There are certain customers who want to expand their ES storage post-installation while keeping the data intact.

This is specific to OpenShift running on vSphere.

There are two procedures through which we can do that:
1.
~~~
If so, we need only the following steps to extend the storage:

  1. Extend the VMDK size on the vSphere side.
  2. Log in to the worker node where the ES pod is running, and extend the PV's filesystem:
    $ mount | grep /elasticsearch-[0-9]
    /dev/sdb on /var/lib/kubelet/pods/7a6774c9-1dff-4d92-89cd-fb61796951ff/volumes/kubernetes.io~vsphere-volume/elasticsearch-0 type ext4 (rw,relatime,seclabel)
    $ sudo resize2fs /dev/sdb
  3. Alter the PV/PVC accordingly.
~~~
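Step 3 above ("alter the PV/PVC accordingly") is left unspecified. A minimal sketch of what it might look like with `oc patch` follows; the PV/PVC names, the namespace, and the 200Gi target size are placeholders, and note that patching a PVC's storage request is normally only accepted when the storage class allows volume expansion, which (per comment 6) the vSphere in-tree driver does not:

```shell
# Hypothetical object names and size -- adjust for your cluster.
PV_NAME=pv-elasticsearch-0
PVC_NAME=elasticsearch-elasticsearch-cdm-0
NEW_SIZE=200Gi

# Reflect the new VMDK size in the PV's declared capacity.
oc patch pv "$PV_NAME" --type merge \
  -p "{\"spec\":{\"capacity\":{\"storage\":\"$NEW_SIZE\"}}}"

# Update the PVC's request to match (may be rejected if the
# storage class does not set allowVolumeExpansion: true).
oc patch pvc "$PVC_NAME" -n openshift-logging --type merge \
  -p "{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"$NEW_SIZE\"}}}}"
```

This only updates the API objects so they match the filesystem that was already grown with `resize2fs`; it does not itself resize anything.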

2.
~~~
  1. Set ClusterLogging to the Unmanaged state.
  2. Set the Elasticsearch object to the Unmanaged state.
  3. Scale down one of the Elasticsearch pods with oc scale --replicas=0 deployments elastic...
  4. Delete the pod's PVC and PV.
  5. Extend the volume size:
     I. Extend the VMDK size.
     II. Attach it to one of the worker nodes, and then run xfs_growfs.
     III. Detach it from the node.
  6. Recreate the PVC and PV with the same names, but change the size.
  7. Scale the Elasticsearch pod back up with oc scale --replicas=1 deployments elastic...
  8. Repeat steps 3 to 7 for the other Elasticsearch pods.
  9. Set the Elasticsearch object back to the Managed state.
  10. Set ClusterLogging back to the Managed state.
~~~
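The steps above could be sketched as follows. This is not a supported procedure (see the later comments about vSphere in-tree expansion); the namespace, deployment name, and PVC name are hypothetical placeholders, and the vSphere/VMDK steps happen outside of `oc` entirely:

```shell
# Placeholder names -- substitute the real ones from your cluster.
NS=openshift-logging
DEPLOY=elasticsearch-cdm-example-1
PVC=elasticsearch-elasticsearch-cdm-example-1

# Steps 1-2: take ClusterLogging and Elasticsearch out of management
# so the operators do not undo the manual changes.
oc patch clusterlogging instance -n "$NS" --type merge \
  -p '{"spec":{"managementState":"Unmanaged"}}'
oc patch elasticsearch elasticsearch -n "$NS" --type merge \
  -p '{"spec":{"managementState":"Unmanaged"}}'

# Step 3: scale down one Elasticsearch pod.
oc scale deployment "$DEPLOY" -n "$NS" --replicas=0

# Step 4: delete its PVC (and the bound PV, if reclaim policy is Retain).
oc delete pvc "$PVC" -n "$NS"

# Step 5: extend the VMDK on vSphere, attach it to a worker node,
# run xfs_growfs on its filesystem, then detach it (done outside oc).

# Step 6: recreate the PV and PVC with the same names but the new size,
# from YAML manifests prepared beforehand (filenames are placeholders).
oc apply -f pv-resized.yaml -f pvc-resized.yaml

# Step 7: scale the pod back up.
oc scale deployment "$DEPLOY" -n "$NS" --replicas=1

# Step 8: repeat steps 3-7 for the remaining ES deployments, then set
# both objects back to Managed by reversing the patches above.
```

Setting both objects to Unmanaged first matters: otherwise the cluster-logging and elasticsearch operators would immediately reconcile the scaled-down deployment back to its desired replica count.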
This procedure requires the ES pods to be scaled down and then scaled back up, but the documentation page[0] says that scaling down ES pods is not supported. By scaling down, I mean temporarily scaling the replica pods down.


The ask is, will we support the above procedures?

Comment 6 Wei Duan 2020-10-20 02:19:54 UTC
@piqin, we do not support PVC expansion with vSphere in-tree storage.

Comment 10 Bob Furu 2020-11-13 22:00:23 UTC
As reported by QE, we neither support nor test PVC expansion with vSphere in-tree storage. Closing as WONTFIX.

Comment 12 Bob Furu 2020-12-01 19:57:39 UTC
Note that the location of this doc was moved starting in 4.5. Here is 4.6: https://docs.openshift.com/container-platform/4.6/logging/config/cluster-logging-log-store.html

Comment 13 Bob Furu 2020-12-01 21:02:36 UTC
Hi Anshul - To answer your question, a node in Elasticsearch is a pod in OCP.

As of OCP 4.5, the note about scaling ES nodes being unsupported should no longer be relevant. There is now a section added in docs on scaling down ES one node/pod at a time: https://github.com/openshift/openshift-docs/pull/27404/files#diff-133e6e62a5aac180545525ba042091c8662a66b3a160b188dfead122e1445ff9

I have created https://github.com/openshift/openshift-docs/pull/27773 to remove the note, and it is currently under dev review.

Thanks!

Comment 14 Anshul Verma 2020-12-03 10:23:50 UTC
(In reply to Bob Furu from comment #13)
> Hi Anshul - To answer your question, a node in Elasticsearch is a pod in OCP.
> 
> As of OCP 4.5, the note about scaling ES nodes being unsupported should no
> longer be relevant. There is now a section added in docs on scaling down ES
> one node/pod at a time:
> https://github.com/openshift/openshift-docs/pull/27404/files#diff-
> 133e6e62a5aac180545525ba042091c8662a66b3a160b188dfead122e1445ff9
> 
> I have created https://github.com/openshift/openshift-docs/pull/27773 to
> remove the note, and it is currently under dev review.
> 
> Thanks!

Thank You @Bob, I appreciate the PRs.

Comment 16 Bob Furu 2020-12-09 17:26:32 UTC
Hi Wei Duan - PTAL at this PR (https://github.com/openshift/openshift-docs/pull/27773) to confirm it is OK for QE. Thanks!

Comment 17 Wei Duan 2020-12-10 01:28:13 UTC
I see @ewolinetz has already reviewed it, and this PR is merged now.
Looks good from my side.

Comment 18 Bob Furu 2020-12-10 18:33:14 UTC
Thank you, Wei.

PR merged and content is verified and live in 4.5, 4.6, 4.7 docs: https://docs.openshift.com/container-platform/4.6/logging/config/cluster-logging-log-store.html

Closing BZ.