Bug 1660681 - Heketi does not list storage endpoint of newly added node
Summary: Heketi does not list storage endpoint of newly added node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: heketi
Version: ocs-3.11
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: OCS 3.11.z Batch Update 4
Assignee: Raghavendra Talur
QA Contact: Aditya Ramteke
URL:
Whiteboard:
Duplicates: 1567236 1667031
Depends On:
Blocks: 1573420 1622458 1707226
 
Reported: 2018-12-19 01:11 UTC by vinutha
Modified: 2021-09-09 15:31 UTC
CC List: 16 users

Fixed In Version: heketi-9.0.0-4.el7rhgs
Doc Type: Bug Fix
Doc Text:
When a node is added to or removed from a gluster trusted storage pool using heketi, the existing endpoints are not updated automatically. With this update, users can execute the `heketi-cli volume endpoint patch <volume-id>` command to generate a patch file and then apply it to the endpoints with kubectl/oc patch.
Clone Of:
Environment:
Last Closed: 2019-10-30 12:34:04 UTC
Embargoed:


Attachments:


Links:
Red Hat Product Errata RHSA-2019:3255 (last updated 2019-10-30 12:34:25 UTC)

Description vinutha 2018-12-19 01:11:19 UTC
Description of problem:
On adding a new node by running the scaleup.yml playbook, the newly added node is listed in:
1. oc get nodes
2. the gluster pods (the node hosts a gluster pod that is in the 1/1 Running state)
3. heketi node list and topology info
4. gluster peers

BUT the newly added node is not listed in the endpoints or in the heketi block hosting volume info, as illustrated below.
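
For illustration, the mismatch can be checked with commands along these lines. The pod, volume, and endpoint names are placeholders, heketi-cli is assumed to be pointed at the heketi service (e.g. via HEKETI_CLI_SERVER), and the oc commands are run in the project that hosts the gluster pods:

# Places where the new node does show up:
oc get nodes
oc get pods -o wide | grep glusterfs
heketi-cli node list
heketi-cli topology info

# Places where it does not show up:
oc get endpoints <glusterfs-endpoints-name> -o yaml
heketi-cli volume info <blockhosting-volume-id>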

Version-Release number of selected component (if applicable):
# rpm -qa | grep openshift
openshift-ansible-docs-3.11.43-1.git.0.fa69a02.el7.noarch
atomic-openshift-excluder-3.11.43-1.git.0.647ac05.el7.noarch
atomic-openshift-hyperkube-3.11.43-1.git.0.647ac05.el7.x86_64
atomic-openshift-node-3.11.43-1.git.0.647ac05.el7.x86_64
openshift-ansible-playbooks-3.11.43-1.git.0.fa69a02.el7.noarch
openshift-ansible-3.11.43-1.git.0.fa69a02.el7.noarch
atomic-openshift-clients-3.11.43-1.git.0.647ac05.el7.x86_64
atomic-openshift-3.11.43-1.git.0.647ac05.el7.x86_64
openshift-ansible-roles-3.11.43-1.git.0.fa69a02.el7.noarch
atomic-openshift-docker-excluder-3.11.43-1.git.0.647ac05.el7.noarch

# oc rsh glusterfs-storage-xg8q2 rpm -qa| grep gluster 
glusterfs-server-3.12.2-32.el7rhgs.x86_64
gluster-block-0.2.1-30.el7rhgs.x86_64
glusterfs-api-3.12.2-32.el7rhgs.x86_64
glusterfs-cli-3.12.2-32.el7rhgs.x86_64
python2-gluster-3.12.2-32.el7rhgs.x86_64
glusterfs-fuse-3.12.2-32.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-32.el7rhgs.x86_64
glusterfs-libs-3.12.2-32.el7rhgs.x86_64
glusterfs-3.12.2-32.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-32.el7rhgs.x86_64

# oc rsh heketi-storage-1-lgqlm rpm -qa| grep heketi 
heketi-8.0.0-3.el7rhgs.x86_64
heketi-client-8.0.0-3.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Created an OCS setup with a TSP of 4 nodes, with 25 file and 25 block PVCs.

2. To add a new node to the OCP cluster, edited the inventory file and ran the scaleup.yml playbook (an example invocation is sketched after this list).

3. The newly added node is listed in oc get nodes and by heketi. There is also a gluster pod running on the new host.

4. Observed that the endpoints and the heketi block hosting volume info do not list the newly added node.
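
For reference, a typical scaleup invocation looks roughly like the following; the inventory path is a placeholder, and the exact playbook location and host group names depend on the installed openshift-ansible version:

# Add the new host to the [new_nodes] group in the inventory, then:
ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml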

Actual results:
Heketi does not list the newly added node in the storage endpoints and block hosting volume info.

Expected results:
Heketi should list the newly added node in storage endpoints and block hosting volume info. 

Additional info:

Comment 5 Niels de Vos 2019-02-06 19:07:58 UTC
Hi Vinutha,

What is the reason the newly added node should get listed in the existing endpoints for a PVC/volume? Because there are no bricks on the newly added node for the existing PVC/volume, there is no benefit to having it listed.

Possibly, on PVC expansion, the endpoints could be updated in case the newly added node hosts a brick that was added to the PVC/volume.
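
For illustration, one way to check whether the new node actually hosts a brick for a given volume, and whether its address appears in the corresponding endpoint (the pod, volume, and endpoint names below are placeholders):

# List the bricks of the volume from inside any gluster pod:
oc rsh <glusterfs-pod> gluster volume info <volume-name>

# Compare the brick hosts with the addresses in the endpoint backing the PVC:
oc get endpoints <glusterfs-dynamic-endpoints-name> -o yaml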

Comment 7 Yaniv Kaul 2019-05-02 09:43:57 UTC
Ping?

Comment 13 John Mulligan 2019-06-27 15:41:29 UTC
*** Bug 1567236 has been marked as a duplicate of this bug. ***

Comment 18 Raghavendra Talur 2019-07-24 12:06:36 UTC
Verification steps:

1. Create a volume using the provisioner in a 3-node trusted storage pool.
2. Add a node to the trusted storage pool using heketi.
3. Remove a node (from the original 3 nodes) from the trusted storage pool using heketi.
4. Use the `heketi-cli volume endpoint patch VOLUMEID` command to generate the patch file for the endpoint (see the sketch after this list).
5. Use the patch file to oc patch the endpoint.
6. Verify that the endpoint now shows the new IPs.
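
A rough sketch of steps 4-6 follows; the volume ID and endpoint name are placeholders, and whether oc patch needs an explicit --type flag depends on the patch format heketi emits:

# 4. Generate the endpoint patch for the volume and save it to a file:
heketi-cli volume endpoint patch <volume-id> > endpoint.patch

# 5. Apply the patch to the endpoints object used by the PVC:
oc patch endpoints <glusterfs-dynamic-endpoints-name> --patch "$(cat endpoint.patch)"

# 6. Confirm that the endpoints object now lists the new node's IP:
oc get endpoints <glusterfs-dynamic-endpoints-name> -o yaml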

Comment 21 John Mulligan 2019-07-25 14:15:53 UTC
*** Bug 1667031 has been marked as a duplicate of this bug. ***

Comment 25 errata-xmlrpc 2019-10-30 12:34:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3255

