Bug 1406401 - [GANESHA] Adding node to ganesha cluster is not assigning the correct VIP to the new node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: common-ha
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Soumya Koduri
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On: 1406410 1408110
Blocks: 1351528
 
Reported: 2016-12-20 12:56 UTC by Manisha Saini
Modified: 2017-03-23 05:58 UTC
CC: 6 users

Fixed In Version: glusterfs-3.8.4-10
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1406410
Environment:
Last Closed: 2017-03-23 05:58:38 UTC


Attachments: (none)


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Manisha Saini 2016-12-20 12:56:47 UTC
Description of problem:
When a new node is added to the ganesha cluster, it should be assigned the VIP specified in the add-node command. Instead, the new node is assigned the VIP of one of the nodes already in the cluster.

Version-Release number of selected component (if applicable):
# rpm -qa | grep ganesha
nfs-ganesha-2.4.1-3.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-9.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-3.el7rhgs.x86_64

How reproducible:
Consistently

Steps to Reproduce:
1. Create a 4-node ganesha cluster and enable ganesha on it.
2. Add a new node to the existing ganesha cluster:
# /usr/libexec/ganesha/ganesha-ha.sh --add /var/run/gluster/shared_storage/nfs-ganesha/ dhcp47-59.lab.eng.blr.redhat.com 10.70.44.157
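The VIP actually plumbed on each node (summarized next to the `ip addr` transcripts below) can be extracted mechanically. A minimal sketch, assuming the VIPs live in the 10.70.44.x range used in this setup; this is an illustrative helper, not part of ganesha-ha.sh:

```shell
# Illustrative helper: filter `ip addr` output for addresses in the
# VIP range this cluster uses (10.70.44.x).
vips_from_ip_output() {
    grep -o 'inet 10\.70\.44\.[0-9]*' | awk '{ print $2 }'
}

# Run on a node as:  ip addr | vips_from_ip_output
```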

Node 1:
[root@dhcp46-219 ganesha]# ip addr         VIP 10.70.44.156
Node 2:
[root@dhcp47-45 ~]# ip addr                VIP 10.70.44.154
Node 3:
[root@dhcp47-3 nfs-ganesha]# ip addr       VIP 10.70.44.155
Node 4:
[root@dhcp46-241 ~]# ip addr               VIP 10.70.44.153

New node added to the ganesha cluster:

[root@dhcp47-59 nfs-ganesha]# ip addr      VIP 10.70.44.154

======

[root@dhcp47-59 nfs-ganesha]# pcs status
Cluster name: ganesha-ha-360
Stack: corosync
Current DC: dhcp46-241.lab.eng.blr.redhat.com (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Dec 20 18:00:33 2016		Last change: Tue Dec 20 17:36:01 2016 by root via crm_attribute on dhcp47-59.lab.eng.blr.redhat.com

5 nodes and 30 resources configured

Online: [ dhcp46-219.lab.eng.blr.redhat.com dhcp46-241.lab.eng.blr.redhat.com dhcp47-3.lab.eng.blr.redhat.com dhcp47-45.lab.eng.blr.redhat.com dhcp47-59.lab.eng.blr.redhat.com ]

Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ dhcp46-219.lab.eng.blr.redhat.com dhcp46-241.lab.eng.blr.redhat.com dhcp47-3.lab.eng.blr.redhat.com dhcp47-45.lab.eng.blr.redhat.com dhcp47-59.lab.eng.blr.redhat.com ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ dhcp46-219.lab.eng.blr.redhat.com dhcp46-241.lab.eng.blr.redhat.com dhcp47-3.lab.eng.blr.redhat.com dhcp47-45.lab.eng.blr.redhat.com dhcp47-59.lab.eng.blr.redhat.com ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ dhcp46-219.lab.eng.blr.redhat.com dhcp46-241.lab.eng.blr.redhat.com dhcp47-3.lab.eng.blr.redhat.com dhcp47-45.lab.eng.blr.redhat.com dhcp47-59.lab.eng.blr.redhat.com ]
 Resource Group: dhcp46-219.lab.eng.blr.redhat.com-group
     dhcp46-219.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp46-219.lab.eng.blr.redhat.com
     dhcp46-219.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp46-219.lab.eng.blr.redhat.com
     dhcp46-219.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp46-219.lab.eng.blr.redhat.com
 Resource Group: dhcp46-241.lab.eng.blr.redhat.com-group
     dhcp46-241.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp46-241.lab.eng.blr.redhat.com
     dhcp46-241.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp46-241.lab.eng.blr.redhat.com
     dhcp46-241.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp46-241.lab.eng.blr.redhat.com
 Resource Group: dhcp47-3.lab.eng.blr.redhat.com-group
     dhcp47-3.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp47-3.lab.eng.blr.redhat.com
     dhcp47-3.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp47-3.lab.eng.blr.redhat.com
     dhcp47-3.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp47-3.lab.eng.blr.redhat.com
 Resource Group: dhcp47-45.lab.eng.blr.redhat.com-group
     dhcp47-45.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp47-45.lab.eng.blr.redhat.com
     dhcp47-45.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp47-45.lab.eng.blr.redhat.com
     dhcp47-45.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp47-45.lab.eng.blr.redhat.com
 Resource Group: dhcp47-59.lab.eng.blr.redhat.com-group
     dhcp47-59.lab.eng.blr.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started dhcp47-59.lab.eng.blr.redhat.com
     dhcp47-59.lab.eng.blr.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started dhcp47-59.lab.eng.blr.redhat.com
     dhcp47-59.lab.eng.blr.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started dhcp47-59.lab.eng.blr.redhat.com

Failed Actions:
* dhcp47-45.lab.eng.blr.redhat.com-cluster_ip-1_monitor_15000 on dhcp47-45.lab.eng.blr.redhat.com 'not running' (7): call=123, status=complete, exitreason='none',
    last-rc-change='Tue Dec 20 17:36:01 2016', queued=0ms, exec=0ms
* dhcp46-241.lab.eng.blr.redhat.com-nfs_block_monitor_10000 on dhcp46-241.lab.eng.blr.redhat.com 'not running' (7): call=36, status=complete, exitreason='none',
    last-rc-change='Tue Dec 20 14:41:24 2016', queued=0ms, exec=0ms
* nfs-grace_monitor_5000 on dhcp47-59.lab.eng.blr.redhat.com 'not running' (7): call=69, status=complete, exitreason='none',
    last-rc-change='Tue Dec 20 17:35:56 2016', queued=0ms, exec=0ms


Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

======

# cat ganesha-ha.conf
# Provide a unique name for the cluster.
HA_NAME="ganesha-ha-360"
# The subset of nodes of the Gluster Trusted Storage Pool that forms the ganesha
# HA cluster. Hostname should specified, IP addresses are not allowed.
# Maximum number of 16 nodes are supported.
HA_CLUSTER_NODES="dhcp46-219.lab.eng.blr.redhat.com,dhcp46-241.lab.eng.blr.redhat.com,dhcp47-3.lab.eng.blr.redhat.com,dhcp47-45.lab.eng.blr.redhat.com,dhcp47-59.lab.eng.blr.redhat.com"
# Virtual IPs of each of the nodes specified above.
VIP_dhcp46-241.lab.eng.blr.redhat.com="10.70.44.153"
VIP_dhcp47-45.lab.eng.blr.redhat.com="10.70.44.154"
VIP_dhcp47-3.lab.eng.blr.redhat.com="10.70.44.155"
VIP_dhcp46-219.lab.eng.blr.redhat.com="10.70.44.156"
VIP_dhcp47-59.lab.eng.blr.redhat.com="10.70.44.157"
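The add-node path is expected to take the new node's VIP from the matching VIP_<hostname> entry in this file. A minimal lookup sketch (hypothetical helper, not the actual ganesha-ha.sh logic; dots in the hostname act as regex wildcards here, which is harmless for this illustration):

```shell
# Hypothetical helper: print the VIP configured for a node in a
# ganesha-ha.conf-style file. Entries look like:
#   VIP_<hostname>="<address>"
vip_for_node() {
    conf=$1
    node=$2
    sed -n "s/^VIP_${node}=\"\(.*\)\"\$/\1/p" "$conf"
}

# Usage:
#   vip_for_node /var/run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf \
#       dhcp47-59.lab.eng.blr.redhat.com
```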

Actual results:
The new node has the VIP 10.70.44.154, which is already assigned to the second node in the existing ganesha cluster.

Expected results:
The new node should have the VIP 10.70.44.157, which was specified in the add-node command.

Additional info:

Comment 2 Soumya Koduri 2016-12-20 13:10:18 UTC
This is a regression and needs to be fixed.
Patch posted upstream for review:

http://review.gluster.org/#/c/16213/

Comment 5 Atin Mukherjee 2016-12-22 07:37:33 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/93569/

Comment 7 Manisha Saini 2016-12-23 08:34:58 UTC
Verified this bug on:

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-10.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-3.el7rhgs.x86_64
nfs-ganesha-2.4.1-3.el7rhgs.x86_64

As the issue is no longer observed, marking this bug as verified.

Comment 14 errata-xmlrpc 2017-03-23 05:58:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

