Bug 858189 - [FEAT] need 'gluster peer update' command/utility for updating IPs of peers in cluster information
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: x86_64 Linux
Priority: medium   Severity: medium
Assigned To: Nagaprasad Sathyanarayana
QA Contact: Matt Zywusko
Keywords: FutureFeature
Depends On:
Blocks:
Reported: 2012-09-18 05:42 EDT by Rachana Patel
Modified: 2016-02-17 19:21 EST
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-24 03:57:36 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rachana Patel 2012-09-18 05:42:51 EDT
Description of problem:
A 'gluster peer update' command/utility is needed for updating the IP addresses of peers in the cluster information.

If servers are migrated from one network to another, or the IP addresses of some or all peers change for any other reason, the user cannot use the existing cluster: probing the peer at its new address fails with the error below, while 'gluster peer status' keeps showing the old IP, which stays permanently disconnected.

#######################################
[root@Rhs3 ~]# gluster p s 
Number of Peers: 1 

Hostname: 10.70.1.111 
Uuid: bb035a88-8a41-4fea-9e93-caca9a096d0a 
State: Peer in Cluster (Disconnected) 

[root@Rhs3 ~]# gluster peer probe 10.70.45.11
Probe on host 10.70.45.11 port 0 already in peer list
#######################################

Possible workarounds:
--> force-detach the peer and probe it again, but this has to be done (number of peers - 1) times, and from each peer
or
--> delete /var/lib/glusterd and restart glusterd (which deletes everything)

Both workarounds fail if the user wants to keep using the existing volumes and the data stored on them; a rough sketch of the detach/re-probe workaround is shown below.
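
A minimal sketch of the detach/re-probe workaround, using the addresses from the transcript above as placeholders; as noted, the same sequence has to be repeated for every renumbered peer, from each remaining peer:

#######################################
# Sketch only; OLD_IP/NEW_IP are placeholder values for this example.
OLD_IP=10.70.1.111   # stale address still shown by 'gluster peer status'
NEW_IP=10.70.45.11   # address the peer answers on after the migration

gluster peer detach "$OLD_IP" force   # drop the stale peer entry
gluster peer probe  "$NEW_IP"         # re-add the peer at its new address
#######################################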

#######################################
[root@Rhs3 ~]# gluster volume info

Volume Name: d1
Type: Distribute
Volume ID: b13d261b-652d-41ba-98f4-743a88e4b1ee
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.1.3:/home/d11
Brick2: 10.70.1.3:/home/d22
Brick3: 10.70.1.111:/home/d11
Brick4: 10.70.1.111:/home/d22
[root@Rhs3 ~]# gluster volume status
Status of volume: d1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
NFS Server on localhost                                 38467   Y       4628
#######################################



Can we have a 'gluster peer update' utility that updates the peers' IP addresses in the cluster information and also updates the volume info, so the same volume can be used again after remounting?
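
For reference, such a utility would essentially have to automate rewriting the peer and volume state kept on disk under /var/lib/glusterd. A rough, hedged sketch of doing that by hand follows, purely to illustrate the scope of the feature; this is NOT a supported procedure, and the paths and commands should be verified against the installed version before use:

#######################################
# Hedged sketch, NOT a supported procedure: rewrites the old IP in the
# on-disk glusterd state (peers/ holds peer addresses, vols/ holds brick
# addresses). glusterd must be stopped on EVERY node first.
OLD_IP=10.70.1.111
NEW_IP=10.70.45.11

service glusterd stop

grep -rl "$OLD_IP" /var/lib/glusterd | xargs sed -i "s/$OLD_IP/$NEW_IP/g"

service glusterd start    # repeat the whole sequence on each node
#######################################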
Comment 2 krishnan parthasarathi 2012-12-20 02:50:03 EST
Reducing the priority to medium for the following reasons:

- The issue is not seen in a normal storage administration workflow;
i.e., migrating the cluster to a different LAN is not likely to happen often.

- The issue/limitation has been present even before RHS-2.0.
Comment 6 Atin Mukherjee 2015-12-24 03:57:36 EST
Since our recommendation is to use FQDNs rather than IP addresses, we don't have any plan to introduce this feature at this time and are hence closing this bug. Feel free to reopen if you have a strong use case where it is absolutely essential.
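
To illustrate the recommendation (the hostnames here are made up for the example): if peers are probed and bricks defined by FQDN, a renumbered host only needs its DNS record or /etc/hosts entry updated, and the gluster configuration never has to change:

#######################################
# Example hostnames only; probe by FQDN instead of IP.
gluster peer probe rhs3.example.com
gluster volume create d1 \
    rhs2.example.com:/home/d11 rhs3.example.com:/home/d11
#######################################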
