|Summary:||[FEAT] Use uuid in volume info file for servers instead of hostname or ip address|
|Product:||[Community] GlusterFS||Reporter:||Joe Julian <joe>|
|Status:||CLOSED DEFERRED||QA Contact:|
|Version:||mainline||CC:||amukherj, bugs, chrisw, ian.conrad, jdarcy, joe, me, rwheeler, smohan, stefano.stagnaro|
|Target Milestone:||---||Keywords:||FutureFeature, Reopened, Triaged|
|Fixed In Version:||Doc Type:||Enhancement|
|Doc Text:||Story Points:||---|
|Last Closed:||2017-08-15 16:37:26 UTC||Type:||---|
|oVirt Team:||---||RHEL 7.3 requirements from Atomic Host:|
|Cloudforms Team:||---||Target Upstream Version:|
Description Joe Julian 2011-10-05 18:39:50 UTC
If the servers in the volume info file were referenced by uuid, the bricks would still reference the correct machines even when the hosts' hostnames or ip addresses change. In the IRC channel today, we had someone whose network had been renumbered. He had created his volumes using ip addresses instead of hostnames. He was able to peer probe, which updated the host information to accurately reflect the new ip addresses, but the volumes still pointed at the old addresses. He could not run replace-brick to update the addresses of his bricks. If the bricks had been referenced by host uuid, the peer probe would have allowed them to resolve to the new addresses without further changes.
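The failure mode above can be sketched in a few lines. This is a minimal illustration, not glusterd's actual data model or volinfo format: the peer table, UUID, and brick dictionaries below are hypothetical names invented for the example. The point is that an address-keyed brick entry is frozen at volume-creation time, while a UUID-keyed entry follows whatever address the peer table currently holds.

```python
# Illustrative sketch (hypothetical names, not glusterd's real volinfo
# format): why UUID-keyed brick entries would survive a renumbering.

# Peer table as maintained by peer probe: UUID -> current address.
peers = {"3f2a9c1e-0000-0000-0000-000000000001": "192.168.1.5"}

# Old scheme: brick stores an address, fixed at volume-creation time.
brick_by_addr = {"host": "192.168.1.5", "path": "/export/brick1"}

# Proposed scheme: brick stores the peer's UUID, resolved at lookup time.
brick_by_uuid = {"peer": "3f2a9c1e-0000-0000-0000-000000000001",
                 "path": "/export/brick1"}

def resolve(brick, peers):
    """Return the address currently used to reach a brick's host."""
    if "peer" in brick:
        return peers[brick["peer"]]   # follows peer-table updates
    return brick["host"]              # frozen at creation time

# The network is renumbered; peer probe updates only the peer table.
peers["3f2a9c1e-0000-0000-0000-000000000001"] = "10.0.0.99"

print(resolve(brick_by_addr, peers))  # stale: 192.168.1.5
print(resolve(brick_by_uuid, peers))  # current: 10.0.0.99
```

After the simulated renumbering, the address-keyed brick still resolves to the stale address, which is exactly the state the reporter was stuck in, while the UUID-keyed brick picks up the new address with no further changes.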
Comment 1 Kaleb KEITHLEY 2015-10-22 15:46:38 UTC
Because of the large number of bugs filed against it, the "mainline" version is ambiguous and about to be removed as a choice. If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.
Comment 3 Logos01 2016-02-05 00:55:13 UTC
I myself encountered a situation where this feature would have prevented a disruption in my Gluster volumes' availability after a hostname change. Although detaching and re-attaching the peer against the new hostname was successful, at some point it reverted to the old hostname (which resolved to a different IP address), and my brick hosts became detached as a result.
Comment 4 Kaushal 2017-03-08 10:55:48 UTC
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life. Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
Comment 5 Atin Mukherjee 2017-08-15 14:37:44 UTC
We are thinking of addressing this in GD2; GitHub reference: https://github.com/gluster/glusterd2/issues/356 . Joe - since we don't have a plan to fix this in the current form of glusterd, do you have any objection if we close this bug and track it through the GitHub issue mentioned?
Comment 6 Joe Julian 2017-08-15 16:37:26 UTC
Closing in favor of the github issue.