Red Hat Bugzilla – Bug 765380
[glusterfs-3.3.0qa11]: pathinfo xattr using hostname causes problems for machines with same hostname
Last modified: 2014-04-17 07:37:51 EDT
The trusted.glusterfs.pathinfo xattr uses the hostname when reporting pathinfo, as below:
getfattr -n trusted.glusterfs.pathinfo glusterfs-3.2.3
# file: glusterfs-3.2.3
trusted.glusterfs.pathinfo="(<REPLICATE:mirror-replicate-0> <POSIX:Centos1:/export/mirror/glusterfs-3.2.3> <POSIX:Centos1:/export/mirror/glusterfs-3.2.3>)"
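For illustration, the pathinfo value above can be picked apart with a short sketch (the regex and helper name here are my own, not anything in GlusterFS):

```python
import re

def parse_pathinfo(pathinfo):
    """Extract (host, path) pairs from a pathinfo xattr value.

    Each POSIX brick entry in the string looks like <POSIX:host:/export/path>.
    """
    return re.findall(r"<POSIX:([^:]+):([^>]+)>", pathinfo)

value = ("(<REPLICATE:mirror-replicate-0> "
         "<POSIX:Centos1:/export/mirror/glusterfs-3.2.3> "
         "<POSIX:Centos1:/export/mirror/glusterfs-3.2.3>)")

# Both replicas report the same host:path pair, so nothing in the
# xattr distinguishes the two bricks.
print(parse_pathinfo(value))
```

Both entries parse to the identical ('Centos1', '/export/mirror/glusterfs-3.2.3') pair, which is exactly the ambiguity being reported.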
If two machines have the same hostname, their pathinfo will be identical.
The problem this causes:
machine1 - (hostname=host1, IP=x.x.x.x, export=/mnt/export)
machine2 - (hostname=host1, IP=y.y.y.y, export=/mnt/export)
Now suppose a brick is running on machine1 but not on machine2. If machine1 goes down and comes back up, the glustershd running on machine2 will do a getxattr on trusted.glusterfs.pathinfo to determine whether the brick that came up is the one running on its own machine. Since the hostname and the export directory are the same for both machines, glustershd on machine2 will think its own brick came up and try to self-heal.
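The ambiguity can be sketched in a few lines (the hostnames, IPs, export paths, and UUIDs below are the made-up values from the scenario above, not real cluster data):

```python
# Two distinct machines that happen to share a hostname and export path.
machine1 = {"hostname": "host1", "ip": "x.x.x.x", "export": "/mnt/export",
            "uuid": "aaaa-1111"}  # made-up node UUID
machine2 = {"hostname": "host1", "ip": "y.y.y.y", "export": "/mnt/export",
            "uuid": "bbbb-2222"}  # made-up node UUID

def pathinfo_identity(m):
    # What the default pathinfo xattr exposes: hostname + export path.
    return (m["hostname"], m["export"])

def uuid_identity(m):
    # What a node-UUID-based pathinfo would expose instead.
    return (m["uuid"], m["export"])

# Hostname-based identity collides: glustershd on machine2 cannot tell
# machine1's brick apart from its own.
print(pathinfo_identity(machine1) == pathinfo_identity(machine2))  # True
# UUID-based identity stays distinct for the same two machines.
print(uuid_identity(machine1) == uuid_identity(machine2))  # False
```

This is only a toy model of the comparison, but it shows why a per-node UUID removes the collision that hostnames allow.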
Well, to make pathinfo less ambiguous, IP addresses could be used instead of hostnames (with an option that determines which one is wanted).
But doesn't the self-heal daemon make use of the glusterd UUID instead of pathinfo? I recall self-heal on the client making use of pathinfo.
patch undergoing review: http://review.gluster.org/#change,4567
COMMIT: http://review.gluster.org/4567 committed in master by Anand Avati (firstname.lastname@example.org)
Author: Venky Shankar <email@example.com>
Date: Thu Feb 21 22:10:27 2013 +0530
storage/posix: introduce node-uuid-pathinfo
enabling this option has an effect on pathinfo xattr
request returning <node-uuid>:<path> instead of the
default - which is <hostname>:<path>.
Signed-off-by: Venky Shankar <firstname.lastname@example.org>
Reviewed-by: Amar Tumballi <email@example.com>
Reviewed-by: Jeff Darcy <firstname.lastname@example.org>
Tested-by: Gluster Build System <email@example.com>
Reviewed-by: Anand Avati <firstname.lastname@example.org>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.
glusterfs-3.5.0 has been announced on the Gluster Developers mailing list; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.