+++ This bug was initially created as a clone of Bug #975599 +++

Description of problem:

After enabling cluster.nufa on the volume, already-mounted clients continue to use the cluster/distribute translator. The on-disk volfile in /var/lib/glusterd is updated to cluster/nufa, and clients which are freshly mounted do use cluster/nufa. However, clients that are already mounted get only this in the logfile:

[2013-06-18 21:31:37.367456] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2013-06-18 21:31:37.369219] I [io-cache.c:1549:check_cache_size_ok] 0-HadoopVol-io-cache: Max cache size is 50595778560
[2013-06-18 21:31:37.369281] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing

--- Additional comment from Anand Avati on 2013-06-21 02:51:23 EDT ---

REVIEW: http://review.gluster.org/5244 (glusterfsd: consider xlator type too in topology check) posted (#1) for review on master by Anand Avati (avati)

--- Additional comment from Anand Avati on 2013-06-21 05:40:41 EDT ---

COMMIT: http://review.gluster.org/5244 committed in master by Vijay Bellur (vbellur)

------

commit 4cde70a0e5be0a5e49e42c48365f3c0b205f9741
Author: Anand Avati <avati>
Date:   Mon Jun 17 06:19:47 2013 -0700

    glusterfsd: consider xlator type too in topology check

    When the cluster.nufa option is enabled, we only change the translator
    type but leave the translator name as-is. This causes the topology
    change check to conclude that a graph switch is not needed.

    Change-Id: I4f4d0cec2bd4796c95274f0328584a4e7b8b5cd3
    BUG: 975599
    Signed-off-by: Anand Avati <avati>
    Reviewed-on: http://review.gluster.org/5244
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra Bhat <raghavendra>
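The commit message points at the root cause: a name-only topology comparison cannot see a distribute-to-nufa switch, because enabling cluster.nufa keeps the xlator name (e.g. testvol-dht) and changes only the type string. The following C sketch is illustrative only and is not the actual GlusterFS source; the struct layout and the function name topology_equal are assumptions made for the example.

/* Hedged sketch: why a name-only topology check misses the nufa switch,
 * and how comparing the type string as well forces a graph switch. */

#include <stdbool.h>
#include <string.h>

/* Minimal stand-in for an xlator node; field names are illustrative. */
struct xlator {
    const char    *name;   /* e.g. "testvol-dht"                          */
    const char    *type;   /* "cluster/distribute" vs. "cluster/nufa"     */
    struct xlator *next;   /* next xlator in a linearized graph           */
};

static bool topology_equal(const struct xlator *a, const struct xlator *b)
{
    while (a && b) {
        if (strcmp(a->name, b->name) != 0)
            return false;
        /* The essence of the fix: also compare the translator type, so a
         * distribute -> nufa change is no longer reported as "no change". */
        if (strcmp(a->type, b->type) != 0)
            return false;
        a = a->next;
        b = b->next;
    }
    /* Graphs of different length are also a topology change. */
    return a == NULL && b == NULL;
}

With a name-only comparison, the old and new graphs for testvol-dht compare equal and the client logs "No change in volfile, continuing"; with the type comparison added, the nufa graph compares unequal and a graph switch is triggered, which matches the verification below.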
Patch review URL: https://code.engineering.redhat.com/gerrit/#/c/11015
Enabling NUFA triggered a graph change on an already-mounted volume on glusterfs-3.4.0.19rhs-2.el6.x86_64:

Final graph:
+------------------------------------------------------------------------------+
  1: volume testvol-client-0
  2:     type protocol/client
  3:     option remote-host storage-qe12.lab.eng.rdu2.redhat.com
  4:     option remote-subvolume /bricks/brick0
  5:     option transport-type socket
  6: end-volume
  7:
  8: volume testvol-client-1
  9:     type protocol/client
 10:     option remote-host storage-qe14.lab.eng.rdu2.redhat.com
 11:     option remote-subvolume /bricks/brick1
 12:     option transport-type socket
 13: end-volume
 14:
 15: volume testvol-dht
 16:     type cluster/nufa
 17:     subvolumes testvol-client-0 testvol-client-1
 18: end-volume
 19:
 20: volume testvol-write-behind
 21:     type performance/write-behind
 22:     subvolumes testvol-dht
 23: end-volume
 24:
 25: volume testvol-read-ahead
 26:     type performance/read-ahead
 27:     subvolumes testvol-write-behind
 28: end-volume
 29:
 30: volume testvol-io-cache
 31:     type performance/io-cache
 32:     subvolumes testvol-read-ahead
 33: end-volume
 34:
 35: volume testvol-quick-read
 36:     type performance/quick-read
 37:     subvolumes testvol-io-cache
 38: end-volume
 39:
 40: volume testvol-md-cache
 41:     type performance/md-cache
 42:     subvolumes testvol-quick-read
 43: end-volume
 44:
 45: volume testvol
 46:     type debug/io-stats
 47:     option latency-measurement off
 48:     option count-fop-hits off
 49:     subvolumes testvol-md-cache
 50: end-volume

Some quick sanity tests showed nufa was working. Marking verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html