Bug 990961 - enabling cluster.nufa on the fly does not change client side graph
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: x86_64 Linux
Priority: medium
Severity: high
Assigned To: Ravishankar N
Ben Turner
Depends On: 975599
Blocks:
 
Reported: 2013-08-01 05:54 EDT by Ravishankar N
Modified: 2013-09-23 18:35 EDT (History)
7 users

See Also:
Fixed In Version: glusterfs-3.4.0.15rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 975599
Environment:
Last Closed: 2013-09-23 18:35:58 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ravishankar N 2013-08-01 05:54:03 EDT
+++ This bug was initially created as a clone of Bug #975599 +++

Description of problem:

After cluster.nufa is enabled on a volume, already-mounted clients continue to use the cluster/distribute translator. The on-disk volfile in /var/lib/glusterd is updated to cluster/nufa, and freshly mounted clients do pick it up. Clients that were already mounted, however, only log the following:

[2013-06-18 21:31:37.367456] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2013-06-18 21:31:37.369219] I [io-cache.c:1549:check_cache_size_ok] 0-HadoopVol-io-cache: Max cache size is 50595778560
[2013-06-18 21:31:37.369281] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing

--- Additional comment from Anand Avati on 2013-06-21 02:51:23 EDT ---

REVIEW: http://review.gluster.org/5244 (glusterfsd: consider xlator type too in topology check) posted (#1) for review on master by Anand Avati (avati@redhat.com)

--- Additional comment from Anand Avati on 2013-06-21 05:40:41 EDT ---

COMMIT: http://review.gluster.org/5244 committed in master by Vijay Bellur (vbellur@redhat.com) 
------
commit 4cde70a0e5be0a5e49e42c48365f3c0b205f9741
Author: Anand Avati <avati@redhat.com>
Date:   Mon Jun 17 06:19:47 2013 -0700

    glusterfsd: consider xlator type too in topology check
    
    When cluster.nufa option is enabled, we only change the translator
    type, but leave the translator name as-is. This results in the
    topology change check to conclude that a graph switch is not needed.
    
    Change-Id: I4f4d0cec2bd4796c95274f0328584a4e7b8b5cd3
    BUG: 975599
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/5244
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
Comment 2 Ravishankar N 2013-08-01 06:12:34 EDT
Patch review URL:
https://code.engineering.redhat.com/gerrit/#/c/11015
Comment 3 Ben Turner 2013-08-15 16:07:33 EDT
Enabling NUFA triggered a graph change on an already mounted volume on glusterfs-3.4.0.19rhs-2.el6.x86_64:

Final graph:
+------------------------------------------------------------------------------+
  1: volume testvol-client-0
  2:     type protocol/client
  3:     option remote-host storage-qe12.lab.eng.rdu2.redhat.com
  4:     option remote-subvolume /bricks/brick0
  5:     option transport-type socket
  6: end-volume
  7: 
  8: volume testvol-client-1
  9:     type protocol/client
 10:     option remote-host storage-qe14.lab.eng.rdu2.redhat.com
 11:     option remote-subvolume /bricks/brick1
 12:     option transport-type socket
 13: end-volume
 14: 
 15: volume testvol-dht
 16:     type cluster/nufa
 17:     subvolumes testvol-client-0 testvol-client-1
 18: end-volume
 19: 
 20: volume testvol-write-behind
 21:     type performance/write-behind
 22:     subvolumes testvol-dht
 23: end-volume
 24: 
 25: volume testvol-read-ahead
 26:     type performance/read-ahead
 27:     subvolumes testvol-write-behind
 28: end-volume
 29: 
 30: volume testvol-io-cache
 31:     type performance/io-cache
 32:     subvolumes testvol-read-ahead
 33: end-volume
 34: 
 35: volume testvol-quick-read
 36:     type performance/quick-read
 37:     subvolumes testvol-io-cache
 38: end-volume
 39: 
 40: volume testvol-md-cache
 41:     type performance/md-cache
 42:     subvolumes testvol-quick-read
 43: end-volume
 44: 
 45: volume testvol
 46:     type debug/io-stats
 47:     option latency-measurement off
 48:     option count-fop-hits off
 49:     subvolumes testvol-md-cache
 50: end-volume

Some quick sanity tests showed nufa was working.  Marking verified.
Comment 4 Scott Haines 2013-09-23 18:35:58 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
