*** Bug 2766 has been marked as a duplicate of this bug. ***
The problem here is that while merging the graphs, we overwrite the values of the previous volume (if it has a similar option configured) with the values of the graph currently being merged. As a result, the final volfile contains the values of the graph that was merged last.
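The overwrite pattern described above can be illustrated with a minimal Python sketch. This is not the actual glusterd C code; the function and variable names are hypothetical, and it only models the difference between applying options from a shared accumulator after merging (the bug) versus binding each graph's options to its own graph before merging (the fix).

```python
def build_volfile_buggy(volumes):
    """Model of the bug: options from every volume are accumulated into one
    shared dict while merging, so later volumes overwrite earlier values and
    every volume ends up with the last-merged graph's options."""
    shared_opts = {}
    for name, opts in volumes.items():
        shared_opts.update(opts)  # later volumes clobber earlier values
    # every merged graph gets the same (last-written) option values
    return {name: dict(shared_opts) for name in volumes}

def build_volfile_fixed(volumes):
    """Model of the fix: set each graph's options in that graph before
    merging it with the parent graph, so values stay per-volume."""
    return {name: dict(opts) for name, opts in volumes.items()}
```

With two volumes carrying different `max-file-size` values, the buggy path reports the second volume's value for both, while the fixed path keeps them distinct.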
When there are 2 or more volumes and an option is changed on one of the volumes, this change is reflected in all the volumes in the nfs-server volfile.

> volume info

Volume Name: marker
Type: Distribute
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: junaid-laptop:/export/marker
Options Reconfigured:
performance.cache-max-file-size: 4096
features.quota: on

Volume Name: mark-test
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: junaid-laptop:/export/t1
Brick2: junaid-laptop:/export/t2
Options Reconfigured:
features.quota: on
features.limit-usage: /:10MB,/d1:1MB

Volume Name: str-marker
Type: Distributed-Stripe
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: junaid-laptop:/export/str1
Brick2: junaid-laptop:/export/str2
Brick3: junaid-laptop:/export/str3
Brick4: junaid-laptop:/export/str4
Options Reconfigured:
performance.cache-max-file-size: 4000
features.limit-usage: /:100MB,/d1:1GB
features.quota: on
diagnostics.brick-log-level: NONE

############################################################
NFS server volfile:
---------------------------
volume str-marker-quota
    type features/quota
    option limit-set /:100MB,/d1:1GB
    option timeout 0
    subvolumes str-marker-dht
end-volume

volume str-marker-io-cache
    type performance/io-cache
    option max-file-size 4000
    subvolumes str-marker-read-ahead
end-volume
----------------------------------------------
volume mark-test-quota
    type features/quota
    option limit-set /:100MB,/d1:1GB
    option timeout 0
    subvolumes mark-test-dht
end-volume

volume mark-test-io-cache
    type performance/io-cache
    option max-file-size 4000
    subvolumes mark-test-read-ahead
end-volume
-------------------------------------------------
volume marker-quota
    type features/quota
    option timeout 0
    option limit-set /:100MB,/d1:1GB
    subvolumes marker-client-0
end-volume

volume marker-io-cache
    type performance/io-cache
    option max-file-size 4000
    subvolumes marker-read-ahead
end-volume
----------------------------------------------------
Test case:
1. Create at least two volumes, say V1 and V2.
2. Set an option on each, for example:
   > volume set v1 performance.cache-max-file-size 2048
   > volume set v2 performance.cache-max-file-size 4096
3. Check nfs-server.vol; it should show the values set for the corresponding volumes correctly.
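The check in step 3 can be scripted instead of eyeballed. The helper below is a hypothetical sketch (not part of gluster) that scans volfile text for a named `volume ... end-volume` block and returns the value of a given option, so the per-volume values can be asserted directly.

```python
def option_for_volume(volfile_text, volume, option):
    """Return the value of `option` inside the `volume <name> ... end-volume`
    block named `volume`, or None if the block or option is absent."""
    in_block = False
    for line in volfile_text.splitlines():
        line = line.strip()
        if line == f"volume {volume}":
            in_block = True
        elif line == "end-volume":
            in_block = False
        elif in_block and line.startswith(f"option {option} "):
            return line.split(None, 2)[2]
    return None
```

After the fix, the io-cache block of each volume should carry the value set on that volume, not the value set on whichever volume was merged last.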
Created attachment 512
Junaid, Please backport this to release-3.1 and release-3.2. Thanks, Vijay
PATCH: http://patches.gluster.com/patch/7476 in master (mgmt/glusterd: Set the generic options in the graph before merging it with the parent graph while building nfs-server volfile.)
PATCH: http://patches.gluster.com/patch/7466 in release-3.1 (mgmt/glusterd: Set the generic options in the graph before merging it with the parent graph while building nfs-server volfile.)
PATCH: http://patches.gluster.com/patch/7465 in release-3.2 (mgmt/glusterd: Set the generic options in the graph before merging it with the parent graph while building nfs-server volfile.)
It's fixed now. The nfs-server volfile now contains the options specified for each volume in its respective volume definition.