Description of problem:
If we have a volume, say testvolume, and we set its snap-max-hard-limit to a value other than the system's snap-max-hard-limit, 'gluster volume info' shows the system's snap-max-hard-limit and not the volume's.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-0.3

How reproducible:
Always

Steps to Reproduce:
1. Create a dist-rep volume and start it.
2. Set the system's snap-max-hard-limit to 200.
3. Set the volume's snap-max-hard-limit to 100.

[root@dhcp35-228 testvolume]# gluster snapshot config

Snapshot System Configuration:
snap-max-hard-limit : 200
snap-max-soft-limit : 20%
auto-delete : disable
activate-on-create : disable

Snapshot Volume Configuration:

Volume : testvolume
snap-max-hard-limit : 100
Effective snap-max-hard-limit : 100
Effective snap-max-soft-limit : 20 (20%)

4. Run 'gluster volume info vol-name' and observe that the snap-max-hard-limit shown is 200 instead of 100.

[root@dhcp35-228 testvolume]# gluster volume info testvolume

Volume Name: testvolume
Type: Distributed-Replicate
Volume ID: fda7b28a-bedf-4a4c-9e5c-616be3916b91
Status: Started
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.35.228:/bricks/brick0/b0
Brick2: 10.70.35.141:/bricks/brick0/b0
Brick3: 10.70.35.142:/bricks/brick0/b0
Brick4: 10.70.35.140:/bricks/brick0/b0
Brick5: 10.70.35.228:/bricks/brick1/b1
Brick6: 10.70.35.141:/bricks/brick1/b1
Brick7: 10.70.35.142:/bricks/brick1/b1
Brick8: 10.70.35.140:/bricks/brick1/b1
Brick9: 10.70.35.228:/bricks/brick2/b2
Brick10: 10.70.35.141:/bricks/brick2/b2
Brick11: 10.70.35.142:/bricks/brick2/b2
Brick12: 10.70.35.140:/bricks/brick2/b2
Options Reconfigured:
features.barrier: disable
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
snap-max-soft-limit: 20
snap-max-hard-limit: 200

5. However, the info file under /var/lib/glusterd/vols/testvolume shows the snap-max-hard-limit as 100.
[root@dhcp35-228 testvolume]# cat info
type=2
count=12
status=1
sub_count=3
stripe_count=1
replica_count=3
disperse_count=0
redundancy_count=0
version=5
transport-type=0
volume-id=fda7b28a-bedf-4a4c-9e5c-616be3916b91
username=222e71b0-7159-4fd1-82de-8127a3bacdfa
password=3b1c2b52-114f-4cf9-85c9-07a8ca82ae18
op-version=3
client-op-version=3
parent_volname=N/A
restored_from_snap=00000000-0000-0000-0000-000000000000
snap-max-hard-limit=100
features.barrier=disable
performance.readdir-ahead=on

Actual results:
Wrong value of snap-max-hard-limit observed for the volume in 'gluster volume info vol-name' output.

Expected results:
It should show the volume's snap-max-hard-limit, not the system's snap-max-hard-limit.

Additional info:
The snap-max-hard-limit currently displayed in the volume info is propagated from the system's snap-max-hard-limit, since that is a global option common to all volumes; hence the output ends up showing the system's value. IMO we should not display snap-max-hard-limit and snap-max-soft-limit in the volume info at all, as these are snap config options and should be set and displayed via the snap config command. The fix will enforce this behaviour.
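The interaction between the global and per-volume limits described above can be illustrated with a small sketch. This is not glusterd source code; the helper names, the default system limit of 256, and the use of the lower of the two values as the effective limit are assumptions made for illustration, based on the 'Effective snap-max-hard-limit : 100' line reported by 'gluster snapshot config' above.

```python
# Illustrative sketch only -- not glusterd code. It parses the key=value
# format of /var/lib/glusterd/vols/<vol>/info (as shown in this report)
# and derives an effective limit: the volume's own snap-max-hard-limit
# when set, capped by the system-wide limit.

SYSTEM_SNAP_MAX_HARD_LIMIT = 256  # assumed system-wide default for this sketch


def parse_info(text):
    """Parse the key=value lines of a glusterd volume 'info' file."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        opts[key] = value
    return opts


def effective_snap_max_hard_limit(info_text, system_limit=SYSTEM_SNAP_MAX_HARD_LIMIT):
    """Volume-level value applies when present, bounded by the system limit."""
    opts = parse_info(info_text)
    if "snap-max-hard-limit" in opts:
        return min(int(opts["snap-max-hard-limit"]), system_limit)
    return system_limit


# With the info file from this report (snap-max-hard-limit=100) and a
# system limit of 200, the effective limit is 100.
info = "volume-id=fda7b28a-bedf-4a4c-9e5c-616be3916b91\nsnap-max-hard-limit=100\n"
print(effective_snap_max_hard_limit(info, system_limit=200))  # 100
```

Under these assumptions, a volume that never set its own limit simply inherits the system value, which is exactly why displaying the global option inside per-volume 'gluster volume info' output is misleading.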
Fix sent to master (upstream): http://review.gluster.org/12443
Master URL : http://review.gluster.org/#/c/12443/
Release 3.7 URL : http://review.gluster.org/#/c/12493/1
RHGS 3.1.2 URL : https://code.engineering.redhat.com/gerrit/#/c/60678/
Verified this bug with glusterfs-3.7.5-7, and it is working according to the fix: snap-max-hard-limit no longer appears in the 'gluster volume info' output, and the snap-max-hard-limit recorded in the /var/lib/glusterd/vols/testvolume info file is correct.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html