Credit: Thanks to Krishnan Parthasarathi for identifying this.

Description of problem:
Performing a volume reset on a volume with quota and deem-statfs enabled resets deem-statfs to its default, which is "off". It should remain "on" as long as quota is enabled on that volume.

Version-Release number of selected component (if applicable):
glusterfs-server-3.4.0.39rhs-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Enable quota
2. Set deem-statfs to on
3. Perform volume-reset

Actual results:
deem-statfs is reset to off (the default).

Expected results:
deem-statfs should stay "on" along with quota.

Additional info:

Volume Name: shanks-quota
Type: Distributed-Replicate
Volume ID: 2e661bf1-828d-412e-92cb-8eceacf29f5f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.3:/rhs/shanks-quota/quota1
Brick2: 10.70.43.199:/rhs/shanks-quota/quota1
Brick3: 10.70.43.156:/rhs/shanks-quota/quota1
Brick4: 10.70.43.1:/rhs/shanks-quota/quota1
Options Reconfigured:
features.quota-deem-statfs: on
features.quota: on

[root@server1 ~]# gluster vol reset shanks-quota
volume reset: success: All unprotected fields were reset. To reset the protected fields, use 'force'.
[root@server1 ~]#

Volume Name: shanks-quota
Type: Distributed-Replicate
Volume ID: 2e661bf1-828d-412e-92cb-8eceacf29f5f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.43.3:/rhs/shanks-quota/quota1
Brick2: 10.70.43.199:/rhs/shanks-quota/quota1
Brick3: 10.70.43.156:/rhs/shanks-quota/quota1
Brick4: 10.70.43.1:/rhs/shanks-quota/quota1
Options Reconfigured:
features.quota: on
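The steps to reproduce above can be sketched as the following shell session. The volume name shanks-quota is taken from the report; a running GlusterFS trusted storage pool with that volume started is assumed, so this is an illustrative sketch rather than a standalone script.

```shell
# Reproduction sketch (assumes a started GlusterFS volume named shanks-quota).

# 1. Enable quota on the volume.
gluster volume quota shanks-quota enable

# 2. Turn on deem-statfs so client-side df reflects quota limits.
gluster volume set shanks-quota features.quota-deem-statfs on

# 3. Reset all unprotected volume options.
gluster volume reset shanks-quota

# Inspect the surviving options: features.quota stays "on", but
# features.quota-deem-statfs is dropped back to its default ("off").
gluster volume info shanks-quota
```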
With the current framework it is not possible to fix this bug, so it is being removed from the current release.
Upstream patch: http://review.gluster.org/11839
Patch posted downstream: https://code.engineering.redhat.com/gerrit/#/c/55299/
[root@darkknight ~]# gluster v info

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 2535f3a2-5b0f-42cc-8131-36a5ce78d231
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.2:/rhs/brick1/b01
Brick2: 10.70.47.3:/rhs/brick1/b02
Brick3: 10.70.47.143:/rhs/brick1/b03
Brick4: 10.70.47.145:/rhs/brick1/b04
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

[root@darkknight ~]# gluster v reset testvol
volume reset: success: All unprotected fields were reset. To reset the protected fields, use 'force'.

[root@darkknight ~]# gluster v info

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 2535f3a2-5b0f-42cc-8131-36a5ce78d231
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.2:/rhs/brick1/b01
Brick2: 10.70.47.3:/rhs/brick1/b02
Brick3: 10.70.47.143:/rhs/brick1/b03
Brick4: 10.70.47.145:/rhs/brick1/b04
Options Reconfigured:
features.quota-deem-statfs: on
performance.readdir-ahead: on
features.inode-quota: on
features.quota: on

Bug verified on build glusterfs-3.7.1-12.el7rhgs.x86_64.
Please review and sign off on the edited doc text.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html