Bug 1228135 - [Bitrot] Gluster v set <volname> bitrot enable command succeeds, which is not supported to enable bitrot
Summary: [Bitrot] Gluster v set <volname> bitrot enable command succeeds , which is no...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: bitrot
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.1
Assignee: Gaurav Kumar Garg
QA Contact: RajeshReddy
URL:
Whiteboard:
Depends On:
Blocks: 1216951 1223636 1229134 1232589 1251815
 
Reported: 2015-06-04 09:47 UTC by Anil Shah
Modified: 2016-09-17 14:24 UTC (History)
12 users

Fixed In Version: glusterfs-3.7.1-13
Doc Type: Bug Fix
Doc Text:
Previously, any "gluster volume set <volname> *" command that set a value for the bitrot or scrubber daemon succeeded, even though gluster does not support reconfiguring BitRot options through "gluster volume set". As a result, executing such a "gluster volume set <volname>" command could crash the bitrot and scrub daemons. With this fix, gluster accepts only "gluster volume bitrot <VOLNAME> *" commands for bitrot and scrub operations.
Clone Of:
: 1229134
Environment:
Last Closed: 2015-10-05 07:10:26 UTC
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1845 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.1 update 2015-10-05 11:06:22 UTC

Description Anil Shah 2015-06-04 09:47:34 UTC
Description of problem:

Command "gluster volume set <volname> bitrot enable" succeeds. However, the gluster volume status command shows the bitrot process as not online, and no PID is assigned.

Version-Release number of selected component (if applicable):


[root@node1 glusterfs]# rpm -qa | grep glusterfs
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.0-2.el6rhs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 4+2 disperse volume
2. Enable bitrot on the volume with the command gluster v set <volname> bitrot enable
3. Check gluster v info and gluster v status
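For reference, a 4+2 disperse volume as in step 1 can be created along these lines. This is an illustrative sketch, not taken from the report; the host names and brick paths are placeholders.

```shell
# Hypothetical volume-create command for a 4+2 disperse volume
# (6 bricks total: 4 data + 2 redundancy). Hosts and paths are placeholders.
gluster volume create ecvol disperse-data 4 redundancy 2 \
    server1:/rhs/brick1/ec1 server2:/rhs/brick1/ec2 \
    server3:/rhs/brick1/ec3 server4:/rhs/brick1/ec4 \
    server1:/rhs/brick4/ec5 server2:/rhs/brick4/ec6
gluster volume start ecvol
```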

Actual results:

[root@node1 glusterfs]# gluster v set ecvol bitrot enable
volume set: success
===============================================
[root@node1 glusterfs]# gluster v status ecvol
Status of volume: ecvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.143:/rhs/brick1/ec1          49155     0          Y       20497
Brick 10.70.47.145:/rhs/brick1/ec2          49154     0          Y       8958 
Brick 10.70.47.150:/rhs/brick1/ec3          49154     0          Y       13867
Brick 10.70.47.151:/rhs/brick1/ec4          49154     0          Y       8033 
Brick 10.70.47.143:/rhs/brick4/ec5          49156     0          Y       20514
Brick 10.70.47.145:/rhs/brick4/ec6          49155     0          Y       8975 
NFS Server on localhost                     2049      0          Y       20534
Quota Daemon on localhost                   N/A       N/A        Y       20665
Bitrot Daemon on localhost                  N/A       N/A        N       N/A  
Scrubber Daemon on localhost                N/A       N/A        N       N/A  
NFS Server on 10.70.47.150                  2049      0          Y       13887
Quota Daemon on 10.70.47.150                N/A       N/A        Y       13955
Bitrot Daemon on 10.70.47.150               N/A       N/A        N       N/A  
Scrubber Daemon on 10.70.47.150             N/A       N/A        N       N/A  
NFS Server on 10.70.47.145                  2049      0          Y       8995 
Quota Daemon on 10.70.47.145                N/A       N/A        Y       9072 
Bitrot Daemon on 10.70.47.145               N/A       N/A        N       N/A  
Scrubber Daemon on 10.70.47.145             N/A       N/A        N       N/A  
NFS Server on 10.70.47.151                  2049      0          Y       8052 
Quota Daemon on 10.70.47.151                N/A       N/A        Y       8130 
Bitrot Daemon on 10.70.47.151               N/A       N/A        N       N/A  
Scrubber Daemon on 10.70.47.151             N/A       N/A        N       N/A  
========================================================================
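The offline daemons can also be spotted programmatically from the status output. A minimal sketch, assuming the fixed-column layout shown above (the awk filter is illustrative; the sample lines are copied from the output in this report, and on a live cluster you would pipe `gluster v status <volname>` instead of the here-doc):

```shell
# Print rows of `gluster v status <volname>` output whose Online column
# (second-to-last field) is "N", i.e. processes that are not running.
gluster_offline() { awk '$(NF-1) == "N"'; }

# Sample lines from the status output in this report:
gluster_offline <<'EOF'
NFS Server on localhost                     2049      0          Y       20534
Bitrot Daemon on localhost                  N/A       N/A        N       N/A
Scrubber Daemon on localhost                N/A       N/A        N       N/A
EOF
```

Against the sample above, only the Bitrot and Scrubber daemon rows are printed, matching the status table in this report.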


[root@node1 glusterfs]# gluster v info ecvol
 
Volume Name: ecvol
Type: Disperse
Volume ID: 75cae01b-8d42-4621-b2ac-6853ea04e90d
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/ec1
Brick2: 10.70.47.145:/rhs/brick1/ec2
Brick3: 10.70.47.150:/rhs/brick1/ec3
Brick4: 10.70.47.151:/rhs/brick1/ec4
Brick5: 10.70.47.143:/rhs/brick4/ec5
Brick6: 10.70.47.145:/rhs/brick4/ec6
Options Reconfigured:
features.bitrot: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on


Expected results:

The volume set option is not supported for enabling bitrot, so the command should fail with an error instead of succeeding.
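The expected behavior, as implemented by the fix, is that only the dedicated bitrot CLI manages these daemons. A sketch of the two command families, based on the CLI error messages later in this report (exact option lists may vary by release):

```shell
# Rejected after the fix (volume set cannot configure bitrot):
gluster volume set <VOLNAME> bitrot enable

# Supported forms, per the error messages in this report:
gluster volume bitrot <VOLNAME> enable
gluster volume bitrot <VOLNAME> disable
gluster volume bitrot <VOLNAME> scrub pause
gluster volume bitrot <VOLNAME> scrub resume
gluster volume bitrot <VOLNAME> scrub-frequency daily
```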


Additional info:

Comment 4 monti lawrence 2015-07-22 20:35:04 UTC
Doc text is edited. Please sign off to be included in Known Issues.

Comment 5 Gaurav Kumar Garg 2015-07-27 05:41:11 UTC
hi,

this doc text looks good to me.

Comment 8 Gaurav Kumar Garg 2015-08-20 05:49:30 UTC
The upstream patch for this bug is already merged; will backport it. http://review.gluster.org/#/c/11118/

Comment 9 Gaurav Kumar Garg 2015-08-20 11:09:39 UTC
Downstream patch available for this bug: https://code.engineering.redhat.com/gerrit/#/c/55756/

Comment 10 RajeshReddy 2015-08-27 08:10:06 UTC
Tested with build "" and the gluster vol set command does not allow changing the bitrot configuration.


[root@rhs-client9 data]# gluster vol set dht4  bitrot enable 
volume set: failed:  'gluster volume set <VOLNAME> bitrot' is invalid command. Use 'gluster volume bitrot <VOLNAME> {enable|disable}' instead.
[root@rhs-client9 data]# gluster vol set dht4  [2015-08-27 07:53:58.738372] A [MSGID: 118023] [bit-rot-scrub.c:228:bitd_compare_ckum] 0-dht4-bit-rot-0: Object checksum mismatch: /data/bitrot1 [GFID: 9f472773-c993-41d7-b5c3-1a206c36fc7c | Brick: /rhs/brick1/dht4]
-bash: Brick:: command not found
Usage: volume set <VOLNAME> <KEY> <VALUE>
[root@rhs-client9 data]# [2015-08-27 07:53:58.738670] A [MSGID: 118024] [bit-rot-scrub.c:248:bitd_compare_ckum] 0-dht4-bit-rot-0: Marking /data/bitrot1 [GFID: 9f472773-c993-41d7-b5c3-1a206c36fc7c | Brick: /rhs/brick1/dht4] as corrupted..
-bash: Brick:: command not found
-bash: [2015-08-27: command not found
[root@rhs-client9 data]# gluster vol set 
Usage: volume set <VOLNAME> <KEY> <VALUE>
[root@rhs-client9 data]# gluster vol set dht scrub-frequency daily
volume set: failed:  'gluster volume set <VOLNAME> scrub-frequency' is invalid command. Use 'gluster volume bitrot <VOLNAME> scrub-frequency {hourly|daily|weekly|biweekly|monthly}' instead.
[root@rhs-client9 data]# gluster vol set dht scrub pause
volume set: failed:  'gluster volume set <VOLNAME> scrub' is invalid command. Use 'gluster volume bitrot <VOLNAME> scrub {pause|resume}' instead.
[root@rhs-client9 data]# gluster vol set dht scrub-throttle pause
volume set: failed:  'gluster volume set <VOLNAME> scrub-throttle' is invalid command. Use 'gluster volume bitrot <VOLNAME> scrub {pause|resume}' instead.
[root@rhs-client9 data]# gluster vol set dht4 scrub-throttle normal
volume set: failed:  'gluster volume set <VOLNAME> scrub-throttle' is invalid command. Use 'gluster volume bitrot <VOLNAME> scrub {pause|resume}' instead.
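The rejection behavior verified above can be summarized as a small validation sketch. This is a hypothetical stand-in for the check the patch adds on the glusterd side, not the actual code; the key names come from the error messages in the verification transcript.

```shell
# Hypothetical sketch: bitrot-related keys passed to `volume set` are
# rejected with a pointer to the `volume bitrot` command family.
validate_set_key() {
  case "$1" in
    bitrot|scrub|scrub-frequency|scrub-throttle)
      echo "'gluster volume set <VOLNAME> $1' is invalid command." \
           "Use 'gluster volume bitrot <VOLNAME> ...' instead." >&2
      return 1 ;;
    *)
      return 0 ;;
  esac
}
```

For example, `validate_set_key bitrot` fails with the redirect message, while an ordinary key such as `performance.readdir-ahead` passes through.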

Comment 12 errata-xmlrpc 2015-10-05 07:10:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html

