+++ This bug was initially created as a clone of Bug #1207134 +++

Description of problem:
=======================
If any node has more than 3 bricks on it, bitd does not sign files on that node.

Version-Release number of selected component (if applicable):
=============================================================
0.803.gitf64666f.el6.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a volume which has 4 or more bricks on one node.

[root@rhs-client37 ~]# gluster v info BitRot1

Volume Name: BitRot1
Type: Distributed-Replicate
Volume ID: a311984b-5978-4041-91fd-be627c616bea
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick6/br1
Brick2: rhs-client44:/pavanbrick6/br2
Brick3: rhs-client44:/pavanbrick6/br3
Brick4: rhs-client44:/pavanbrick6/br4
Brick5: rhs-client44:/pavanbrick6/br5
Brick6: rhs-client44:/pavanbrick6/br6
Options Reconfigured:
features.bitrot: on
performance.open-behind: off

2. Start the volume, enable BitRot, and mount it.
3. Create files on the volume and check after 120 seconds: bitd has not signed the files on that volume, as the brick-side xattrs below show (a consolidated command sketch for this check follows the review comments below).

# file: pavanbrick6/br1/new/f3
trusted.afr.BitRot1-client-0=0x000000000000000000000000
trusted.afr.BitRot1-client-1=0x000000000000000000000000
trusted.afr.BitRot1-client-2=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x4a35d05d0b474671ad7482be105bf7fd
trusted.glusterfs.bit-rot.signature=0xff000000000000000000000000000000
trusted.glusterfs.bit-rot.version=0x12000000000000005514f33100049b0b

# file: pavanbrick6/br1/new/f5
trusted.afr.BitRot1-client-0=0x000000000000000000000000
trusted.afr.BitRot1-client-1=0x000000000000000000000000
trusted.afr.BitRot1-client-2=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x620dfcbb6e374e15a79b8ea5c06eebaf
trusted.glusterfs.bit-rot.signature=0xff000000000000000000000000000000
trusted.glusterfs.bit-rot.version=0x10000000000000005514f33100049b0b

Actual results:
===============
Signing functionality is not working if a node has more than 3 bricks.

Expected results:
=================
bitd should sign files even though the node has more than 3 bricks.

Additional info:
================

--- Additional comment from Anand Avati on 2015-05-13 05:07:47 EDT ---

REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#1) for review on master by Raghavendra Bhat (raghavendra)

--- Additional comment from Anand Avati on 2015-05-15 04:27:46 EDT ---

REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#2) for review on master by Raghavendra Bhat (raghavendra)

--- Additional comment from Anand Avati on 2015-05-18 09:56:04 EDT ---

REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#3) for review on master by Raghavendra Bhat (raghavendra)

--- Additional comment from Anand Avati on 2015-05-19 09:51:42 EDT ---

REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#4) for review on master by Raghavendra Bhat (raghavendra)

--- Additional comment from Anand Avati on 2015-05-19 10:04:32 EDT ---

REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#5) for review on master by Raghavendra Bhat (raghavendra)

--- Additional comment from Anand Avati on 2015-05-20 06:04:05 EDT ---

REVIEW: http://review.gluster.org/10763 (features/bitrot: refactor brick connection logic) posted (#6) for review on master by Venky Shankar (vshankar)

--- Additional comment from Anand Avati on 2015-05-20 06:43:28 EDT ---

REVIEW: http://review.gluster.org/10763 (features/bitrot: refactor brick connection logic) posted (#7) for review on master by Raghavendra Bhat (raghavendra)
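For convenience, the check from the steps above boils down to roughly the following, run as root on the brick node. This is a sketch based on the report, not output from it: the mount point /mnt/BitRot1, the test file size, and the use of "force" on volume create (needed because all replica bricks sit on one host) are assumptions; the getfattr invocation is the standard way to produce the xattr dumps shown above.

# gluster volume create BitRot1 replica 3 \
      rhs-client44:/pavanbrick6/br1 rhs-client44:/pavanbrick6/br2 rhs-client44:/pavanbrick6/br3 \
      rhs-client44:/pavanbrick6/br4 rhs-client44:/pavanbrick6/br5 rhs-client44:/pavanbrick6/br6 force
# gluster volume start BitRot1
# gluster volume bitrot BitRot1 enable
# mount -t glusterfs rhs-client44:/BitRot1 /mnt/BitRot1
# mkdir /mnt/BitRot1/new
# dd if=/dev/urandom of=/mnt/BitRot1/new/f3 bs=1M count=10
# sleep 120                      # the report checks after 120 seconds, the default signing wait
# getfattr -d -m . -e hex /pavanbrick6/br1/new/f3   # dump xattrs directly from the brick

On an affected build the last command keeps returning the 0xff... value for trusted.glusterfs.bit-rot.signature, as captured in the description above; once bitd signs the file, that xattr carries a real signature.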
Tested with glusterfs-fuse-3.7.0-3: signing and scrubbing are working for a volume that has more than three bricks on the same node, so marking this bug as verified.
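One quick way to re-check this on the fixed build is to repeat the brick-side xattr dump from the original report (same paths as above, assuming the same test files still exist):

# getfattr -d -m . -e hex /pavanbrick6/br1/new/f3

With signing working, trusted.glusterfs.bit-rot.signature is expected to hold a full signature value rather than the short 0xff000000... value seen in the failing case, and the same should hold for files on all six bricks of the node.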
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html