Bug 1207134

Summary: BitRot: bitd is not signing objects if more than 3 bricks are present on the same node
Product: [Community] GlusterFS
Reporter: Rachana Patel <racpatel>
Component: bitrot
Assignee: Raghavendra Bhat <rabhat>
Status: CLOSED CURRENTRELEASE
Docs Contact: bugs <bugs>
Severity: high
Priority: high
Version: mainline
CC: bugs, hchiramm, kaushal, mzywusko, nsathyan, vshankar
Keywords: Reopened, Triaged
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Clones: 1224218, 1226146
Last Closed: 2016-06-16 12:45:56 UTC
Type: Bug
Bug Blocks: 1221605, 1224193

Description Rachana Patel 2015-03-30 10:04:03 UTC
Description of problem:
=======================
If a node has more than 3 bricks of a volume on it, bitd does not sign files on that node.


Version-Release number of selected component (if applicable):
=============================================================
0.803.gitf64666f.el6.x86_64

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create a volume with 4 or more bricks on one node:
[root@rhs-client37 ~]# gluster v info BitRot1
 
Volume Name: BitRot1
Type: Distributed-Replicate
Volume ID: a311984b-5978-4041-91fd-be627c616bea
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick6/br1
Brick2: rhs-client44:/pavanbrick6/br2
Brick3: rhs-client44:/pavanbrick6/br3
Brick4: rhs-client44:/pavanbrick6/br4
Brick5: rhs-client44:/pavanbrick6/br5
Brick6: rhs-client44:/pavanbrick6/br6
Options Reconfigured:
features.bitrot: on
performance.open-behind: off

2. Start the volume, enable bitrot, and mount it.
3. Create files on that volume and check after 120 seconds: bitd has not signed the files.
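
The extended attributes on the brick show the resulting state. A hex dump of this form comes from getfattr run as root on the brick; the exact invocation below is an assumption:

# getfattr -d -m . -e hex /pavanbrick6/br1/new/f3 /pavanbrick6/br1/new/f5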

# file: pavanbrick6/br1/new/f3
trusted.afr.BitRot1-client-0=0x000000000000000000000000
trusted.afr.BitRot1-client-1=0x000000000000000000000000
trusted.afr.BitRot1-client-2=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x4a35d05d0b474671ad7482be105bf7fd
trusted.glusterfs.bit-rot.signature=0xff000000000000000000000000000000
trusted.glusterfs.bit-rot.version=0x12000000000000005514f33100049b0b

# file: pavanbrick6/br1/new/f5
trusted.afr.BitRot1-client-0=0x000000000000000000000000
trusted.afr.BitRot1-client-1=0x000000000000000000000000
trusted.afr.BitRot1-client-2=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0x620dfcbb6e374e15a79b8ea5c06eebaf
trusted.glusterfs.bit-rot.signature=0xff000000000000000000000000000000
trusted.glusterfs.bit-rot.version=0x10000000000000005514f33100049b0b


Actual results:
===============
Signing does not work on a node that has more than 3 bricks.

Expected results:
================
bitd should sign files even when more than 3 bricks are present on the same node.


Additional info:
================

Comment 3 Anand Avati 2015-05-13 09:07:47 UTC
REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#1) for review on master by Raghavendra Bhat (raghavendra)

Comment 4 Anand Avati 2015-05-15 08:27:46 UTC
REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#2) for review on master by Raghavendra Bhat (raghavendra)

Comment 5 Anand Avati 2015-05-18 13:56:04 UTC
REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#3) for review on master by Raghavendra Bhat (raghavendra)

Comment 6 Anand Avati 2015-05-19 13:51:42 UTC
REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#4) for review on master by Raghavendra Bhat (raghavendra)

Comment 7 Anand Avati 2015-05-19 14:04:32 UTC
REVIEW: http://review.gluster.org/10763 (features/bitrot: free resources if the thread creation fails) posted (#5) for review on master by Raghavendra Bhat (raghavendra)
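
The patch title above describes an error-path cleanup: if spawning a per-brick thread fails, the resources already allocated for that brick have to be released. A minimal standalone sketch of that pattern, with entirely hypothetical names (this is not the actual bitd code):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-brick state, heavily simplified. */
struct brick {
        char *path;
};

/* Stand-in for the per-brick signing loop. */
static void *signer(void *arg)
{
        return arg;
}

static int spawn_signer(const char *brick_path)
{
        pthread_t tid;
        struct brick *b = calloc(1, sizeof(*b));

        if (!b)
                return -1;
        b->path = strdup(brick_path);
        /* If allocation or thread creation fails, free everything already
         * allocated for this brick instead of leaking it on the error path. */
        if (!b->path || pthread_create(&tid, NULL, signer, b) != 0) {
                free(b->path);          /* free(NULL) is a no-op */
                free(b);
                return -1;
        }
        return 0;                       /* on success, the thread owns b */
}

int main(void)
{
        return spawn_signer("/pavanbrick6/br1") == 0 ? 0 : 1;
}

(Compile with: cc -pthread sketch.c)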

Comment 8 Anand Avati 2015-05-20 10:04:05 UTC
REVIEW: http://review.gluster.org/10763 (features/bitrot: refactor brick connection logic) posted (#6) for review on master by Venky Shankar (vshankar)

Comment 9 Anand Avati 2015-05-20 10:43:28 UTC
REVIEW: http://review.gluster.org/10763 (features/bitrot: refactor brick connection logic) posted (#7) for review on master by Raghavendra Bhat (raghavendra)

Comment 10 Anand Avati 2015-05-28 06:34:40 UTC
REVIEW: http://review.gluster.org/10763 (features/bitrot: refactor brick connection logic) posted (#8) for review on master by Venky Shankar (vshankar)

Comment 11 Anand Avati 2015-05-28 15:40:00 UTC
COMMIT: http://review.gluster.org/10763 committed in master by Vijay Bellur (vbellur) 
------
commit 19818254fa7d2b227d212e0a62c37846aef3fc24
Author: Raghavendra Bhat <raghavendra>
Date:   Wed May 13 14:35:47 2015 +0530

    features/bitrot: refactor brick connection logic
    
    Brick connection was bloated (and not implemented efficiently) with
    calls that did not need to be made under the lock. As a result,
    critical code paths were starved of the lock, which did not scale as
    the number of bricks per volume increased (add-brick and the like).
    
    Also, this patch cleans up some of the weird reconnection logic that
    added to the resource starvation, and reins in the uncontrolled growth
    of log files.
    
    Change-Id: I05e737f2a9742944a4a543327d167de2489236a4
    BUG: 1207134
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Signed-off-by: Venky Shankar <vshankar>
    Reviewed-on: http://review.gluster.org/10763
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: NetBSD Build System
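
The committed refactor is, at its core, a lock-scoping change. A minimal standalone sketch of the pattern the commit message describes, with entirely hypothetical names (not the actual bitd code): doing slow per-brick connection setup while holding a shared lock serializes the bricks and starves every other user of the lock, whereas narrowing the critical section to the shared-state update lets the bricks connect in parallel.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t bricks_lock = PTHREAD_MUTEX_INITIALIZER;
static int connected_bricks;            /* shared state guarded by bricks_lock */

/* Stand-in for slow, blocking RPC connection setup to one brick. */
static void connect_brick(void)
{
        sleep(1);
}

/* Before: slow setup runs inside the critical section, so six bricks
 * serialize to ~6 seconds and any other thread needing the lock starves. */
static void *brick_worker_contended(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&bricks_lock);
        connect_brick();
        connected_bricks++;
        pthread_mutex_unlock(&bricks_lock);
        return NULL;
}

/* After: slow setup happens outside the lock; the critical section only
 * updates shared state, so all six bricks connect in parallel (~1 second). */
static void *brick_worker(void *arg)
{
        (void)arg;
        connect_brick();
        pthread_mutex_lock(&bricks_lock);
        connected_bricks++;
        pthread_mutex_unlock(&bricks_lock);
        return NULL;
}

int main(void)
{
        pthread_t tids[6];
        int i;

        for (i = 0; i < 6; i++)
                pthread_create(&tids[i], NULL, brick_worker, NULL);
        for (i = 0; i < 6; i++)
                pthread_join(tids[i], NULL);
        printf("connected %d bricks\n", connected_bricks);
        return 0;
}

main drives the refactored worker; swap in brick_worker_contended to see the serialized behavior (compile with: cc -pthread sketch.c).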

Comment 12 Niels de Vos 2015-06-02 08:20:15 UTC
The required changes to fix this bug have not made it into glusterfs-3.7.1. This bug is now getting tracked for glusterfs-3.7.2.

Comment 13 Niels de Vos 2015-06-20 10:07:56 UTC
Unfortunately glusterfs-3.7.2 did not contain a code change that was associated with this bug report. This bug is now proposed to be a blocker for glusterfs-3.7.3.

Comment 14 Kaushal 2015-07-27 18:26:19 UTC
This bug has been filed on the master branch and the change has already been merged into master. This bug shouldn't be blocking 3.7.x releases. Removing the `glusterfs-3.7.3` block.

Comment 15 Nagaprasad Sathyanarayana 2015-10-25 15:17:24 UTC
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ has been fixed in a GlusterFS release and closed. Hence this mainline BZ is being closed as well.

Comment 16 Niels de Vos 2016-06-16 12:45:56 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user