Bug 1215129 - After adding/removing the bricks to the volume, bitrot is crawling bricks of other bitrot enabled volumes.
Summary: After adding/removing the bricks to the volume, bitrot is crawling bricks of other bitrot enabled volumes.
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: GlusterFS
Classification: Community
Component: bitrot
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-24 11:15 UTC by RajeshReddy
Modified: 2019-05-07 15:25 UTC
CC: 6 users

Fixed In Version: glusterfs-6.x
Clone Of:
Environment:
Last Closed: 2019-05-07 15:25:13 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:
Flags: rmekala: needinfo+



Description RajeshReddy 2015-04-24 11:15:19 UTC
Description of problem:
============================
After adding/removing bricks to a volume, bitrot crawls not only that volume's bricks but also the bricks of the other bitrot-enabled volumes.


Version-Release number of selected component (if applicable):
================================================================
glusterfs-server-3.7dev-0.1009.git8b987be.el6.x86_64


How reproducible:
======================
Always


Steps to Reproduce:
1. Create a volume with one brick, enable bitrot on it, and create a few files on the volume.
2. Add a new brick to the existing volume. After the add-brick, bitrot starts crawling bricks not only of this volume but ends up crawling the bricks of the other bitrot-enabled volumes as well (see the command sketch below).
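
A minimal reproduction sketch using the gluster CLI. The volume names, brick paths, and mount point are hypothetical examples, and the bitrot subcommands assume a 3.7-era or newer build:

  # Two single-brick volumes on the same node, both with bitrot enabled
  gluster volume create demo-vol1 server1:/bricks/demo-vol1/brick1
  gluster volume create demo-vol2 server1:/bricks/demo-vol2/brick1
  gluster volume start demo-vol1
  gluster volume start demo-vol2
  gluster volume bitrot demo-vol1 enable
  gluster volume bitrot demo-vol2 enable

  # Create a few files on demo-vol1 through a fuse mount
  mount -t glusterfs server1:/demo-vol1 /mnt/demo-vol1
  for i in $(seq 1 10); do dd if=/dev/urandom of=/mnt/demo-vol1/file$i bs=1M count=1; done

  # Trigger the graph change; afterwards the scrubber is observed
  # crawling demo-vol2's brick as well, not just demo-vol1's
  gluster volume add-brick demo-vol1 server1:/bricks/demo-vol1/brick2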

Actual results:
After add-brick/remove-brick, bitrot crawls the bricks of all bitrot-enabled volumes on the node, not just the volume that changed.

Expected results:
Crawling bricks is a heavyweight operation, so the crawl should be restricted to the volume whose bricks changed (see the workaround sketch below).
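
Until the crawl is scoped per volume, its cost can at least be bounded with the scrub tunables. A workaround sketch, reusing the hypothetical demo-vol1 from above; "lazy" and "monthly" are just example values:

  # Slow the scrubber down and crawl less often
  gluster volume bitrot demo-vol1 scrub-throttle lazy
  gluster volume bitrot demo-vol1 scrub-frequency monthly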

Additional info:

Comment 1 Venky Shankar 2015-04-28 12:34:48 UTC
BitRot daemons use the same infrastructure in glusterd for spawn and tear-down as the other per-volume daemons, hence they have the same side effect of being restarted when the graph changes.

I'm not sure if we can change this behavior atm.

Gaurav, any comments?

Comment 2 Gaurav Kumar Garg 2015-05-18 10:10:22 UTC
It's not a bug; this is expected behaviour. If you do an add-brick/remove-brick operation, the graph is regenerated and glusterd restarts the bitd and scrub daemons. This is necessary: if the scrubber were still crawling a brick on which the user did remove-brick, it would keep acting on the removed brick after the graph regeneration and cause a segmentation fault.
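
The restart described here can be observed from the shell (a sketch; the volume and brick names are the hypothetical ones from the reproduction steps, and bitd shows up as a glusterfs process with "bitd" in its volfile-id):

  pgrep -f bitd    # note the bitd PID
  gluster volume add-brick demo-vol1 server1:/bricks/demo-vol1/brick2
  pgrep -f bitd    # new PID: glusterd tore bitd down and respawned it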

Comment 3 Gaurav Kumar Garg 2015-05-27 09:22:43 UTC
After adding or removing a brick, the bitd/scrub daemon service is restarted, so it starts crawling bricks of the same volume as well as of the other bitrot-enabled volumes on that node in the cluster. I don't think it's a bug. Rajesh, could you check whether the other daemons behave the same way on add-brick/remove-brick?

Comment 4 Venky Shankar 2015-05-29 03:24:19 UTC
This is the behavior with the other daemons as well. As of now it looks unlikely that something can be done within a short span of time.

Naga/Vijay: Would it make sense to have this bz as an RFE/FutureFeature?

Comment 5 RamaKasturi 2016-01-21 06:48:19 UTC
I observed similar behaviour when I stop and start one of the volumes in the cluster. With four volumes on a system, stopping and starting one of them causes bitrot to crawl the bricks of the other bitrot-enabled volumes too.
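
The stop/start variant can be checked the same way (a sketch with the hypothetical volumes from above; gluster volume status lists the Bitrot and Scrubber daemon processes for bitrot-enabled volumes):

  gluster volume stop demo-vol1
  gluster volume start demo-vol1
  # bitd/scrubber restart here; the subsequent crawl also covers demo-vol2
  gluster volume status demo-vol2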

Comment 8 Amar Tumballi 2019-05-07 15:25:13 UTC
This is not seen in the latest releases (glusterfs-6.x). Please reopen if it is seen again.

