Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1447390

Summary: Brick Multiplexing :- .trashcan not able to heal after replace brick
Product: [Community] GlusterFS
Reporter: Jiffin <jthottan>
Component: core
Assignee: Jiffin <jthottan>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: mainline
CC: amukherj, anoopcs, bugs, ksandha, nchilaka, rhinduja, rhs-bugs, rkavunga, storage-qa-internal
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: All
OS: Linux
Whiteboard: brick-multiplexing
Fixed In Version: glusterfs-3.12.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1443939
Environment:
Last Closed: 2018-03-24 07:20:26 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Jiffin 2017-05-02 15:21:20 UTC
+++ This bug was initially created as a clone of Bug #1443939 +++

Description of problem:
The self-heal daemon is not able to heal the .trashcan directory after a brick is replaced.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
100%

Steps to Reproduce:
1. Create 100 files on an arbiter volume, 3 x (2 + 1).
2. Replace brick b1 with a new brick, bnew.
3. Start renaming the files.
4. .trashcan remains unhealed; / is shown as possibly undergoing heal.
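
A rough command sketch of the steps above (the mount point /mnt/testvol and the new brick path are placeholders assumed for illustration, not taken from the report; source brick path is from the volume info later in this bug):

# 1. create files from a client mount (mount point is an assumption)
for i in $(seq 1 100); do echo data > /mnt/testvol/file$i; done

# 2. replace brick b1 with a new brick (new brick path is a placeholder)
gluster volume replace-brick testvol 10.70.47.60:/bricks/brick3/b1 \
    10.70.47.60:/bricks/brick3/bnew commit force

# 3. rename the files while the heal is in progress
for i in $(seq 1 100); do mv /mnt/testvol/file$i /mnt/testvol/file$i.renamed; done

# 4. check which entries are still pending heal
gluster volume heal testvol info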

Actual results:
no files should be left unhealed
.trashcan should be healed.
.trashcan is not supported downstream

Expected results:
There should be no entries listed in the heal info output.

--- Additional comment from Karan Sandha on 2017-04-20 05:56:48 EDT ---

# gluster v info
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: bc5a0c88-7ca7-48f6-8092-70c0fe5e8846
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.47.60:/bricks/brick3/b1
Brick2: 10.70.46.218:/bricks/brick0/b1
Brick3: 10.70.47.61:/bricks/brick0/b1 (arbiter)
Brick4: 10.70.46.218:/bricks/brick2/b2
Brick5: 10.70.47.61:/bricks/brick2/b2
Brick6: 10.70.47.60:/bricks/brick2/b2 (arbiter)
Brick7: 10.70.47.60:/bricks/brick1/b3
Brick8: 10.70.46.218:/bricks/brick1/b3
Brick9: 10.70.47.61:/bricks/brick1/b3 (arbiter)
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
nfs.disable: off
cluster.brick-multiplex: on
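
For context, a hedged sketch of how a volume with this layout could be created and brick multiplexing enabled; the command sequence is an assumption for illustration, only the brick paths come from the info output above:

gluster volume create testvol replica 3 arbiter 1 \
    10.70.47.60:/bricks/brick3/b1 10.70.46.218:/bricks/brick0/b1 10.70.47.61:/bricks/brick0/b1 \
    10.70.46.218:/bricks/brick2/b2 10.70.47.61:/bricks/brick2/b2 10.70.47.60:/bricks/brick2/b2 \
    10.70.47.60:/bricks/brick1/b3 10.70.46.218:/bricks/brick1/b3 10.70.47.61:/bricks/brick1/b3
# brick multiplexing is a cluster-wide option
gluster volume set all cluster.brick-multiplex on
gluster volume start testvol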

--- Additional comment from Karan Sandha on 2017-04-21 02:36:52 EDT ---

Atin,

When I replaced the brick, only .trashcan and / were left to heal; all the other directories and files were healed. It is confusing to the user why this directory is not getting healed. It is 100% reproducible.

Output of the heal info command:

[root@K1 /]# gluster v heal testvol info
Brick 10.70.47.60:/bricks/brick3/b1
/ - Possibly undergoing heal

Status: Connected
Number of entries: 1

Brick 10.70.46.218:/bricks/brick0/b1
/ - Possibly undergoing heal

/.trashcan 
Status: Connected
Number of entries: 2

Brick 10.70.47.61:/bricks/brick0/b1
/ - Possibly undergoing heal

/.trashcan 
Status: Connected
Number of entries: 2

Brick 10.70.46.218:/bricks/brick2/b2
Status: Connected
Number of entries: 0

Brick 10.70.47.61:/bricks/brick2/b2
Status: Connected
Number of entries: 0

Brick 10.70.47.60:/bricks/brick2/b2
Status: Connected
Number of entries: 0

Brick 10.70.47.60:/bricks/brick1/b3
Status: Connected
Number of entries: 0

Brick 10.70.46.218:/bricks/brick1/b3
Status: Connected
Number of entries: 0

Brick 10.70.47.61:/bricks/brick1/b3
Status: Connected
Number of entries: 0