Bug 1447390 - Brick Multiplexing: .trashcan not able to heal after replace brick
Summary: Brick Multiplexing: .trashcan not able to heal after replace brick
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Jiffin
QA Contact:
URL:
Whiteboard: brick-multiplexing
Depends On:
Blocks:
 
Reported: 2017-05-02 15:21 UTC by Jiffin
Modified: 2018-03-24 07:20 UTC
CC: 9 users

Fixed In Version: glusterfs-3.12.0
Clone Of: 1443939
Environment:
Last Closed: 2018-03-24 07:20:26 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1447389 0 unspecified CLOSED Brick Multiplexing: seeing Input/Output Error for .trashcan 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1453977 0 urgent CLOSED Brick Multiplexing: Deleting brick directories of the base volume must gracefully detach from glusterfsd without impacti... 2021-02-22 00:41:40 UTC

Internal Links: 1447389 1453977

Description Jiffin 2017-05-02 15:21:20 UTC
+++ This bug was initially created as a clone of Bug #1443939 +++

Description of problem:
The self-heal daemon is unable to heal .trashcan after a brick is replaced.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
100%

Steps to Reproduce:
1. Create 100 files on a 3 x (2 + 1) arbiter volume.
2. Replace brick b1 with a new brick (bnew).
3. Start renaming the files.
4. .trashcan remains unhealed; / is possibly undergoing heal (see the command sketch below).
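
A minimal sketch of these steps as gluster CLI commands, assuming the volume layout from the gluster v info output below; the mount point, file names, and the replacement brick path (bnew) are illustrative, not taken from the report:

]# gluster volume set all cluster.brick-multiplex on
]# gluster volume create testvol replica 3 arbiter 1 \
     10.70.47.60:/bricks/brick3/b1 10.70.46.218:/bricks/brick0/b1 10.70.47.61:/bricks/brick0/b1 \
     10.70.46.218:/bricks/brick2/b2 10.70.47.61:/bricks/brick2/b2 10.70.47.60:/bricks/brick2/b2 \
     10.70.47.60:/bricks/brick1/b3 10.70.46.218:/bricks/brick1/b3 10.70.47.61:/bricks/brick1/b3
]# gluster volume start testvol
]# mount -t glusterfs 10.70.47.60:/testvol /mnt
]# for i in $(seq 1 100); do touch /mnt/file$i; done                # step 1
]# gluster volume replace-brick testvol 10.70.47.60:/bricks/brick3/b1 \
     10.70.47.60:/bricks/brick3/bnew commit force                   # step 2
]# for i in $(seq 1 100); do mv /mnt/file$i /mnt/file$i.new; done   # step 3
]# gluster volume heal testvol info                                 # step 4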

Actual results:
.trashcan is left unhealed after the brick is replaced; / shows as possibly undergoing heal.
(Note: .trashcan is not supported downstream.)

Expected results:
No files should be left unhealed: .trashcan should be healed, and the heal info command should report no entries.

--- Additional comment from Karan Sandha on 2017-04-20 05:56:48 EDT ---

]# gluster v info
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: bc5a0c88-7ca7-48f6-8092-70c0fe5e8846
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.47.60:/bricks/brick3/b1
Brick2: 10.70.46.218:/bricks/brick0/b1
Brick3: 10.70.47.61:/bricks/brick0/b1 (arbiter)
Brick4: 10.70.46.218:/bricks/brick2/b2
Brick5: 10.70.47.61:/bricks/brick2/b2
Brick6: 10.70.47.60:/bricks/brick2/b2 (arbiter)
Brick7: 10.70.47.60:/bricks/brick1/b3
Brick8: 10.70.46.218:/bricks/brick1/b3
Brick9: 10.70.47.61:/bricks/brick1/b3 (arbiter)
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
transport.address-family: inet
nfs.disable: off
cluster.brick-multiplex: on

--- Additional comment from Karan Sandha on 2017-04-21 02:36:52 EDT ---

Atin,

When I replaced the brick, only .trashcan and / were left to heal; all other directories and files were healed. This will confuse users, since it is not obvious why this directory isn't getting healed. It's 100% reproducible.
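
As a hedged aside (not stated in the original report, so whether it was attempted here is unknown), a heal can also be retriggered manually with the standard CLI before re-checking heal info:

]# gluster volume heal testvol          # trigger an index heal
]# gluster volume heal testvol full     # trigger a full heal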

Output of the heal info command:

[root@K1 /]# gluster v heal testvol info
Brick 10.70.47.60:/bricks/brick3/b1
/ - Possibly undergoing heal

Status: Connected
Number of entries: 1

Brick 10.70.46.218:/bricks/brick0/b1
/ - Possibly undergoing heal

/.trashcan 
Status: Connected
Number of entries: 2

Brick 10.70.47.61:/bricks/brick0/b1
/ - Possibly undergoing heal

/.trashcan 
Status: Connected
Number of entries: 2

Brick 10.70.46.218:/bricks/brick2/b2
Status: Connected
Number of entries: 0

Brick 10.70.47.61:/bricks/brick2/b2
Status: Connected
Number of entries: 0

Brick 10.70.47.60:/bricks/brick2/b2
Status: Connected
Number of entries: 0

Brick 10.70.47.60:/bricks/brick1/b3
Status: Connected
Number of entries: 0

Brick 10.70.46.218:/bricks/brick1/b3
Status: Connected
Number of entries: 0

Brick 10.70.47.61:/bricks/brick1/b3
Status: Connected
Number of entries: 0

