Bug 1624329 - Prevent hangs while increasing replica-count/replace-brick for directory hierarchy
Summary: Prevent hangs while increasing replica-count/replace-brick for directory hierarchy
Keywords:
Status: CLOSED DUPLICATE of bug 1619357
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1622821 1625575 1625588
Blocks: 1619357
 
Reported: 2018-08-31 08:39 UTC by Ravishankar N
Modified: 2018-11-06 06:13 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Metadata heals and name heals triggered from mounts wait for locks to be acquired before mount operations such as lookup can complete.
Consequence: When operations across several mounts all try to heal the same directory, operations on those mounts slow down for seconds, making it appear like a hang.
Fix: In the common cases, heals are delegated to the self-heal daemon (shd) instead of being triggered from mounts, so mount operations do not need to wait for heal to complete.
Result: Hangs are not observed.
Clone Of: 1622821
Environment:
Last Closed: 2018-11-05 17:27:22 UTC
Embargoed:



Description Ravishankar N 2018-08-31 08:39:18 UTC
+++ This bug was initially created as a clone of Bug #1622821 +++

Description of problem:
Problem Statement:

When a brick is replaced or the replica count is increased, access patterns on the mounts that need to perform lookups on multiple directories to reach a leaf file/directory can introduce hangs because of name-heal and metadata heal.

Here is a simulation of an access pattern that shows the problem.

#!/bin/bash
MAX_MOUNTS=40
DEPTH=4
glusterd

# Create and start a 2x2 distributed-replicate volume
gluster --mode=script --wignore volume create r2 replica 2 localhost.localdomain:/home/gfs/r2_0 localhost.localdomain:/home/gfs/r2_1 localhost.localdomain:/home/gfs/r2_2 localhost.localdomain:/home/gfs/r2_3
gluster --mode=script volume start r2

# Mount the volume on MAX_MOUNTS mount points (one per simulated client)
for i in $(seq 1 $MAX_MOUNTS); do mkdir /mnt/$i; mount -t glusterfs localhost.localdomain:/r2 /mnt/$i; done

# Build a directory hierarchy of depth DEPTH and create one file per client in it
depth_str=""
for i in $(seq 1 $DEPTH); do depth_str="1/$depth_str"; done
mkdir -p /mnt/1/$depth_str
for i in $(seq 1 $MAX_MOUNTS); do touch /mnt/1/$depth_str/$i; done

# Convert the volume to replica 3 by adding one brick per replica set, then start profiling
gluster v add-brick r2 replica 3 localhost.localdomain:/home/gfs/r2_{4,5} force
sleep 5 #for graph switch to complete
gluster volume profile r2 start
gluster volume profile r2 info clear

# Touch new files in the same directory from all mounts in parallel and time each touch
for i in $(seq 1 $MAX_MOUNTS); do time touch /mnt/$i/$depth_str/${i}_1 & done; wait
sleep 2
gluster volume profile r2 info incremental

This creates 40 clients that access different files in the same directory hierarchy, increasing the chances of metadata-heals and name-heals. I modified the afr code to print log messages when metadata and name heals are launched. What I observed is that lookups on the directories from different clients all trigger metadata and name heals, so the lookups get serialized, and the last mount to get the necessary lock to perform heal took more than a second to acquire the metadata lock and, similarly, more than a second to acquire the lock for name heal. This was the worst case I saw after running the script above multiple times, sometimes enabling/disabling the different heals (a sketch of that toggling follows below). The more activity there is on the mounts, the more users will perceive the mount as unusable because of these hangs. It also leads to timeouts with applications, such as web servers, that expect a response within a bounded time.
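
As an illustration of that toggling, here is a minimal sketch using the standard AFR volume options (an addition for clarity, not the exact commands used in the original runs; it assumes the volume name r2 from the script above):

# Toggle client-side self-heals between runs of the script to compare their impact
gluster volume set r2 cluster.metadata-self-heal off
gluster volume set r2 cluster.entry-self-heal off
gluster volume set r2 cluster.data-self-heal off
# Keep the self-heal daemon doing the heals in the background
gluster volume set r2 cluster.self-heal-daemon on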
Outputs:
It takes more than 2 seconds to do 'touch'

    real 0m2.042s
    user 0m0.001s
    sys 0m0.002s

    root@localhost - /var/log/glusterfs
    14:42:40 :) ⚡ grep "performing metadata selfheal" mnt-* | awk '{print $12}' | sort | uniq -c
    8 805420ef-fff1-4b0a-9430-b145124a756b
    41 ab2c7d71-735d-4341-b6ce-6158ebab210a
    14 f0314b17-0fe4-45df-ba13-ea42066ce063
    Profile info (%-latency  avg-latency  min-latency  max-latency  calls  fop):
    61.15    106739.19 us    42.00 us    1358447.00 us    278    INODELK

...

    14:20:28 :) ⚡ grep "completed name-heal" mnt-* | awk '{print $8}' | sort | uniq -c
    44 00000000-0000-0000-0000-000000000001/1
    5 1a43321e-5060-4185-9e49-de889f4f5071/1
    19 999feb5b-118c-42ff-973b-0f95c6dfb927/1
    11 aaa924bf-2891-4eba-b923-2ea3877061c4/1
    Profile info (%-latency  avg-latency  min-latency  max-latency  calls  fop):
    54.23    41022.43 us    16.00 us    1239715.00 us    551    ENTRYLK

Potential Solution:
Entry self-heal and data self-heal don't have this problem because lookups don't wait for them to complete. That is not the case with metadata heal and name-heal: lookup waits for these to complete. To address the problem, we need a way to serve lookup without blocking on heals except in very rare situations.

Proposed changes to name-heal:
For 2-way replication we can't disable name heal: when a filename exists on one replica and not on the other, we must determine which of the two bricks is the source before we can respond to the application. But in the case of replica-3 and replica-2 + arbiter, if two bricks have the name with the same gfid and one brick doesn't have the file, we don't need to block lookup until name-heal completes; we can trigger it in the background. Similarly, when lookup finds that the name is missing on multiple bricks and the readables for the parent inode say that only the bricks without the name are the sources, name-heal can also be moved to the background. The same idea can be applied to thin-arbiter.
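
To make the name-heal trigger concrete, a small sketch against the replica-3 volume from the script above (an illustration, not part of the original report; it assumes the script's variables are still in scope and that the pgrep pattern matches the brick process command line):

# Take one brick down, create a name from a mount, then bring the brick back
kill $(pgrep -f 'glusterfsd.*r2_5')            # assumption: brick path appears in the process command line
touch /mnt/1/${depth_str}missing_on_one_brick  # the new name exists only on the surviving bricks
gluster volume start r2 force                  # restart the killed brick
# A lookup from another client now finds the name missing on one brick; per the
# proposal above, replica-3 can heal the name in the background instead of blocking
stat /mnt/2/${depth_str}missing_on_one_brick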

Proposed changes to metadata heal:
The only case that requires metadata heal to happen in the foreground is when the metadata mismatches without any pending markers on the inode on the bricks; the remaining cases don't need to do metadata heal in the foreground.
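
To make the two metadata-heal cases concrete, a small hypothetical sketch reusing the paths from the script above (illustration only):

# Case with pending markers: metadata changed through a mount while one brick is down
# (as in the previous sketch). After the brick returns, lookup can leave this heal to shd.
chmod 700 /mnt/1/$depth_str
# Case without pending markers: metadata altered directly on a brick backend (never do
# this on a real deployment). The bricks now disagree with no pending xattrs, so lookup
# still has to heal in the foreground before answering.
chmod 777 /home/gfs/r2_0/1/1/1/1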

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Always
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Worker Ant on 2018-08-28 02:41:40 EDT ---

REVIEW: https://review.gluster.org/21011 (cluster/afr: Delegate metadata heal with pending xattrs to SHD) posted (#3) for review on master by Pranith Kumar Karampuri

--- Additional comment from Worker Ant on 2018-08-28 23:57:26 EDT ---

COMMIT: https://review.gluster.org/21011 committed in master by "Pranith Kumar Karampuri" <pkarampu> with a commit message- cluster/afr: Delegate metadata heal with pending xattrs to SHD

Problem:
When metadata-self-heal is triggered on the mount, it blocks
lookup until the metadata-self-heal completes. That can lead
to hangs when a lot of clients access a directory that needs
metadata heal and all of them trigger heals, each waiting for
the other clients to complete their heal.

Fix:
Trigger a metadata heal that can block lookup only when the heal
is needed but the pending xattrs are not set. This is the only
case where, without a heal, different clients may serve different
metadata, which should be avoided.

Updates bz#1622821
Change-Id: I6089e9fda0770a83fb287941b229c882711f4e66
Signed-off-by: Pranith Kumar K <pkarampu>
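
For context (an addition, not part of the quoted commit message): the pending xattrs the fix keys off can be inspected directly on a brick backend with getfattr, for example on the directory created by the reproduction script (brick path taken from that script):

getfattr -d -m . -e hex /home/gfs/r2_0/1/1/1/1
# Look for trusted.afr.r2-client-* entries; non-zero values are the pending markers
# that let lookup delegate the metadata heal to shd instead of blocking.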

--- Additional comment from Worker Ant on 2018-08-30 02:14:30 EDT ---

REVIEW: https://review.gluster.org/21040 (cluster/afr: Delegate name-heal when possible) posted (#1) for review on master by Pranith Kumar Karampuri

Comment 10 Pranith Kumar K 2018-09-21 05:28:48 UTC
The issues identified in https://bugzilla.redhat.com/show_bug.cgi?id=1619357 from the customer's logs are going to be fixed as part of this bz. We have yet to have a call with the customer to find out whether any more issues remain even after this fix.

Comment 13 RHEL Program Management 2018-10-31 03:11:53 UTC
Quality Engineering Management has reviewed and declined this request.
You may appeal this decision by reopening this request.

Comment 14 Amar Tumballi 2018-10-31 04:06:46 UTC
Marking it as POST again, because the bug got closed due to the qa_ack NACK.

