Bug 764822 (GLUSTER-3090) - data self-heal is not happening with 3.2.1 rdma set-up
Summary: data self-heal is not happening with 3.2.1 rdma set-up
Keywords:
Status: CLOSED NOTABUG
Alias: GLUSTER-3090
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-06-27 11:24 UTC by M S Vishwanath Bhat
Modified: 2016-06-01 01:55 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description M S Vishwanath Bhat 2011-06-27 11:24:11 UTC
I created a 2-way replicate volume and created 1 million 10K files on the mount point with dd. When the creation of files was about halfway through (just around 5000 files) I brought the first replica child down. After all 1 million files had been created, I brought the downed server back up and ran 'find .' on the mount point to trigger self-heal. After a while I saw that only the entries were self-healed (all 1 million files were present on the brick), but the total size was only around 70MB while the original data is around 16GB.

I was able to hit the problem twice. The transport type was 'rdma'. 

The logs were too big to even read because I had set the log level to DEBUG. I will try to reproduce this with a smaller data set.
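
For reference, here is a minimal sketch of the reproduction steps above. The volume name, hostnames, brick paths and mount point are placeholders, not taken from the original setup:

  # create and start a 2-way replicate volume over rdma (names are examples)
  gluster volume create testvol replica 2 transport rdma server1:/export/brick1 server2:/export/brick2
  gluster volume start testvol

  # mount the volume (an rdma-only volume may need a transport mount option) and create the 10K files with dd
  mount -t glusterfs server1:/testvol /mnt/testvol
  cd /mnt/testvol
  for i in $(seq 1 1000000); do dd if=/dev/zero of=file$i bs=10k count=1 2>/dev/null; done

  # while the loop runs, take the first brick down (e.g. kill its glusterfsd on server1);
  # after the loop finishes, bring the brick back and trigger self-heal from the mount
  find . > /dev/null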

Comment 1 Pranith Kumar K 2011-07-01 06:18:35 UTC
(In reply to comment #0)
> I created a 2-way replicate volume and created 1 million 10K files on the mount
> point with dd. When the creation of files was about halfway through (just
> around 5000 files) I brought the first replica child down. After all 1 million
> files had been created, I brought the downed server back up and ran 'find .' on
> the mount point to trigger self-heal. After a while I saw that only the entries
> were self-healed (all 1 million files were present on the brick), but the total
> size was only around 70MB while the original data is around 16GB.
> 
> I was able to hit the problem twice. The transport type was 'rdma'. 
> 
> The logs were too big to even read because I had set the log level to DEBUG. I
> will try to reproduce this with a smaller data set.

Vishwa tried the same test with 50,000 files and the same thing happened. However, when the command "find <mount> | xargs ls -l" was executed on the mount point, everything healed fine. Please verify that the self-heal is triggered using the command "find <gluster-mount> -noleaf -print0 | xargs --null stat >/dev/null".
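
For clarity, the two triggers side by side (the mount path is a placeholder). The presumption here is that a bare find can skip per-file lookups thanks to the leaf optimisation, so only entries get healed, while stat-ing every file forces a lookup on each one and lets replicate heal the file contents:

  # entry self-heal only: directory entries are listed, but files may never be looked up individually
  find /mnt/testvol > /dev/null

  # data self-heal: stat every file so each one gets an explicit lookup
  find /mnt/testvol -noleaf -print0 | xargs --null stat > /dev/null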

Comment 2 Raghavendra G 2011-07-11 01:29:12 UTC
MS/Pranith,

Can we close this bug as invalid (since the wrong command was used to trigger self-heal)?

regards,
Raghavendra.

