Bug 764382 (GLUSTER-2650) - replace brick failed in distributed-replicated setup
Summary: replace brick failed in distributed-replicated setup
Keywords:
Status: CLOSED DUPLICATE of bug 764221
Alias: GLUSTER-2650
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: krishnan parthasarathi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-04-01 12:47 UTC by M S Vishwanath Bhat
Modified: 2016-06-01 01:57 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments
glusterd logs (652.73 KB, text/x-log)
2011-04-01 09:47 UTC, M S Vishwanath Bhat
glusterd log for machine 223 (49.86 KB, text/x-log)
2011-05-19 09:12 UTC, M S Vishwanath Bhat
glusterd log for machine 224 (470.46 KB, text/x-log)
2011-05-19 09:12 UTC, M S Vishwanath Bhat

Description M S Vishwanath Bhat 2011-04-01 12:47:30 UTC
In a 2x2 distributed-replicated setup, replace-brick fails with the following output.
[root@FC-4 Bricks]# gluster volume replace-brick hosdu 10.1.12.127:/data/vishwa/brick3/ 10.1.12.126:/data/vishwa/brick3 start
replace-brick started successfully
[root@FC-4 Bricks]# gluster volume replace-brick hosdu 10.1.12.127:/data/vishwa/brick3/ 10.1.12.126:/data/vishwa/brick3 status
10.1.12.126, is not connected at the moment

Both machines were connected before the replace-brick; after the replace-brick they show as disconnected.

I have attached the glusterd log.

Comment 1 Vijay Bellur 2011-04-06 07:49:58 UTC
Can you please confirm if this still happens?

Comment 2 M S Vishwanath Bhat 2011-04-06 08:55:30 UTC
Yes, this issue is still present in the 3.2.0beta1 code.

Comment 3 Vijay Bellur 2011-04-07 14:18:15 UTC
test comment

Comment 4 Vijay Bellur 2011-04-07 14:48:26 UTC
test comment2

Comment 5 M S Vishwanath Bhat 2011-05-19 09:12:06 UTC
Created attachment 490

Comment 6 M S Vishwanath Bhat 2011-05-19 09:12:57 UTC
Created attachment 491

Comment 7 M S Vishwanath Bhat 2011-05-19 09:17:00 UTC
With the same setup, the issue still persists with a DEBUG build.

root@gluster-Ubuntu1:/etc/glusterd# /opt/320qa7/sbin/gluster volume info

Volume Name: mattondu
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.224:/data/export5
Brick2: 192.168.1.223:/data/export6
Brick3: 192.168.1.224:/data/export7
Brick4: 192.168.1.223:/data/export8
root@gluster-Ubuntu1:/etc/glusterd# /opt/320qa7/sbin/gluster volume replace-brick mattondu 192.168.1.223:/data/export8/ 192.168.1.224:/data/export9/ start
replace-brick started successfully
root@gluster-Ubuntu1:/etc/glusterd# /opt/320qa7/sbin/gluster volume replace-brick mattondu 192.168.1.223:/data/export8/ 192.168.1.224:/data/export9/ status
Number of files migrated = 0       Current file=  
root@gluster-Ubuntu1:/etc/glusterd# /opt/320qa7/sbin/gluster volume replace-brick mattondu 192.168.1.223:/data/export8/ 192.168.1.224:/data/export9/ status
192.168.1.224, is not connected at the moment

Here glusterd on the 224 machine crashed. I couldn't attach the core file since it's too big (16 MB). I have attached the glusterd logs from both machines.
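When the core file is too large to attach, a text backtrace usually carries the information the developers need. A minimal sketch, assuming the debug build's glusterd binary lives under /opt/320qa7 as in the transcript above and the dumped core is named `core` (both paths are assumptions, adjust to the actual locations):

```shell
# Sketch: pull a full backtrace of every thread out of the core
# instead of attaching the 16 MB core itself.
# /opt/320qa7/sbin/glusterd and ./core are assumed paths.
gdb --batch \
    -ex "set pagination off" \
    -ex "thread apply all bt full" \
    /opt/320qa7/sbin/glusterd ./core > glusterd-backtrace.txt

# The resulting text file compresses well and is small enough
# to attach to the bug.
gzip -9 glusterd-backtrace.txt
```

The `--batch` flag makes gdb run the `-ex` commands and exit without an interactive prompt, so this works in a script or over ssh.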

Comment 8 krishnan parthasarathi 2011-06-09 06:08:48 UTC

*** This bug has been marked as a duplicate of bug 2489 ***

