Bug 763370 (GLUSTER-1638) - 3 replica creation creates only 2 replicas in volfile for nfs
Summary: 3 replica creation creates only 2 replicas in volfile for nfs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-1638
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.1-alpha
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-09-18 08:57 UTC by Shehjar Tikoo
Modified: 2015-12-01 16:45 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: RTP
Mount Type: All
Documentation: ---
CRM:
Verified Versions:


Attachments
Generated volume file (9.22 KB, text/plain)
2010-09-18 05:57 UTC, Shehjar Tikoo

Description Shehjar Tikoo 2010-09-18 05:57:56 UTC
Created attachment 311

Comment 1 Shehjar Tikoo 2010-09-18 08:57:17 UTC
Take 4 servers, on each of which the exports are structured as:

/testdirs/4d-3r-master/disk1
/testdirs/4d-3r-master/disk2
/testdirs/4d-3r-master/disk3

Together these form 12 bricks (4 servers x 3 bricks each).

On qa26, to create a distributed-replicated volume with 3 replicas, I run:

# gluster volume create 4dist-3repl  replica 3 192.168.1.78:/testdirs/4d-3r-master/disk1/ 192.168.1.78:/testdirs/4d-3r-master/disk2/ 192.168.1.78:/testdirs/4d-3r-master/disk3/ 192.168.1.79:/testdirs/4d-3r-master/disk1/ 192.168.1.79:/testdirs/4d-3r-master/disk2/ 192.168.1.79:/testdirs/4d-3r-master/disk3/ 192.168.1.80:/testdirs/4d-3r-master/disk1/ 192.168.1.80:/testdirs/4d-3r-master/disk2/ 192.168.1.80:/testdirs/4d-3r-master/disk3/ 192.168.1.77:/testdirs/4d-3r-master/disk1/ 192.168.1.77:/testdirs/4d-3r-master/disk2/ 192.168.1.77:/testdirs/4d-3r-master/disk3/

After starting the volume, the generated NFS volume file has the first replicate subvolume defined as:

volume dr-client-0
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.1.77
    option transport.socket.nodelay on
    option remote-subvolume /testdirs/4d-3r-master/disk1
end-volume

volume dr-client-1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.1.78
    option transport.socket.nodelay on
    option remote-subvolume /testdirs/4d-3r-master/disk1
end-volume

volume dr-client-2
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.1.79
    option transport.socket.nodelay on
    option remote-subvolume /testdirs/4d-3r-master/disk1
end-volume

volume dr-client-3
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.1.80
    option transport.socket.nodelay on
    option remote-subvolume /testdirs/4d-3r-master/disk1
end-volume

volume dr-replicate-0
    type cluster/replicate
#   option read-subvolume on
#   option favorite-child on
#   option background-self-heal-count on
#   option data-self-heal on
#   option data-self-heal-algorithm on
#   option data-self-heal-window-size on
#   option metadata-self-heal on
#   option entry-self-heal on
#   option data-change-log on
#   option metadata-change-log on
#   option entry-change-log on
#   option strict-readdir on
    subvolumes dr-client-0 dr-client-1
end-volume
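
For comparison, with replica 3 each replicate set should span three client subvolumes, so the first set would be expected to look roughly as follows (a sketch assuming the same dr-client-N naming convention used above):

volume dr-replicate-0
    type cluster/replicate
    subvolumes dr-client-0 dr-client-1 dr-client-2
end-volume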

See the attached file for the complete volume file. I think we will not need 4 servers to reproduce this; three bricks on a single server may suffice to show the above behaviour, as sketched below.
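
A minimal single-server reproduction sketch (hypothetical volume name, host, and brick paths; the NFS volume file location is an assumption and may vary by version and install prefix):

# create three brick directories on one server
mkdir -p /testdirs/repro/disk1 /testdirs/repro/disk2 /testdirs/repro/disk3

# create and start a replica-3 volume from the three bricks
gluster volume create repro-3repl replica 3 \
    192.168.1.78:/testdirs/repro/disk1 \
    192.168.1.78:/testdirs/repro/disk2 \
    192.168.1.78:/testdirs/repro/disk3
gluster volume start repro-3repl

# check how many client subvolumes the replicate set references in the
# generated NFS volume file (path assumed for a 3.1-era default install)
grep subvolumes /etc/glusterd/nfs/nfs-server.vol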

Comment 2 Amar Tumballi 2010-09-21 08:45:22 UTC
Worked for me on a single system. Will try on more machines and see if the behavior is different.

Comment 3 Raghavendra Bhat 2010-09-23 09:29:01 UTC
Checked on 4 machines; it worked fine.

Comment 4 Shehjar Tikoo 2010-09-23 09:52:40 UTC
I checked it again with 12 bricks and it worked this time. Will re-open if I see it again. Thanks.

