| Summary: | OS X client displays wrong disk size in combination with DHT | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | atepoorthuis |
| Component: | distribute | Assignee: | Anand Avati <aavati> |
| Status: | CLOSED DUPLICATE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 2.0.2 | CC: | chrisw, gluster-bugs, shehjart |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Mac OS | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |

Description
atepoorthuis
2009-07-06 07:06:23 UTC
Overview: The disk size is reported incorrectly on OS X clients when using DHT with 4 subvolumes. Some extra information after experimenting with the distribute sequence: the reported disk size is NOT that of the first brick. My test setup has 4 bricks (2 of 4.5 TB and 2 of 6.3 TB). If I add only 2 or 3 bricks to the dht translator, the disk size is reported correctly regardless of which bricks I add, although the same errors still appear in the logs. As soon as I add a fourth brick, the reported size becomes 5.4 TB, which happens to be the average of the smallest and largest bricks ((4.5 TB + 6.3 TB) / 2) and is also the total capacity divided by the number of subvolumes (21.6 TB / 4).

Steps to reproduce:
- Mount GlusterFS on OS X with 4 subvolumes in the distribute translator.

Actual results:

```
OS X client:   /Users/ate/glusterfs_ip.vol    5.4Ti  1.1Ti  3.2Ti  26%  /mnt/gluster
Debian client: /etc/glusterfs/client_st.vol   22T    1.1T   20T     6%  /mnt/gluster
```

Setup:
- GlusterFS 2.0.2
- Client: Mac OS X 10.5.7 Intel (same issue on PPC and other 10.5 versions) with MacFUSE 2.0.3.2
- Servers: Debian 5.0.2

Client volume file:

```
### file: client-volume.vol
### Add client feature and attach to remote subvolume
volume gfs-001-afr1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.30       # IP address of the remote brick
  option remote-subvolume afr1       # name of the remote volume
end-volume

volume gfs-001-afr2
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.30       # IP address of the remote brick
  option remote-subvolume afr2       # name of the remote volume
end-volume

volume gfs-002-afr1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.32       # IP address of the remote brick
  option remote-subvolume afr1       # name of the remote volume
end-volume

volume gfs-002-afr2
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.32       # IP address of the remote brick
  option remote-subvolume afr2       # name of the remote volume
end-volume

volume bricks
  type cluster/distribute
  subvolumes gfs-001-afr1 gfs-001-afr2 gfs-002-afr1 gfs-002-afr2
end-volume
```

Sample server.vol:

```
### file: server-volume-01a.vol
### Export volume "brick" with the contents of "/home/export" directory.
volume posix1
  type storage/posix                   # POSIX FS translator
  option directory /srv/export/gfs1/   # Export this directory
end-volume

volume posix2
  type storage/posix                   # POSIX FS translator
  option directory /srv/export/gfs2/   # Export this directory
end-volume

volume locks1
  type features/locks
  option mandatory-locks on
  subvolumes posix1
end-volume

volume locks2
  type features/locks
  option mandatory-locks on
  subvolumes posix2
end-volume

volume afr_31_1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.31         # IP address of server2
  option remote-subvolume locks1       # use brick1 on server2
end-volume

volume afr1
  type cluster/replicate
  subvolumes locks1 afr_31_1
  option read-subvolume locks1
end-volume

volume afr_31_2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.31         # IP address of server2
  option remote-subvolume locks2       # use brick1 on server2
end-volume

volume afr2
  type cluster/replicate
  subvolumes locks2 afr_31_2
  option read-subvolume locks2
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  subvolumes afr1 afr2 locks1 locks2   # posix1 posix2
  option auth.addr.afr1.allow *        # Allow access to "brick" volume
  option auth.addr.afr2.allow *        # Allow access to "brick" volume
  option auth.addr.locks1.allow *      # Allow access to "brick" volume
  option auth.addr.locks2.allow *      # Allow access to "brick" volume
end-volume
```
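Not part of the original report, but a small diagnostic may help narrow this down: since the mount goes through FUSE/MacFUSE, the discrepancy could come either from the aggregated block count or from how the block size fields (f_bsize vs. f_frsize) are interpreted on OS X. The sketch below, assuming Python is available on both clients and that the mount point is /mnt/gluster as in the report, simply dumps the raw statvfs fields so the OS X and Debian values can be compared side by side.

```python
#!/usr/bin/env python
# Diagnostic sketch (not from the original report): print the raw statvfs
# fields for the GlusterFS mount so the OS X and Debian numbers can be
# compared directly. A difference in f_bsize/f_frsize with identical
# f_blocks would point at block-size scaling rather than DHT's block-count
# aggregation.
import os
import sys

# Mount point from the report; override on the command line if different.
mount = sys.argv[1] if len(sys.argv) > 1 else "/mnt/gluster"
st = os.statvfs(mount)

for field in ("f_bsize", "f_frsize", "f_blocks", "f_bfree", "f_bavail"):
    print("%-10s %d" % (field, getattr(st, field)))

# Total size the way df computes it: fragment size * total blocks.
print("total bytes: %d" % (st.f_frsize * st.f_blocks))
```

Running this on both the OS X and Debian clients and comparing the output should show whether both see the same f_blocks but differently scaled block sizes, or whether the aggregated block count itself differs between the two FUSE clients.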