Bug 1411899 - DHT doesn't evenly balance files on FreeBSD with ZFS
Summary: DHT doesn't evenly balance files on FreeBSD with ZFS
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.8
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
Depends On: 1356076
Blocks: glusterfs-3.8.10
Reported: 2017-01-10 17:18 UTC by Xavi Hernandez
Modified: 2017-02-21 07:27 UTC (History)
4 users

Fixed In Version: glusterfs-3.8.9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1356076
Last Closed: 2017-02-20 12:33:40 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1425307 0 unspecified CLOSED Fix statvfs for FreeBSD in Python 2021-02-22 00:41:40 UTC

Internal Links: 1425307

Description Xavi Hernandez 2017-01-10 17:18:22 UTC
+++ This bug was initially created as a clone of Bug #1356076 +++

Description of problem:

On a pure distributed volume with one brick on a FreeBSD node using ZFS and the other on a Linux node, DHT puts ten times more data on the FreeBSD node (3 TB vs 30 TB)

Version-Release number of selected component (if applicable): mainline

How reproducible:

Not sure

Steps to Reproduce:
1. Create a distributed volume with two bricks: one on a FreeBSD/ZFS node and the other on a CentOS node
2. Start copying files
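The steps above map to the standard gluster CLI; the hostnames, brick paths, and volume name below are placeholders:

```
gluster volume create testvol freebsd-host:/zpool/brick centos-host:/data/brick
gluster volume start testvol
```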

Actual results:

Almost all files are placed on the FreeBSD node.

Expected results:

Nearly 50% of the files should be placed on each node.

Additional info:

A "gluster volume status detail" command shows a free space on the FreeBSD filesystem much larger than it really is (~256 times bigger). It also fails to report the filesystem type and some other details:

    File System          : N/A
    Device               : N/A
    Mount Options        : N/A
    Inode Size           : N/A
    Disk Space Free      : 2.6PB
    Total Disk Space     : 12.6PB

The real brick size is 45 TB.

A statvfs() call on FreeBSD returns this:

    f_frsize: 512
    f_bsize:  131072

From statvfs() man page on FreeBSD:

    "The statvfs() and fstatvfs() functions fill the structure pointed to by
     buf with garbage.  This garbage will occasionally bear resemblance to
     file system statistics, but portable applications must not depend on
     this.  Applications must pass a pathname or file descriptor which refers
     to a file on the file system in which they are interested."

    "f_frsize   The size in bytes of the minimum unit of allocation on
                this file system.  (This corresponds to the f_bsize
                member of struct statfs.)"

    "f_bsize    The preferred length of I/O requests for files on this
                file system.  (Corresponds to the f_iosize member of
                struct statfs.)"

Gluster probably uses f_bsize as the block size, but on FreeBSD it is the preferred I/O size, not the allocation unit.

As a workaround, disabling 'weighted-rebalance' distributes files evenly between bricks.
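For reference, the workaround corresponds to the cluster.weighted-rebalance volume option (volume name is a placeholder):

```
gluster volume set testvol cluster.weighted-rebalance off
```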

--- Additional comment from Jeff Darcy on 2016-07-13 17:54:25 CEST ---

You're probably right, Xavier.  Unfortunately, Linux and FreeBSD seem to have some fundamental disagreements about what these fields mean, so we'll probably have to add some platform-conditional code in some of the several places that use them.  I also doubt that this is the last problem we'll find in OS-heterogeneous clusters.  :(

--- Additional comment from Xavier Hernandez on 2016-07-14 08:17:25 CEST ---

I agree.

We are using wrapped system calls in many places right now (syscall.h). Maybe we should enforce the usage of these wrappers and place the specific OS code in syscall.c.

For this particular case we could solve the problem simply by setting f_bsize = f_frsize on FreeBSD.
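A sketch of that idea, assuming a wrapped statvfs call in the style of syscall.c (the wrapper name here is illustrative, not the actual Gluster function):

```c
#include <sys/statvfs.h>

/* Illustrative wrapper: normalize FreeBSD's statvfs() semantics so
 * callers can keep treating f_bsize as the allocation block size. */
int sys_statvfs(const char *path, struct statvfs *buf)
{
    int ret = statvfs(path, buf);
#ifdef __FreeBSD__
    if (ret == 0) {
        /* On FreeBSD f_bsize is the preferred I/O size; the real
         * allocation unit is f_frsize. Make them agree so size
         * calculations based on f_bsize stay correct. */
        buf->f_bsize = buf->f_frsize;
    }
#endif
    return ret;
}
```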

Comment 1 Worker Ant 2017-01-13 09:39:05 UTC
REVIEW: http://review.gluster.org/16400 (libglusterfs: fix statvfs in FreeBSD) posted (#1) for review on release-3.8 by Xavier Hernandez (xhernandez@datalab.es)

Comment 2 Worker Ant 2017-01-14 13:43:39 UTC
COMMIT: http://review.gluster.org/16400 committed in release-3.8 by Kaleb KEITHLEY (kkeithle@redhat.com) 
commit 66bd1686e90c339ce140bee55ccc1f195b2c862a
Author: Xavier Hernandez <xhernandez@datalab.es>
Date:   Mon Jan 9 13:10:19 2017 +0100

    libglusterfs: fix statvfs in FreeBSD
    FreeBSD interprets statvfs' f_bsize field in a different way than Linux.
    This fix modifies the value returned by statvfs() on FreeBSD to match
    the expected value by Gluster.
    > Change-Id: I930dab6e895671157238146d333e95874ea28a08
    > BUG: 1356076
    > Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
    > Reviewed-on: http://review.gluster.org/16361
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Change-Id: I68af1b84af47d9714ed3b76513d4d3d5747bcd45
    BUG: 1411899
    Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-on: http://review.gluster.org/16400
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>

Comment 3 Worker Ant 2017-02-16 08:03:23 UTC
REVIEW: https://review.gluster.org/16634 (extras/rebalance.py: Fix statvfs for FreeBSD in python) posted (#1) for review on release-3.8 by Xavier Hernandez (xhernandez@datalab.es)

Comment 4 Niels de Vos 2017-02-20 12:33:40 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.8.9, please open a new bug report.

glusterfs-3.8.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-February/000066.html
[2] https://www.gluster.org/pipermail/gluster-users/
