Bug 1520232
| Summary: | Rebalance fails on NetBSD because fallocate is not implemented | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Susant Kumar Palai <spalai> |
| Component: | distribute | Assignee: | Susant Kumar Palai <spalai> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.13 | CC: | bugs, nbalacha, pkarampu, rgowdapp |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.13.1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1488103 | Environment: | |
| Last Closed: | 2018-01-02 16:03:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1488103 | | |
| Bug Blocks: | 1516691 | | |
Description
Susant Kumar Palai 2017-12-04 05:51:24 UTC
REVIEW: https://review.gluster.org/18914 (cluster/dht: make rebalance use truncate incase) posted (#1) for review on release-3.13 by Susant Palai

COMMIT: https://review.gluster.org/18914 committed in release-3.13 by "Susant Palai" <spalai> with a commit message:

    cluster/dht: make rebalance use truncate incase
    .. the brick file system does not support fallocate.

    > Change-Id: Id76cda2d8bb3b223b779e5e7a34f17c8bfa6283c
    > BUG: 1488103
    > Signed-off-by: Susant Palai <spalai>

    Change-Id: Id76cda2d8bb3b223b779e5e7a34f17c8bfa6283c
    BUG: 1520232
    Signed-off-by: Susant Palai <spalai>

This bug is being closed because a release that should address the reported issue has been made available. If the problem persists with glusterfs-3.13.1, please open a new bug report.

glusterfs-3.13.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-devel/2017-December/054104.html
[2] https://www.gluster.org/pipermail/gluster-users/