Bug 1599998 - When reserve limits are reached, appending to an existing file after a truncate operation results in a hang
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Ravishankar N
QA Contact: Vijay Avuthu
Depends On:
Blocks: 1602241 1503137 1602236 1603056
 
Reported: 2018-07-11 03:40 EDT by Prasad Desala
Modified: 2018-09-16 07:44 EDT
CC List: 12 users

See Also:
Fixed In Version: glusterfs-3.12.2-15
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1602236 1602241
Environment:
Last Closed: 2018-09-04 02:50:20 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
gluster-health-report collected from all servers (7.08 KB, text/plain)
2018-07-11 03:40 EDT, Prasad Desala


External Trackers
Tracker: Red Hat Product Errata  ID: RHSA-2018:2607  Priority: None  Status: None  Summary: None  Last Updated: 2018-09-04 02:51 EDT

Description Prasad Desala 2018-07-11 03:40:30 EDT
Created attachment 1458001 [details]
gluster-health-report collected from all servers

Description of problem:
=======================
When the reserve limit is reached, appending to an existing file after a truncate operation results in a hang.

Version-Release number of selected component (if applicable):
3.12.2-13.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
===================
1) Create a Distributed-Replicate volume and start it.
2) Set storage.reserve to 50 using the command below:
 gluster v set distrep storage.reserve 50
3) FUSE mount the volume on a client.
4) Write data until the reserve limit is reached (df -h on the client will show the mount point as 100% full). I used fallocate to fill the disk quickly; see the consolidated sketch after these steps.
5) Pick an existing file and truncate that file.
truncate -s 0 fallocate_99
The truncate fails with an ENOSPC error.
6) Now try to append some data to the same file.
cat /etc/redhat-release > fallocate_99
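
For convenience, the steps above can be strung together roughly as shown below. This is only an illustrative sketch: the server names, brick paths, the /mnt/distrep mount point and the fallocate sizes are placeholders, and the volume name distrep is taken from step 2.

# create and start a 2x3 Distributed-Replicate volume (brick layout is illustrative)
gluster volume create distrep replica 3 server{1..6}:/bricks/brick1/distrep
gluster volume start distrep
# set the reserve limit to 50 (percent of brick space)
gluster volume set distrep storage.reserve 50
# FUSE mount the volume on the client
mount -t glusterfs server1:/distrep /mnt/distrep
# fill the volume until df -h on the client reports the mount point 100% full
for i in $(seq 1 99); do fallocate -l 1G /mnt/distrep/fallocate_$i || break; done
# truncate an existing file; this is expected to fail with ENOSPC
truncate -s 0 /mnt/distrep/fallocate_1
# appending to the same file is where the hang was observed on the affected build
cat /etc/redhat-release > /mnt/distrep/fallocate_1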

Actual results:
===============
The append hangs; the cat command above never returns.

Expected results:
=================
No hangs.

Additional info:
================
*. Attached the gluster-health-report collected from all servers.
*. Statedump from the client (see the note below on how such a dump is typically generated).
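
For reference, a client-side statedump like the one mentioned above is normally generated by sending SIGUSR1 to the glusterfs FUSE client process, and the dump files typically land under /var/run/gluster; the pid placeholder below is just for illustration.

pgrep -af glusterfs          # identify the FUSE client process for this mount
kill -USR1 <pid_of_client>   # ask the client to write a statedump
ls /var/run/gluster/         # look for the newly created glusterdump.<pid>.dump.* file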
Comment 13 Anand Paladugu 2018-07-20 07:54:41 EDT
I agree that the hung state is not good, but do the logs provide any information about the customer hitting the limits? Is there any other way the customer gets notified of this situation?
Comment 15 nchilaka 2018-07-20 08:58:22 EDT
QA ack is in place.
Comment 17 Vijay Avuthu 2018-08-01 08:45:23 EDT
Update:
============

Build used: glusterfs-3.12.2-15.el7rhgs.x86_64

Scenario:

1) Create a Distributed-Replicate volume and start it.
2) Set the storage.reserve limit to 50, as in the steps to reproduce.
3) FUSE mount the volume on a client.
4) Write data until the reserve limit is reached (df -h on the client shows the mount point as 100% full; fallocate was used to fill the disk quickly).
5) Pick an existing file and truncate that file.
# truncate -s 0 file_1
#

6) Now on the same file try to append some data.

> Did not see any hang while appending data (see also the optional sanity checks after the commands below):

# cat /etc/redhat-release >>file_1
#
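
As an optional aside, a couple of sanity checks that could accompany this verification; the volume name distrep and the /mnt/distrep mount point are assumptions carried over from the original steps to reproduce.

rpm -q glusterfs glusterfs-fuse              # confirm the fixed build (3.12.2-15 or later) is installed on the client
gluster volume get distrep storage.reserve   # confirm the reserve limit is still set to 50
df -h /mnt/distrep                           # check how full the mount currently is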

Changing status to Verified.
Comment 18 errata-xmlrpc 2018-09-04 02:50:20 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
