Bug 1599998

Summary: When reserve limits are reached, an append on an existing file after a truncate operation results in a hang
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Prasad Desala <tdesala>
Component: replicate
Assignee: Ravishankar N <ravishankar>
Status: CLOSED ERRATA
QA Contact: Vijay Avuthu <vavuthu>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.4
CC: amukherj, apaladug, moagrawa, nchilaka, pkarampu, ravishankar, rcyriac, rhs-bugs, sankarshan, sheggodu, storage-qa-internal, vdas
Target Milestone: ---
Target Release: RHGS 3.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.12.2-15
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1602236, 1602241 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:50:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1503137, 1602236, 1602241
Attachments: gluster-health-report collected from all servers (flags: none)

Description Prasad Desala 2018-07-11 07:40:30 UTC
Created attachment 1458001 [details]
gluster-health-report collected from all servers
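
For anyone reproducing this, a hedged sketch of how such a report can be gathered, assuming the gluster-health-report tool (github.com/gluster/gluster-health-report) is installed on each server; the pip package name and output redirection below are assumptions, not taken from this report:

  # Hypothetical collection step, run on each server:
  pip install gluster-health-report                        # assumed package name
  gluster-health-report > /tmp/health-report-$(hostname).txt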

Description of problem:
=======================
When reserve limits are reached, an append on an existing file after a truncate operation results in a hang.

Version-Release number of selected component (if applicable):
3.12.2-13.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
===================
1) Create a Distributed-Replicate volume and start it.
2) Set storage.reserve to 50 using the command below:
 gluster v set distrep storage.reserve 50
3) FUSE mount the volume on a client.
4) Write data until the reserve limit is reached (df -h on the client will show the mount point as 100% full). I used fallocate to quickly fill the disk.
5) Pick an existing file and truncate it:
truncate -s 0 fallocate_99
It will fail with an ENOSPC error.
6) Now try to append some data to the same file (a consolidated reproducer sketch follows these steps):
cat /etc/redhat-release > fallocate_99
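
For convenience, a minimal end-to-end reproducer sketch. The volume name "distrep" matches the command above; the server hostnames, brick paths, mount point, and file sizes are illustrative assumptions, not taken from this report:

  # On one of the servers; hostnames and brick paths are hypothetical.
  gluster volume create distrep replica 3 \
      server{1..3}:/bricks/brick1/distrep server{1..3}:/bricks/brick2/distrep
  gluster volume start distrep
  gluster volume set distrep storage.reserve 50

  # On the client; the mount point is an arbitrary choice.
  mount -t glusterfs server1:/distrep /mnt/distrep
  cd /mnt/distrep
  # Fill the volume until writes fail with ENOSPC (file sizes are arbitrary).
  for i in $(seq 1 99); do fallocate -l 1G fallocate_$i || break; done
  df -h .                                # reports 100% used once the reserve limit is hit
  truncate -s 0 fallocate_1              # pick any existing file; fails with ENOSPC
  cat /etc/redhat-release > fallocate_1  # on the affected build, this append hangs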

Actual results:
===============
The append hangs; the cat command hung here.

Expected results:
=================
No hangs.

Additional info:
================
* Attached gluster-health-report collected from all servers.
* Statedump from the client (see the collection sketch below).
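
For reference, a hedged sketch of how a client-side statedump is typically collected: on a FUSE mount, sending SIGUSR1 to the glusterfs client process makes it write a statedump, by default under /var/run/gluster. The PID lookup below assumes a single glusterfs process on the client:

  pidof glusterfs                     # find the FUSE client process (assumes only one)
  kill -USR1 $(pidof glusterfs)       # request a statedump
  ls /var/run/gluster/glusterdump.*   # dump files appear here by default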

Comment 13 Anand Paladugu 2018-07-20 11:54:41 UTC
I agree that the hung state is not good, but do the logs provide any info about the customer hitting the limits? Is there any other way the customer gets notified of this situation?

Comment 15 Nag Pavan Chilakam 2018-07-20 12:58:22 UTC
QA ack is in place.

Comment 17 Vijay Avuthu 2018-08-01 12:45:23 UTC
Update:
============

Build used: glusterfs-3.12.2-15.el7rhgs.x86_64

Scenario:

1) Create a Distributed-Replicate volume and start it.
2) Set storage.reserve to 50 using the same command as in the original steps.
3) FUSE mount the volume on a client.
4) Write data until the reserve limit is reached (df -h on the client shows the mount point as 100% full; fallocate was used to quickly fill the disk).
5) Pick an existing file and truncate it:
# truncate -s 0 file_1
#

6) Now try to append some data to the same file.

> No hang was observed while appending data:

# cat /etc/redhat-release >> file_1
#
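
As a side note, this check can be scripted so a regression surfaces as an exit code instead of a stuck shell, using timeout(1); run it from the volume's mount point. The 30-second bound is an arbitrary assumption, not from this report:

  # Hypothetical scripted check; timeout returns exit code 124 if the command hangs.
  timeout 30 sh -c 'cat /etc/redhat-release >> file_1'
  rc=$?
  if [ "$rc" -eq 124 ]; then
      echo "FAIL: append hung (killed after 30s)"
  else
      echo "PASS: append returned (exit code $rc); no hang"
  fi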

Changing status to Verified.

Comment 18 errata-xmlrpc 2018-09-04 06:50:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607