Bug 1599998 - When reserve limits are reached, an append to an existing file after a truncate operation results in a hang
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Ravishankar N
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks: 1503137 1602236 1602241 1603056
 
Reported: 2018-07-11 07:40 UTC by Prasad Desala
Modified: 2018-09-16 11:44 UTC (History)
12 users (show)

Fixed In Version: glusterfs-3.12.2-15
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1602236 1602241 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:50:20 UTC


Attachments
gluster-health-report collected from all servers (7.08 KB, text/plain)
2018-07-11 07:40 UTC, Prasad Desala


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 None None None 2018-09-04 06:51:44 UTC

Description Prasad Desala 2018-07-11 07:40:30 UTC
Created attachment 1458001 [details]
gluster-health-report collected from all servers

Description of problem:
=======================
When reserve limits are reached, an append to an existing file after a truncate operation results in a hang.

Version-Release number of selected component (if applicable):
3.12.2-13.el7rhgs.x86_64

How reproducible:
always

Steps to Reproduce:
===================
1) Create a Distributed-Replicate volume and start it.
2) Set the storage.reserve limit to 50 using the command below:
 gluster v set distrep storage.reserve 50
3) FUSE mount the volume on a client.
4) Write data until the reserve limit is reached (df -h on the client will show the mount point as 100% full). I used fallocate to fill the disk quickly.
5) Pick an existing file and truncate that file.
truncate -s 0 fallocate_99
It will fail with an ENOSPC error.
6) Now on the same file try to append some data.
cat /etc/redhat-release > fallocate_99
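
The steps above can be condensed into a single sequence (a sketch only: it assumes a volume named distrep FUSE-mounted at /mnt/distrep, and the fallocate size is a placeholder to be adjusted to your brick capacity):

# gluster v set distrep storage.reserve 50
# fallocate -l 10G /mnt/distrep/fallocate_99   (repeat/resize until df -h shows 100%)
# truncate -s 0 /mnt/distrep/fallocate_99      (fails with ENOSPC once the reserve is hit)
# cat /etc/redhat-release > /mnt/distrep/fallocate_99   (on 3.12.2-13 this never returns)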

Actual results:
===============
The append hangs; the cat command above hung here.

Expected results:
=================
No hangs.

Additional info:
================
* Attached gluster-health-report collected from all servers.
* Statedump from the client.
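
For reference, the client statedump mentioned above can be captured as follows (a sketch: the pgrep pattern assumes a single glusterfs FUSE process on the client, and /var/run/gluster is the default dump directory):

# kill -SIGUSR1 $(pgrep -x glusterfs)
# ls /var/run/gluster/*.dump.*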

Comment 13 Anand Paladugu 2018-07-20 11:54:41 UTC
I agree that the hung state is not good, but do the logs provide any info when a customer hits the limits? Is there any other way the customer gets notified of this situation?
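
One place to look (an assumption on my part, not verified against these particular logs): the brick logs should record the ENOSPC failures once the reserve is hit, so a crude check like the one below could serve as notification until a proper alert exists:

# grep -i 'enospc\|no space left' /var/log/glusterfs/bricks/*.log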

Comment 15 nchilaka 2018-07-20 12:58:22 UTC
qa ack is in place

Comment 17 Vijay Avuthu 2018-08-01 12:45:23 UTC
Update:
============

Build used: glusterfs-3.12.2-15.el7rhgs.x86_64

Scenario:

1) Create a Distributed-Replicate volume and start it.
2) Set the storage.reserve limit to 50, as in the original report.
3) FUSE mount the volume on a client.
4) Write data until the reserve limit is reached (df -h on the client shows the mount point as 100% full; fallocate was used to fill the disk quickly).
5) Pick an existing file and truncate that file.
# truncate -s 0 file_1
#

6) Now on the same file try to append some data.

> did not see any hang while appending data

# cat /etc/redhat-release >>file_1
#

Changing status to Verified.

Comment 18 errata-xmlrpc 2018-09-04 06:50:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

