Bug 1018205 - quota: change the values of soft-timeout and hard-timeout and there is not "A" message reported
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: x86_64 Linux
Priority: medium  Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Anuradha
QA Contact: Sudhir D
Keywords: ZStream
Depends On:
Blocks: 1020127
Reported: 2013-10-11 08:37 EDT by Saurabh
Modified: 2016-09-19 22:00 EDT
CC: 11 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
An alert message is not reported in the logs when the quota soft-timeout and hard-timeout are changed.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-01-27 01:35:02 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-10-11 08:37:04 EDT
Description of problem:
No "A" message reported after crossing the soft-limit
Seen this issue after changing the values for soft-timeout and hard-timeout.
As, before the above mentioned change I had an "A" message.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.34rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. create a volume and start it
2. enable quota and set a limit of 1GB on "/"
3. change the soft-timeout and hard-timeout values; the volume configuration after the change:
Volume Name: dist-rep3
Type: Distributed-Replicate
Volume ID: 75ce853a-fa9c-442e-9705-4408535ab9be
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r13
Brick2: 10.70.43.181:/rhs/brick1/d1r23
Brick3: 10.70.43.18:/rhs/brick1/d2r13
Brick4: 10.70.43.22:/rhs/brick1/d2r23
Brick5: 10.70.42.186:/rhs/brick1/d3r13
Brick6: 10.70.43.181:/rhs/brick1/d3r23
Brick7: 10.70.43.18:/rhs/brick1/d4r13
Brick8: 10.70.43.22:/rhs/brick1/d4r23
Brick9: 10.70.42.186:/rhs/brick1/d5r13
Brick10: 10.70.43.181:/rhs/brick1/d5r23
Brick11: 10.70.43.18:/rhs/brick1/d6r13
Brick12: 10.70.43.22:/rhs/brick1/d6r23
Options Reconfigured:
features.quota-deem-statfs: on
features.hard-timeout: 2s
features.soft-timeout: 4s
features.quota: on

4. mount the volume over NFS
5. start creating data until the quota limit is reached (a command-line sketch of these steps follows below).
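
A minimal shell sketch of the reproduction steps, assuming the volume name (dist-rep3) and one server address (10.70.42.186) from the output above; the mount point, file size, and log path are illustrative, and the option names follow the "Options Reconfigured" listing:

  # on a storage node: enable quota, set a 1GB limit on "/", and tune the timeouts
  gluster volume quota dist-rep3 enable
  gluster volume quota dist-rep3 limit-usage / 1GB
  gluster volume set dist-rep3 features.soft-timeout 4s
  gluster volume set dist-rep3 features.hard-timeout 2s
  gluster volume set dist-rep3 features.quota-deem-statfs on

  # on a client: mount over NFS and write past the default 80% soft-limit
  mount -t nfs -o vers=3 10.70.42.186:/dist-rep3 /mnt/dist-rep3
  dd if=/dev/zero of=/mnt/dist-rep3/datafile bs=1M count=900

  # look for alert-level ("A") entries in the glusterfs logs, e.g. the brick logs
  grep '] A \[' /var/log/glusterfs/bricks/*.log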

Actual results:
no "A" based on this test.


Expected results:
After the soft-limit is crossed, an "A" (alert) message is expected in the logs.

Additional info:
Comment 2 Saurabh 2013-10-28 08:47:34 EDT
There are two alert messages:
 A1. One string says "Usage crossed above soft-limit": it is logged the moment the default soft-limit (80%) is crossed.
 A2. The other says "Usage above soft-limit": it is logged on the basis of writes to a brick.

In this bug I have said there is no "A" message; "A" stands for alert. Effectively, if the argument is that the alert may not have appeared because the write had just surpassed the time at which the alert was supposed to come, then I presume we may miss A1, but what about A2?
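
A quick way to check the logs for both messages (the strings are those quoted above; the log location is assumed to be the default /var/log/glusterfs and may differ):

  grep -r "Usage crossed above soft-limit" /var/log/glusterfs/
  grep -r "Usage above soft-limit" /var/log/glusterfs/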
Comment 3 Anuradha 2013-11-07 07:52:51 EST
Patch for review for rhs-2.1 posted on :
https://code.engineering.redhat.com/gerrit/#/c/15345/

Patch for review for rhs-2.1-u1 posted on :
https://code.engineering.redhat.com/gerrit/#/c/15346/
Comment 4 Vivek Agarwal 2013-11-14 06:27:37 EST
Moving the known issues to Doc team, to be documented in release notes for U1
Comment 5 Vivek Agarwal 2013-11-14 06:29:28 EST
Moving the known issues to Doc team, to be documented in release notes for U1
Comment 6 Vivek Agarwal 2013-11-14 06:30:03 EST
Moving the known issues to Doc team, to be documented in release notes for U1
Comment 7 Pavithra 2013-11-25 02:00:27 EST
I've documented this as a known issue in the Big Bend Update 1 Release Notes. Here is the link:

http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1/html/2.1_Update_1_Release_Notes/chap-Documentation-2.1_Update_1_Release_Notes-Known_Issues.html
