Bug 1289838

Summary: [tiering]: gluster tier status command times out most of the time
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: krishnaram Karthick <kramdoss>
Component: tier
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED WONTFIX
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: rhs-bugs, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-10 07:05:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description krishnaram Karthick 2015-12-09 06:29:47 UTC
Description of problem:

On an 8-node gluster cluster configured on RHEL 6.7, the tier volume status command times out most of the time.

[root@dhcp37-191 ~]# gluster vol tier tier-volume-01 status

Error : Request timed out
Tier command failed

[root@dhcp37-191 ~]# gluster vol set tier-volume-01 cluster.watermark-low 11
volume set: failed: Another transaction is in progress for tier-volume-01. Please try again after sometime.
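
The second failure suggests glusterd is still holding the cluster-wide transaction lock taken for the timed-out tier status command. A rough way to check, assuming the default glusterd log location on each node (the path is an assumption here, not taken from the attached sosreports):

# run on each of the 8 nodes; look for lock acquire/release, timeout and tier
# messages around the time the tier status command was issued
grep -iE 'lock|timed out|tier' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 50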


Version-Release number of selected component (if applicable):
glusterfs-3.7.5-9.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Configure an 8-node gluster cluster and run I/O overnight. (The overnight I/O is to soak the system; tier vol status works fine on a newly created volume.) See the command sketch after this list.
2. Run gluster vol tier <volname> status
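
A minimal shell sketch of the reproduction, assuming the volume name tier-volume-01 from the description and a hypothetical client mount point /mnt/tier-volume-01; the soak loop is only illustrative and any sustained I/O workload should do:

# mount the tiered volume on a client (hypothetical mount point)
mount -t glusterfs dhcp37-191:/tier-volume-01 /mnt/tier-volume-01

# overnight soak: keep writing and re-reading files so data churns between tiers
for i in $(seq 1 100000); do
    dd if=/dev/zero of=/mnt/tier-volume-01/file-$i bs=1M count=10 2>/dev/null
    cat /mnt/tier-volume-01/file-$i > /dev/null
done

# next morning, query the tier status (this is the call that times out)
gluster vol tier tier-volume-01 status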

Actual results:
tier vol status command times out

Expected results:
tier vol status should not time out

Additional info:

sosreport will be attached shortly.

Comment 2 krishnaram Karthick 2015-12-09 08:39:47 UTC
sosreports are available here --> http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1289838/

Comment 3 Nag Pavan Chilakam 2015-12-10 07:05:21 UTC
Given that this was raised on 6.7 and has not so far been reported on a 7.x setup, I will mark it as "won't fix". If QE sees this issue on 7.x, QE will reopen this bug.