Bug 1231797

Summary: tiering: Porting log messages to new framework
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Mohamed Ashiq <mliyazud>
Component: tier
Assignee: Nandaja Varma <nvarma>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: medium
Docs Contact:
Priority: medium
Version: rhgs-3.1
CC: annair, nsathyan, rcyriac, rhs-bugs, smohan, storage-qa-internal, vagarwal
Target Milestone: ---
Target Release: RHGS 3.1.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.1-6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-07-29 05:04:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1202842

Description Mohamed Ashiq 2015-06-15 12:19:36 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Nandaja Varma 2015-06-19 07:24:09 UTC
The patches that fix this upstream can be found at:
1. changetimerecorder
     master: http://review.gluster.org/#/c/10938/ (merged)
     release-3.7: http://review.gluster.org/#/c/11195/ (merged)

2. gfdb/libglusterfs
     master: http://review.gluster.org/#/c/10819/ (merged)
     release-3.7: http://review.gluster.org/#/c/11284/

Comment 4 Nandaja Varma 2015-06-19 09:16:17 UTC
Patches are yet to be sent downstream.

Comment 5 Nandaja Varma 2015-06-22 09:58:43 UTC
Patches backported downstream to fix this bug:
1. https://code.engineering.redhat.com/gerrit/51210
2. https://code.engineering.redhat.com/gerrit/51212

Comment 6 Mohamed Ashiq 2015-06-26 13:05:39 UTC
(In reply to Nandaja Varma from comment #5)
> Patches backported downstream to fix this bug:
> 1. https://code.engineering.redhat.com/gerrit/51210
> 2. https://code.engineering.redhat.com/gerrit/51212

Both of these patches have been merged downstream.

Comment 7 Nag Pavan Chilakam 2015-06-30 08:47:10 UTC
What/how does QE verify this bug?

Comment 8 Mohamed Ashiq 2015-06-30 08:59:54 UTC
(In reply to nchilaka from comment #7)
> What/how does QE verify this bug?

The log files are located in /var/log/glusterfs/.

With this patch, all log messages related to tiering will carry a message ID (which makes diagnosis easier); the message IDs were not present in the previous logs.
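
For context, the porting work is essentially a conversion of the old gf_log() calls to the new gf_msg() interface, which takes an errno and a per-component message ID in addition to the format string. A minimal sketch of such a conversion is shown below; the message-ID name used here is hypothetical, and the real IDs are defined per component in the patches listed in comment #3:

    /* Old framework: no message ID attached to the log line. */
    gf_log (this->name, GF_LOG_ERROR,
            "failed to insert record in the database");

    /* New framework: gf_msg() takes an errno and a message ID, so the line
     * written under /var/log/glusterfs/ carries an "[MSGID: ...]" tag that
     * can be grepped for during diagnosis. CTR_MSG_INSERT_RECORD_FAILED is
     * a hypothetical name used here only for illustration. */
    gf_msg (this->name, GF_LOG_ERROR, 0, CTR_MSG_INSERT_RECORD_FAILED,
            "failed to insert record in the database");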

Comment 9 Vivek Agarwal 2015-06-30 09:00:23 UTC
These are log messages based on the new framework. We need to confirm whether they comply with the new framework.
QE members from other components can help verify that.

Comment 10 Nag Pavan Chilakam 2015-07-02 07:14:08 UTC
@Mohamed Ashiq,
But I see some logs that do not have a message ID, such as the ones below:
[2015-07-02 06:28:28.707880] I [dht-rebalance.c:2765:gf_defrag_start_crawl] 0-DHT: Thread[0] creation successful
[2015-07-02 06:28:28.707914] I [dht-rebalance.c:1753:gf_defrag_task] 0-DHT: Thread sleeping. defrag->current_thread_count: 7
[2015-07-02 06:28:28.707943] I [dht-rebalance.c:2765:gf_defrag_start_crawl] 0-DHT: Thread[1] creation successful
[2015-07-02 06:28:28.707980] I [dht-rebalance.c:2765:gf_defrag_start_crawl] 0-DHT: Thread[2] creation successful
[2015-07-02 06:28:28.707996] I [dht-rebalance.c:1753:gf_defrag_task] 0-DHT: Thread sleeping. defrag->current_thread_count: 6
[2015-07-02 06:28:28.708735] I [dht-rebalance.c:2765:gf_defrag_start_crawl] 0-DHT: Thread[3] creation successful
[2015-07-02 06:28:28.708785] I [dht-rebalance.c:1753:gf_defrag_task] 0-DHT: Thread sleeping. defrag->current_thread_count: 5
[2015-07-02 06:28:28.708799] I [dht-rebalance.c:2765:gf_defrag_start_crawl] 0-DHT: Thread[4] creation successful
[2015-07-02 06:28:28.708856] I [dht-rebalance.c:1753:gf_defrag_task] 0-DHT: Thread sleeping. defrag->current_thread_count: 4
[2015-07-02 06:28:28.708871] I [dht-rebalance.c:2765:gf_defrag_start_crawl] 0-DHT: Thread[5] creation successful
[2015-07-02 06:28:28.708939] I [dht-rebalance.c:2765:gf_defrag_start_crawl] 0-DHT: Thread[6] creation successful
[2015-07-02 06:28:28.708978] I [dht-rebalance.c:2765:gf_defrag_start_crawl] 0-DHT: Thread[7] creation successful



[2015-07-02 07:11:37.328559] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)
[2015-07-02 07:11:40.328956] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)
[2015-07-02 07:11:43.329355] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)
[2015-07-02 07:11:46.329841] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)
[2015-07-02 07:11:49.330210] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)
[2015-07-02 07:11:52.330745] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)
[2015-07-02 07:11:55.331153] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)
[2015-07-02 07:11:58.331529] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)
[2015-07-02 07:12:01.331903] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/4048c99db83e5c98a0e8e7645e838b28.socket failed (Invalid argument)

Comment 11 Nag Pavan Chilakam 2015-07-02 07:46:38 UTC
I have checked /var/log/glusterfs/<volname>-tier.log and found that all tiering-related logs have a msgid. Hence, closing as verified.
Logs can be seen in the sosreports of bug#1238549 (sosreports collected on the same day while verifying this bug).

Comment 12 errata-xmlrpc 2015-07-29 05:04:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html