Bug 1224153

Summary: bitd log grows rapidly if brick goes down
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: RajeshReddy <rmekala>
Component: bitrotAssignee: bugs <bugs>
Status: CLOSED UPSTREAM QA Contact: storage-qa-internal <storage-qa-internal>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: rhgs-3.1CC: amukherj, asriram, atumball, rabhat, rhs-bugs, sankarshan, smohan, vshankar
Target Milestone: ---Keywords: ZStream
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Known Issue
Doc Text:
When a brick process dies, BitD continues to try to read from the socket it uses to communicate with that brick. Each read fails, and BitD logs the failure to its log file. Because the read is retried continuously, the same failure messages are logged repeatedly and the log file grows rapidly.
Story Points: ---
Clone Of: 1221980 Environment:
Last Closed: 2018-10-11 09:39:38 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1216951    

Description RajeshReddy 2015-05-22 09:25:00 UTC
+++ This bug was initially created as a clone of Bug #1221980 +++

Description of problem:
scrub log grows rapidly if brick goes down 


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a replica volume and kill the brick process on any of the nodes; after that, the following messages are logged repeatedly (see the example commands after step 3):

[2015-05-15 07:45:14.111143] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:14.111620] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:14.116074] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)
[2015-05-15 07:45:17.111748] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:18.116675] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:18.122521] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)
[2015-05-15 07:45:20.116466] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:22.123131] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:22.129067] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)
[2015-05-15 07:45:23.122852] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:26.129157] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:26.129615] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:26.134801] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)
[2015-05-15 07:45:29.129755] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:30.135455] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:30.141363] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)


2.
3.
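
A minimal reproduction sketch for step 1, assuming a two-node replica 2 setup; the hostnames (server1/server2), brick paths, volume name, and brick PID are placeholders, and the BitD log path may differ depending on the installation:

  # Create and start a replica 2 volume (placeholder hosts and brick paths)
  gluster volume create vol3 replica 2 server1:/bricks/b1 server2:/bricks/b2
  gluster volume start vol3

  # Enable bitrot detection so that BitD starts monitoring the volume
  gluster volume bitrot vol3 enable

  # Note the PID of one brick process from the status output and kill it
  gluster volume status vol3
  kill -9 <brick-pid>

  # Watch the BitD log grow while the brick stays down
  tail -f /var/log/glusterfs/bitd.log

While the brick is down, the readv/connection-refused messages shown above repeat every few seconds, which is what makes the log grow rapidly.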

Actual results:


Expected results:


Additional info:

--- Additional comment from Venky Shankar on 2015-05-18 06:29:34 EDT ---

Do you see similar messages for regular clients? If yes, then this is not specific to the bitrot/scrub daemon and affects every other client as well.

Please confirm.

--- Additional comment from RajeshReddy on 2015-05-22 05:13:31 EDT ---

I am not seeing this behaviour with other components.

Comment 2 Anjana Suparna Sriram 2015-07-23 07:20:50 UTC
Rajesh, 

Could you please review the doc text and sign off on its technical accuracy?


Regards,
Anjana

Comment 3 Raghavendra Bhat 2015-07-27 07:35:58 UTC
doc text looks good.

Comment 7 Amar Tumballi 2018-10-11 09:39:38 UTC
Not planning to fix this in the near future. Will revisit if there is demand for the bitrot feature. Also, this was not treated as a priority in releases up to 3.4.0.