Bug 1224153 - bitd log grows rapidly if brick goes down
Status: CLOSED UPSTREAM
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: bitrot
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: bugs@gluster.org
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks: 1216951
 
Reported: 2015-05-22 05:25 EDT by RajeshReddy
Modified: 2018-10-11 05:39 EDT
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
When a brick process dies, BitD keeps trying to read from the socket it uses to communicate with that brick. Each failed read is logged, so the same failure message is written repeatedly and the log file grows rapidly.
Story Points: ---
Clone Of: 1221980
Environment:
Last Closed: 2018-10-11 05:39:38 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description RajeshReddy 2015-05-22 05:25:00 EDT
+++ This bug was initially created as a clone of Bug #1221980 +++

Description of problem:
scrub log grows rapidly if brick goes down 


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a replica volume and kill the brick process on any of the nodes; after that, the following messages are logged (see the CLI sketch after these steps):

[2015-05-15 07:45:14.111143] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:14.111620] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:14.116074] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)
[2015-05-15 07:45:17.111748] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:18.116675] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:18.122521] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)
[2015-05-15 07:45:20.116466] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:22.123131] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:22.129067] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)
[2015-05-15 07:45:23.122852] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:26.129157] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:26.129615] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:26.134801] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)
[2015-05-15 07:45:29.129755] W [socket.c:642:__socket_rwv] 0-gfchangelog: readv on /var/run/gluster/changelog-988e38b4cf112124ee4dd36f96171cce.sock failed (Invalid argument)
[2015-05-15 07:45:30.135455] I [rpc-clnt.c:1807:rpc_clnt_reconfig] 0-vol3-client-1: changing port to 49173 (from 0)
[2015-05-15 07:45:30.141363] E [socket.c:2332:socket_connect_finish] 0-vol3-client-1: connection to 10.70.33.235:49173 failed (Connection refused)


2.
3.
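
For reference, a minimal reproduction sketch using the standard gluster CLI. The volume name, host names, and brick paths are illustrative assumptions (not taken from this report), and the log path is the default BitD location:

# Create and start a replica volume (names and paths are assumptions)
gluster volume create vol3 replica 2 server1:/bricks/brick1 server2:/bricks/brick2
gluster volume start vol3

# Enable bitrot detection so the BitD daemon is running on the servers
gluster volume bitrot vol3 enable

# Note a brick PID from the status output and kill it to simulate the brick going down
gluster volume status vol3
kill -9 <brick-pid>

# The readv/connection-refused messages shown above repeat every few
# seconds and the log file grows quickly
tail -f /var/log/glusterfs/bitd.log
grep -c '__socket_rwv' /var/log/glusterfs/bitd.log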

Actual results:


Expected results:


Additional info:

--- Additional comment from Venky Shankar on 2015-05-18 06:29:34 EDT ---

Do you see similar messages for regular clients? If yes, then this is not specific to the bitrot/scrub daemon and affects every other client.

Please confirm.

--- Additional comment from RajeshReddy on 2015-05-22 05:13:31 EDT ---

I am not seeing this behaviour with other components.
Comment 2 Anjana Suparna Sriram 2015-07-23 03:20:50 EDT
Rajesh, 

Could you please review the doc text and sign off for technical accuracy?


Regards,
Anjana
Comment 3 Raghavendra Bhat 2015-07-27 03:35:58 EDT
doc text looks good.
Comment 7 Amar Tumballi 2018-10-11 05:39:38 EDT
Not planning to fix it in the near future! Will revisit if there is demand for the bitrot feature! Also, this was not seen as a priority running up to the 3.4.0 release!
