Bug 1903468 - A brick process crashes
Summary: A brick process crashes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 4
Assignee: Mohit Agrawal
QA Contact: Leela Venkaiah Gangavarapu
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-12-02 07:39 UTC by Mohit Agrawal
Modified: 2021-04-29 07:21 UTC
CC: 8 users

Fixed In Version: glusterfs-6.0-50
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-29 07:21:03 UTC
Embargoed:
puebele: needinfo+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:1462 0 None None None 2021-04-29 07:21:19 UTC

Description Mohit Agrawal 2020-12-02 07:39:23 UTC
One brick on one server has crashed, and all attempts to bring it back online have failed. The corresponding brick on the other server (this is a replica 2 volume) is healthy, as are all other bricks.

The brick process crashes while serving a lookup operation. Analysis of the coredump showed that the crash happens because huge xattrs had been created on a file. On the brick side, every posix operation runs under an iot_worker thread, and the per-thread stack size is around 256 KB. In the posix layer, the function posix_get_ancestry_non_directory calls alloca() with a size derived from the backend xattr size; because that size exceeded 256 KB, the allocation overflowed the thread stack and crashed the process.

Comment 1 Mohit Agrawal 2020-12-02 07:40:54 UTC
The issue is already fixed upstream: https://github.com/gluster/glusterfs/issues/1699

Comment 17 errata-xmlrpc 2021-04-29 07:21:03 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1462
