Bug 1774611 - Glusterfsd process crash for nfs/server.so while nfs is disabled
Summary: Glusterfsd process crash for nfs/server.so while nfs is disabled
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-11-20 14:47 UTC by emanuel.ocone
Modified: 2023-09-14 05:47 UTC
CC: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2020-03-12 12:17:54 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Core Dump (1.44 MB, application/gzip)
2019-11-20 14:47 UTC, emanuel.ocone

Description emanuel.ocone 2019-11-20 14:47:56 UTC
Created attachment 1638150 [details]
Core Dump

Description of problem:
Glusterd service crashes with the following logs:
"[2019-11-20 14:45:09.929199] W [MSGID: 101095] [xlator.c:180:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/5.3/xlator/nfs/server.so: cannot open shared object file: No such file or directory
The message "W [MSGID: 101095] [xlator.c:180:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/5.3/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-11-20 14:45:09.929199] and [2019-11-20 14:45:09.929673]"

In our config we don't have nfs enabled:

Volume Name: gv0
Type: Replicate
Volume ID: bf80aeb9-122f-4db1-82a1-12b40227a29e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: cabaret:/data0/brick1/gv0
Brick2: emotion:/data0/brick1/gv0
Brick3: people:/data0/brick1/gv0 (arbiter)
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
 
Volume Name: tmp_webfiles_1
Type: Replicate
Volume ID: 3ff59525-0a77-4ba2-a06e-40c7f2f1abea
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: cabaret:/tmp_webfiles_1/brick1/tmp_webfiles_1
Brick2: emotion:/tmp_webfiles_1/brick1/tmp_webfiles_1
Brick3: people:/tmp_webfiles_1/brick1/tmp_webfiles_1 (arbiter)
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
 
Volume Name: tmp_webfiles_2
Type: Replicate
Volume ID: 0c37448b-c69a-4a2c-a665-b483e621145a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: cabaret:/tmp_webfiles_2/brick1/tmp_webfiles_2
Brick2: emotion:/tmp_webfiles_2/brick1/tmp_webfiles_2
Brick3: people:/tmp_webfiles_2/brick1/tmp_webfiles_2 (arbiter)
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
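
For completeness, the effective value of nfs.disable can be read back per volume with the gluster CLI (a minimal sketch using the volume names listed above):

# Confirm NFS is disabled on each volume
gluster volume get gv0 nfs.disable
gluster volume get tmp_webfiles_1 nfs.disable
gluster volume get tmp_webfiles_2 nfs.disable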


Version-Release number of selected component (if applicable):
5.3.2

How reproducible:
Unknown

Steps to Reproduce:


Actual results:
Server crashes with a core dump; all clients lose the mount.

Comment 1 Sanju 2019-11-25 07:31:39 UTC
The core is generated by glusterfsd, i.e., the brick process.

Core was generated by `/usr/sbin/glusterfsd -s cabaret --volfile-id tmp_webfiles_2.cabaret.tmp_webfile'.
Program terminated with signal SIGSEGV, Segmentation fault.

Moving this to core component.
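
A hedged aside: the originating binary and its command line can be confirmed straight from the core with the file utility, without debug symbols:

# Print the command line embedded in the core dump
file /path/to/core.file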

Comment 2 Mohit Agrawal 2020-02-19 14:25:01 UTC
Can you attach gdb to the core file after installing the gluster-debug packages and share the backtrace?
#gdb /usr/sbin/glusterfsd /path/to/core.file


Also share the backtrace of all the threads in the core:
(gdb) thread apply all bt



Thanks,
Mohit Agrawal
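
A minimal sketch of the requested workflow (the debuginfo install command is an assumption and varies by distribution):

# Install debug symbols so the backtrace shows function names (command/package varies by distro)
debuginfo-install glusterfs
# Open the core against the binary that produced it
gdb /usr/sbin/glusterfsd /path/to/core.file
# Inside gdb, log the backtrace of every thread to a file and attach it here
(gdb) set logging file gluster-backtrace.txt
(gdb) set logging on
(gdb) thread apply all bt
(gdb) set logging off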

Comment 3 Worker Ant 2020-03-12 12:17:54 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/863 and will be tracked there from now on. Visit the GitHub issue URL for further details.

Comment 4 Red Hat Bugzilla 2023-09-14 05:47:16 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

