Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1714895

Summary: Glusterfs(fuse) client crash
Product: [Community] GlusterFS
Component: libglusterfsclient
Reporter: maybeonly
Assignee: Kotresh HR <khiremat>
Status: CLOSED UPSTREAM
Severity: high
Priority: high
Version: 6
CC: amukherj, bugs, maybeonly, skoduri
Hardware: x86_64
OS: Linux
Type: Bug
Last Closed: 2020-03-12 12:37:19 UTC
Attachments:
- Copied log from /var/log/glusterfs/mount-point.log of the client when crashing (flags: none)

Description maybeonly 2019-05-29 06:39:49 UTC
Created attachment 1574617 [details]
Copied log from /var/log/glusterfs/mount-point.log of the client when crashing

Description of problem:
One of the GlusterFS (FUSE) clients crashes occasionally.

Version-Release number of selected component (if applicable):
6.1 (from yum)

How reproducible:
about once a week

Steps to Reproduce:
I'm sorry, I don't know

Actual results:
It crashed. A core file seems to have been generated, but it failed to be written to the root directory.
I also think there is something wrong with this volume, but it cannot be healed.
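
In case it helps triage, here is a rough sketch of how the core and the pending heals could be inspected; the binary path, the core file name, and the dump location below are assumptions based on a default CentOS 7 setup, not something confirmed from this report:

# cat /proc/sys/kernel/core_pattern
(the default pattern "core" writes the dump into the crashing process's working directory, which for a mount helper is usually /)
# gdb -batch -ex 'thread apply all bt full' /usr/sbin/glusterfs /core.<pid>
(prints the crash backtrace from the core; the binary and the core must come from the same glusterfs build)
# gluster volume heal datavolume3 info
# gluster volume heal datavolume3 info split-brain
(lists the entries still pending heal on each brick and whether any of them are in split-brain)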

Expected results:


Additional info:
# gluster volume info datavolume3
 
Volume Name: datavolume3
Type: Replicate
Volume ID: 675d3435-e60e-424d-9eb6-dfd7427defdd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 185***:/***/bricks/datavolume3
Brick2: 237***:/***/bricks/datavolume3
Brick3: 208***:/***/bricks/datavolume3
Options Reconfigured:
features.locks-revocation-max-blocked: 3
features.locks-revocation-clear-all: true
cluster.entry-self-heal: on
cluster.data-self-heal: on
cluster.metadata-self-heal: on
storage.owner-gid: ****
storage.owner-uid: ****
auth.allow: *********
nfs.disable: on
transport.address-family: inet

The attachment is a copy of /var/log/glusterfs/mount-point.log from the client.
I also have a statedump file, but I don't know which section is relevant.
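
If a fresh client-side statedump is needed, a minimal sketch (assuming the mount is served by a single glusterfs process and the default dump directory is in use):

# pgrep -af 'glusterfs.*datavolume3'
(find the PID of the FUSE mount process for this volume)
# kill -USR1 <pid>
(SIGUSR1 makes the client write a statedump, typically under /var/run/gluster as glusterdump.<pid>.dump.<timestamp>)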

The volume(s) were created with GlusterFS v3.8 on CentOS 6. I then replaced the servers with new machines running GlusterFS v6.0 on CentOS 7, upgraded them to v6.1, and then set cluster.op-version to 60000.
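
For reference, that op-version bump would normally be done with the standard commands below (shown only to make the upgrade step explicit; 60000 is the op-version corresponding to the 6.0 release):

# gluster volume get all cluster.op-version
(shows the currently active cluster op-version)
# gluster volume set all cluster.op-version 60000
(raises the cluster op-version once all peers are running glusterfs 6.x)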

Comment 2 Worker Ant 2020-03-12 12:37:19 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/913 and will be tracked there from now on. Visit the GitHub issue URL for further details.