Bug 1101479 - glusterfs process crashes when creating a file with touch on a stripe volume
Summary: glusterfs process crashes when creating a file with touch on a stripe volume
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: stripe
Version: 3.5.0
Hardware: i686
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-05-27 09:31 UTC by zerotog
Modified: 2023-09-14 02:08 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-17 15:56:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
core file and client log (825.60 KB, application/octet-stream)
2014-05-27 09:31 UTC, zerotog

Description zerotog 2014-05-27 09:31:24 UTC
Created attachment 899433 [details]
core file and client log

Description of problem:

My setup is as follows:
server1: 10.1.1.102 redhat5.6 2.6.18-238.el5PAE #1 SMP i686 i686 i386 GNU/Linux
server2: 10.1.5.56  redhat5.6 2.6.18-238.el5PAE #1 SMP i686 i686 i386 GNU/Linux
glusterfs-libs-3.5.0-2.el5
glusterfs-server-3.5.0-2.el5
glusterfs-3.5.0-2.el5
glusterfs-cli-3.5.0-2.el5
glusterfs-geo-replication-3.5.0-2.el5
glusterfs-fuse-3.5.0-2.el5

client: 10.1.5.7  redhat5.6 2.6.18-238.el5PAE #1 SMP i686 i686 i386 GNU/Linux
fuse-2.7.4-8.el5
glusterfs-3.5.0-2.el5
glusterfs-libs-3.5.0-2.el5
glusterfs-fuse-3.5.0-2.el5
glusterfs-cli-3.5.0-2.el5

####
How reproducible:
Steps to Reproduce:
1. on server1:
gluster volume create v1 stripe 2 transport tcp 10.1.1.102:/data/glusterfs/brick1/ 10.1.5.56:/data/glusterfs/brick1/
gluster volume start v1

2. on client:
mkdir /data1/
mount -t glusterfs 10.1.1.102:/v1 /data1/
cd /data1/
touch p.txt

Then the glusterfs process crashes. The attachment contains the core file and the client's log '/var/log/glusterfs/data1-.log'.
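For anyone looking at the attachment, a backtrace can be pulled from the core with gdb along these lines (a sketch only; the binary and core paths are assumptions and depend on the actual core file name and the installed build):

gdb /usr/sbin/glusterfs /path/to/core
(gdb) bt                    # backtrace of the crashing thread
(gdb) thread apply all bt   # backtraces of all threads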

####
glusterfs works fine if the client's OS is redhat5.6 x86_64. It also works fine when I create the file with a different command.

####
Cluster Information:
gluster peer status
Number of Peers: 1

Hostname: 10.1.5.56
Uuid: 6a549863-2272-4e12-a9e2-227f1975b0a4
State: Peer in Cluster (Connected)

####
Volume Information
gluster volume info
 
Volume Name: v1
Type: Stripe
Volume ID: c94c661c-2a57-4658-9457-d49b74521fd7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.1.1.102:/data/glusterfs/brick1
Brick2: 10.1.5.56:/data/glusterfs/brick1

##
Status of volume: v1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.1.1.102:/data/glusterfs/brick1                 49155   Y       30973
Brick 10.1.5.56:/data/glusterfs/brick1                  49155   Y       21125
NFS Server on localhost                                 N/A     N       N/A
NFS Server on 10.1.5.56                                 N/A     N       N/A
 
Task Status of Volume v1
------------------------------------------------------------------------------
There are no active volume tasks

Comment 1 Niels de Vos 2014-12-20 20:33:58 UTC
This is not reproducible in an x86_64-only environment.

I'll install a 32-bit system and see if the error pops up there.

Comment 2 Niels de Vos 2014-12-20 21:20:24 UTC
I cannot reproduce this on a CentOS-6.6/i686 system either. I am using the current 3.5 packages from the nightly builds:
- http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.5/

Could you please let me know if you still hit the crashes of the fuse client with newer versions?
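For reference, one way to report the versions in use (a sketch; package names may vary between builds):

rpm -qa 'glusterfs*'     # installed glusterfs packages on the client
glusterfs --version      # version of the fuse client binary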

Thanks!

Comment 3 Niels de Vos 2016-06-17 15:56:43 UTC
This bug is being closed because the 3.5 release is marked End-Of-Life. There will be no further updates to this version. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bugfixes.

Comment 4 Red Hat Bugzilla 2023-09-14 02:08:25 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.

