Bug 1002194

Summary: Mounting and writing to 2.0 u6 servers from 2.1 clients causes segfault.
Product: Red Hat Gluster Storage
Reporter: Ben Turner <bturner>
Component: glusterd
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED WONTFIX
QA Contact: Ben Turner <bturner>
Severity: high
Priority: high
Version: 2.0
CC: rhs-bugs, vbellur
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Attachments:
Backtrace 1
Backtrace 2

Description Ben Turner 2013-08-28 15:20:12 UTC
Description of problem:

When I mount a volume from my 2.0u6 servers on a 2.1 client and write a file, I see a segfault on all peers in the cluster:

Core was generated by `/usr/sbin/glusterfsd -s localhost --volfile-id testvol.192.168.122.12.bricks-br'.
Program terminated with signal 11, Segmentation fault.

And on the client I see:

# dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
dd: writing `/gluster-mount/test.file': Transport endpoint is not connected
dd: closing output file `/gluster-mount/test.file': Transport endpoint is not connected


Version-Release number of selected component (if applicable):

Server - glusterfs-3.3.0.13rhs-1.el6rhs.x86_64
Client - glusterfs-3.4.0.24rhs-1.el6_4.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1.  Create a distributed replicated volume on 2.0u6 servers.
2.  Mount the volume on a 2.1 client.
3.  Run: dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
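The reproduction steps above can be sketched as a shell session. The server hostnames, brick paths, and replica count below are placeholders for illustration, not details taken from the original report; this requires a running gluster cluster and cannot be run standalone.

```shell
# On the 2.0u6 servers: create and start a distributed-replicated volume.
# server1..server4 and /bricks/br are hypothetical names.
gluster volume create testvol replica 2 \
    server1:/bricks/br server2:/bricks/br \
    server3:/bricks/br server4:/bricks/br
gluster volume start testvol

# On the 2.1 client: mount the volume over FUSE and write to it.
mount -t glusterfs server1:/testvol /gluster-mount
dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
```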

Actual results:

Segfault.

Expected results:

No segfault.

Additional info:

Attaching BT and core to BZ.

Comment 1 Ben Turner 2013-08-28 15:42:37 UTC
Created attachment 791439 [details]
Backtrace 1

Comment 2 Ben Turner 2013-08-28 15:43:11 UTC
Created attachment 791440 [details]
Backtrace 2

Comment 4 Amar Tumballi 2013-08-29 09:36:00 UTC
This is dependent on bug 999944, for which we do not yet have a solution. It looks like we either need a patch for rhs-2.1 clients to work correctly with rhs-2.0 servers, or a patch for rhs-2.0 servers to handle rhs-2.1 clients better.

Not a blocker for U6, as RHS 2.1 is not out yet, IMO.

Comment 5 Vivek Agarwal 2015-03-23 07:40:16 UTC
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html
