Bug 1002194 - Mounting and writing to 2.0 u6 servers from 2.1 clients causes segfault.
Summary: Mounting and writing to 2.0 u6 servers from 2.1 clients causes segfault.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Ben Turner
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-28 15:20 UTC by Ben Turner
Modified: 2015-03-23 07:40 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments
Backtrace 1 (5.03 KB, text/plain), 2013-08-28 15:42 UTC, Ben Turner
Backtrace 2 (5.21 KB, text/plain), 2013-08-28 15:43 UTC, Ben Turner

Description Ben Turner 2013-08-28 15:20:12 UTC
Description of problem:

When I mount my 2.0u6 servers from a 2.1 client and write a file, I see a segfault on all peers in the cluster:

Core was generated by `/usr/sbin/glusterfsd -s localhost --volfile-id testvol.192.168.122.12.bricks-br'.
Program terminated with signal 11, Segmentation fault.

And on the client I see:

# dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
dd: writing `/gluster-mount/test.file': Transport endpoint is not connected
dd: closing output file `/gluster-mount/test.file': Transport endpoint is not connected
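
The client-side disconnects should also show up in the FUSE mount log; assuming the default client log location for a mount at /gluster-mount:

# tail /var/log/glusterfs/gluster-mount.log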


Version-Release number of selected component (if applicable):

Server - glusterfs-3.3.0.13rhs-1.el6rhs.x86_64
Client - glusterfs-3.4.0.24rhs-1.el6_4.x86_64
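
For completeness, the installed builds can be confirmed on each node with rpm; the subpackage names below (glusterfs-server on the bricks, glusterfs-fuse on the client) are the usual ones but worth checking locally:

# rpm -q glusterfs glusterfs-server   # on a server
# rpm -q glusterfs glusterfs-fuse     # on the client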

How reproducible:

Every time.

Steps to Reproduce:
1.  Create a distributed replicated volume on 2.0u6 servers.
2.  Mount the volume on a 2.1 client.
3.  Run: dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
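
For reference, steps 1 and 2 might look like the following. The volume name and mount point match the ones used above; the hostnames (server1..server4) and brick path (/bricks/brick1) are hypothetical stand-ins:

On one of the 2.0u6 servers:
# gluster volume create testvol replica 2 server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1 server4:/bricks/brick1
# gluster volume start testvol

On the 2.1 client:
# mount -t glusterfs server1:/testvol /gluster-mount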

Actual results:

Segfault.

Expected results:

No segfault.

Additional info:

Attaching BT and core to BZ.
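
For anyone wanting to inspect the cores, a backtrace can be pulled with gdb; a minimal sketch, assuming an installed glusterfs-debuginfo package and with <core-file> standing in for the actual core path:

# gdb /usr/sbin/glusterfsd <core-file>
(gdb) bt
(gdb) thread apply all bt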

Comment 1 Ben Turner 2013-08-28 15:42:37 UTC
Created attachment 791439 [details]
Backtrace 1

Comment 2 Ben Turner 2013-08-28 15:43:11 UTC
Created attachment 791440 [details]
Backtrace 2

Comment 4 Amar Tumballi 2013-08-29 09:36:00 UTC
This is dependent on bug 999944, for which we don't yet have a solution. It looks like we will either do a patch for rhs-2.1 clients to work correctly with rhs-2.0 servers, or a patch for rhs-2.0 servers to handle rhs-2.1 clients better.

Not a blocker for U6 as RHS 2.1 is not out yet, IMO.

Comment 5 Vivek Agarwal 2015-03-23 07:40:16 UTC
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html
