Bug 1002194 - Mounting and writing to 2.0 u6 servers from 2.1 clients causes segfault.
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Bug Updates Notification Mailing List
QA Contact: Ben Turner
Reported: 2013-08-28 11:20 EDT by Ben Turner
Modified: 2015-03-23 03:40 EDT
Doc Type: Bug Fix
Type: Bug
Attachments:
Backtrace 1 (5.03 KB, text/plain), 2013-08-28 11:42 EDT, Ben Turner
Backtrace 2 (5.21 KB, text/plain), 2013-08-28 11:43 EDT, Ben Turner
Description Ben Turner 2013-08-28 11:20:12 EDT
Description of problem:

When I mount a volume from my 2.0u6 servers on a 2.1 client and write a file, I see a segfault on all peers in the cluster:

Core was generated by `/usr/sbin/glusterfsd -s localhost --volfile-id testvol.192.168.122.12.bricks-br'.
Program terminated with signal 11, Segmentation fault.

And on the client I see:

# dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
dd: writing `/gluster-mount/test.file': Transport endpoint is not connected
dd: closing output file `/gluster-mount/test.file': Transport endpoint is not connected


Version-Release number of selected component (if applicable):

Server - glusterfs-3.3.0.13rhs-1.el6rhs.x86_64
Client - glusterfs-3.4.0.24rhs-1.el6_4.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1.  Create a distributed replicated volume on 2.0u6 servers.
2.  Mount the volume on a 2.1 client.
3.  Run: dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
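The steps above can be sketched as a shell session. The server names, brick paths, and 2x2 replica layout below are illustrative assumptions, not details taken from the report:

```shell
# Hypothetical reproduction sketch: four 2.0u6 servers named server1..server4,
# each with a brick at /bricks/brick1 (names assumed for illustration).

# 1. On a 2.0u6 server: create and start a 2x2 distributed-replicated volume.
gluster volume create testvol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1 \
    server3:/bricks/brick1 server4:/bricks/brick1
gluster volume start testvol

# 2. On the 2.1 client: mount the volume over FUSE.
mkdir -p /gluster-mount
mount -t glusterfs server1:/testvol /gluster-mount

# 3. Write to the mount; the brick processes segfault and the client
#    reports "Transport endpoint is not connected".
dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
```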

Actual results:

Segfault.

Expected results:

No segfault.

Additional info:

Attaching BT and core to BZ.
Comment 1 Ben Turner 2013-08-28 11:42:37 EDT
Created attachment 791439 [details]
Backtrace 1
Comment 2 Ben Turner 2013-08-28 11:43:11 EDT
Created attachment 791440 [details]
Backtrace 2
Comment 4 Amar Tumballi 2013-08-29 05:36:00 EDT
This is dependent on bug 999944, for which we don't yet have a solution. It looks like we should either patch rhs-2.1 clients to work correctly with rhs-2.0 servers, or patch rhs-2.0 servers to handle rhs-2.1 clients better.

Not a blocker for U6 as RHS 2.1 is not out yet, IMO.
Comment 5 Vivek Agarwal 2015-03-23 03:40:16 EDT
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.
[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html