Bug 961275 - Write to a file succeeds, read fails with I/O error
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: fuse
Version: 2.1
Hardware: x86_64 Linux
Priority: high, Severity: urgent
Assigned To: Pranith Kumar K
QA Contact: Sachidananda Urs
Duplicates: 960591
Depends On:
Blocks:
 
Reported: 2013-05-09 06:17 EDT by Sachidananda Urs
Modified: 2013-09-23 18:41 EDT (History)
CC List: 4 users

See Also:
Fixed In Version: glusterfs-3.4.0.6rhs-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-23 18:38:33 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Client logs (8.14 MB, application/x-xz)
2013-05-09 06:17 EDT, Sachidananda Urs

Description Sachidananda Urs 2013-05-09 06:17:52 EDT
Created attachment 745607
Client logs

Description of problem:

When one of the nodes is brought down, the client throws an I/O error on reading a file, but writes succeed.

[root@hamm rep]# echo "Hello world" > abcd 
[root@hamm rep]# cat abcd 
cat: abcd: Input/output error
[root@hamm rep]# date >> abcd 
[root@hamm rep]# cat abcd 
cat: abcd: Input/output error
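
A quick way to dig into why the read returns EIO on a replica volume is to look at the AFR changelog xattrs of the file directly on the backend bricks. This is a diagnostic sketch, not part of the original report; it assumes the attr package is installed on the brick servers and uses the brick paths from the volume info below:

# Run on each brick server that hosts the file, e.g. tex and wingo
getfattr -d -m trusted.afr -e hex /mnt/store/bb/abcd
# Non-zero trusted.afr.bb-client-* counters mean pending self-heal on that
# replica; non-zero counters on both bricks blaming each other usually
# indicate split-brain, which AFR reports to the client as EIO.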

Version-Release number of selected component (if applicable):

glusterfs 3.4.0.4rhs built on May  7 2013 13:37:36

How reproducible:

Most of the time.

Steps to Reproduce:
1. Do some I/O on the clients (Kernel extract, compile, rm -rf ...)
2. Bring down one of the servers.
3. I/O errors are seen.
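
A minimal reproduction sketch based on the steps above (the mount point, kernel tarball path, and choice of server to take down are illustrative assumptions, not from the report):

# On the client: mount the volume and generate I/O
mount -t glusterfs tex.lab.eng.blr.redhat.com:/bb /mnt/rep
cd /mnt/rep
tar xf /root/linux-3.9.tar.xz     # kernel extract
make -C linux-3.9 defconfig       # some compile activity
rm -rf linux-3.9                  # heavy unlink load

# On one of the servers (e.g. van): bring the node down while I/O is running
service glusterd stop
pkill glusterfsd                  # or simply power the node off

# Back on the client: writes still succeed, reads fail
echo "Hello world" > abcd
cat abcd                          # cat: abcd: Input/output error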
  

Additional info:


Volume Name: bb
Type: Distributed-Replicate
Volume ID: 7c26fe1e-9b7e-48fa-81ed-6db4bd23b6e8
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: tex.lab.eng.blr.redhat.com:/mnt/store/bb
Brick2: wingo.lab.eng.blr.redhat.com:/mnt/store/bb
Brick3: van.lab.eng.blr.redhat.com:/mnt/store/bb
Brick4: mater.lab.eng.blr.redhat.com:/mnt/store/bb

This happens when one of the nodes is brought down. Since we have a replica setup here, the operations should be seamless.
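
For reference, a minimal sketch of how a 2 x 2 distributed-replicate volume like this one could be created and mounted (brick paths are taken from the volume info above; the client mount point is an assumption):

gluster volume create bb replica 2 \
    tex.lab.eng.blr.redhat.com:/mnt/store/bb \
    wingo.lab.eng.blr.redhat.com:/mnt/store/bb \
    van.lab.eng.blr.redhat.com:/mnt/store/bb \
    mater.lab.eng.blr.redhat.com:/mnt/store/bb
gluster volume start bb
mount -t glusterfs tex.lab.eng.blr.redhat.com:/bb /mnt/rep

With replica 2, each file lives on both bricks of its pair, so losing any single node should leave the client able to read as well as write.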


[root@tex glusterfs]# gluster volume status
Status of volume: bb
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick tex.lab.eng.blr.redhat.com:/mnt/store/bb          49152   Y       16177
Brick wingo.lab.eng.blr.redhat.com:/mnt/store/bb        49152   Y       11020
Brick mater.lab.eng.blr.redhat.com:/mnt/store/bb        49152   Y       9035
NFS Server on localhost                                 2049    Y       17601
Self-heal Daemon on localhost                           N/A     Y       16193
NFS Server on e73a2be7-a995-4f09-a1dd-f7cfe935b1cb      2049    Y       12468
Self-heal Daemon on e73a2be7-a995-4f09-a1dd-f7cfe935b1cb  N/A     Y       11036
NFS Server on f41e0800-64fa-4148-ad4b-2630b1fe170a      2049    Y       10434
Self-heal Daemon on f41e0800-64fa-4148-ad4b-2630b1fe170a  N/A     Y       9051
 
There are no active volume tasks
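
Given that the status above shows the volume serving from the remaining bricks, a useful follow-up check is whether any files are left with pending heals or in split-brain after the node goes down. These commands are a suggested diagnostic, not part of the original report:

gluster volume heal bb info
gluster volume heal bb info split-brain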
Comment 3 Sachidananda Urs 2013-05-10 02:47:35 EDT
*** Bug 960591 has been marked as a duplicate of this bug. ***
Comment 5 Sachidananda Urs 2013-05-13 07:49:15 EDT
Verified on: glusterfs 3.4.0.6rhs built on May 10 2013 14:12:00

Unable to reproduce the issue.
Comment 6 Scott Haines 2013-09-23 18:38:33 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
Comment 7 Scott Haines 2013-09-23 18:41:26 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
