Bug 961275 - Write to a file succeeds, read fails with I/O error
Summary: Write to a file succeeds, read fails with I/O error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: fuse
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact: Sachidananda Urs
URL:
Whiteboard:
Duplicates: 960591
Depends On:
Blocks:
 
Reported: 2013-05-09 10:17 UTC by Sachidananda Urs
Modified: 2013-09-23 22:41 UTC (History)
4 users

Fixed In Version: glusterfs-3.4.0.6rhs-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-23 22:38:33 UTC
Embargoed:


Attachments
Client logs (8.14 MB, application/x-xz)
2013-05-09 10:17 UTC, Sachidananda Urs

Description Sachidananda Urs 2013-05-09 10:17:52 UTC
Created attachment 745607 [details]
Client logs

Description of problem:

When one of the nodes is brought down, the client throws an I/O error on reading a file, but writes succeed.

[root@hamm rep]# echo "Hello world" > abcd 
[root@hamm rep]# cat abcd 
cat: abcd: Input/output error
[root@hamm rep]# date >> abcd 
[root@hamm rep]# cat abcd 
cat: abcd: Input/output error

Version-Release number of selected component (if applicable):

glusterfs 3.4.0.4rhs built on May  7 2013 13:37:36

How reproducible:

Most of the time.

Steps to Reproduce:
1. Do some I/O on the clients (Kernel extract, compile, rm -rf ...)
2. Bring down one of the servers.
3. I/O errors are seen.
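
The steps above can be sketched as a shell session. Hostnames and the volume name `bb` are taken from the report; the mount point, workload, and tarball path are illustrative, and this of course requires a live Gluster cluster to reproduce:

```shell
# On the client: mount the distributed-replicated volume over FUSE
mount -t glusterfs tex.lab.eng.blr.redhat.com:/bb /mnt/rep

# Step 1: generate sustained I/O (kernel extract/compile, rm -rf, etc.)
cd /mnt/rep
tar xf /tmp/linux-3.9.tar.xz &

# Step 2: on one of the servers (e.g. van), bring the node down,
# for example with: shutdown -h now

# Step 3: back on the client, reads fail while writes appear to succeed
echo "Hello world" > abcd    # succeeds
cat abcd                     # cat: abcd: Input/output error
```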
  

Additional info:


Volume Name: bb
Type: Distributed-Replicate
Volume ID: 7c26fe1e-9b7e-48fa-81ed-6db4bd23b6e8
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: tex.lab.eng.blr.redhat.com:/mnt/store/bb
Brick2: wingo.lab.eng.blr.redhat.com:/mnt/store/bb
Brick3: van.lab.eng.blr.redhat.com:/mnt/store/bb
Brick4: mater.lab.eng.blr.redhat.com:/mnt/store/bb

This happens when one of the nodes is brought down. Since this is a replicated setup, operations should continue seamlessly.


[root@tex glusterfs]# gluster volume status
Status of volume: bb
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick tex.lab.eng.blr.redhat.com:/mnt/store/bb          49152   Y       16177
Brick wingo.lab.eng.blr.redhat.com:/mnt/store/bb        49152   Y       11020
Brick mater.lab.eng.blr.redhat.com:/mnt/store/bb        49152   Y       9035
NFS Server on localhost                                 2049    Y       17601
Self-heal Daemon on localhost                           N/A     Y       16193
NFS Server on e73a2be7-a995-4f09-a1dd-f7cfe935b1cb      2049    Y       12468
Self-heal Daemon on e73a2be7-a995-4f09-a1dd-f7cfe935b1cb N/A    Y       11036
NFS Server on f41e0800-64fa-4148-ad4b-2630b1fe170a      2049    Y       10434
Self-heal Daemon on f41e0800-64fa-4148-ad4b-2630b1fe170a N/A    Y       9051
 
There are no active volume tasks

Comment 3 Sachidananda Urs 2013-05-10 06:47:35 UTC
*** Bug 960591 has been marked as a duplicate of this bug. ***

Comment 5 Sachidananda Urs 2013-05-13 11:49:15 UTC
Verified on: glusterfs 3.4.0.6rhs built on May 10 2013 14:12:00

Unable to reproduce the issue.

Comment 6 Scott Haines 2013-09-23 22:38:33 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

Comment 7 Scott Haines 2013-09-23 22:41:26 UTC

