Bug 864395 - Opening a file should fail when the file is present on all the bricks and doesn't have a GFID assigned to it (file created on the bricks from the back-end)
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: spandura
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-10-09 10:38 UTC by spandura
Modified: 2015-12-03 17:14 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:14:27 UTC
Embargoed:



Description spandura 2012-10-09 10:38:39 UTC
Description of problem:
----------------------------
Opening a file from the mount point should fail when the file is present on all the bricks and doesn't have a GFID assigned to it (the file was created on the bricks directly from the back-end). The size of the file is the same on all the bricks.


Version-Release number of selected component (if applicable):
------------------------------------------------------------
[root@gqac010 ~]# gluster --version
glusterfs 3.3.0rhs built on Sep 10 2012 00:49:11


How reproducible:
------------------
Often

Steps to Reproduce:
---------------------
1. Create a pure replicate volume (1x2) and start the volume.

2. Create a file "file1" on both the bricks "brick1" and "brick2" directly from the storage nodes using the command: "dd if=/dev/urandom of=file1 bs=1K count=1"

3. Create a FUSE mount.

4. From the mount point, execute "ls". This should list the file "file1".

5. From the mount point, execute "cat file1 > /dev/null". (A consolidated command sketch of these steps is shown below.)
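Consolidated sketch of the reproduction steps. The host names, brick paths, and mount point (server1, server2, /bricks/brick1, /bricks/brick2, /mnt/rep) are placeholders chosen for illustration, not taken from this report; the volume name "rep" matches the one seen in the logs and extended attributes below:

# On the storage nodes: create and start a 1x2 replicate volume
gluster volume create rep replica 2 server1:/bricks/brick1 server2:/bricks/brick2
gluster volume start rep

# Create the file directly on each brick from the back-end (bypassing gluster)
dd if=/dev/urandom of=/bricks/brick1/file1 bs=1K count=1   # on server1
dd if=/dev/urandom of=/bricks/brick2/file1 bs=1K count=1   # on server2

# On a client: create a FUSE mount and access the file
mount -t glusterfs server1:/rep /mnt/rep
ls /mnt/rep
cat /mnt/rep/file1 > /dev/null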
  
Actual results:
----------------
The command execution is successful; no error is reported.


Expected results:
----------------
The "cat" should fail with "Input/output error" (EIO), and the mount log should report split-brain error messages.

Also, the AFR extended attributes on the file "file1" are not set.

Brick1:- ( trusted.afr.rep-client-0 and trusted.afr.rep-client-1 are not set)
--------
[root@gqac010 ~]# getfattr -d -e hex -m . /home/export200/file1
getfattr: Removing leading '/' from absolute path names
# file: home/export200/file1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a686f6d655f726f6f745f743a733000
trusted.gfid=0x061fba6e47ca43c18ef1e83133715c6a


Brick2:-
-----------

[10/09/12 - 05:50:21 root@gqac011 ~]# getfattr -d -e hex -m . /home/export200/file1
getfattr: Removing leading '/' from absolute path names
# file: home/export200/file1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a686f6d655f726f6f745f743a733000
trusted.gfid=0x061fba6e47ca43c18ef1e83133715c6a
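For comparison, a file created normally through the mount point on this volume would typically also carry zeroed AFR changelog attributes on each brick. A rough sketch of what such output usually looks like (the attribute values shown are illustrative assumptions, not output captured from this setup):

getfattr -d -e hex -m . /home/export200/file1
# file: home/export200/file1
trusted.afr.rep-client-0=0x000000000000000000000000
trusted.afr.rep-client-1=0x000000000000000000000000
trusted.gfid=0x061fba6e47ca43c18ef1e83133715c6a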

Additional Info:-
--------------
The mount log reports only the following message when "ls" is performed.

[2012-10-09 09:50:24.655622] I [afr-self-heal-common.c:1189:afr_sh_missing_entry_call_impunge_recreate] 0-rep-replicate-0: no missing files - /file1. proceeding to metadata check

Comment 2 Pranith Kumar K 2012-11-16 07:05:46 UTC
Shwetha,
   Could you explain why you are expecting it to give EIO?

Pranith.

Comment 3 Pranith Kumar K 2012-11-16 07:06:51 UTC
Shwetha,
     Is it because the file contents could be different?

Pranith.

Comment 4 spandura 2012-11-29 03:25:29 UTC
Pranith, 

Even though the file sizes are the same on both the bricks, they differ in content. The md5sums don't match.
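For reference, the mismatch can be confirmed from the back-end with something like the following (the brick path is taken from the getfattr outputs above):

# On the first storage node:
md5sum /home/export200/file1
# On the second storage node:
md5sum /home/export200/file1
# The two checksums differ even though the file size is the same on both bricks.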

Comment 5 Pranith Kumar K 2013-05-31 10:41:40 UTC
This issue only happens when the file sizes match but the contents differ. Ordinarily, when such a content mismatch exists, the changelogs would indicate which file in the replica pair is the correct one. But in this particular case the content was created on the back-end directly, which is not supported.
   One way to fix it is to add an SELinux policy saying only gluster brick processes can modify files/directories.

Pranith
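For illustration only: in the supported write path, AFR records pending operations in the changelog attributes, so a content mismatch produced through the mount point would normally leave a non-zero pending counter on the good copy, pointing at the stale replica. A hedged sketch of what that might look like (the counter value is an assumption; the 12-byte value is commonly read as data/metadata/entry pending counts):

getfattr -d -e hex -m trusted.afr /home/export200/file1
# file: home/export200/file1
trusted.afr.rep-client-0=0x000000000000000000000000
trusted.afr.rep-client-1=0x000000010000000000000000
# Here this brick records one pending data operation against the other replica
# (client-1), so self-heal can pick this copy as the source.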

Comment 6 Vivek Agarwal 2015-12-03 17:14:27 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

