Bug 864395 - Opening a file should fail when file is present on all the bricks and doesn't have GFID assigned to it (created file on bricks from back-end)
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Assigned To: Bug Updates Notification Mailing List
QA Contact: spandura
Depends On:
Blocks:
 
Reported: 2012-10-09 06:38 EDT by spandura
Modified: 2015-12-03 12:14 EST (History)
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:14:27 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description spandura 2012-10-09 06:38:39 EDT
Description of problem:
----------------------------
Opening a file from the mount point should fail when the file is present on all the bricks but doesn't have a GFID assigned to it (the file was created on the bricks directly from the back-end). The size of the file is the same on all the bricks.


Version-Release number of selected component (if applicable):
------------------------------------------------------------
[root@gqac010 ~]# gluster --version
glusterfs 3.3.0rhs built on Sep 10 2012 00:49:11


How reproducible:
------------------
Often

Steps to Reproduce:
---------------------
1. Create a pure replicate volume (1x2) and start the volume.

2. Create a file "file1" on both the bricks "brick1" and "brick2" directly from the storage nodes using the command: "dd if=/dev/urandom of=file1 bs=1K count=1"

3. Create a FUSE mount.

4. From the mount point execute "ls". This should list the file "file1".

5. From the mount point execute "cat file1 > /dev/null" (see the consolidated command sketch below).
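
For reference, a rough shell sketch of the above steps. The volume name, host names and brick path are taken from the outputs quoted later in this report ("rep", gqac010/gqac011, /home/export200); this is illustrative, not a captured session, so adjust names to your setup:

# on one storage node: create and start a 1x2 replicate volume
gluster volume create rep replica 2 gqac010:/home/export200 gqac011:/home/export200
gluster volume start rep

# on each storage node: write the file directly on the brick (back-end),
# bypassing the client stack, so no GFID/AFR xattrs are assigned at create time
dd if=/dev/urandom of=/home/export200/file1 bs=1K count=1

# on the client: FUSE-mount the volume and access the file
mkdir -p /mnt/rep
mount -t glusterfs gqac010:/rep /mnt/rep
ls /mnt/rep                      # lists file1
cat /mnt/rep/file1 > /dev/null   # expected to fail with EIO, but succeeds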
  
Actual results:
----------------
Command execution successful


Expected results:
----------------
The read should fail with "Input/Output Error" and the mount log should report "split-brain" error messages.

Also, the AFR extended attributes on the file "file1" are not set.

Brick1:- ( trusted.afr.rep-client-0 and trusted.afr.rep-client-1 are not set)
--------
[root@gqac010 ~]# getfattr -d -e hex -m . /home/export200/file1
getfattr: Removing leading '/' from absolute path names
# file: home/export200/file1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a686f6d655f726f6f745f743a733000
trusted.gfid=0x061fba6e47ca43c18ef1e83133715c6a


Brick2:-
-----------

[10/09/12 - 05:50:21 root@gqac011 ~]# getfattr -d -e hex -m . /home/export200/file1
getfattr: Removing leading '/' from absolute path names
# file: home/export200/file1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a686f6d655f726f6f745f743a733000
trusted.gfid=0x061fba6e47ca43c18ef1e83133715c6a
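
For comparison, a file created through the mount point would normally also carry AFR changelog attributes on each brick, typically all-zero when the replicas are in sync. Illustrative values only, not output captured from this setup:

trusted.afr.rep-client-0=0x000000000000000000000000
trusted.afr.rep-client-1=0x000000000000000000000000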

Additional Info:-
--------------
The mount log reports only the following message when "ls" is performed.

[2012-10-09 09:50:24.655622] I [afr-self-heal-common.c:1189:afr_sh_missing_entry_call_impunge_recreate] 0-rep-replicate-0: no missing files - /file1. proceeding to metadata check
Comment 2 Pranith Kumar K 2012-11-16 02:05:46 EST
Shwetha,
   Could you explain why you are expecting it to give EIO?

Pranith.
Comment 3 Pranith Kumar K 2012-11-16 02:06:51 EST
Shwetha,
     Is it because the file contents could be different?

Pranith.
Comment 4 spandura 2012-11-28 22:25:29 EST
Pranith, 

Even though the file sizes are the same on both bricks, they differ in content; the md5sums don't match.
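
A quick way to confirm the mismatch from the storage nodes (using the brick path shown above; this is an illustrative check, not output from this setup):

# run on each storage node against its local brick copy of the file
md5sum /home/export200/file1
stat -c '%s %n' /home/export200/file1    # sizes match, checksums differ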
Comment 5 Pranith Kumar K 2013-05-31 06:41:40 EDT
This issue only happens when the file sizes match but the contents differ. Normally, when such content mismatches exist, the AFR changelogs indicate which copy in the replica pair is the correct one. In this particular case, however, the content was written on the back-end directly, which is not supported.
   One way to fix it is to add an SELinux policy so that only gluster brick processes can modify files/directories on the bricks.

Pranith
Comment 6 Vivek Agarwal 2015-12-03 12:14:27 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
