Bug 1462210 - [RFE][GSS] Geo-replication skips the deletion of files if a slave subvolume was down
Summary: [RFE][GSS] Geo-replication skips the deletion of files if a slave subvolume was down
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.3
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Kotresh HR
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1408949 RHGS-usability-bug-GSS
 
Reported: 2017-06-16 12:30 UTC by Riyas Abdulrasak
Modified: 2020-01-07 05:36 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-12-14 04:30:50 UTC
Embargoed:
khiremat: needinfo-



Description Riyas Abdulrasak 2017-06-16 12:30:53 UTC
Description of problem:

Geo-replication does not sync file deletions to the slave if a subvolume of the slave volume (distribute-replicate) goes down and later comes back up.

Version-Release number of selected component (if applicable):


Red Hat Gluster Storage Server 3.3.0
glusterfs-3.8.4-27.el7rhgs.x86_64

How reproducible:

Always

Steps to Reproduce:

- Create master and slave volumes, both 2x2 (distribute-replicate).
- Kill one replica set of bricks on the slave side and keep the other replica set running.

Eg:
On a 2x2 slave volume, kill brick1 and brick2 (one replica set) and keep brick3 and brick4 up.


- Geo-replication status shows the sessions as Active and Passive (no Faulty sessions).

- Delete the contents of the master volume. The delete succeeds.


- Bring both replica bricks on the slave side back up.
- The files on the slave volume that resided on the bricks that were down are not deleted (see the command-level sketch below).
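
A command-level sketch of the reproducer (a rough outline only; the hostnames m1/m2/s1, volume names, brick paths and mount points are illustrative, and passwordless SSH for the geo-rep session is assumed to be already set up):

# 2x2 distribute-replicate master volume (create a matching slavevol on the slave nodes)
gluster volume create mastervol replica 2 m1:/bricks/b1 m2:/bricks/b2 m1:/bricks/b3 m2:/bricks/b4 force
gluster volume start mastervol

# geo-replication session from master to slave
gluster volume geo-replication mastervol s1::slavevol create push-pem
gluster volume geo-replication mastervol s1::slavevol start

# on a slave node: note the brick PIDs and kill one replica set
gluster volume status slavevol
kill <pid-of-slave-brick1> <pid-of-slave-brick2>

# on a master client: delete everything; geo-rep status stays Active/Passive
mount -t glusterfs m1:mastervol /mnt/mastervol
rm -rf /mnt/mastervol/*

# bring the killed slave bricks back; files hashed to that subvolume remain on the slave
gluster volume start slavevol force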


This causes the slave and master volumes to be out of sync.  

Actual results:

The slave volume has some stale data

Expected results:

The master and slave volumes should be in sync. 


Additional info:

Customers can hit this issue easily. If a brick process gets killed, geo-replication status does not report any errors, but the slave and master volumes end up out of sync.
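
One quick (illustrative) way to spot the divergence, assuming both volumes are mounted locally at these hypothetical paths:

# the counts should match once geo-rep has caught up; a higher count on
# the slave means stale files were left behind
find /mnt/mastervol -type f | wc -l
find /mnt/slavevol -type f | wc -l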

Comment 2 Aravinda VK 2017-06-16 13:26:51 UTC
I think this is the expected behavior if all the bricks of a subvolume are down and the file is hashed to that subvolume.

Simple example:

gluster volume create gv1 node1:/bricks/b1 node2:/bricks/b2 force
gluster volume start gv1

mount -t glusterfs localhost:gv1 /mnt/gv1
echo "Hello World" > /mnt/gv1/f1

Check the backend and kill the brick the file is hashed to, then try to access or delete the file. We always get "rm: cannot remove 'f1': No such file or directory". Geo-rep assumes the file is already deleted and proceeds without logging anything.
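
For example, one way to locate the hashed brick and reproduce the error (a rough sketch; the pathinfo xattr is read from the FUSE mount, and the PID placeholder is illustrative):

# which brick did the file hash to?
getfattr -n trusted.glusterfs.pathinfo /mnt/gv1/f1

# kill that brick process (PIDs are listed in volume status)
gluster volume status gv1
kill <pid-of-hashed-brick>

# the unlink from the mount now fails as if the file never existed
rm /mnt/gv1/f1      # rm: cannot remove 'f1': No such file or directory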

There is no way for Geo-replication to tell whether the file is already deleted or the subvolume is down. I think DHT could be enhanced to return a different error code when the subvolume is down.

Adding Raghavendra to check whether DHT can be made to differentiate these errors.

Comment 5 Amar Tumballi 2018-03-13 07:54:33 UTC
This falls under the category of issues where the quorum required for high availability is not available in the cluster. In that case, such behavior is expected.

The problem with marking the whole session as Faulty is that it would prevent other files from syncing too, which we believe is not the expected behavior. We recommend closing this bug as WONTFIX (or NOTABUG, since such failures are expected when quorum is not met).

To answer Aravinda's comment, DHT can never respond to the higher layer properly when the nodes are not reachable. 

One possible improvement is to return ENOTCONN (not connected) instead of ENOENT (not found), since that is the correct error at that point.
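
For reference, one way to observe the errno the mount actually returns, using Aravinda's example file (illustrative output; today DHT reports ENOENT even when the hashed subvolume is only down):

strace -f -e trace=unlink,unlinkat rm /mnt/gv1/f1
# current:  unlinkat(AT_FDCWD, "/mnt/gv1/f1", 0) = -1 ENOENT (No such file or directory)
# proposed: unlinkat(AT_FDCWD, "/mnt/gv1/f1", 0) = -1 ENOTCONN (Transport endpoint is not connected)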


Let us know what everybody thinks.

Comment 6 Kotresh HR 2018-04-20 12:11:28 UTC
(In reply to Amar Tumballi from comment #5)
> This falls under the category of issues where the quorum required for
> high availability is not available in the cluster. In that case, such
> behavior is expected.
> 
> The problem with marking the whole session as Faulty is that it would
> prevent other files from syncing too, which we believe is not the
> expected behavior. We recommend closing this bug as WONTFIX (or NOTABUG,
> since such failures are expected when quorum is not met).
> 
> To answer Aravinda's comment, DHT can never respond to the higher layer
> properly when the nodes are not reachable. 
> 
> One possible improvement is to return ENOTCONN (not connected) instead
> of ENOENT (not found), since that is the correct error at that point.
> 
> 
> Let us know what everybody thinks.

I am in favour of closing this as WONTFIX; we may document it somewhere as known or expected behaviour. Geo-rep can't do anything about it when the subvolume the file belongs to is down.

