Bug 821710 - Add-brick and rebalance in succession doesn't copy the xtimes for directories
Summary: Add-brick and rebalance in succession doesn't copy the xtimes for directories
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: urgent
Target Milestone: ---
Assignee: Venky Shankar
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-05-15 11:51 UTC by Vijaykumar Koppad
Modified: 2015-10-22 15:46 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-22 15:46:38 UTC
Regression: ---
Mount Type: ---
Documentation: DP
CRM:
Verified Versions:
Embargoed:



Description Vijaykumar Koppad 2012-05-15 11:51:46 UTC
Description: If bricks are added to a volume on which indexing is enabled and the volume is then rebalanced, the xtimes of the directories are not copied to the new bricks.


Version-Release number of selected component (if applicable): 3.3.0qa41


How reproducible: Always


Steps to Reproduce:
1. Create a replicate volume and create some files and a directory structure on the mount point.
2. Enable indexing for the volume.
3. Check the xtime of the files in the backend (see the getfattr sketch after these steps).
4. Add 2 bricks to the volume.
5. Do a rebalance.
6. Check the xtime of the files in the backend.
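
(For reference, the backend checks in steps 3 and 6 can be done with plain getfattr on the brick directories. This is only a sketch: the key name trusted.glusterfs.<VOLUME-ID>.xtime below is the usual marker key format and should be verified against the actual volume, e.g. with "getfattr -d -m . -e hex <brick-path>".)

# Sketch: dump the xtime xattr of directory s1 on both original bricks of
# volume "doa". The key format is an assumption; verify before relying on it.
VOLID=$(gluster volume info doa | awk '/Volume ID/ {print $3}')
getfattr --absolute-names -e hex -n trusted.glusterfs.${VOLID}.xtime /exportdir/d1/s1
getfattr --absolute-names -e hex -n trusted.glusterfs.${VOLID}.xtime /exportdir/d2/s1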
  
Actual results: The xtimes are not copied to the new bricks.


Expected results: The xtimes should be copied.


Additional info:
#############################################################################
xtime of the backend before add-brick and rebalance 
#############################################################################
                               |     /exportdir/d1     /exportdir/d2
./                             | 1337059933.185576 1337059933.186221 OOSYNC!
file2                          | 1337059933.185576 1337059933.186221 OOSYNC!
file1                          | 1337059933.180506 1337059933.180571 OOSYNC!
s1/                            | 1337059895.919342 1337059895.920354 OOSYNC!
s1/s2/                         | 1337059895.919342 1337059895.920354 OOSYNC!
s1/s2/s3/                      | 1337059895.919342 1337059895.920354 OOSYNC!
s1/s2/s3/s4/                   | 1337059895.919342 1337059895.920354 OOSYNC!

Volume Name: doa
Type: Replicate
Volume ID: d7da4da7-66ec-4c55-9d43-5dc205be8c9f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.17.251.151:/exportdir/d1
Brick2: 172.17.251.151:/exportdir/d2
Options Reconfigured:
geo-replication.indexing: on

#############################################################################
xtime of the backend after add-brick and rebalance 
#############################################################################


                               |     /exportdir/d1     /exportdir/d2     /exportdir/d3     /exportdir/d4
./                             | 1337059933.185576 1337059933.186221            ENODATA            ENODATA OOSYNC!
s1/                            | 1337059895.919342 1337059895.920354            ENODATA            ENODATA OOSYNC!
s1/s2/                         | 1337059895.919342 1337059895.920354            ENODATA            ENODATA OOSYNC!
s1/s2/s3/                      | 1337059895.919342 1337059895.920354            ENODATA            ENODATA OOSYNC!
s1/s2/s3/s4/                   | 1337059895.919342 1337059895.920354            ENODATA            ENODATA OOSYNC!
file2                          |            ENOENT            ENOENT 1337059933.186221 1337059933.186221 OOSYNC!
file1                          |            ENOENT            ENOENT 1337059933.180506 1337059933.180506 OOSYNC!

Volume Name: doa
Type: Distributed-Replicate
Volume ID: d7da4da7-66ec-4c55-9d43-5dc205be8c9f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.17.251.151:/exportdir/d1
Brick2: 172.17.251.151:/exportdir/d2
Brick3: 172.17.251.151:/exportdir/d3
Brick4: 172.17.251.151:/exportdir/d4
Options Reconfigured:
geo-replication.indexing: on

Comment 1 Amar Tumballi 2012-05-15 12:19:22 UTC
The issue is with the directory self-heal in distribute: the rebalance code path doesn't handle directories at all.

Comment 2 shishir gowda 2012-05-24 10:35:11 UTC
Rebalance/directory self-heal will not and should not copy over the xtimes; marker filters them out anyway.
If the xtimes are missing, they should be set by marker running in the relevant brick processes, since self-heal/rebalance creates the directory with an mkdir call. Changing component to marker.

Comment 3 Junaid 2012-05-27 09:24:52 UTC
The reason marker doesn't create the xtime xattrs on mkdir is that the rebalance client process is mounted with a special client-pid, which instructs the marker translator not to create xtime xattrs. I also found that rebalance was copying the xtime xattrs of files from the existing brick to the newly added brick when a file is moved there as part of rebalance, as can be seen below.

[root@QA-17 ~]# gluster v i
 
Volume Name: doa
Type: Distribute
Volume ID: 00897a42-f579-4ef4-a90e-96ad2afe50b1
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: 172.17.251.56:/exportdir/s1
Brick2: 172.17.251.56:/exportdir/s2
Brick3: 172.17.251.56:/exportdir/s3
Brick4: 172.17.251.56:/exportdir/s4
Brick5: 172.17.251.56:/exportdir/s5

$ ./xattr/bin/xtime.rb /exportdir/ | grep -v glusterfs

                |      /exportdir/s3     /exportdir/s5                                                                                 
./              |  1338019084.915617 1338019084.903663
d1/             |  1338019084.904312 1338019084.903663
d1/d2/          |  1338019084.904312 1338019084.903663
d1/d2/d3/       |  1338019084.904312 1338019084.903663
d1/d2/d3/d4/    |  1338019084.904312 1338019084.903663
d1/d2/d3/d4/d5/ |  1338019084.904312 1338019084.903663
file5           |  1338019084.915617            ENOENT 
############################################################################

Now add /exportdir/s6 and run rebalance. As part of rebalance, the file "file5" is moved from /exportdir/s3 to /exportdir/s6, but its xtime is the same as before.

Volume Name: doa
Type: Distribute
Volume ID: 00897a42-f579-4ef4-a90e-96ad2afe50b1
Status: Started
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1: 172.17.251.56:/exportdir/s1
Brick2: 172.17.251.56:/exportdir/s2
Brick3: 172.17.251.56:/exportdir/s3
Brick4: 172.17.251.56:/exportdir/s4
Brick5: 172.17.251.56:/exportdir/s5
Brick6: 172.17.251.56:/exportdir/s6
Options Reconfigured:
geo-replication.indexing: on


$ ./xattr/bin/xtime.rb /exportdir/ | grep -v glusterfs

                 |     /exportdir/s3     /exportdir/s5     /exportdir/s6
./               | 1338019084.915617 1338019084.903663           ENODATA                                                               
d1/              | 1338019084.904312 1338019084.903663           ENODATA
d1/d2/           | 1338019084.904312 1338019084.903663           ENODATA
d1/d2/d3/        | 1338019084.904312 1338019084.903663           ENODATA
d1/d2/d3/d4/     | 1338019084.904312 1338019084.903663           ENODATA
d1/d2/d3/d4/d5/  | 1338019084.904312 1338019084.903663           ENODATA
file5            |            ENOENT            ENOENT 1338019084.915617

#############################################################################
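
(The per-brick views above were produced with the xtime.rb helper. Where that script isn't available, a rough stand-in with plain getfattr, assuming the same trusted.glusterfs.<VOLUME-ID>.xtime key format noted earlier, would be:)

# Rough stand-in for xtime.rb: print the xtime key of d1 on each brick.
# The key format is an assumption; adjust bricks and paths as needed.
VOLID=$(gluster volume info doa | awk '/Volume ID/ {print $3}')
for brick in /exportdir/s3 /exportdir/s5 /exportdir/s6; do
    echo "== $brick =="
    getfattr --absolute-names -e hex -n trusted.glusterfs.${VOLID}.xtime "$brick/d1"
done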


So, rebalance must copy the xtime from the existing brick to the new brick when it does the mkdir. I think it is creating the directories directly as part of directory self-heal without copying the xattrs. Also, there is no filtering of the xtime xattr in marker.
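
(A purely illustrative manual workaround, assuming the same key format as above, would be to copy a directory's xtime from an existing brick to the newly added one by hand; this is a sketch, not a supported procedure:)

# Sketch only: copy d1's xtime xattr from the existing brick s3 to the newly
# added brick s6. The key format is an assumption; not a recommended procedure.
VOLID=$(gluster volume info doa | awk '/Volume ID/ {print $3}')
XTIME=$(getfattr --absolute-names -e hex -n trusted.glusterfs.${VOLID}.xtime \
        /exportdir/s3/d1 | awk -F= '/xtime/ {print $2}')
setfattr -n trusted.glusterfs.${VOLID}.xtime -v "$XTIME" /exportdir/s6/d1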

Comment 4 Amar Tumballi 2012-07-11 03:48:10 UTC
This needs more thought before we proceed with a fix. I feel it may not be a bug at all, but simply how things behave during add-brick. We need documentation to state that this can happen.

Comment 5 Vidya Sakar 2012-08-18 02:35:14 UTC
Adding DP flag.

Comment 7 Kaleb KEITHLEY 2015-10-22 15:46:38 UTC
Because of the large number of bugs filed against it, the "mainline" version is ambiguous and is about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.

