Description of problem:
Setxattr, getxattr, or any other xattr get/set operation doesn't take rebalance into consideration.

Version-Release number of selected component (if applicable):

How reproducible:
Setxattr on a file that is being rebalanced (migrated) will end up setting the xattr on the source file only.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:
Setxattr operations need to be applied to both the source and the destination file if migration of that file is in progress.

Additional info:
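The expected behavior above can be illustrated with a small, self-contained sketch. This is hypothetical Python, not actual GlusterFS/DHT code: the `Replica` class and `setxattr` helper are inventions for illustration only. It models why an xattr set while migration is in progress must be written to both copies of the file, since the destination copy replaces the source once migration completes.

```python
# Hypothetical sketch (NOT GlusterFS source code): models the required
# dual-write semantics for setxattr during rebalance-driven migration.

class Replica:
    """One on-brick copy of a file, holding its extended attributes."""
    def __init__(self):
        self.xattrs = {}

def setxattr(source, dest, migrating, name, value):
    """Apply a setxattr the way the fix requires: if migration of the
    file is in progress, write to source AND destination; otherwise
    the source copy alone is authoritative."""
    source.xattrs[name] = value
    if migrating and dest is not None:
        dest.xattrs[name] = value

# File is mid-migration: source still serves I/O, dest is being populated.
src, dst = Replica(), Replica()

# Buggy behavior: only the source copy is updated...
src.xattrs["trusted.system.test"] = "HIm"
# ...so when migration completes and dst replaces src, the xattr is lost.
assert "trusted.system.test" not in dst.xattrs

# Fixed behavior: the fop is also applied on the destination.
setxattr(src, dst, migrating=True, name="trusted.system.test", value="HIm")
assert dst.xattrs["trusted.system.test"] == "HIm"
print("xattr survives migration:", dst.xattrs)
```

With the dual write in place, a getxattr issued after migration completes returns the value regardless of which copy became authoritative, which matches the verification transcripts later in this bug.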
Varun,

Since you've sent a patch upstream, can this be targeted for the Denali code freeze? If yes, can you add the "Blocker" flag so that this bug can be triaged?

regards,
Raghavendra.
*** Bug 1140531 has been marked as a duplicate of this bug. ***
[root@rhsqa14-vm5 mnt]# ls -la
total 3072004
drwxr-xr-x.  4 root root        138 Jul  6 05:44 .
dr-xr-xr-x. 31 root root       4096 Jul  6 06:10 ..
-rw-r--r--.  1 root root 1048576000 Jul  6 05:32 BIG
-rw-r--r--.  1 root root 2097152000 Jul  6 05:53 BIG2
drwxr-xr-x.  3 root root         72 Jul  3 20:35 .trashcan
[root@rhsqa14-vm5 mnt]#

[root@rhsqa14-vm5 mnt]# dd if=/dev/urandom of=BIG3 bs=2M count=100
100+0 records in
100+0 records out
209715200 bytes (210 MB) copied, 50.1863 s, 4.2 MB/s

[root@rhsqa14-vm5 mnt]# ls -la
total 3276804
drwxr-xr-x.  4 root root        149 Jul  6 06:54 .
dr-xr-xr-x. 31 root root       4096 Jul  6 06:10 ..
-rw-r--r--.  1 root root 1048576000 Jul  6 05:32 BIG
-rw-r--r--.  1 root root 2097152000 Jul  6 05:53 BIG2
-rw-r--r--.  1 root root  209715200 Jul  6 06:55 BIG3
drwxr-xr-x.  3 root root         72 Jul  3 20:35 .trashcan
[root@rhsqa14-vm5 mnt]#

[root@rhsqa14-vm5 mnt]# setfattr -n trusted.glusterfs.test -v "HI" BIG3
[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG3
# file: BIG3
security.selinux="system_u:object_r:fusefs_t:s0"
trusted.glusterfs.test="HI"

[root@rhsqa14-vm5 mnt]# setfattr -n trusted.user.test -v "Hello" BIG2
[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG2
# file: BIG2
security.selinux="system_u:object_r:fusefs_t:s0"
trusted.user.test="Hello"
[root@rhsqa14-vm5 mnt]#

[root@casino-vm1 ~]# gluster v info venus

Volume Name: venus
Type: Distributed-Replicate
Volume ID: 2c13461a-530c-4040-97f3-fa0f5f3837f1
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.35.57:/rhs/brick1/venus
Brick2: 10.70.35.136:/rhs/brick1/venus
Brick3: 10.70.35.57:/rhs/brick4/venus
Brick4: 10.70.35.136:/rhs/brick4/venus
Brick5: 10.70.35.57:/rhs/brick3/venus
Brick6: 10.70.35.136:/rhs/brick3/venus
Options Reconfigured:
features.uss: enable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.min-free-disk: 10
performance.readdir-ahead: on
[root@casino-vm1 ~]#

[root@casino-vm1 ~]# gluster v add-brick venus 10.70.35.57:/rhs/brick2/venus 10.70.35.136:/rhs/brick2/venus 10.70.35.57:/rhs/brick5/venus 10.70.35.136:/rhs/brick5/venus force
volume add-brick: success

[root@casino-vm1 ~]# gluster v rebalance start venus force
Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}
[root@casino-vm1 ~]# gluster v rebalance venus start force
volume rebalance: venus: success: Rebalance on venus has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 61589d07-feec-4eb0-acbb-44dc1ce0737c
[root@casino-vm1 ~]#

[root@casino-vm1 ~]# gluster v rebalance venus status
        Node   Rebalanced-files          size       scanned      failures       skipped         status   run time in secs
   ---------        -----------   -----------   -----------   -----------   -----------   ------------     --------------
   localhost                  1         2.0GB             3             0             0      completed              68.00
10.70.35.136                  0        0Bytes             0             0             0      completed               0.00
volume rebalance: venus: success:
[root@casino-vm1 ~]#

[root@rhsqa14-vm5 mnt]# ls
a100  a19  a23  a30  a31  a35  a41  a46  a50  a67  a99  BIG  BIG2  BIG3  x17  x20  x3  x36  x51  x57  x58  x66  x7  x73  x79  x81  x85  x97

Xattrs of the file which was set and migrated:
[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG2
# file: BIG2
security.selinux="system_u:object_r:fusefs_t:s0"
trusted.user.test="Hello"
[root@rhsqa14-vm5 mnt]#

Xattrs of the file which was set earlier but not migrated:

[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG3
# file: BIG3
security.selinux="system_u:object_r:fusefs_t:s0"
trusted.glusterfs.test="HI"

[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG
# file: BIG
security.selinux="system_u:object_r:fusefs_t:s0"
[root@rhsqa14-vm5 mnt]#

After migration completed, setting an xattr on a file which had none set earlier:

[root@rhsqa14-vm5 mnt]# setfattr -n trusted.system.test -v "HI" BIG
[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG
# file: BIG
security.selinux="system_u:object_r:fusefs_t:s0"
trusted.system.test="HI"
[root@rhsqa14-vm5 mnt]#

Xattrs after renaming the file:

[root@rhsqa14-vm5 mnt]# mv BIG BIG_new
[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG_new
# file: BIG_new
security.selinux="system_u:object_r:fusefs_t:s0"
trusted.system.test="HI"
[root@rhsqa14-vm5 mnt]#

[root@casino-vm1 ~]# gluster v rebalance venus start force
volume rebalance: venus: success: Rebalance on venus has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: b582b30c-c6cc-4370-989b-4bad09ae0413

[root@casino-vm1 ~]# gluster v rebalance venus status
        Node   Rebalanced-files          size       scanned      failures       skipped         status   run time in secs
   ---------        -----------   -----------   -----------   -----------   -----------   ------------     --------------
   localhost                  0        0Bytes             4             0             0    in progress               3.00
10.70.35.136                  0        0Bytes             0             0             0      completed               2.00
volume rebalance: venus: success:

[root@casino-vm1 ~]# gluster v rebalance venus status
        Node   Rebalanced-files          size       scanned      failures       skipped         status   run time in secs
   ---------        -----------   -----------   -----------   -----------   -----------   ------------     --------------
   localhost                  0        0Bytes             4             0             0    in progress              41.00
10.70.35.136                  0        0Bytes             0             0             0      completed               2.00
volume rebalance: venus: success:
[root@casino-vm1 ~]#

[root@casino-vm1 ~]# gluster v rebalance venus status
        Node   Rebalanced-files          size       scanned      failures       skipped         status   run time in secs
   ---------        -----------   -----------   -----------   -----------   -----------   ------------     --------------
   localhost                  1      1000.0MB             4             0             0      completed              62.00
10.70.35.136                  0        0Bytes             0             0             0      completed               2.00
volume rebalance: venus: success:
[root@casino-vm1 ~]#

[root@rhsqa14-vm5 mnt]# ls
a100  a19  a23  a30  a31  a35  a41  a46  a50  a67  a99  BIG2  BIG3  BIG_new  x17  x20  x3  x36  x51  x57  x58  x66  x7  x73  x79  x81  x85  x97

I tried to change the xattrs of a file while it was migrating:

[root@rhsqa14-vm5 mnt]# setfattr -n trusted.system.test -v "HIm" BIG_new
[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG_new
# file: BIG_new
security.selinux="system_u:object_r:fusefs_t:s0"
trusted.glusterfs.dht.linkto="venus-replicate-3"
trusted.system.test="HIm"
[root@rhsqa14-vm5 mnt]#

After rebalance completed, the xattrs of the migrating file had been successfully changed:

[root@rhsqa14-vm5 mnt]# getfattr -d -m . BIG_new
# file: BIG_new
security.selinux="system_u:object_r:fusefs_t:s0"
trusted.system.test="HIm"
[root@casino-vm1 ~]# rpm -qa | grep gluster
gluster-nagios-addons-0.2.3-1.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-6.el6rhs.x86_64
glusterfs-server-3.7.1-6.el6rhs.x86_64
gluster-nagios-common-0.2.0-1.el6rhs.noarch
glusterfs-3.7.1-6.el6rhs.x86_64
glusterfs-api-3.7.1-6.el6rhs.x86_64
glusterfs-cli-3.7.1-6.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-6.el6rhs.x86_64
vdsm-gluster-4.16.20-1.1.el6rhs.noarch
glusterfs-libs-3.7.1-6.el6rhs.x86_64
glusterfs-fuse-3.7.1-6.el6rhs.x86_64
glusterfs-rdma-3.7.1-6.el6rhs.x86_64
[root@casino-vm1 ~]#
Hi Nithya,

The doc text is updated. Please review it and share your technical review comments. If it looks okay, please sign off on it.

Regards,
Bhavana
Thanks Nithya. Changing the doc text flag to +
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html