Bug 1006899 - Windows/Posix ACLs are getting lost during replace brick operation
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: samba
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Assigned To: Ira Cooper
QA Contact: Lalatendu Mohanty
Whiteboard: gluster
Depends On:
Blocks:
Reported: 2013-09-11 09:23 EDT by Lalatendu Mohanty
Modified: 2015-04-01 03:31 EDT (History)
CC: 1 user

Doc Type: Bug Fix
Last Closed: 2015-04-01 03:31:11 EDT
Type: Bug


Attachments: None
Description Lalatendu Mohanty 2013-09-11 09:23:13 EDT
Description of problem:

After a replace-brick operation, the files that are migrated to the new brick lose their ACLs.
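
On disk, a POSIX ACL lives in the system.posix_acl_access extended attribute of the backend file, so the loss can be confirmed by comparing that xattr on the old and new bricks (a diagnostic sketch; the paths follow the volume layout shown below, and the file must have been migrated to the replaced brick):

getfattr -n system.posix_acl_access -e hex /rhs/brick1/testvol4-b1/a3   # old brick: attribute present
getfattr -n system.posix_acl_access -e hex /rhs/brick2/testvol4-b1/a3   # new brick after migration: attribute missing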

Version-Release number of selected component (if applicable):

samba-glusterfs-3.6.9-160.3.el6rhs.x86_64
glusterfs-server-3.4.0.33rhs-1.el6rhs

How reproducible:

Always

Steps to Reproduce:
1. Create a gluster distribute (DHT) volume and start it.
2. Mount the volume on a Windows client and create files with ACLs.
3. Perform a replace-brick operation.
4. After the replace-brick, check the ACLs of the files that were migrated to the new brick (see the command sketch after this list).
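
For reference, a minimal reproduction sketch from a Linux client (the hostnames and brick paths match this report; the mount point, file name, and group are illustrative, and the group must exist on the client; note the FUSE mount needs -o acl for ACLs to be visible):

gluster volume create testvol4 10.70.37.55:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick1/testvol4-b1
gluster volume start testvol4
mount -t glusterfs -o acl 10.70.37.55:/testvol4 /mnt/testvol4
touch /mnt/testvol4/a3
setfacl -m g:elfs:rwx /mnt/testvol4/a3
getfacl /mnt/testvol4/a3    # named group entry present
gluster volume replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 start
gluster volume replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 commit
getfacl /mnt/testvol4/a3    # named group entry gone for files migrated to the new brick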

/var/log/glusterfs/.cmd_log_history

[2013-09-11 11:59:07.109887]  : v create testvol4 10.70.37.55:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick1/testvol4-b1 : SUCCESS
[2013-09-11 12:08:00.688171]  : v start testvol4 : SUCCESS
[2013-09-11 12:08:46.985446]  : v set testvol4 stat-prefetch off : SUCCESS
[2013-09-11 12:08:55.287691]  : v set testvol4 server.allow-insecure on : SUCCESS
[2013-09-11 12:52:43.299101]  : v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 start : SUCCESS
[2013-09-11 12:52:47.221155]  : v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 status : SUCCESS
[2013-09-11 12:52:50.272799]  : v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 status : SUCCESS
[2013-09-11 12:52:52.274036]  : v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 status : SUCCESS
[2013-09-11 12:52:53.202033]  : v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 status : SUCCESS
[2013-09-11 12:53:06.792325]  : v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 commit : SUCCESS

Actual results:

The ACLs on the migrated files are lost.

Expected results:

The ACLs should not be removed by the replace-brick operation.

Additional info:

To get more information about the issue, I captured the ACL of a file (from the FUSE mount point) before and after the replace-brick. As the getfacl output shows, the ACL entries for the groups elfs, dwarfs, and humans were removed by the replace-brick operation.

[root@bvt-rhs1 acl]# getfacl a3
# file: a3
# owner: hobbit1
# group: domain\040users
user::rw-
group::r--
group:elfs:rwx
group:dwarfs:rwx
group:humans:rwx
mask::rwx
other::---
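
For reference, the equivalent ACL could also be set from a Linux client with setfacl (a sketch; the group names are taken from the output above and must exist on the client):

[root@bvt-rhs1 acl]# setfacl -m g:elfs:rwx,g:dwarfs:rwx,g:humans:rwx a3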

[root@bvt-rhs1 acl]# gluster v info testvol4
 
Volume Name: testvol4
Type: Distribute
Volume ID: 1a98231b-bb7c-4629-bb01-59b77539fd8c
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.55:/rhs/brick1/testvol4-b1
Brick2: 10.70.37.56:/rhs/brick1/testvol4-b1
Options Reconfigured:
server.allow-insecure: on
performance.stat-prefetch: off
[root@bvt-rhs1 acl]# 

[root@bvt-rhs1 acl]# gluster v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 start
volume replace-brick: success: replace-brick started successfully
ID: 20edcd56-5594-4878-a9ee-a388b5181eca

[root@bvt-rhs1 acl]# gluster v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 status
volume replace-brick: success: Number of files migrated = 20	Migration complete

[root@bvt-rhs1 acl]# gluster v replace-brick testvol4 10.70.37.56:/rhs/brick1/testvol4-b1 10.70.37.56:/rhs/brick2/testvol4-b1 commit
volume replace-brick: success: replace-brick commit successful

[root@bvt-rhs1 acl]# gluster v info testvol4
Volume Name: testvol4
Type: Distribute
Volume ID: 1a98231b-bb7c-4629-bb01-59b77539fd8c
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.55:/rhs/brick1/testvol4-b1
Brick2: 10.70.37.56:/rhs/brick2/testvol4-b1
Options Reconfigured:
server.allow-insecure: on
performance.stat-prefetch: off
[root@bvt-rhs1 acl]# 

[root@bvt-rhs1 acl]# getfacl a3
# file: a3
# owner: hobbit1
# group: domain\040users
user::rw-
group::rwx
other::---
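
Note that the surviving group:: entry now reads rwx, which was the old mask:: value. With a POSIX ACL present, the group bits of the file mode store the ACL mask rather than the owning group's permission, so when the ACL xattr is dropped during migration those bits are reinterpreted as the plain group permission. A quick check (sketch; expected output given the mode above):

[root@bvt-rhs1 acl]# stat -c '%A' a3
-rw-rwx---
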
Comment 2 Lalatendu Mohanty 2013-09-12 03:32:00 EDT
Replace-brick has known issues; hence the severity of this bug is medium.

Another documentation bug, BZ 1006898, has been raised to remove the replace-brick section from the admin doc.
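
In the meantime, a possible interim workaround (a sketch, not validated here) is to dump the ACLs from the mount point before the replace-brick and replay them afterwards, since setfacl can restore getfacl output:

cd /mnt/testvol4
getfacl -R . > /tmp/acl-backup.txt
# ... perform the replace-brick ...
setfacl --restore=/tmp/acl-backup.txt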
