Bug 1760399
| Summary: | WORMed files couldn't be migrated during rebalancing | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | david.spisla |
| Component: | distribute | Assignee: | Mohit Agrawal <moagrawa> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | Version: | mainline |
| CC: | bugs, david.spisla, moagrawa | Target Milestone: | --- |
| Target Release: | --- | Hardware: | Unspecified |
| OS: | Unspecified | Whiteboard: | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-10-12 03:03:47 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | Attachments: | Worm_specific_patch (attachment 1624571) |
Description (david.spisla, 2019-10-10 12:59:19 UTC)
Created attachment 1624571 [details]
Worm_specific_patch
Hi,

Yes, he is right. To identify the client at the server-side xlators, we check the pid value: if the pid is negative, the fop request has come from an internal client; otherwise it has come from an external client.

Below are all of the defined pids that we use as internal pids:
```c
GF_CLIENT_PID_MAX               =  0,
GF_CLIENT_PID_GSYNCD            = -1,
GF_CLIENT_PID_HADOOP            = -2,
GF_CLIENT_PID_DEFRAG            = -3,
GF_CLIENT_PID_NO_ROOT_SQUASH    = -4,
GF_CLIENT_PID_QUOTA_MOUNT       = -5,
GF_CLIENT_PID_SELF_HEALD        = -6,
GF_CLIENT_PID_GLFS_HEAL         = -7,
GF_CLIENT_PID_BITD              = -8,
GF_CLIENT_PID_SCRUB             = -9,
GF_CLIENT_PID_TIER_DEFRAG       = -10,
GF_SERVER_PID_TRASH             = -11,
GF_CLIENT_PID_ADD_REPLICA_MOUNT = -12
```
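Since internal clients announce themselves purely through these negative pid values, the server-side test is a simple sign check on the call frame. A minimal sketch, assuming the GlusterFS xlator headers; the helper name is illustrative and not taken from the actual worm xlator source:

```c
#include <glusterfs/stack.h>   /* call_frame_t; the header path may vary by version */

/* Illustrative helper: internal clients (rebalance/defrag, self-heal,
 * gsyncd, ...) identify themselves with one of the negative
 * GF_CLIENT_PID_* values above, so checking the sign of the pid on the
 * call frame is enough to classify the caller. */
static gf_boolean_t
is_internal_client (call_frame_t *frame)
{
        return (frame->root->pid < 0) ? _gf_true : _gf_false;
}
```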
To prevent the client_pid from being assigned an internal pid, we have to add conditional checks in set_fuse_mount_options so that a user cannot assign any of the defined internal pids to the fuse process, and on the server side we need to validate that a pid is only treated as internal when it falls within the reserved range (below 0 and not beyond -12).
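A hedged sketch of that server-side range check, using the enum values shown above; the function name is illustrative and not taken from the actual patch:

```c
/* Reject a user-supplied client-pid that collides with a reserved
 * internal pid: the reserved values run from GF_CLIENT_PID_GSYNCD (-1)
 * down to GF_CLIENT_PID_ADD_REPLICA_MOUNT (-12). */
static int
validate_client_pid (int client_pid)
{
        if (client_pid <= GF_CLIENT_PID_GSYNCD &&
            client_pid >= GF_CLIENT_PID_ADD_REPLICA_MOUNT)
                return -1;   /* reserved for internal clients */

        return 0;
}
```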
I have included a raw patch (specific to the worm xlator only), but the same change needs to be made in the other xlators as well. I will upload the complete patch.
Thanks,
Mohit Agrawal
The patch is posted to resolve the same: https://review.gluster.org/#/c/glusterfs/+/23540/

Hi David,

It is not a legitimate way to access a volume through a negative client-pid. By default the mount.glusterfs script does not pass any client-pid argument to the fuse process. An internal fop request from the fuse process will not carry a negative pid to the server unless the user bypasses the mount command and passes a negative client-pid directly to the glusterfs process, so we think it is not a bug and there is no harm in this option. We ourselves use a negative pid many times, for script-based file migration and for our own testing purposes. To restrict access by a malicious client, a user can configure auth.allow without hurting performance.

Regards,
Mohit Agrawal

For more, you can access the same discussion here: https://lists.gluster.org/pipermail/gluster-devel/2019-October/056616.html

Hello Mohit,

I understand the reasoning why there is no need to implement more protection against a potential malicious client. But the main issue here is to ensure the migration of WORMed files during a full rebalance. There is still no solution for this. What do you think?

Regards
David Spisla

Hi,
Yes, a WORMed file should be moved onto a newly added brick.
I have tried to reproduce this on the following version:
```
glusterfs-libs-5.5-1.el7.x86_64
glusterfs-fuse-5.5-1.el7.x86_64
glusterfs-devel-5.5-1.el7.x86_64
glusterfs-rdma-5.5-1.el7.x86_64
glusterfs-5.5-1.el7.x86_64
glusterfs-cli-5.5-1.el7.x86_64
glusterfs-api-devel-5.5-1.el7.x86_64
glusterfs-cloudsync-plugins-5.5-1.el7.x86_64
glusterfs-client-xlators-5.5-1.el7.x86_64
glusterfs-server-5.5-1.el7.x86_64
glusterfs-events-5.5-1.el7.x86_64
glusterfs-debuginfo-5.5-1.el7.x86_64
glusterfs-api-5.5-1.el7.x86_64
glusterfs-extra-xlators-5.5-1.el7.x86_64
glusterfs-geo-replication-5.5-1.el7.x86_64
```
Reproducer steps:

1) `gluster v create test1 replica 3 10.74.251.224:/dist1/b{0..2} force`
2) `gluster v set test1 features.worm-file-level on`
3) Mount the volume at /mnt
4) Write the data: `time for (( i=0 ; i<=10 ; i++ )); do dd if=/dev/urandom of=/mnt/file$i bs=1M count=100; mkdir -p /mnt/dir$i/dir1/dir2/dir3/dir4/dir5/; done`
5) Run add-brick: `gluster v add-brick test1 10.74.251.224:/dist2/b{0..2}`
6) Start rebalance: `gluster v rebalance test1 start`

5 files are successfully transferred onto dist2/b{0..2}.
I am not able to reproduce the issue; please correct me if I have missed any steps. Can you please share the rebalance logs and confirm the reproducer steps? In most of the fops in the worm xlator there is a check: if the fop request has come from an internal client, the fop is wound on to the next xlator (see the sketch after this comment). For some of the fops this check is missing; to confirm that, I need the rebalance logs along with the reproducer steps.

Regards,
Mohit Agrawal
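A minimal sketch of the wind-through pattern just described, using the setattr fop as an example. The function body, the `is_internal_client` helper (from the earlier sketch), and the EROFS outcome are illustrative assumptions, not the actual worm xlator code:

```c
#include <glusterfs/xlator.h>   /* xlator_t, STACK_WIND_TAIL, ...; path may vary */

int32_t
worm_setattr_sketch (call_frame_t *frame, xlator_t *this, loc_t *loc,
                     struct iatt *stbuf, int32_t valid, dict_t *xdata)
{
        /* Internal clients such as the rebalance (defrag) process bypass
         * WORM enforcement entirely: the fop is wound straight to the
         * next xlator in the graph. */
        if (is_internal_client (frame)) {
                STACK_WIND_TAIL (frame, FIRST_CHILD (this),
                                 FIRST_CHILD (this)->fops->setattr,
                                 loc, stbuf, valid, xdata);
                return 0;
        }

        /* External clients hit the WORM checks; this sketch simply
         * refuses the attribute change on a write-protected file. */
        STACK_UNWIND_STRICT (setattr, frame, -1, EROFS, NULL, NULL, NULL);
        return 0;
}
```

If a fop that rebalance relies on lacks the internal-client branch, the WORM checks would reject it even for the defrag process, which would match the symptom reported here.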
Hello Mohit,

after creating the files, they should be set read-only (step 5 below):
Reproducer steps:

1) `gluster v create test1 replica 3 10.74.251.224:/dist1/b{0..2} force`
2) `gluster v set test1 features.worm-file-level on`
3) Mount the volume at /mnt
4) Write the data: `time for (( i=0 ; i<=10 ; i++ )); do dd if=/dev/urandom of=/mnt/file$i bs=1M count=100; mkdir -p /mnt/dir$i/dir1/dir2/dir3/dir4/dir5/; done`
5) Set the files read-only: `for i in {0..10}; do chmod 444 /mnt/file$i; done`
6) Run add-brick: `gluster v add-brick test1 10.74.251.224:/dist2/b{0..2}`
7) Start rebalance: `gluster v rebalance test1 start`

5 files are successfully transferred onto dist2/b{0..2}.
Regards
David Spisla
Hi David,

Are you sure you are using the same gluster bits (glusterfs-5.5-1.el7.x86_64)? I am still not able to reproduce the issue. Can you share the rebalance logs from a run where rebalance is not working correctly?

Thanks,
Mohit Agrawal

Hello Mohit,

I will do so when I have some free minutes.

Hi,

Kindly also share the volume options along with the volume topology, to debug this further.

Thanks,
Mohit Agrawal

Please share if you have any updates.

Thanks,
Mohit Agrawal

Hello Mohit,

just now I made some observations and found out that the reason for the bug lies in some custom changes that we made to the WORM xlator, so it is our own fault.

Regards
David Spisla

Hi,

Thanks for your update.

Regards,
Mohit Agrawal

No problem. Sorry for the inconvenience!

Regards
David Spisla