Bug 1088649
Summary: Some newly created folders have root ownership although created by unprivileged user
Product: [Community] GlusterFS
Component: replicate
Version: mainline
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Reporter: Vincent Caron <vcaron>
Assignee: Pranith Kumar K <pkarampu>
CC: bugs, gluster-bugs, kdhananj, pkarampu
Target Milestone: ---
Target Release: ---
Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Type: Bug
Clones: 1171847, 1184527, 1184528 (view as bug list)
Bug Blocks: 1171847, 1184527, 1184528, 1188082
Last Closed: 2015-05-14 17:25:40 UTC
Attachments: attachment 889897 [details], "logs where the 'root owner' bug occurred at 14:49:04 GMT"
Description
Vincent Caron, 2014-04-16 23:38:23 UTC
Vincent, could you please upload the complete logs of both bricks and clients so that I can take a look?

Pranith

Created attachment 889897 [details]
logs where the 'root owner' bug occurred at 14:49:04 GMT

I only keep 8 days' worth of logs, so I compiled some logs from today (2014-04-25) where the bug occurred at 14:49:04 GMT on /invoices/DA37/1D37/1EEF. Thanks for your attention!

Hi Vincent, there seems to be some problem with the attachment. I am not able to download the complete logs for this particular bug. When I untar it I get the following error:

pk@localhost - ~/sos/1088649
23:18:22 :) ⚡ tar xvzf root-bug-gluster-logs.tar.gz
client1/
client1/mnt-map-prod.log
client2/
client2/mnt-map-prod.log

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

I am able to download logs for other bugs, so I am not sure what the problem is. Could you upload it again? Did it cross the upload size limit or something?

Pranith

Hello, I redownloaded the tarball from this tracker and it untarred fine with 'tar xzf'. Maybe you only partially downloaded it? FYI, its MD5 hash is 25924f84704c8f4ce3ad3d144220de79.

Vincent, Krutika and I found that the brick has lots of log entries like the following:

srv-brick2-map-prod.log:[2014-04-23 16:51:01.791637] W [posix-handle.c:631:posix_handle_soft] 0-map-prod-posix: symlink ../../92/39/9239cdb1-b1bc-4b12-b189-99865dc3b5bd/5F1C -> /srv/brick2/map-prod/.glusterfs/e5/a9/e5a95672-4d87-4290-bf8e-b45839e1bec2 failed (File exists)

Could you check whether /srv/brick2/map-prod/.glusterfs/e5/a9/e5a95672-4d87-4290-bf8e-b45839e1bec2 is still a symlink or a directory? I think these look similar to the ones in bug 859581.

Pranith.

Hello, it's a symlink:

lrwxrwxrwx 1 root root 53 Apr 23 18:51 /srv/brick2/map-prod/.glusterfs/e5/a9/e5a95672-4d87-4290-bf8e-b45839e1bec2 -> ../../92/39/9239cdb1-b1bc-4b12-b189-99865dc3b5bd/5F1C

Meanwhile I stopped the replication because the cluster was way too slow with 16 bricks. We're operating with half of the cluster (I peer-detached a node), which boils down to 8 distributed bricks. This particular bug has not been encountered since, or at least it has not been reported to me. Sadly I can't verify it easily, because when I run something like 'ionice -c3 find /path/to/brick/ -user root' the GlusterFS performance drops dead. It looks related to dentry cache thrashing: the 8 spindles have free I/Os, the server uses most of its 16 GB of RAM for cache, and the search covers at least 1 million files.

REVIEW: http://review.gluster.org/9434 (storage/posix: Set gfid after all xattrs, uid/gid are set) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

COMMIT: http://review.gluster.org/9434 committed in master by Raghavendra Bhat (raghavendra)
------
commit f91d19a9ce6ec6659c84c3f907ebc3ecaceba176
Author: Pranith Kumar K <pkarampu>
Date: Mon Jan 12 12:50:50 2015 +0530

    storage/posix: Set gfid after all xattrs, uid/gid are set

    Problem:
    When a new entry is created, the gfid is set even before uid/gid and
    xattrs are set on the entry. This can lead to dht/afr healing that
    file/dir with the uid/gid it sees just after the gfid is set, i.e.
    root/root. Sometimes setattr/setxattr then fail on that file/dir.

    Fix:
    Set the gfid of the file/directory only after uid/gid and xattrs are
    set up properly. Readdirp and lookup either wait for the gfid to be
    assigned to the entry or do not update the in-memory inode ctx in the
    posix-acl xlator, which was producing lots of EACCES/EPERM errors to
    the application and to dht/afr self-heals.

    Change-Id: I0a6ced579daabe3452f5a85010a00ca6e8579451
    BUG: 1088649
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9434
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Raghavendra Bhat <raghavendra>
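To make the ordering issue concrete, here is a minimal sketch, not the actual storage/posix source: the helpers set_owner(), set_xattrs() and set_gfid() are hypothetical stand-ins for the real posix xlator code, and only the relative ordering of the steps reflects the fix described in the commit message above.

    /* Hypothetical sketch of the ordering change from review 9434. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Stand-in helpers; the real work happens inside the posix xlator. */
    static int set_owner(const char *path, uid_t uid, gid_t gid)
    {
            /* apply the uid/gid of the creating client */
            return lchown(path, uid, gid);
    }

    static int set_xattrs(const char *path)
    {
            (void)path;   /* requested xattrs, ACLs, ... would be applied here */
            return 0;
    }

    static int set_gfid(const char *path)
    {
            (void)path;   /* stores the gfid and creates the .glusterfs handle;
                             after this the entry is visible to readdirp/lookup
                             and therefore to afr/dht self-heal */
            return 0;
    }

    /* Old order (simplified): gfid first, so a racing self-heal could copy
     * the entry while it was still owned by root/root. */
    static int create_dir_old(const char *path, mode_t mode, uid_t uid, gid_t gid)
    {
            if (mkdir(path, mode) != 0)
                    return -1;
            set_gfid(path);                  /* entry becomes visible too early */
            set_xattrs(path);
            return set_owner(path, uid, gid);
    }

    /* New order (simplified): uid/gid and xattrs first, gfid last. */
    static int create_dir_new(const char *path, mode_t mode, uid_t uid, gid_t gid)
    {
            if (mkdir(path, mode) != 0)
                    return -1;
            if (set_xattrs(path) != 0 || set_owner(path, uid, gid) != 0)
                    return -1;
            return set_gfid(path);           /* visible only once fully set up */
    }

    int main(void)
    {
            (void)create_dir_old;            /* kept only for contrast */
            if (create_dir_new("/tmp/gfid-order-demo", 0755, getuid(), getgid()) != 0)
                    perror("create_dir_new");
            else
                    printf("created /tmp/gfid-order-demo with gfid set last\n");
            return 0;
    }

With this ordering, a lookup or self-heal that races with the creation either does not see the entry yet or sees it with its final ownership, which is the behaviour the fix aims for.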
REVIEW: http://review.gluster.org/9446 (storage/posix: Don't try to set gfid in case of INTERNAL-mknod) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

COMMIT: http://review.gluster.org/9446 committed in master by Raghavendra Bhat (raghavendra)
------
commit a3bc6cf85d20ab21380f96f8d11cad29a53b3bb4
Author: Pranith Kumar K <pkarampu>
Date: Wed Jan 14 17:10:41 2015 +0530

    storage/posix: Don't try to set gfid in case of INTERNAL-mknod

    Change-Id: I96540ed07f08e54d2a24a3b22c2437bddd558c85
    BUG: 1088649
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9446
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Raghavendra Bhat <raghavendra>

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user