Bug 820134
| Summary: | geo-replication with an unprivileged user fails because of SELinux. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Etsuji Nakai <enakai> |
| Component: | geo-replication | Assignee: | Csaba Henk <csaba> |
| Status: | CLOSED ERRATA | QA Contact: | Vijaykumar Koppad <vkoppad> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.0 | CC: | amarts, bbandari, gluster-bugs, sdharane, shaines, vshankar |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-09-23 22:29:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
This bug is solved in RHS 2.0 (RC1) by having SELinux completely disabled; hence marking it ON_QA.

In the new ISO, SELinux is disabled by default. Since SELinux is disabled, this is no longer a bug. Moving it to verified on the build glusterfs-3.4.0.18rhs-1.el6rhs.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
Description of problem:

geo-replication with an unprivileged user, such as the one below, fails:

# gluster vol geo-replication vol01 geoaccount@rhs20b2-02::vol01_slave

I suspect this is caused by SELinux. Here's the audit log from the slave:

-----
May 9 09:01:30 rhs20b2-02 kernel: type=1400 audit(1336554090.683:5): avc: denied { read } for pid=2130 comm="umount" name="mnt48dS2M" dev=vda2 ino=29940 scontext=unconfined_u:system_r:mount_t:s0 tcontext=unconfined_u:object_r:var_t:s0 tclass=lnk_file
-----

A workaround is to run "setenforce 0", but the final resolution should be appropriate context labeling.

My setup is RHS 2.0 Beta 2:

# rpm -qa | grep gluster
gluster-swift-1.4.8-3.el6.noarch
glusterfs-3.3.0qa38-1.el6.x86_64
glusterfs-fuse-3.3.0qa38-1.el6.x86_64
gluster-swift-container-1.4.8-3.el6.noarch
glusterfs-server-3.3.0qa38-1.el6.x86_64
glusterfs-geo-replication-3.3.0qa38-1.el6.x86_64
gluster-swift-proxy-1.4.8-3.el6.noarch
gluster-swift-account-1.4.8-3.el6.noarch
gluster-swift-plugin-1.0-1.noarch
glusterfs-rdma-3.3.0qa38-1.el6.x86_64
gluster-swift-object-1.4.8-3.el6.noarch

By the way, I found an entry for the same problem in the community version and added a comment to it as well: https://bugzilla.redhat.com/show_bug.cgi?id=811672
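The AVC record above can be decoded mechanically: `scontext` names the acting domain (here `mount_t`, the umount process) and `tcontext` names the type of the object it touched (the `var_t`-labeled symlink used for the temporary mount). A minimal sketch of pulling those two types out of the log line, assuming plain POSIX shell tools (the grep/cut pipeline is illustrative and not part of the report):

```shell
# Extract the source domain and target type from the AVC denial in the report.
avc='type=1400 audit(1336554090.683:5): avc: denied { read } for pid=2130 comm="umount" name="mnt48dS2M" dev=vda2 ino=29940 scontext=unconfined_u:system_r:mount_t:s0 tcontext=unconfined_u:object_r:var_t:s0 tclass=lnk_file'

# scontext/tcontext have the form user:role:type:level; the type is field 3.
src=$(printf '%s\n' "$avc" | grep -o 'scontext=[^ ]*' | cut -d: -f3)
tgt=$(printf '%s\n' "$avc" | grep -o 'tcontext=[^ ]*' | cut -d: -f3)

echo "source domain: $src"   # mount_t
echo "target type:   $tgt"   # var_t
```

Those two types point at the labeling fix the reporter asks for: give the directory holding the geo-replication mount-point symlinks a type that `mount_t` is allowed to read (persisted with `semanage fcontext` and applied with `restorecon`), rather than relying on the `setenforce 0` workaround.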