Bug 1367999
| Summary: | Build logs not accessible in CentOS setup | ||
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | rjoseph |
| Component: | project-infrastructure | Assignee: | bugs <bugs> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 3.8.2 | CC: | bugs, gluster-infra, mscherer, nigelb |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2016-08-23 11:28:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Description
rjoseph
2016-08-18 05:48:59 UTC
I see SELinux permission denials in the audit logs. Michael, I thought we fixed this a while ago. Do you know what might be going wrong? I ran the Ansible task for SELinux manually from my computer against this node and it still didn't help.
audit.log content:

```
type=AVC msg=audit(1471500301.159:427156): avc: denied { read } for pid=541 comm="nginx" name="glusterfs-logs-20160818:03:02:41.tgz" dev=xvda1 ino=1335302 scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=file
type=SYSCALL msg=audit(1471500301.159:427156): arch=c000003e syscall=2 success=no exit=-13 a0=ad2fb0 a1=800 a2=0 a3=1000 items=0 ppid=25688 pid=541 auid=4294967295 uid=498 gid=498 euid=498 suid=498 fsuid=498 egid=498 sgid=498 fsgid=498 tty=(none) ses=4294967295 comm="nginx" exe="/usr/sbin/nginx" subj=system_u:system_r:httpd_t:s0 key=(null)
```
The logs have the wrong label:

```
# restorecon -Rv /archives/
restorecon reset /archives context system_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0
restorecon reset /archives/archived_builds context system_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0
restorecon reset /archives/logs context system_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0
restorecon reset /archives/logs/glusterfs-logs-20160818:03:02:41.tgz context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:public_content_t:s0
restorecon reset /archives/logs/glusterfs-logs-20160818:05:57:29.tgz context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:public_content_t:s0
restorecon reset /archives/logs/glusterfs-logs-20160818:05:52:15.tgz context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:public_content_t:s0
restorecon reset /archives/logs/glusterfs-logs-20160816:10:22:43.tgz context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:public_content_t:s0
restorecon reset /archives/logs/glusterfs-logs-20160816:11:52:53.tgz context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:public_content_t:s0
restorecon reset /archives/log context system_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0
```

However, the proper file context rule is in place, so I wonder if something changed that caused the context to not be applied. Nodes 22, 24, 27 and 28 have the same issue; I am running restorecon on them. This should now be fixed.
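Since the comment above mentions an Ansible task for SELinux, a persistent fix would normally record the file-context mapping in policy so that newly written archives are labelled correctly, rather than relying on one-off restorecon runs. A minimal sketch of such a task, assuming the `/archives` path and `public_content_t` type seen in the restorecon output above (the task names and use of the `community.general.sefcontext` module are illustrative, not the actual playbook used on the node):

```
# Illustrative Ansible tasks: record the fcontext mapping in SELinux
# policy, then relabel existing files to match it.
- name: Map /archives to public_content_t in SELinux policy
  community.general.sefcontext:
    target: '/archives(/.*)?'
    setype: public_content_t
    state: present

- name: Apply the recorded context to existing files
  ansible.builtin.command: restorecon -Rv /archives/
```

Without the recorded mapping, files created after a relabel fall back to the directory default (here `default_t`), which nginx's `httpd_t` domain cannot read, reproducing the AVC denial above.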