Bug 1395643 - [SELinux] [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
Summary: [SELinux] [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard:
Depends On: 1247056
Blocks: 1202842 1216951 1223636 1239269 1239270
 
Reported: 2016-11-16 10:49 UTC by Avra Sengupta
Modified: 2017-05-30 18:35 UTC (History)
13 users (show)

Fixed In Version: glusterfs-3.11.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1247056
Environment:
Last Closed: 2017-05-30 18:35:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Avra Sengupta 2016-11-16 10:49:01 UTC
+++ This bug was initially created as a clone of Bug #1247056 +++

+++ This bug was initially created as a clone of Bug #1231647 +++

Description of problem:
======================
The scheduler does not create any snapshots when jobs are scheduled on RHEL 7.1.

Creating snapshots manually, without using the scheduler, is successful.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.1-1.el7rhgs.x86_64

How reproducible:


Steps to Reproduce:
====================
1. Create a shared storage volume and mount it on /var/run/gluster/shared_storage

2. Initialise snap_scheduler from all nodes

3. Enable snap_scheduler from any one node in the cluster

4. Add a job

snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
J1               * * * * *        Snapshot Create  master   

No snapshots were created 

gluster snapshot list
No snapshots present
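For reference, a condensed root-shell transcript of the steps above; the hostname and volume name are placeholders, and the shared storage may alternatively be set up via the cluster.enable-shared-storage volume option:

# mount -t glusterfs node1:/gluster_shared_storage /var/run/gluster/shared_storage   (on every node)
# snap_scheduler.py init                            (on every node)
# snap_scheduler.py enable                          (on any one node)
# snap_scheduler.py add "J1" "* * * * *" "master"
# snap_scheduler.py list
# gluster snapshot list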

It looks like SELinux is not applying the right user context to the cron files:

# ls -lZ /etc/cron.d/
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 0hourly
-rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 gcron_update_task
lrwxrwxrwx. root root unconfined_u:object_r:system_cron_spool_t:s0 glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 raid-check
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 unbound-anchor
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 vdsm-libvirt-logrotate 
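As a quick sanity check (standard SELinux tooling; the printed context below is what the policy is expected to return, not output captured from this machine), the expected label can be queried and reapplied:

# matchpathcon /etc/cron.d/gcron_update_task
/etc/cron.d/gcron_update_task	system_u:object_r:system_cron_spool_t:s0
# restorecon -v /etc/cron.d/gcron_update_task

Note that restorecon resets only the type portion of the context by default; add -F to also reset the user portion (unconfined_u -> system_u).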


rpm -qa |grep selinux
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-27.el7.noarch
selinux-policy-targeted-3.13.1-27.el7.noarch
libselinux-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64


Actual results:
==============
The scheduler does not create any snapshots when jobs are scheduled.


Expected results:



Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-06-15 03:22:36 EDT ---

This bug is automatically being proposed for Red Hat Gluster Storage 3.1.0 by setting the release flag 'rhgs-3.1.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from  on 2015-06-15 07:31:55 EDT ---

The scheduler creates snapshots when SELinux is in 'Permissive' mode on RHEL 7.1.

--- Additional comment from Prasanth on 2015-06-19 04:35:29 EDT ---

Seema,

Are you seeing the same issue with the latest build "selinux-policy-3.13.1-29.el7"? If so, please attach the audit.log for further debugging.

--- Additional comment from Avra Sengupta on 2015-06-19 04:40:23 EDT ---

We create two files, glusterfs_snap_cron_tasks and gcron_update_task, in /etc/cron.d. These files are created by "snap_scheduler.py init"; the scheduler scripts live in /usr/sbin and run as root. The issue we are seeing here on 7.1 is that the two files are created with the 'unconfined_u' file context, because of which crond refuses to pick them up.


[root@rhsqe-vm05 ~]# cd /etc/cron.d
[root@rhsqe-vm05 cron.d]# ls -lZ
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 0hourly
-rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 gcron_update_task
lrwxrwxrwx. root root unconfined_u:object_r:system_cron_spool_t:s0 glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 raid-check
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 unbound-anchor
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 vdsm-libvirt-logrotate

Can we have the SELinux team take a look at this and suggest how to go about it?
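A hypothetical root-shell demonstration of the mechanism at play (the file name is made up): a file moved into /etc/cron.d keeps the label it was created with, while a file created in place picks up the directory's default type:

# touch /tmp/demo_task
# ls -Z /tmp/demo_task              (user_tmp_t, inherited from /tmp)
# mv /tmp/demo_task /etc/cron.d/
# ls -Z /etc/cron.d/demo_task       (still user_tmp_t: mv preserves the existing label)
# rm -f /etc/cron.d/demo_task
# touch /etc/cron.d/demo_task
# ls -Z /etc/cron.d/demo_task       (system_cron_spool_t: freshly created files get the directory default)
# rm -f /etc/cron.d/demo_task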

--- Additional comment from  on 2015-06-19 06:30:26 EDT ---

Retried with the latest build "selinux-policy-3.13.1-29.el7"; still unable to create snapshots using the scheduler.

rpm -qa |grep selinux
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
libselinux-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-29.el7.noarch
selinux-policy-targeted-3.13.1-29.el7.noarch

Part of audit.log :

ors=pam_access,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_ACQ msg=audit(1434708601.218:33471): pid=7132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=LOGIN msg=audit(1434708601.219:33472): pid=7132 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old-auid=4294967295 auid=0 old-ses=4294967295 ses=4588 res=1
type=USER_AVC msg=audit(1434708601.256:33473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=1)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
type=USER_START msg=audit(1434708601.273:33474): pid=7132 uid=0 auid=0 ses=4588 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_REFR msg=audit(1434708601.273:33475): pid=7132 uid=0 auid=0 ses=4588 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1434708601.295:33476): pid=7132 uid=0 auid=0 ses=4588 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1434708601.301:33477): pid=7132 uid=0 auid=0 ses=4588 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_ACCT msg=audit(1434709501.322:33478): pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_access,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_ACQ msg=audit(1434709501.322:33479): pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=LOGIN msg=audit(1434709501.322:33480): pid=7153 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old-auid=4294967295 auid=0 old-ses=4294967295 ses=4589 res=1
type=USER_START msg=audit(1434709501.349:33481): pid=7153 uid=0 auid=0 ses=4589 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_REFR msg=audit(1434709501.349:33482): pid=7153 uid=0 auid=0 ses=4589 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1434709501.366:33483): pid=7153 uid=0 auid=0 ses=4589 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1434709501.368:33484): pid=7153 uid=0 auid=0 ses=4589 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'

--- Additional comment from  on 2015-06-19 06:34:35 EDT ---

Cleared the needinfo while updating the bug. Adding it back as per comment 4.

--- Additional comment from Prasanth on 2015-06-19 07:21:28 EDT ---

Thanks, Seema, for pasting the AVC messages. I'll have the SELinux team look at this soon. If possible, please attach the complete audit.log to this BZ.

--- Additional comment from  on 2015-06-19 07:37:36 EDT ---



--- Additional comment from  on 2015-06-19 07:38:06 EDT ---



--- Additional comment from Milos Malik on 2015-06-19 07:55:44 EDT ---

Both attached audit.log files only show USER_AVCs, which indicate that SELinux was switched to permissive and then to enforcing. There are no AVCs or SELINUX_ERRs which would indicate that something was denied.

Maybe there is something hidden by dontaudit rules:

# semodule -DB
# re-run your scenario
# semodule -B
# ausearch -m avc -m user_avc -m selinux_err -i -ts today

The dontaudit rules are needed for normal operation, so please do not forget to re-enable them by running: semodule -B

--- Additional comment from  on 2015-06-19 09:11:34 EDT ---

Followed the steps in comment 10 and re-ran the scenario.

ausearch -m avc -m user_avc -m selinux_err -i -ts today

type=USER_AVC msg=audit(06/19/2015 15:30:29.278:33292) : pid=1 uid=root auid=unset ses=unset subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=0)  exe=/usr/lib/systemd/systemd sauid=root hostname=? addr=? terminal=?' 
----
type=USER_AVC msg=audit(06/19/2015 15:40:01.935:33300) : pid=1 uid=root auid=unset ses=unset subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=1)  exe=/usr/lib/systemd/systemd sauid=root hostname=? addr=? terminal=?' 
----
type=SYSCALL msg=audit(06/19/2015 18:30:23.464:33467) : arch=x86_64 syscall=execve success=yes exit=0 a0=0x7fc56bae69c0 a1=0x7fc56bae68c0 a2=0x7fc56bae5010 a3=0x0 items=0 ppid=8605 pid=8606 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=setroubleshootd exe=/usr/bin/python2.7 subj=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 key=(null) 
type=AVC msg=audit(06/19/2015 18:30:23.464:33467) : avc:  denied  { noatsecure } for  pid=8606 comm=setroubleshootd scontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tclass=process 
type=AVC msg=audit(06/19/2015 18:30:23.464:33467) : avc:  denied  { siginh } for  pid=8606 comm=setroubleshootd scontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tclass=process 
type=AVC msg=audit(06/19/2015 18:30:23.464:33467) : avc:  denied  { rlimitinh } for  pid=8606 comm=setroubleshootd scontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tclass=process 

=====================================================================
Attaching the complete output of :
ausearch -m avc -m user_avc -m selinux_err -i -ts today

--- Additional comment from  on 2015-06-19 09:12:55 EDT ---



--- Additional comment from  on 2015-06-19 09:13:43 EDT ---



--- Additional comment from Miroslav Grepl on 2015-06-19 09:44:25 EDT ---

Does it really work in permissive mode? If so, could you try to repeat it in permissive mode?

--- Additional comment from  on 2015-06-19 10:07:00 EDT ---

Retried in permissive mode, and the scheduler was able to create snapshots.

--- Additional comment from Milos Malik on 2015-06-19 10:21:50 EDT ---

Can I get access to the machine where the snapshots are created?

--- Additional comment from  on 2015-06-22 05:42:28 EDT ---

Hi Milos, 

I have mailed you the machine details.

-Seema

--- Additional comment from Milos Malik on 2015-06-22 07:47:57 EDT ---

Following messages appeared in the journal:

Jun 22 17:12:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31776]: (*system*) NULL security context for user ()
Jun 22 17:12:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31776]: (root) ERROR (failed to change SELinux context)
Jun 22 17:13:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31779]: (*system*) NULL security context for user ()
Jun 22 17:13:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31779]: (root) ERROR (failed to change SELinux context)
Jun 22 17:14:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31786]: (*system*) NULL security context for user ()
Jun 22 17:14:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31786]: (root) ERROR (failed to change SELinux context)
Jun 22 17:15:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31791]: (*system*) NULL security context for user ()
Jun 22 17:15:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31791]: (root) ERROR (failed to change SELinux context)
Jun 22 17:15:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31790]: (*system*) NULL security context for user ()
Jun 22 17:15:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31790]: (root) ERROR (failed to change SELinux context)
Jun 22 17:16:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31794]: (*system*) NULL security context for user ()

--- Additional comment from Milos Malik on 2015-06-22 07:52:05 EDT ---

The above-mentioned messages appeared when one of the cronjob files was mislabeled:

# restorecon -Rvn /etc/
restorecon reset /etc/cron.d/gcron_update_task context unconfined_u:object_r:user_tmp_t:s0->unconfined_u:object_r:system_cron_spool_t:s0
#

That could be a reason why the reproducer failed in enforcing mode.
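(The -n flag above makes restorecon a dry run; dropping it applies the relabel, e.g. for the whole directory:)

# restorecon -Rv /etc/cron.d/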

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-06-22 08:00:22 EDT ---

Since this bug has been approved for the Red Hat Gluster Storage 3.1.0 release, through release flag 'rhgs-3.1.0+', the Target Release is being automatically set to 'RHGS 3.1.0'

--- Additional comment from Milos Malik on 2015-06-22 08:06:23 EDT ---

Jun 22 17:26:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31836]: (*system*) NULL security context for user ()
Jun 22 17:26:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31836]: (root) ERROR (failed to change SELinux context)
Jun 22 17:27:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31861]: (*system*) NULL security context for user ()
Jun 22 17:27:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31861]: (root) ERROR (failed to change SELinux context)
Jun 22 17:28:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31865]: (*system*) NULL security context for user ()
Jun 22 17:28:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31865]: (root) ERROR (failed to change SELinux context)
Jun 22 17:29:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31868]: (*system*) NULL security context for user ()
Jun 22 17:29:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31868]: (root) ERROR (failed to change SELinux context)
Jun 22 17:30:02 rhsqe-vm05.lab.eng.blr.redhat.com crond[12476]: ((null)) Unauthorized SELinux context=system_u:system_r:system_cronjob_t:s0-s0:c0.c1023 file_context=system_u:object_r:fusefs_t:s0 (/etc/cron.d/glusterfs_snap_cron_tasks)
Jun 22 17:30:02 rhsqe-vm05.lab.eng.blr.redhat.com crond[12476]: (root) FAILED (loading cron table)
Jun 22 17:30:02 rhsqe-vm05.lab.eng.blr.redhat.com crond[31877]: (*system*) NULL security context for user ()
Jun 22 17:30:02 rhsqe-vm05.lab.eng.blr.redhat.com crond[31877]: (root) ERROR (failed to change SELinux context)
Jun 22 17:31:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31879]: (*system*) NULL security context for user ()
Jun 22 17:31:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31879]: (root) ERROR (failed to change SELinux context)
Jun 22 17:32:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31881]: (*system*) NULL security context for user ()
Jun 22 17:32:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31881]: (root) ERROR (failed to change SELinux context)
Jun 22 17:33:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31885]: (*system*) NULL security context for user ()
Jun 22 17:33:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31885]: (root) ERROR (failed to change SELinux context)
Jun 22 17:34:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31889]: (*system*) NULL security context for user ()
Jun 22 17:34:01 rhsqe-vm05.lab.eng.blr.redhat.com crond[31889]: (root) ERROR (failed to change SELinux context)

--- Additional comment from Milos Malik on 2015-06-22 09:02:54 EDT ---

Based on the journal records, crond complains when symbolic links (pointing at a FUSE filesystem) are used:

# ls -Z /etc/cron.d
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 0hourly
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 gcron_update_task
lrwxrwxrwx. root root system_u:object_r:system_cron_spool_t:s0 glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 raid-check
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 unbound-anchor
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 vdsm-libvirt-logrotate
# ls -Z /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks 
lrwxrwxrwx. root root system_u:object_r:fusefs_t:s0    /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/gcron_enabled
# ls -Z /var/run/gluster/shared_storage/snaps/gcron_enabled 
-rw-r--r--. root root system_u:object_r:fusefs_t:s0    /var/run/gluster/shared_storage/snaps/gcron_enabled
#

But the scheduler starts working as expected once you copy the cronjob file from the /var/run/gluster/shared_storage/snaps/ directory into the /etc/cron.d directory:

# rm -f /etc/cron.d/glusterfs_snap_cron_tasks 
# cp /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks /etc/cron.d/glusterfs_snap_cron_tasks
# restorecon -vF /etc/cron.d/glusterfs_snap_cron_tasks 
restorecon reset /etc/cron.d/glusterfs_snap_cron_tasks context unconfined_u:object_r:system_cron_spool_t:s0->system_u:object_r:system_cron_spool_t:s0
# ls -Z /etc/cron.d
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 0hourly
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 gcron_update_task
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 glusterfs_snap_cron_tasks
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 raid-check
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 unbound-anchor
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 vdsm-libvirt-logrotate
#

--- Additional comment from Milos Malik on 2015-06-23 06:06:08 EDT ---

AFAIK crond contains some SELinux user-space code, which seems to be buggy. To work around the issue for now, we should not use symbolic links (pointing at a FUSE filesystem) in the /etc/cron.d directory. We should use regular files there. That will enable the scheduler to do what is expected.

In the meantime I will investigate the cron issue and I will file appropriate bug(s).

--- Additional comment from Miroslav Grepl on 2015-06-23 08:15:11 EDT ---

See https://bugzilla.redhat.com/show_bug.cgi?id=1234847#c1

--- Additional comment from  on 2015-06-24 05:42:17 EDT ---

update from Avra:

The entire design of the scheduler works around the symbolic link. To
change it at this point in time, by accommodating copying of the actual
file, might be doable but certainly is a very very intrusive fix. If we
send a fix centered around this, we will have to re-test every small
aspect of the feature again, all the corner cases, everything. I don't
know how feasible that will be, this close to the release.

--- Additional comment from Avra Sengupta on 2015-06-24 06:13:07 EDT ---

Currently we use symlinks in every node's /etc/cron.d so that every node is aware of the changes made on the shared storage volume. This design will never work well with copying the actual file, because we won't be able to copy the changes made on the shared storage volume to /etc/cron.d on all the nodes at the same time. This would bring in a major design change for us, which is not feasible as per the current timelines for RHGS 3.1.1.
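For context, the design in question boils down to a per-node symlink into the shared volume (paths as shown earlier in this bug), so that a single update written to the shared storage is immediately visible to crond on every node:

# ln -s /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks \
        /etc/cron.d/glusterfs_snap_cron_tasks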

--- Additional comment from Prasanth on 2015-06-29 06:39:38 EDT ---

Update from SELinux team on Bug 1234847 :

#####
Miroslav Grepl 2015-06-29 06:35:49 EDT

Could you test it with the following local policy module

# cat mygluster.te
policy_module(mygluster, 1.0)

require{
 type gluster_t;
 type nfs_t;
 type cifs_t;
 type fusefs_t;
}

allow gluster_t nfs_t:file entrypoint;
allow gluster_t cifs_t:file entrypoint;
allow gluster_t fusefs_t:file entrypoint;

# make -f /usr/share/selinux/devel/Makefile mygluster.pp
# semodule -i mygluster.pp
#####

Seema, could you try the above and confirm back in Bug 1234847 so that I can request for backporting the fix in RHEL-7.1?

--- Additional comment from  on 2015-07-01 03:28:38 EDT ---

Followed the steps mentioned in comments 6 and 7 in BZ 1234847, and the scheduler was able to create snapshots.

 snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
A1               */5 * * * *      Snapshot Create  vol0             


[root@rhsqe-vm06 ~]# tail -f /var/log/glusterfs/gcron.log 
[2015-07-01 12:45:19,802 gcron.py:100 doJob] INFO Job Scheduled-A1-vol0 succeeded
[2015-07-01 12:50:01,968 gcron.py:178 main] DEBUG locking_file = /var/run/gluster/shared_storage/snaps/lock_files/A1
[2015-07-01 12:50:01,969 gcron.py:179 main] DEBUG volname = vol0
[2015-07-01 12:50:01,969 gcron.py:180 main] DEBUG jobname = A1
[2015-07-01 12:50:01,981 gcron.py:96 doJob] DEBUG /var/run/gluster/shared_storage/snaps/lock_files/A1 last modified at Wed Jul  1 12:45:19 2015
[2015-07-01 12:50:01,981 gcron.py:98 doJob] DEBUG Processing job Scheduled-A1-vol0
[2015-07-01 12:50:01,982 gcron.py:68 takeSnap] DEBUG Running command 'gluster snapshot create Scheduled-A1-vol0 vol0'
[2015-07-01 12:50:20,747 gcron.py:75 takeSnap] DEBUG Command 'gluster snapshot create Scheduled-A1-vol0 vol0' returned '0'
[2015-07-01 12:50:20,747 gcron.py:83 takeSnap] INFO Snapshot of vol0 successful
[2015-07-01 12:50:20,747 gcron.py:100 doJob] INFO Job Scheduled-A1-vol0 succeeded


gluster snapshot list |wc -l 
254


getenforce
Enforcing

rpm -qa |grep selinux
libselinux-debuginfo-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-29.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-devel-3.13.1-29.el7.noarch
libselinux-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-29.el7.noarch

I tried this on a 2-node cluster and found that only one node was picking up the job; need to look into this further.

--- Additional comment from Rejy M Cyriac on 2015-07-02 00:53:34 EDT ---

This BZ is not a blocker for the RHGS 3.1.0 release, and so is being re-proposed for the next Z-stream release

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-07-02 00:53:38 EDT ---

Since this bug does not have release flag 'rhgs-3.1.0+', the Target Release is being automatically reset to '---'

--- Additional comment from Avra Sengupta on 2015-07-02 05:07:40 EDT ---



--- Additional comment from errata-xmlrpc on 2015-07-03 02:12:38 EDT ---

This bug has been dropped from advisory RHEA-2015:20560 by Vivek Agarwal (vagarwal)

--- Additional comment from Avra Sengupta on 2015-07-03 06:08:06 EDT ---

Tried the latest policy update. With it, the file /etc/cron.d/glusterfs_snap_cron_tasks is being picked up by crond. However, because /etc/cron.d/gcron_update_task is created by renaming a tmp file created in /tmp/crontab, it has a different file context, which prevents it from being picked up by crond.

In order to resolve this, we need to create /etc/cron.d/gcron_update_task by renaming a tmp file created in /var/run/gluster/shared_storage/snaps/tmp_file instead.
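A minimal, hypothetical sketch of the described change (not the actual snap_scheduler.py code; the cron entry shown is modelled on the CROND log lines later in this bug, and the final restorecon is an extra safeguard, not part of the proposal):

# Write the new cron entry to a tmp file on the shared storage first
SNAP_DIR=/var/run/gluster/shared_storage/snaps
echo "* * * * * root PATH=\$PATH:/usr/local/sbin:/usr/sbin gcron.py --update" > "$SNAP_DIR/tmp_file"
# Move it into place; the exact label the new file receives depends on how mv
# handles the FUSE-backed source, so reset it explicitly to be deterministic
mv "$SNAP_DIR/tmp_file" /etc/cron.d/gcron_update_task
restorecon /etc/cron.d/gcron_update_task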

--- Additional comment from Avra Sengupta on 2015-07-03 06:50:33 EDT ---

But even with the above change, I don't see crond reloading /etc/cron.d/glusterfs_snap_cron_tasks whenever the modified time of the file changes. I am not sure if this is a crond bug in RHEL 7.1, but it seems not to reload a file even though the file's last modified time has changed.

--- Additional comment from Milos Malik on 2015-07-03 08:40:59 EDT ---

I need the output of the following commands:

# ls -l /etc/cron.d/glusterfs_snap_cron_tasks
# ls -Z /etc/cron.d/glusterfs_snap_cron_tasks
# ausearch -m avc -m user_avc -m selinux_err -i -c crond -ts today

--- Additional comment from  on 2015-07-03 08:43:23 EDT ---

ls -l /etc/cron.d/glusterfs_snap_cron_tasks
lrwxrwxrwx. 1 root root 63 Jul  3 16:18 /etc/cron.d/glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks


 ls -Z /etc/cron.d/glusterfs_snap_cron_tasks
lrwxrwxrwx. root root unconfined_u:object_r:system_cron_spool_t:s0 /etc/cron.d/glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks

ausearch -m avc -m user_avc -m selinux_err -i -c crond -ts today
<no matches>

--- Additional comment from Milos Malik on 2015-07-03 09:17:43 EDT ---

I'm not sure why crond did not run the commands from /etc/cron.d/glusterfs_snap_cron_tasks, but the following action helped:

# service crond restart

Jul  3 18:40:19 rhsqe-vm05 crond[30096]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 23% if used.)
Jul  3 18:40:20 rhsqe-vm05 crond[30096]: (CRON) INFO (running with inotify support)
Jul  3 18:40:20 rhsqe-vm05 crond[30096]: (CRON) INFO (@reboot jobs will be run at computer's startup.)
Jul  3 18:41:01 rhsqe-vm05 CROND[30107]: (root) CMD (PATH=$PATH:/usr/local/sbin:/usr/sbin gcron.py --update)
Jul  3 18:41:01 rhsqe-vm05 CROND[30108]: (root) CMD (PATH=$PATH:/usr/local/sbin:/usr/sbin gcron.py vol0 A1)
Jul  3 18:42:01 rhsqe-vm05 CROND[30256]: (root) CMD (PATH=$PATH:/usr/local/sbin:/usr/sbin gcron.py vol0 A1)
Jul  3 18:42:01 rhsqe-vm05 CROND[30255]: (root) CMD (PATH=$PATH:/usr/local/sbin:/usr/sbin gcron.py vol0 A2)
Jul  3 18:42:01 rhsqe-vm05 CROND[30257]: (root) CMD (PATH=$PATH:/usr/local/sbin:/usr/sbin gcron.py --update)

Is it possible that crond did not notice the symlink that was added?

--- Additional comment from Avra Sengupta on 2015-07-05 03:06:11 EDT ---

Cloning this bug upstream (https://bugzilla.redhat.com/show_bug.cgi?id=1239269) and on the release-3.7 branch (https://bugzilla.redhat.com/show_bug.cgi?id=1239270) to send the /tmp snap_scheduler change upstream.

--- Additional comment from Avra Sengupta on 2015-07-08 05:59:19 EDT ---

Master Url: http://review.gluster.org/#/c/11535/
Release 3.7 Url: http://review.gluster.org/#/c/11536/
RHGS 3.1 Url: https://code.engineering.redhat.com/gerrit/#/c/52559/

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-07-08 07:15:32 EDT ---

Since this bug has been approved for the Red Hat Gluster Storage 3.1.0 release, through release flag 'rhgs-3.1.0+', the Target Release is being automatically set to 'RHGS 3.1.0'

--- Additional comment from Rejy M Cyriac on 2015-07-13 14:04:19 EDT ---

The build version with the fix was inadvertently put in the 'Internal Whiteboard' field instead of the 'Fixed In Version' field.

Correcting the error.

--- Additional comment from  on 2015-07-15 08:00:28 EDT ---

Version :  glusterfs-3.7.1-9 

I have enabled the boolean as mentioned, and the scheduler is able to create snapshots:

setsebool cron_system_cronjob_use_shares on

getsebool -a |grep cron_system_cronjob_use_shares
cron_system_cronjob_use_shares --> on
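(Note: setsebool without -P changes the boolean only for the running system; to make it persist across reboots:)

# setsebool -P cron_system_cronjob_use_shares on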

rpm -qa |grep selinux
libselinux-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-32.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-32.el7.noarch

snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
RHEL7_JOB1       */5 * * * *      Snapshot Create  volume0     

gluster snapshot list 
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-07.45.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-07.50.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-07.55.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-08.00.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-08.05.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-08.10.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-08.15.01


[2015-07-15 14:50:01,627 gcron.py:96 doJob] DEBUG /var/run/gluster/shared_storage/snaps/lock_files/RHEL7_JOB1 last modified at Wed Jul 15 14:45:05 2015
[2015-07-15 14:50:01,628 gcron.py:98 doJob] DEBUG Processing job Scheduled-RHEL7_JOB1-volume0
[2015-07-15 14:50:01,628 gcron.py:68 takeSnap] DEBUG Running command 'gluster snapshot create Scheduled-RHEL7_JOB1-volume0 volume0'
[2015-07-15 14:50:05,869 gcron.py:75 takeSnap] DEBUG Command 'gluster snapshot create Scheduled-RHEL7_JOB1-volume0 volume0' returned '0'
[2015-07-15 14:50:05,870 gcron.py:83 takeSnap] INFO Snapshot of volume0 successful
[2015-07-15 14:50:05,871 gcron.py:100 doJob] INFO Job Scheduled-RHEL7_JOB1-volume0 succeeded

Marking the bug 'Verified'

--- Additional comment from  on 2015-07-15 08:05:15 EDT ---

It has to be documented that the boolean cron_system_cronjob_use_shares must be enabled for the scheduler to work as expected on RHEL 7.1.

--- Additional comment from monti lawrence on 2015-07-23 09:51:15 EDT ---

Doc text is edited. Please sign off to be included in Known Issues.

--- Additional comment from Avra Sengupta on 2015-07-27 03:11:03 EDT ---

Doc text looks good. Verified.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-07-27 04:57:41 EDT ---

This bug is automatically being proposed for Red Hat Gluster Storage 3.1.0 by setting the release flag 'rhgs-3.1.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-07-27 04:57:41 EDT ---

Since this bug does not have release flag 'rhgs-3.1.0+', the Target Release is being automatically reset to '---'

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-08-12 06:15:30 EDT ---

Since this bug has been approved for the z-stream release of Red Hat Gluster Storage 3, through release flag 'rhgs-3.1.z+', and has been marked for RHGS 3.1 Update 1 release through the Internal Whiteboard entry of '3.1.1', the Target Release is being automatically set to 'RHGS 3.1.1'

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-08-24 09:11:10 EDT ---

Since this bug does not have release flag 'rhgs-3.1.z+', the Target Release is being automatically reset to '---'

--- Additional comment from RHEL Product and Program Management on 2015-08-24 09:11:10 EDT ---

This bug report previously had all acks and release flag approved.
However since at least one of its acks has been changed, the
release flag has been reset to ? by the bugbot (pm-rhel).  The
ack needs to become approved before the release flag can become
approved again.

--- Additional comment from errata-xmlrpc on 2015-08-31 08:56:20 EDT ---

This bug has been dropped from advisory RHBA-2015:21371 by Vivek Agarwal (vagarwal)

--- Additional comment from Avra Sengupta on 2016-02-26 01:29:24 EST ---

Removing this from 3.1.3. Will be fixing it later.

--- Additional comment from Avra Sengupta on 2016-11-16 05:48:14 EST ---

Cloning this to master, and adding the setting of the boolean to the init step of the snap scheduler, so that this bug can be removed from known issues.

Comment 1 Avra Sengupta 2016-11-16 10:50:52 UTC
The scope of the fix will be removing the manual step of setting the boolean on RHEL 7.1 and later versions.
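A rough shell equivalent of what the init step needs to do (a sketch only; the actual change is in the review linked in comment 2):

# Set the boolean persistently, but only when SELinux is enabled on this node
if selinuxenabled 2>/dev/null; then
    setsebool -P cron_system_cronjob_use_shares on
fi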

Comment 2 Worker Ant 2016-11-16 10:53:15 UTC
REVIEW: http://review.gluster.org/15857 (snapshot/scheduler: Set sebool cron_system_cronjob_use_shares to on) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 3 Worker Ant 2016-11-17 07:12:25 UTC
REVIEW: http://review.gluster.org/15857 (snapshot/scheduler: Set sebool cron_system_cronjob_use_shares to on) posted (#2) for review on master by Avra Sengupta (asengupt)

Comment 4 Worker Ant 2016-11-17 09:48:42 UTC
REVIEW: http://review.gluster.org/15857 (snapshot/scheduler: Set sebool cron_system_cronjob_use_shares to on) posted (#3) for review on master by Avra Sengupta (asengupt)

Comment 5 Worker Ant 2017-02-22 06:11:19 UTC
COMMIT: https://review.gluster.org/15857 committed in master by Rajesh Joseph (rjoseph) 
------
commit 7b6ee5f2bbe00d68a5dcc6283eca2ed3d821c110
Author: Avra Sengupta <asengupt>
Date:   Wed Nov 16 16:19:14 2016 +0530

    snapshot/scheduler: Set sebool cron_system_cronjob_use_shares to on
    
    Rhel 7.1 onwards, the user has to manually set the
    selinux boolean 'cron_system_cronjob_use_shares' as
    on, if selinux is enabled for snapshot scheduler to
    work.
    
    With this fix, we are automating that bit, in init step
    of snapshot scheduler
    
    Change-Id: I5c1d23c14133c64770e84a77999ce647526f6711
    BUG: 1395643
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: https://review.gluster.org/15857
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>

Comment 6 Shyamsundar 2017-05-30 18:35:29 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

