Bug 1231647 - [SELinux] [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
Summary: [SELinux] [Scheduler]: Unable to create Snapshots on RHEL-7.1 using Scheduler
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: rjoseph
QA Contact: senaik
URL:
Whiteboard:
Duplicates: 1236333 (view as bug list)
Depends On: 1234847
Blocks: 1202842 1212796 1223636 1239269 1239270
 
Reported: 2015-06-15 07:22 UTC by senaik
Modified: 2016-09-17 13:03 UTC (History)
14 users

Fixed In Version: glusterfs-3.7.1-9
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1239269 1247056 (view as bug list)
Environment:
Last Closed: 2015-07-29 05:03:07 UTC
Embargoed:


Attachments (Terms of Use)
audit logs for Comment5 (27.00 KB, text/plain)
2015-06-19 11:37 UTC, senaik
no flags Details
audit logs for comment 5 (27.00 KB, text/plain)
2015-06-19 11:38 UTC, senaik
no flags Details
logs for Comment 11 (15.24 KB, text/x-vhdl)
2015-06-19 13:12 UTC, senaik
no flags Details
Logs for Comment 11 (14.17 KB, text/x-vhdl)
2015-06-19 13:13 UTC, senaik
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1236333 1 None None None 2021-01-20 06:05:38 UTC
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Internal Links: 1236333

Description senaik 2015-06-15 07:22:35 UTC
Description of problem:
======================
Scheduler does not create any snapshots when jobs are scheduled on RHEL 7.1.

Creating snapshots manually, without using the scheduler, is successful.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.1-1.el7rhgs.x86_64

How reproducible:


Steps to Reproduce:
====================
1.Create a shared storage and mount it on /var/run/gluster/shared_storage

2.Initialise snap_scheduler from all nodes

3.Enable snap_scheduler from any one node in the cluster 

4.Add a job 

snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
J1               * * * * *        Snapshot Create  master   

No snapshots were created 

gluster snapshot list
No snapshots present
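The steps above, consolidated as a command sequence (a sketch only: the shared-storage volume option and the snap_scheduler.py argument order are as commonly documented for RHGS 3.1, and the job/volume names are the ones from this report):

```shell
# 1. Create the shared storage; gluster mounts it on
#    /var/run/gluster/shared_storage on all nodes
gluster volume set all cluster.enable-shared-storage enable

# 2. Initialise the scheduler on every node
snap_scheduler.py init

# 3. Enable the scheduler from any one node in the cluster
snap_scheduler.py enable

# 4. Add a job: job name, cron-style schedule, volume name
snap_scheduler.py add "J1" "* * * * *" "master"
snap_scheduler.py list
```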

Looks like SELinux is not applying the right user context to the cron files:

# ls -lZ /etc/cron.d/
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 0hourly
-rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 gcron_update_task
lrwxrwxrwx. root root unconfined_u:object_r:system_cron_spool_t:s0 glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 raid-check
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 unbound-anchor
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 vdsm-libvirt-logrotate 
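To check whether a label matches what the policy expects, and to relabel in place, the standard SELinux tools can be used (a diagnostic sketch; note that restorecon only helps until the file is recreated with the wrong context):

```shell
# What the policy says the label should be, vs. what it actually is
matchpathcon /etc/cron.d/gcron_update_task
ls -Z /etc/cron.d/gcron_update_task

# Relabel to the policy default (system_cron_spool_t under /etc/cron.d)
restorecon -v /etc/cron.d/gcron_update_task
```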


rpm -qa |grep selinux
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-27.el7.noarch
selinux-policy-targeted-3.13.1-27.el7.noarch
libselinux-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64


Actual results:
==============
Scheduler does not create any snapshots when jobs are scheduled.


Expected results:



Additional info:

Comment 2 senaik 2015-06-15 11:31:55 UTC
Scheduler creates snapshots when SELinux is in 'Permissive' mode on RHEL7.1

Comment 3 Prasanth 2015-06-19 08:35:29 UTC
Seema,

Are you seeing the same issue with the latest build "selinux-policy-3.13.1-29.el7"? If so, please attach the audit.log for further debugging.

Comment 4 Avra Sengupta 2015-06-19 08:40:23 UTC
We create two files, glusterfs_snap_cron_tasks and gcron_update_task, in /etc/cron.d. These files are created by "snap_scheduler.py init", which is present in /usr/sbin and is run as root. The issue we are seeing here on 7.1 is that the two files are created with the 'unconfined_u' file context, because of which crond refuses to pick them up.


[root@rhsqe-vm05 ~]# cd /etc/cron.d
[root@rhsqe-vm05 cron.d]# ls -lZ
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 0hourly
-rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 gcron_update_task
lrwxrwxrwx. root root unconfined_u:object_r:system_cron_spool_t:s0 glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 raid-check
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 unbound-anchor
-rw-r--r--. root root system_u:object_r:system_cron_spool_t:s0 vdsm-libvirt-logrotate

Can we have the SELinux team take a look at this and suggest how to go about it?

Comment 5 senaik 2015-06-19 10:30:26 UTC
Retried with the latest build "selinux-policy-3.13.1-29.el7", still unable to create snapshots using scheduler 

rpm -qa |grep selinux
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
libselinux-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-29.el7.noarch
selinux-policy-targeted-3.13.1-29.el7.noarch

Part of audit.log :

ors=pam_access,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_ACQ msg=audit(1434708601.218:33471): pid=7132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=LOGIN msg=audit(1434708601.219:33472): pid=7132 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old-auid=4294967295 auid=0 old-ses=4294967295 ses=4588 res=1
type=USER_AVC msg=audit(1434708601.256:33473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=1)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
type=USER_START msg=audit(1434708601.273:33474): pid=7132 uid=0 auid=0 ses=4588 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_REFR msg=audit(1434708601.273:33475): pid=7132 uid=0 auid=0 ses=4588 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1434708601.295:33476): pid=7132 uid=0 auid=0 ses=4588 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1434708601.301:33477): pid=7132 uid=0 auid=0 ses=4588 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_ACCT msg=audit(1434709501.322:33478): pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_access,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_ACQ msg=audit(1434709501.322:33479): pid=7153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=LOGIN msg=audit(1434709501.322:33480): pid=7153 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old-auid=4294967295 auid=0 old-ses=4294967295 ses=4589 res=1
type=USER_START msg=audit(1434709501.349:33481): pid=7153 uid=0 auid=0 ses=4589 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_REFR msg=audit(1434709501.349:33482): pid=7153 uid=0 auid=0 ses=4589 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1434709501.366:33483): pid=7153 uid=0 auid=0 ses=4589 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1434709501.368:33484): pid=7153 uid=0 auid=0 ses=4589 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
(END)

Comment 6 senaik 2015-06-19 10:34:35 UTC
Cleared the needinfo while updating the bug; adding it back as per comment 4.

Comment 7 Prasanth 2015-06-19 11:21:28 UTC
Thanks Seema, for pasting the AVC messages. I'll have the SELinux team look at this as soon as possible. If possible, please attach the complete audit.log to this BZ.

Comment 8 senaik 2015-06-19 11:37:36 UTC
Created attachment 1040917 [details]
audit logs for Comment5

Comment 9 senaik 2015-06-19 11:38:06 UTC
Created attachment 1040918 [details]
audit logs for comment 5

Comment 10 Milos Malik 2015-06-19 11:55:44 UTC
Both attached audit.log files only show USER_AVCs, which indicate that SELinux was switched to permissive and then to enforcing. There are no AVCs or SELINUX_ERRs which would indicate that something was denied.

Maybe there is something hidden:

# semodule -DB
# re-run your scenario
# semodule -B
# ausearch -m avc -m user_avc -m selinux_err -i -ts today

The dontaudit rules are needed in normal operation, so please do not forget to re-enable them afterwards by running: semodule -B

Comment 11 senaik 2015-06-19 13:11:34 UTC
Followed the steps in Comment 10 and re-ran the scenario.

ausearch -m avc -m user_avc -m selinux_err -i -ts today

type=USER_AVC msg=audit(06/19/2015 15:30:29.278:33292) : pid=1 uid=root auid=unset ses=unset subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=0)  exe=/usr/lib/systemd/systemd sauid=root hostname=? addr=? terminal=?' 
----
type=USER_AVC msg=audit(06/19/2015 15:40:01.935:33300) : pid=1 uid=root auid=unset ses=unset subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=1)  exe=/usr/lib/systemd/systemd sauid=root hostname=? addr=? terminal=?' 
----
type=SYSCALL msg=audit(06/19/2015 18:30:23.464:33467) : arch=x86_64 syscall=execve success=yes exit=0 a0=0x7fc56bae69c0 a1=0x7fc56bae68c0 a2=0x7fc56bae5010 a3=0x0 items=0 ppid=8605 pid=8606 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=unset comm=setroubleshootd exe=/usr/bin/python2.7 subj=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 key=(null) 
type=AVC msg=audit(06/19/2015 18:30:23.464:33467) : avc:  denied  { noatsecure } for  pid=8606 comm=setroubleshootd scontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tclass=process 
type=AVC msg=audit(06/19/2015 18:30:23.464:33467) : avc:  denied  { siginh } for  pid=8606 comm=setroubleshootd scontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tclass=process 
type=AVC msg=audit(06/19/2015 18:30:23.464:33467) : avc:  denied  { rlimitinh } for  pid=8606 comm=setroubleshootd scontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tcontext=system_u:system_r:setroubleshootd_t:s0-s0:c0.c1023 tclass=process 

=====================================================================
Attaching the complete output of :
ausearch -m avc -m user_avc -m selinux_err -i -ts today

Comment 12 senaik 2015-06-19 13:12:55 UTC
Created attachment 1040962 [details]
logs for Comment 11

Comment 13 senaik 2015-06-19 13:13:43 UTC
Created attachment 1040963 [details]
Logs for Comment 11

Comment 14 Miroslav Grepl 2015-06-19 13:44:25 UTC
Does it really work in permissive mode? If so, could you try to repeat it in permissive mode and confirm?

Comment 15 senaik 2015-06-19 14:07:00 UTC
Retried in permissive mode and scheduler was able to create snapshots.

Comment 16 Milos Malik 2015-06-19 14:21:50 UTC
Can I get access to the machine where the snapshots are created?

Comment 23 Milos Malik 2015-06-23 10:06:08 UTC
AFAIK crond contains some SELinux user-space code, which seems to be buggy. To work around the issue for now, we should not use symbolic links (pointing at a FUSE filesystem) in the /etc/cron.d directory. We should use regular files there. That will enable the scheduler to do what is expected.

In the meantime I will investigate the cron issue and file appropriate bug(s).
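A sketch of the suggested workaround (illustrative only: the copy would have to be refreshed whenever the shared-storage file changes, which is why it is only a stopgap):

```shell
# Replace the symlink with a regular file, so crond sees a file labeled
# under /etc/cron.d rather than a symlink whose target lives on a FUSE mount
rm -f /etc/cron.d/glusterfs_snap_cron_tasks
cp /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks \
   /etc/cron.d/glusterfs_snap_cron_tasks
restorecon /etc/cron.d/glusterfs_snap_cron_tasks
```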

Comment 24 Miroslav Grepl 2015-06-23 12:15:11 UTC
See https://bugzilla.redhat.com/show_bug.cgi?id=1234847#c1

Comment 27 Prasanth 2015-06-29 10:39:38 UTC
Update from SELinux team on Bug 1234847 :

#####
Miroslav Grepl 2015-06-29 06:35:49 EDT

Could you test it with the following local policy module

# cat mygluster.te
policy_module(mygluster, 1.0)

require{
 type gluster_t;
 type nfs_t;
 type cifs_t;
 type fusefs_t;
}

allow gluster_t nfs_t:file entrypoint;
allow gluster_t cifs_t:file entrypoint;
allow gluster_t fusefs_t:file entrypoint;

# make -f /usr/share/selinux/devel/Makefile mygluster.pp
# semodule -i mygluster.pp
#####

Seema, could you try the above and confirm back in Bug 1234847 so that I can request for backporting the fix in RHEL-7.1?

Comment 28 senaik 2015-07-01 07:28:38 UTC
Followed the steps mentioned in Comments 6 and 7 in BZ 1234847, and the scheduler was able to create snapshots.

 snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
A1               */5 * * * *      Snapshot Create  vol0             


[root@rhsqe-vm06 ~]# tail -f /var/log/glusterfs/gcron.log 
[2015-07-01 12:45:19,802 gcron.py:100 doJob] INFO Job Scheduled-A1-vol0 succeeded
[2015-07-01 12:50:01,968 gcron.py:178 main] DEBUG locking_file = /var/run/gluster/shared_storage/snaps/lock_files/A1
[2015-07-01 12:50:01,969 gcron.py:179 main] DEBUG volname = vol0
[2015-07-01 12:50:01,969 gcron.py:180 main] DEBUG jobname = A1
[2015-07-01 12:50:01,981 gcron.py:96 doJob] DEBUG /var/run/gluster/shared_storage/snaps/lock_files/A1 last modified at Wed Jul  1 12:45:19 2015
[2015-07-01 12:50:01,981 gcron.py:98 doJob] DEBUG Processing job Scheduled-A1-vol0
[2015-07-01 12:50:01,982 gcron.py:68 takeSnap] DEBUG Running command 'gluster snapshot create Scheduled-A1-vol0 vol0'
[2015-07-01 12:50:20,747 gcron.py:75 takeSnap] DEBUG Command 'gluster snapshot create Scheduled-A1-vol0 vol0' returned '0'
[2015-07-01 12:50:20,747 gcron.py:83 takeSnap] INFO Snapshot of vol0 successful
[2015-07-01 12:50:20,747 gcron.py:100 doJob] INFO Job Scheduled-A1-vol0 succeeded


gluster snapshot list |wc -l 
254


getenforce
Enforcing

rpm -qa |grep selinux
libselinux-debuginfo-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-29.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-devel-3.13.1-29.el7.noarch
libselinux-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-29.el7.noarch

I tried this on a 2-node cluster and found that only one node was picking up the job; need to look into this further.

Comment 31 Avra Sengupta 2015-07-02 09:07:40 UTC
*** Bug 1236333 has been marked as a duplicate of this bug. ***

Comment 33 Avra Sengupta 2015-07-03 10:08:06 UTC
Tried the latest policy update. With it, the file /etc/cron.d/glusterfs_snap_cron_tasks is picked up by crond. However, because /etc/cron.d/gcron_update_task is created by renaming a temporary file created under /tmp/crontab, it carries a different file context, which prevents crond from picking it up.

To resolve this, we need to create /etc/cron.d/gcron_update_task by renaming a temporary file created under /var/run/gluster/shared_storage/snaps/tmp_file.
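The mechanism behind that fix can be illustrated with a generic staged-rename (a sketch: a local sandbox directory stands in for the real /etc/cron.d and shared-storage paths, and the file content is hypothetical; the point is that files created under /tmp inherit the user_tmp_t context, which crond refuses, so the file should be staged elsewhere and renamed into place):

```shell
# Sandbox stand-in for the destination directory
dest=./cron.d.sandbox
mkdir -p "$dest"

# Stage the new crontab entry next to its destination, then rename it
# into place atomically instead of renaming from /tmp
printf '# updated gluster cron schedule\n' > "$dest/.gcron_update_task.tmp"
mv "$dest/.gcron_update_task.tmp" "$dest/gcron_update_task"
ls "$dest"
```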

Comment 34 Avra Sengupta 2015-07-03 10:50:33 UTC
But even with the above change I don't see crond reloading /etc/cron.d/glusterfs_snap_cron_tasks whenever the file's modified time changes. I am not sure whether this is a crond bug on RHEL 7.1, but crond seems not to reload the file even though its last modified time has changed.

Comment 35 Milos Malik 2015-07-03 12:40:59 UTC
I need the output of the following commands:

# ls -l /etc/cron.d/glusterfs_snap_cron_tasks
# ls -Z /etc/cron.d/glusterfs_snap_cron_tasks
# ausearch -m avc -m user_avc -m selinux_err -i -c crond -ts today

Comment 36 senaik 2015-07-03 12:43:23 UTC
ls -l /etc/cron.d/glusterfs_snap_cron_tasks
lrwxrwxrwx. 1 root root 63 Jul  3 16:18 /etc/cron.d/glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks


 ls -Z /etc/cron.d/glusterfs_snap_cron_tasks
lrwxrwxrwx. root root unconfined_u:object_r:system_cron_spool_t:s0 /etc/cron.d/glusterfs_snap_cron_tasks -> /var/run/gluster/shared_storage/snaps/glusterfs_snap_cron_tasks

ausearch -m avc -m user_avc -m selinux_err -i -c crond -ts today
<no matches>

Comment 38 Avra Sengupta 2015-07-05 07:06:11 UTC
Cloning this bug upstream (https://bugzilla.redhat.com/show_bug.cgi?id=1239269) and on the release-3.7 branch (https://bugzilla.redhat.com/show_bug.cgi?id=1239270) to send the /tmp snap_scheduler change upstream.

Comment 42 senaik 2015-07-15 12:00:28 UTC
Version :  glusterfs-3.7.1-9 

I have enabled the boolean as mentioned, and the scheduler is able to create snapshots:

setsebool cron_system_cronjob_use_shares on

getsebool -a |grep cron_system_cronjob_use_shares
cron_system_cronjob_use_shares --> on

rpm -qa |grep selinux
libselinux-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-32.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-32.el7.noarch

snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
RHEL7_JOB1       */5 * * * *      Snapshot Create  volume0     

gluster snapshot list 
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-07.45.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-07.50.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-07.55.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-08.00.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-08.05.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-08.10.01
Scheduled-RHEL7_JOB1-volume0_GMT-2015.07.15-08.15.01


[2015-07-15 14:50:01,627 gcron.py:96 doJob] DEBUG /var/run/gluster/shared_storage/snaps/lock_files/RHEL7_JOB1 last modified at Wed Jul 15 14:45:05 2015
[2015-07-15 14:50:01,628 gcron.py:98 doJob] DEBUG Processing job Scheduled-RHEL7_JOB1-volume0
[2015-07-15 14:50:01,628 gcron.py:68 takeSnap] DEBUG Running command 'gluster snapshot create Scheduled-RHEL7_JOB1-volume0 volume0'
[2015-07-15 14:50:05,869 gcron.py:75 takeSnap] DEBUG Command 'gluster snapshot create Scheduled-RHEL7_JOB1-volume0 volume0' returned '0'
[2015-07-15 14:50:05,870 gcron.py:83 takeSnap] INFO Snapshot of volume0 successful
[2015-07-15 14:50:05,871 gcron.py:100 doJob] INFO Job Scheduled-RHEL7_JOB1-volume0 succeeded

Marking the bug 'Verified'

Comment 43 senaik 2015-07-15 12:05:15 UTC
It has to be documented that the cron_system_cronjob_use_shares boolean must be enabled for the scheduler to work as expected on RHEL 7.1.
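For the documentation, the persistent form of the command is worth noting (a sketch: -P writes the value into the policy store so it survives reboots, whereas the plain setsebool in comment 42 is lost on reboot):

```shell
# Allow system cron jobs to use files on network/FUSE shares, persistently
setsebool -P cron_system_cronjob_use_shares on

# Verify
getsebool cron_system_cronjob_use_shares
```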

Comment 44 monti lawrence 2015-07-23 13:51:15 UTC
Doc text is edited. Please sign off to be included in Known Issues.

Comment 45 Avra Sengupta 2015-07-27 07:11:03 UTC
Doc text looks good. Verified.

Comment 46 errata-xmlrpc 2015-07-29 05:03:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

