Bug 1235613 - [SELinux] SMB: SELinux policy to be set for /usr/sbin/ctdbd_wrapper.
Summary: [SELinux] SMB: SELinux policy to be set for /usr/sbin/ctdbd_wrapper.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: samba
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Jose A. Rivera
QA Contact: surabhi
URL:
Whiteboard: SELinux
Depends On:
Blocks: 1202842 1212796 1235636 1235637
 
Reported: 2015-06-25 10:23 UTC by surabhi
Modified: 2015-07-29 05:07 UTC (History)
13 users

Fixed In Version: ctdb2.5-2.5.5-6.el6rhs
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1235636 1235637 (view as bug list)
Environment:
Last Closed: 2015-07-29 05:07:45 UTC
Embargoed:


Attachments (Terms of Use)
AVC for ctdb (2.83 MB, text/plain)
2015-06-25 10:45 UTC, surabhi


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description surabhi 2015-06-25 10:23:00 UTC
Description of problem:
**************************************
CTDB nodes not coming to healthy state after starting ctdb service.
SELinux is set to enforcing.

type=AVC msg=audit(06/25/2015 02:45:46.844:2625) : avc:  denied  { write } for  pid=22921 comm=net name=ctdbd.socket dev=dm-0 ino=1443389 scontext=unconfined_u:system_r:samba_net_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file 

If we check the context of the socket file:
srwx------. root root unconfined_u:object_r:var_run_t:s0 /var/run/ctdb/ctdbd.socket

After running restorecon -R -v /var/run/ctdb/ctdbd.socket, the correct context is restored.

After analysis and debugging by the development team, it looks like the ctdbd context has to be set on /usr/sbin/ctdbd_wrapper, because this script creates the /var/run/ctdb directory, and the context is then applied to the directory's contents. When we set the context on /usr/sbin/ctdbd_wrapper, removed /var/run/ctdb, and then started the ctdb service, it worked fine.

So we need the same SELinux attributes set for /usr/sbin/ctdbd_wrapper as are set for /usr/sbin/ctdbd.
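The manual workaround described above can be sketched as shell commands (a hedged sketch reconstructed from this report, not a verified procedure; semanage comes from policycoreutils-python, and the commands must be run as root on the affected node):

```shell
# Label the wrapper like ctdbd itself, then let the runtime
# directory be recreated with the correct context.
semanage fcontext -a -t ctdbd_exec_t /usr/sbin/ctdbd_wrapper
restorecon -v /usr/sbin/ctdbd_wrapper

service ctdb stop
rm -rf /var/run/ctdb      # remove the mislabeled runtime directory
service ctdb start        # wrapper recreates it with ctdbd_var_run_t
```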

Version-Release number of selected component (if applicable):
rpm -qa | grep ctdb
ctdb2.5-2.5.5-2.el6rhs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Install RHEL6.7 , latest gluster rpms, latest samba and ctdb rpms
2. Do ctdb setup
3. Start ctdb service 

Actual results:
CTDB fails to start smb service and remains in UNHEALTHY state.

Expected results:
CTDB should be able to start smb service and should come to HEALTHY state.


Additional info:

Comment 2 surabhi 2015-06-25 10:45:16 UTC
Created attachment 1043054 [details]
AVC for ctdb

Comment 3 Prasanth 2015-06-29 10:33:35 UTC
Surabhi,

Please see https://bugzilla.redhat.com/show_bug.cgi?id=1235636#c2 and provide the requested info

Comment 4 surabhi 2015-07-03 05:45:53 UTC
ctdbd_wrapper is a wrapper around ctdbd which sets a few command line options for ctdbd and also creates /var/run/ctdb if it does not exist. ctdbd_wrapper is the recommended and supported way of starting ctdbd.
The setting is required for ctdbd_wrapper.

The issue is being discussed in RHEL BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1235636

Waiting on info from SELinux team.

Comment 5 Poornima G 2015-07-03 12:44:35 UTC
Commands to be added in ctdb spec file in post install:
#semanage fcontext -a -t ctdbd_exec_t /usr/sbin/ctdbd_wrapper 
#restorecon -R -v /usr/sbin/ctdbd_wrapper 
The "/usr/sbin/" prefix should be adjusted if the actual install path differs.

Also, semanage requires certain SELinux packages (policycoreutils-python) to be installed.
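As a sketch, the post-install scriptlet in the spec file might look like this (a hypothetical packaging fragment based on the commands above; the actual spec may differ):

```shell
# Hypothetical %post scriptlet for the ctdb spec file.
# "|| :" keeps the scriptlet from failing the transaction when the
# SELinux tooling is unavailable or the mapping already exists.
%post
semanage fcontext -a -t ctdbd_exec_t /usr/sbin/ctdbd_wrapper 2>/dev/null || :
restorecon -R -v /usr/sbin/ctdbd_wrapper || :
```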

Comment 7 surabhi 2015-07-06 10:56:26 UTC
Doc BZ https://bugzilla.redhat.com/show_bug.cgi?id=1240247 created for documenting the SELinux context for ctdbd_wrapper.

Comment 8 Michael Adam 2015-07-07 13:45:30 UTC
What we need is this:

semanage fcontext -a -t ctdbd_var_run_t "/var/run/ctdb(/.*)?"
semanage fcontext -a -t ctdbd_exec_t /usr/sbin/ctdbd_wrapper

and the according restorecon commands.

plus:
- a dependency on policycoreutils-python (for semanage)
- %dir /var/run/ctdb in the %files section

The semanage commands (and the dependency) can be removed once RHEL has added the corresponding entries to the selinux-policy package. (They are already in RHEL 7.)
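The first fcontext entry uses a regular expression. As an illustration (not from the report), the pattern "/var/run/ctdb(/.*)?" covers the directory itself plus everything beneath it, but not similarly named siblings:

```shell
# Illustrative only: show which paths the fcontext regex covers.
# SELinux anchors fcontext patterns at both ends, hence ^ and $.
pattern='^/var/run/ctdb(/.*)?$'
for p in /var/run/ctdb /var/run/ctdb/ctdbd.socket /var/run/ctdb2; do
    if echo "$p" | grep -Eq "$pattern"; then
        echo "match:    $p"
    else
        echo "no match: $p"
    fi
done
# Prints "match" for the first two paths, "no match" for /var/run/ctdb2.
```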

Comment 9 Miroslav Grepl 2015-07-07 15:28:06 UTC
(In reply to Michael Adam from comment #8)
> What we need is this:
> 
> semanage fcontext -a -t ctdbd_var_run_t "/var/run/ctdb(/.*)?"
Why is this needed? I see it in the policy.

> semanage fcontext -a -t ctdbd_exec_t /usr/sbin/ctdbd_wrapper
> 
> and the according restorecon commands.
> 
> plus:
> - dep for policycoreutils-python for semanage).
> - %dir /var/run/ctdb in the files section
> 
> The semanage commands (and dep) can be removed once the RHEL has added the
> corresponding entries to the selinux-policy package. (They are in RHEL7).

Comment 10 Michael Adam 2015-07-07 22:49:40 UTC
(In reply to Miroslav Grepl from comment #9)
> (In reply to Michael Adam from comment #8)
> > What we need is this:
> > 
> > semanage fcontext -a -t ctdbd_var_run_t "/var/run/ctdb(/.*)?"
> Why is this needed? I see it in the policy.

Oh, indeed. The rhel 6.7 policy has it.

> > semanage fcontext -a -t ctdbd_exec_t /usr/sbin/ctdbd_wrapper
> > 
> > and the according restorecon commands.
> > 
> > plus:
> > - dep for policycoreutils-python for semanage).
> > - %dir /var/run/ctdb in the files section
> > 
> > The semanage commands (and dep) can be removed once the RHEL has added the
> > corresponding entries to the selinux-policy package. (They are in RHEL7).

Comment 11 surabhi 2015-07-08 10:44:53 UTC
I followed these steps to verify that the correct context is set for ctdbd_wrapper:

1. Install 3.0.4 ISO
2. Subscribe to CDN for rhs and samba
3. Yum update
4. Check the context for /usr/sbin/ctdbd_wrapper: it is not set to ctdbd_exec_t
5. Subscribe to the latest RHGS server puddle and RHGS samba puddle.
6. Yum update ctdb
7. Verify the context for /usr/sbin/ctdbd_wrapper:
the context has been set correctly, as follows:
ls -lZ /usr/sbin/ctdbd_wrapper
-rwxr-xr-x. root root system_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd_wrapper
8. Verified /var/run/ctdb as well:
ls -lZ /var/run/ctdb
-rw-r--r--. root root unconfined_u:object_r:ctdbd_var_run_t:s0 ctdbd.pid
srwx------. root root unconfined_u:object_r:ctdbd_var_run_t:s0 ctdbd.socket

9. Deleted the /var/run/ctdb directory and started the ctdb service again;
when the directory was recreated, it got the expected context.
# ausearch -m avc -m user_avc -m selinux_err -i -ts recent
<no matches>
# ctdb status
Number of nodes:2
pnn:0 10.16.159.241    OK (THIS NODE)
pnn:1 10.16.159.100    OK
Generation:1017047738
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:1
[root@dhcp159-241 /]# ls -lZ /var/run/ctdb
-rw-r--r--. root root unconfined_u:object_r:ctdbd_var_run_t:s0 ctdbd.pid
srwx------. root root unconfined_u:object_r:ctdbd_var_run_t:s0 ctdbd.socket
[root@dhcp159-241 /]# ls -lZ /usr/sbin/ctdbd_wrapper
-rwxr-xr-x. root root system_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd_wrapper


The ctdb failover test passed and all the nodes come to healthy state, as the contexts for /var/run/ctdb and /usr/sbin/ctdbd_wrapper are set correctly.

Marking this BZ as verified.
rpm -qa | grep ctdb
ctdb2.5-2.5.5-4.el6rhs.x86_64

Comment 12 surabhi 2015-07-08 12:46:21 UTC
Not yet moving to verified. Expecting another build with correct dependencies set, will verify the same with that build.

Comment 14 surabhi 2015-07-09 10:36:09 UTC
With the latest build I am seeing new AVCs, and the CTDB nodes are not coming to a healthy state:


type=AVC msg=audit(07/09/2015 10:34:21.580:52524) : avc:  denied  { connectto } for  pid=3898 comm=smbd path=/var/run/ctdb/ctdbd.socket scontext=unconfined_u:system_r:smbd_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=unix_stream_socket 
----
type=SYSCALL msg=audit(07/09/2015 10:34:27.060:52525) : arch=x86_64 syscall=connect success=no exit=-13(Permission denied) a0=0xd a1=0x7ffd6e116ea0 a2=0x6e a3=0x0 items=0 ppid=4002 pid=4003 auid=root uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=8331 comm=smbd exe=/usr/sbin/smbd subj=unconfined_u:system_r:smbd_t:s0 key=(null) 
type=AVC msg=audit(07/09/2015 10:34:27.060:52525) : avc:  denied  { connectto } for  pid=4003 comm=smbd path=/var/run/ctdb/ctdbd.socket scontext=unconfined_u:system_r:smbd_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=unix_stream_socket 


The contexts for the following files are set correctly:
ls -lZ /var/run/ctdb
-rw-r--r--. root root unconfined_u:object_r:ctdbd_var_run_t:s0 ctdbd.pid
srwx------. root root unconfined_u:object_r:ctdbd_var_run_t:s0 ctdbd.socket
[root@dhcp159-143 ctdb]# ls -lZ /usr/sbin/ctdbd_wrapper
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd_wrapper

Comment 15 Miroslav Grepl 2015-07-09 12:03:16 UTC
It looks like ctdbd is running as unconfined_t. Any chance you could check it?

ps -eZ |grep ctdb

Comment 16 surabhi 2015-07-09 12:19:11 UTC
 ps -eZ |grep ctdb
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 13952 ? 00:00:18 ctdbd
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 14103 ? 00:00:07 ctdb_recovered

Comment 17 Poornima G 2015-07-09 12:39:35 UTC
On Rhel 7:
[root@dhcp159-50 ~]# ps -eZ |grep ctdb
system_u:system_r:ctdbd_t:s0    24639 ?        00:00:03 ctdbd
system_u:system_r:ctdbd_t:s0    24832 ?        00:00:00 ctdb_recovered
[root@dhcp159-50 ~]# 

On RHEL 6:
[root@dhcp159-143 ~]# ll -lZ /usr/sbin/ct*
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd_wrapper
[root@dhcp159-143 ~]# 

It works fine on RHEL 7; the issue is on RHEL 6 alone.
One difference we see is that it is system_u on RHEL 7 and unconfined_u on RHEL 6. Could that be the cause?

Comment 18 Raghavendra Talur 2015-07-09 13:28:14 UTC
So the question here is: if the smbd process wants to connect to a Unix socket created by ctdbd, what should the contexts be on the ctdb binaries, the smbd binaries, and the socket file that ctdb creates?

Comment 19 Poornima G 2015-07-09 13:50:12 UTC
On RHEL 6.7

[root@dhcp159-143 selinux]#  semanage fcontext --list | grep ctdb
/etc/ctdb(/.*)?                                    all files          system_u:object_r:ctdbd_var_lib_t:s0 
/etc/ctdb/events\.d/.*                             regular file       system_u:object_r:bin_t:s0 
/etc/rc\.d/init\.d/ctdb                            regular file       system_u:object_r:ctdbd_initrc_exec_t:s0 
/usr/sbin/ctdbd                                    regular file       system_u:object_r:ctdbd_exec_t:s0 
/usr/sbin/ctdbd_wrapper                            all files          system_u:object_r:ctdbd_exec_t:s0 

[root@dhcp159-143 selinux]# ll -lZ /usr/sbin/ct*
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd_wrapper
[root@dhcp159-143 selinux]#

There is a discrepancy in the SELinux user: semanage shows system_u, but on disk it is unconfined_u.

Comment 20 Miroslav Grepl 2015-07-09 13:57:14 UTC
Well, did you try to add the labels using chcon?

# restorecon -R -v /usr/sbin/ct*

should fix it. But this is not the reason why we see ctdbd running as unconfined_t.

 ps -eZ |grep ctdb
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 13952 ? 00:00:18 ctdbd
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 14103 ? 00:00:07 ctdb_recovered

It means it was started without the init script.

Comment 21 Michael Adam 2015-07-09 15:31:21 UTC
(In reply to Miroslav Grepl from comment #20)
> 
> # restorecon -R -v /usr/sbin/ct*
> 
> should fix it.

It does not:

]# semanage fcontext --list | grep sbin/ctdbd
/usr/sbin/ctdbd                                    regular file       system_u:object_r:ctdbd_exec_t:s0 
/usr/sbin/ctdbd_wrapper                            all files          system_u:object_r:ctdbd_exec_t:s0 
# ls -lZ /usr/sbin/ctdbd*
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd_wrapper
# restorecon -R -v /usr/sbin/ctdbd*
# ls -lZ /usr/sbin/ctdbd*
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd
-rwxr-xr-x. root root unconfined_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd_wrapper
#

Comment 22 Michael Adam 2015-07-09 16:34:08 UTC
(In reply to Miroslav Grepl from comment #20)
>  ps -eZ |grep ctdb
> unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 13952 ? 00:00:18 ctdbd
> unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 14103 ? 00:00:07
> ctdb_recovered
> 
> It means it is started without init script.

Aha, that got me further!

The context of the ctdb init script is also not correct!
It has unconfined_u instead of system_u.

Comment 23 Poornima G 2015-07-10 06:06:23 UTC
I have tried chcon -u system_u on /usr/sbin/ctdbd* and /etc/sysconfig/ctdb, yet the process still runs as unconfined_t. Do any other ctdb files need to be changed?

Comment 24 Miroslav Grepl 2015-07-10 06:59:30 UTC
(In reply to Poornima G from comment #23)
> Have tried chcon -u system_u  on /usr/sbin/ctdbd*, /etc/sysconfig/ctdb, yet
> the process runs as unconfined_t, any other ctdb files needs to be changed?

Please could you show me how the process is started? Is there a machine where I could log in?

Comment 25 Poornima G 2015-07-10 07:07:23 UTC
The service is started using "service ctdb start" on system 10.16.159.143.
Another observation: all the daemons started via service run as the unconfined_u user on RHEL 6 (i.e. glusterd, ctdbd, smbd), whereas on RHEL 7 all these daemons run as system_u.

Comment 26 Michael Adam 2015-07-10 08:14:42 UTC
RCA so far:
The init script (/etc/init.d/ctdb) was mislabeled.

It was calling ctdbd_wrapper, and thereby ctdbd, with the wrong labels.
==> Samba failed to connect to the ctdbd socket.

And the init script also (re?)created /var/lib/ctdb with wrong labels.
==> Samba could not access the db files in /var/lib/ctdb. 

The question is how this happened.
Uninstalling ctdb, removing /var/{lib,run}/ctdb, and installing ctdb again led to correct labeling and a functioning system.

==> need to re-test a full fresh install.
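The clean reinstall that restored correct labeling can be sketched as follows (privileged commands reconstructed from the description above; the exact package name and steps may differ, and this is not a verified procedure):

```shell
# Sketch of the recovery path from the RCA above (run as root).
service ctdb stop
yum -y remove ctdb2.5
rm -rf /var/lib/ctdb /var/run/ctdb   # drop directories carrying stale labels
yum -y install ctdb2.5

# Verify labeling (expect system_u:object_r:ctdbd_exec_t on the binaries)
# before starting the service again.
ls -lZ /etc/init.d/ctdb /usr/sbin/ctdbd /usr/sbin/ctdbd_wrapper
service ctdb start
```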

Comment 27 surabhi 2015-07-11 07:07:14 UTC
Moving the BZ to ON_QA to retest.

Comment 28 surabhi 2015-07-11 07:07:48 UTC
The above failure could have happened because the ctdb package was updated without updating the SELinux policy; as a result, the contexts were not set properly and the CTDB nodes did not come to a HEALTHY state.

Tested again on a fresh setup with following steps:

Install latest RHEL6.7 ISO
Install glusterfs/samba/ctdb
Do ctdb setup
Start ctdb services
Verify the context for the ctdbd wrapper
Verify that the ctdb status comes to OK for all nodes.


ls -lZ /var/run/ctdb
-rw-r--r--. root root unconfined_u:object_r:ctdbd_var_run_t:s0 ctdbd.pid
srwx------. root root unconfined_u:object_r:ctdbd_var_run_t:s0 ctdbd.socket


ls -lZ /usr/sbin/ctdbd
-rwxr-xr-x. root root system_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd

ls -lZ /usr/sbin/ctdbd_wrapper
-rwxr-xr-x. root root system_u:object_r:ctdbd_exec_t:s0 /usr/sbin/ctdbd_wrapper

ctdb status
Number of nodes:1
pnn:0 10.16.159.163    OK (THIS NODE)
Generation:273100348
Size:1
hash:0 lmaster:0
Recovery mode:NORMAL (0)
Recovery master:0



****************************************************************************

Also tested an upgrade scenario:

Install 3.0.4 ISO:
setup ctdb
start ctdb services
Add samba/ctdb/gluster repo for 3.1
Follow upgrade procedure
yum update
enable selinux
Check all the contexts
Both tests verified that the wrapper context is set properly and the ctdb nodes come to the OK state.
No AVCs.

Marking the BZ as verified.


BUILD: ctdb2.5.x86_64 0:2.5.5-6.el6rhs

Comment 29 errata-xmlrpc 2015-07-29 05:07:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

