Bug 1016138 - enable selinux for glusterfs server
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: selinux-policy
Version: rawhide
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Miroslav Grepl
QA Contact: Fedora Extras Quality Assurance
Keywords: Reopened
Depends On:
Blocks: 1052206 1061468
Reported: 2013-10-07 11:09 EDT by Brian Foster
Modified: 2014-02-10 08:28 EST
CC: 6 users

Doc Type: Bug Fix
Clones: 1052206 (view as bug list)
Last Closed: 2014-01-10 10:15:56 EST
Type: Bug


Attachments
audit.log for vm1 (1.54 MB, text/x-log), 2013-10-07 11:09 EDT, Brian Foster
audit.log for vm2 (1.53 MB, text/x-log), 2013-10-07 11:10 EDT, Brian Foster
files of relevance for glusterfs (1.84 KB, text/plain), 2013-10-07 11:11 EDT, Brian Foster
test description (1.93 KB, text/plain), 2013-10-07 11:13 EDT, Brian Foster
audit.log for vm1 (fc20) (271.16 KB, text/x-log), 2013-10-23 17:05 EDT, Brian Foster
audit.log for vm2 (fc20) (354.55 KB, text/x-log), 2013-10-23 17:05 EDT, Brian Foster
audit.log with regard to comment #29 (60.11 KB, text/x-log), 2013-12-03 15:56 EST, Brian Foster
Latest audit log for gluster core tests (53.33 KB, text/x-log), 2014-01-02 09:33 EST, Brian Foster
VM 2 log (associated with comment #34) (166.74 KB, text/x-log), 2014-01-02 09:38 EST, Brian Foster

Description Brian Foster 2013-10-07 11:09:00 EDT
The purpose of this bug is to track SELinux policy generation for the glusterfs server. SELinux is currently disabled for glusterfs.

We have exercised the glusterfs server on F19 in permissive mode and collected the audit logs, listed the relevant files associated with glusterfs and documented the test cases performed for reference. Please advise if any data is missing or not granular enough.
Comment 1 Brian Foster 2013-10-07 11:09:54 EDT
Created attachment 808889 [details]
audit.log for vm1
Comment 2 Brian Foster 2013-10-07 11:10:32 EDT
Created attachment 808890 [details]
audit.log for vm2
Comment 3 Brian Foster 2013-10-07 11:11:23 EDT
Created attachment 808891 [details]
files of relevance for glusterfs
Comment 4 Brian Foster 2013-10-07 11:13:39 EDT
Created attachment 808901 [details]
test description

This provides a brief description of the tests involved with exercising the glusterfs server. Two VMs were involved (audit.log files from each are also attached) to capture the use cases of creating a cluster, a distributed volume and exercising several recovery and modification scenarios.
Comment 5 Daniel Walsh 2013-10-07 11:38:21 EDT
OK, I see glusterd trying to bind to lots of different ports:

allow glusterd_t afs3_callback_port_t:tcp_socket name_bind;
allow glusterd_t afs_fs_port_t:tcp_socket name_bind;
allow glusterd_t amanda_port_t:tcp_socket name_bind;
allow glusterd_t amavisd_recv_port_t:tcp_socket name_bind;
allow glusterd_t amavisd_send_port_t:tcp_socket name_bind;
allow glusterd_t amqp_port_t:tcp_socket name_bind;
allow glusterd_t aol_port_t:tcp_socket name_bind;
allow glusterd_t apcupsd_port_t:tcp_socket name_bind;
allow glusterd_t asterisk_port_t:tcp_socket name_bind;
allow glusterd_t boinc_client_port_t:tcp_socket name_bind;
allow glusterd_t boinc_port_t:tcp_socket name_bind;
allow glusterd_t clamd_port_t:tcp_socket name_bind;
allow glusterd_t cluster_port_t:tcp_socket name_bind;
allow glusterd_t cma_port_t:tcp_socket name_bind;
allow glusterd_t cobbler_port_t:tcp_socket name_bind;
allow glusterd_t commplex_link_port_t:tcp_socket name_bind;
allow glusterd_t commplex_main_port_t:tcp_socket name_bind;
allow glusterd_t condor_port_t:tcp_socket name_bind;
allow glusterd_t couchdb_port_t:tcp_socket name_bind;
allow glusterd_t ctdb_port_t:tcp_socket name_bind;
allow glusterd_t cvs_port_t:tcp_socket name_bind;
allow glusterd_t cyphesis_port_t:tcp_socket name_bind;
allow glusterd_t daap_port_t:tcp_socket name_bind;
allow glusterd_t dbskkd_port_t:tcp_socket name_bind;
allow glusterd_t dccm_port_t:tcp_socket name_bind;
allow glusterd_t dict_port_t:tcp_socket name_bind;
allow glusterd_t distccd_port_t:tcp_socket name_bind;
allow glusterd_t dnssec_port_t:tcp_socket name_bind;
allow glusterd_t dogtag_port_t:tcp_socket name_bind;
allow glusterd_t embrace_dp_c_port_t:tcp_socket name_bind;
allow glusterd_t epmd_port_t:tcp_socket name_bind;
allow glusterd_t fmpro_internal_port_t:tcp_socket name_bind;
allow glusterd_t gatekeeper_port_t:tcp_socket name_bind;
allow glusterd_t gds_db_port_t:tcp_socket name_bind;
allow glusterd_t giftd_port_t:tcp_socket name_bind;
allow glusterd_t git_port_t:tcp_socket name_bind;
allow glusterd_t glance_port_t:tcp_socket name_bind;
allow glusterd_t glance_registry_port_t:tcp_socket name_bind;
allow glusterd_t gpsd_port_t:tcp_socket name_bind;
allow glusterd_t hadoop_namenode_port_t:tcp_socket name_bind;
allow glusterd_t hddtemp_port_t:tcp_socket name_bind;
allow glusterd_t howl_port_t:tcp_socket name_bind;
allow glusterd_t hplip_port_t:tcp_socket name_bind;
allow glusterd_t http_cache_port_t:tcp_socket name_bind;
allow glusterd_t i18n_input_port_t:tcp_socket name_bind;
allow glusterd_t imaze_port_t:tcp_socket name_bind;
allow glusterd_t interwise_port_t:tcp_socket name_bind;
allow glusterd_t ionixnetmon_port_t:tcp_socket name_bind;
allow glusterd_t ipsecnat_port_t:tcp_socket name_bind;
allow glusterd_t ircd_port_t:tcp_socket name_bind;
allow glusterd_t iscsi_port_t:tcp_socket name_bind;
allow glusterd_t isns_port_t:tcp_socket name_bind;
allow glusterd_t jabber_client_port_t:tcp_socket name_bind;
allow glusterd_t jabber_interserver_port_t:tcp_socket name_bind;
allow glusterd_t jabber_router_port_t:tcp_socket name_bind;
allow glusterd_t jacorb_port_t:tcp_socket name_bind;
allow glusterd_t jboss_debug_port_t:tcp_socket name_bind;
allow glusterd_t jboss_management_port_t:tcp_socket name_bind;
allow glusterd_t jboss_messaging_port_t:tcp_socket name_bind;
allow glusterd_t l2tp_port_t:tcp_socket name_bind;
allow glusterd_t lirc_port_t:tcp_socket name_bind;
allow glusterd_t luci_port_t:tcp_socket name_bind;
allow glusterd_t mail_port_t:tcp_socket name_bind;
allow glusterd_t memcache_port_t:tcp_socket name_bind;
allow glusterd_t milter_port_t:tcp_socket name_bind;
allow glusterd_t mmcc_port_t:tcp_socket name_bind;
allow glusterd_t mongod_port_t:tcp_socket name_bind;
allow glusterd_t monopd_port_t:tcp_socket name_bind;
allow glusterd_t movaz_ssc_port_t:tcp_socket name_bind;
allow glusterd_t mpd_port_t:tcp_socket name_bind;
allow glusterd_t ms_streaming_port_t:tcp_socket name_bind;
allow glusterd_t msnp_port_t:tcp_socket name_bind;
allow glusterd_t mssql_port_t:tcp_socket name_bind;
allow glusterd_t munin_port_t:tcp_socket name_bind;
allow glusterd_t mxi_port_t:tcp_socket name_bind;
allow glusterd_t mysqld_port_t:tcp_socket name_bind;
allow glusterd_t mysqlmanagerd_port_t:tcp_socket name_bind;
allow glusterd_t mythtv_port_t:tcp_socket name_bind;
allow glusterd_t nessus_port_t:tcp_socket name_bind;
allow glusterd_t netport_port_t:tcp_socket name_bind;
allow glusterd_t netsupport_port_t:tcp_socket name_bind;
allow glusterd_t nodejs_debug_port_t:tcp_socket name_bind;
allow glusterd_t ntop_port_t:tcp_socket name_bind;
allow glusterd_t oa_system_port_t:tcp_socket name_bind;
allow glusterd_t ocsp_port_t:tcp_socket name_bind;
allow glusterd_t openhpid_port_t:tcp_socket name_bind;
allow glusterd_t openvpn_port_t:tcp_socket name_bind;
allow glusterd_t oracle_port_t:tcp_socket name_bind;
allow glusterd_t osapi_compute_port_t:tcp_socket name_bind;
allow glusterd_t pdps_port_t:tcp_socket name_bind;
allow glusterd_t pegasus_http_port_t:tcp_socket name_bind;
allow glusterd_t pegasus_https_port_t:tcp_socket name_bind;
allow glusterd_t pgpkeyserver_port_t:tcp_socket name_bind;
allow glusterd_t pingd_port_t:tcp_socket name_bind;
allow glusterd_t pki_ca_port_t:tcp_socket name_bind;
allow glusterd_t pki_kra_port_t:tcp_socket name_bind;
allow glusterd_t pki_ocsp_port_t:tcp_socket name_bind;
allow glusterd_t pki_ra_port_t:tcp_socket name_bind;
allow glusterd_t pki_tks_port_t:tcp_socket name_bind;
allow glusterd_t pki_tps_port_t:tcp_socket name_bind;
allow glusterd_t pktcable_cops_port_t:tcp_socket name_bind;
allow glusterd_t postfix_policyd_port_t:tcp_socket name_bind;
allow glusterd_t postgresql_port_t:tcp_socket name_bind;
allow glusterd_t pptp_port_t:tcp_socket name_bind;
allow glusterd_t prelude_port_t:tcp_socket name_bind;
allow glusterd_t presence_port_t:tcp_socket name_bind;
allow glusterd_t ptal_port_t:tcp_socket name_bind;
allow glusterd_t pulseaudio_port_t:tcp_socket name_bind;
allow glusterd_t puppet_port_t:tcp_socket name_bind;
allow glusterd_t quantum_port_t:tcp_socket name_bind;
allow glusterd_t radsec_port_t:tcp_socket name_bind;
allow glusterd_t razor_port_t:tcp_socket name_bind;
allow glusterd_t redis_port_t:tcp_socket name_bind;
allow glusterd_t repository_port_t:tcp_socket name_bind;
allow glusterd_t ricci_modcluster_port_t:tcp_socket name_bind;
allow glusterd_t ricci_port_t:tcp_socket name_bind;
allow glusterd_t rtp_media_port_t:tcp_socket name_bind;
allow glusterd_t rtsclient_port_t:tcp_socket name_bind;
allow glusterd_t salt_port_t:tcp_socket name_bind;
allow glusterd_t sap_port_t:tcp_socket name_bind;
allow glusterd_t saphostctrl_port_t:tcp_socket name_bind;
allow glusterd_t servistaitsm_port_t:tcp_socket name_bind;
allow glusterd_t sge_port_t:tcp_socket name_bind;
allow glusterd_t sieve_port_t:tcp_socket name_bind;
allow glusterd_t sip_port_t:tcp_socket name_bind;
allow glusterd_t sixxsconfig_port_t:tcp_socket name_bind;
allow glusterd_t soundd_port_t:tcp_socket name_bind;
allow glusterd_t speech_port_t:tcp_socket name_bind;
allow glusterd_t squid_port_t:tcp_socket name_bind;
allow glusterd_t ssdp_port_t:tcp_socket name_bind;
allow glusterd_t svn_port_t:tcp_socket name_bind;
allow glusterd_t sype_transport_port_t:tcp_socket name_bind;
allow glusterd_t syslog_tls_port_t:tcp_socket name_bind;
allow glusterd_t tcs_port_t:tcp_socket name_bind;
allow glusterd_t tor_port_t:tcp_socket name_bind;
allow glusterd_t tram_port_t:tcp_socket name_bind;
allow glusterd_t transproxy_port_t:tcp_socket name_bind;
allow glusterd_t trisoap_port_t:tcp_socket name_bind;
allow glusterd_t unreserved_port_t:tcp_socket name_bind;
allow glusterd_t ups_port_t:tcp_socket name_bind;
allow glusterd_t varnishd_port_t:tcp_socket name_bind;
allow glusterd_t virt_port_t:tcp_socket name_bind;
allow glusterd_t virtual_places_port_t:tcp_socket name_bind;
allow glusterd_t vnc_port_t:tcp_socket name_bind;
allow glusterd_t websm_port_t:tcp_socket name_bind;
allow glusterd_t winshadow_port_t:tcp_socket name_bind;
allow glusterd_t wsdapi_port_t:tcp_socket name_bind;
allow glusterd_t wsicopy_port_t:tcp_socket name_bind;
allow glusterd_t xen_port_t:tcp_socket name_bind;
allow glusterd_t xfs_port_t:tcp_socket name_bind;
allow glusterd_t xserver_port_t:tcp_socket name_bind;
allow glusterd_t zabbix_agent_port_t:tcp_socket name_bind;
allow glusterd_t zabbix_port_t:tcp_socket name_bind;
allow glusterd_t zebra_port_t:tcp_socket name_bind;
allow glusterd_t zented_port_t:tcp_socket name_bind;
allow glusterd_t zookeeper_client_port_t:tcp_socket name_bind;
allow glusterd_t zookeeper_election_port_t:tcp_socket name_bind;
allow glusterd_t zookeeper_leader_port_t:tcp_socket name_bind;
allow glusterd_t zope_port_t:tcp_socket name_bind;

Is it basically trying to bind to every port on the system?
Any port > 1023?

I see gluster trying to write all over /usr?

fstest_24516e04822ab9cd302949ab1d3fefb5

  
/export/test1/fstest_5c86195eb68127beef9fe920c531c7af/fstest_24516e04822ab9cd302949ab1d3fefb5
Comment 6 Daniel Walsh 2013-10-07 11:38:56 EDT
Is it executing some kind of make?
Comment 7 Brian Foster 2013-10-07 12:09:19 EDT
(In reply to Daniel Walsh from comment #5)
> Ok I see glusterd trying to bind to lots of different ports?
> 
...
> Is it basically trying to bind to every port on the system?
> Any port > 1023?
> 

Interesting. I see some code in glusterd that scans through ports 0-65535 to determine whether each is free. It does so by attempting to bind() a socket to each port (and then close() it). The scan appears to be an initialization behavior, but I also see checks that suggest a rescan can occur once an attempt to allocate a previously unused port takes place.

IOW, this appears to be a used-port cache mechanism. I'll have to dig into it more to understand exactly what it's used for, but this certainly suggests we're simply looking for a free port in certain cases.
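For illustration, the probe logic described above can be sketched in Python (a stand-in for gluster's actual C implementation; the loopback address and the small port range here are arbitrary choices for the example):

```python
import socket

def port_is_free(port: int) -> bool:
    """Probe a port the way described above: if bind() succeeds, the
    port was free; EADDRINUSE (or any bind error) means it was not."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except OSError:
        return False
    finally:
        # The probe socket is closed immediately; nothing listens on it.
        s.close()

# glusterd's scan walks the whole 0-65535 range at init; probe a slice here.
free_ports = [p for p in range(49152, 49162) if port_is_free(p)]
```

Each probe like this shows up to SELinux as a separate name_bind check against whatever port type the probed port carries, which would explain the long list of denials above.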

> I see gluster trying to write all over /usr?
> 
> fstest_24516e04822ab9cd302949ab1d3fefb5
> 
>   
> /export/test1/fstest_5c86195eb68127beef9fe920c531c7af/
> fstest_24516e04822ab9cd302949ab1d3fefb5

Hmm, I'm not aware of why that would happen. I did run a test suite against a gluster mount point (/mnt) simply to exercise the server with I/O. The files here look like they could be artifacts of that test, but what is the relation to /usr here? Could you elaborate on this observation?

(In reply to Daniel Walsh from comment #6)
> Is it executing some kind of make ?

I may have also run that on the mount point. IIRC, I intended to compile a glusterfs source tree, found I was missing a bunch of packages and opted for the test suite instead. Has that confused things in the audit.log output?
Comment 8 Daniel Walsh 2013-10-07 13:18:59 EDT
0f57b3320fff2b8f37d79610178ec27ada2ae3ac
e46f07226579128b4624d9781ef666034c804756

Have fixes for these AVC's in git.
Comment 9 Brian Foster 2013-10-23 17:03:45 EDT
I had a brief chat with Miroslav, who indicated that this should now be fixed. So I've moved on to testing f20 with the 3.12.1-90.fc20 selinux-policy packages and still hit some AVC errors when running through the tests described in the attached doc. At that point, I set both nodes to permissive and ran through the suite of tests.

It's not totally clear whether this build is new enough or not, but Miroslav indicated the latest build should have the fixes so I'm re-opening this and will attach the latest log files momentarily... Let me know if you need me to provide anything else. Thanks!
Comment 10 Brian Foster 2013-10-23 17:05:08 EDT
Created attachment 815564 [details]
audit.log for vm1 (fc20)
Comment 11 Brian Foster 2013-10-23 17:05:56 EDT
Created attachment 815566 [details]
audit.log for vm2 (fc20)
Comment 12 Miroslav Grepl 2013-10-24 10:59:30 EDT
# matchpathcon /export
/export	system_u:object_r:usr_t:s0

so we end up with usr_t labeling.
Comment 13 Miroslav Grepl 2013-10-25 03:58:45 EDT
Any idea about

/run/9ff4cc0d581696a3deaa803bfbf62149.socket

Basically it causes

allow glusterd_t unconfined_t:unix_stream_socket connectto;

and other AVC messages in the logs you attached.

I guess it should be created by glusterd?

The usr_t issue should go away with the boolean we have been talking about.
Comment 14 Brian Foster 2013-10-25 10:42:54 EDT
(In reply to Miroslav Grepl from comment #13)
> Any idea about
> 
> /run/9ff4cc0d581696a3deaa803bfbf62149.socket
> 
> basically it causes
> 
> allow glusterd_t unconfined_t:unix_stream_socket connectto;
> 
> and other AVC messages in the logs you attached.
> 
> I guess it should be created by glusterd?
> 

I suspect that this is a socket that is used to communicate between glusterd and the bricks (glusterfsd). For example, for each running instance of glusterfsd (or glusterfs, NFS server), I see something like:

/usr/sbin/glusterfsd ... -S /var/run/48fe31ab227aa051b2c858c8d9c1cb37.socket ...

... in the process arguments. The sockets are unique per glusterfs[d] instance. I suspect they are defined/named by glusterd, since they are provided as an argument to glusterfs[d] and the latter should be started by the former. However, a quick experiment manually killing and re-running a glusterfs[d] instance shows that the glusterfs[d] process itself creates the socket file that is passed as a parameter.
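A toy sketch of that handoff (hypothetical paths and names, not gluster's RPC code): the brick side creates and listens on the socket path it was handed, and the connect() from the management side is exactly the operation SELinux mediates as unix_stream_socket connectto against the listening process's domain:

```python
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "brick.socket")

# The glusterfsd side: creates the socket file named by its -S argument.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

def serve_once():
    conn, _ = server.accept()
    conn.sendall(b"pong")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# The glusterd side: this connect() is the connectto operation.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
reply = client.recv(4)
client.close()
t.join()
```

This matches the observation above: the socket file's creator (and hence its label and peer domain) is the brick process, even though glusterd picks the name.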

> The usr_t issue should go away with the boolean we have been talking about.

Ok. Once fixes are available for the unrelated avc's, I'll use this option in the next cycle of testing. Let me know if you need anything else, thanks!
Comment 15 Brian Foster 2013-11-05 17:25:36 EST
Miroslav,

On IRC you asked if I could reproduce the following issue:

"allow glusterd_t unconfined_t:unix_stream_socket connectto;"

It turns out I can reproduce it when I restart glusterfsd during a self-heal test (i.e., kill one replica leg of a volume, create some files, restart), but I was restarting glusterfsd manually as root. Your suggestion to check ps -eZ confirmed that the context of the process had changed compared to restarting via the service, so I suspect this was a flaw in my testing.

Out of curiosity, is there a way to run the glusterfsd process directly without causing the problem I'm hitting here?

Also, were there any other issues or pending policy changes I should wait for? Let me know and I'll do another round of testing with the latest bits. Thanks.
Comment 16 Brian Foster 2013-11-06 12:28:29 EST
I started running through the test sequence on the latest rawhide and hit the following avc's:

type=AVC msg=audit(1383757245.100:136): avc:  denied  { relabelfrom } for  pid=29927 comm="glusterfsd" name="test1" dev="vdb" ino=131 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:usr_t:s0 tclass=dir
type=SYSCALL msg=audit(1383757245.100:136): arch=c000003e syscall=189 success=no exit=-13 a0=7f4f0f5c4cb0 a1=7f4f1d92afe0 a2=7f4f1d96df10 a3=1b items=0 ppid=1 pid=29927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

type=AVC msg=audit(1383757245.115:137): avc:  denied  { relabelfrom } for  pid=29936 comm="glusterfsd" name="test2" dev="vdb" ino=8388736 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:usr_t:s0 tclass=dir
type=SYSCALL msg=audit(1383757245.115:137): arch=c000003e syscall=189 success=no exit=-13 a0=7f171c87ecb0 a1=7f1725e8be70 a2=7f1725e8be40 a3=1b items=0 ppid=1 pid=29936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

This occurs when I attempt to start a distributed replicated volume for the first time, and only on the remote server (with respect to the gluster CLI commands). The test1 and test2 inodes refer to the top-level brick directories of each replica set on the remote server. These directories are created by the remote glusterd process under /export, which is a mounted XFS filesystem.

The settings on both servers are as follows:

gluster_anon_write --> off
gluster_export_all_ro --> off
gluster_export_all_rw --> on

The current labels are:

drwxr-xr-x. root root system_u:object_r:usr_t:s0       /export/

drwxr-xr-x. root root system_u:object_r:usr_t:s0       test1
drwxr-xr-x. root root system_u:object_r:usr_t:s0       test2

Note that this only seems to occur when creating/starting the first volume in the lifetime of glusterd. It's not clear to me if this is a policy issue, a gluster issue or a setup/test issue. Is there any other data I can collect to help characterize this? Thanks.
Comment 17 Daniel Walsh 2013-11-11 12:14:53 EST
What are test1 and test2? Are they the bricks?

If these are bricks it would be better if we labeled them something appropriate like gluster_var_lib_t, or came up with a new type like gluster_data_t or gluster_brick_t.

Does gluster have some SELinux smarts built in? I.e., what is trying to relabel?
Comment 18 Brian Foster 2013-11-11 12:45:09 EST
(In reply to Daniel Walsh from comment #17)
> What are test1 and test2?   Are they the bricks?  
> 

Yes, these are the top-level brick directories for a given server.

> If these are bricks it would be better if we labeled them something
> appropriate like gluster_var_lib_t or came up with a new type like
> gluster_data_t or gluster_brick_t.
> 

Are there any administrative or configuration ramifications of using a new label like this? E.g., should an admin know to relabel the bricks after volume creation (or should gluster build this in one way or another)?

We ultimately defer to the SELinux folks with regard to what the best policy is. We'll just need to make sure that gluster sets things up correctly and/or the admin understands the appropriate procedure.

> 
> Does gluster have some SELinux smarts built in? I.e., what is trying to
> relabel?

Nothing at the moment. In the context of this bug, we're just attempting to make the server work correctly with SELinux enabled. I think Miroslav made some sense of the issue reported in comment #16 over IRC and determined that it was associated with glusterd attempting to set some extended attributes on the brick directories. He hadn't posted what specifically needed to change in the policy, only that he needed to make an update.
Comment 19 Daniel Walsh 2013-11-11 16:21:18 EST
When an admin creates a brick, does he do this with a separate tool, or is it done through glusterd?

No, the best way to do this would be to label the content as gluster_brick_t or gluster_var_lib_t, but it depends on how much of a problem this would be.


gluster_anon_write --> off
gluster_export_all_ro --> off
gluster_export_all_rw --> on

If we labeled bricks properly we might not need these booleans.

In smbd we have the samba_share_t label, so an admin can share a particular directory rather than allowing samba access to the entire machine.

I would rather not require system admins to know about this labeling if possible. If there was a command like

gluster-tool --share /usr/mybrick

Then we could have gluster-tool make sure the SELinux labeling is done correctly behind the scenes.  This would make a lot of sense if this type of tool was also changing ownership and permissions.
Comment 20 Brian Foster 2013-11-12 07:58:06 EST
(In reply to Daniel Walsh from comment #19)
> When an admin creates a brick does he do this with a separate tool or is
> this done through the glusterd?
> 

The glusterd process executes the brick creation on behalf of a CLI client. The admin runs 'gluster volume create ...', which talks to the local glusterd, which I believe then communicates with the remote glusterd processes in order to create the brick directories for the volume throughout the cluster.

> No the best way to do this would be to label the content as a
> gluster_brick_t or gluster_var_lib_t, but it depends on how much of a
> problem this would be.
> 

I don't think this would be too much of a problem. It looks like we already set our own internal xattrs on the brick, so at minimum we could hook into this path and relabel the brick directory appropriately. We also have a script/hook mechanism that might support this scenario, but I'll have to take a closer look at that...

To that point... is there a preferred method to relabel a directory (i.e., via script or programmatically)? Also, is it enough to "set and forget" the context on the top-level brick dir? i.e., will subsequent operations inherit the context?

> 
> gluster_anon_write --> off
> gluster_export_all_ro --> off
> gluster_export_all_rw --> on
> 
> If we labeled bricks properly we might not need these booleans.
> 
> In smbd we have the samba_share_t label, so an admin could share a
> particular directory rather then allowing samba to take on the entire
> machine.
> 

Interesting, thanks for the context.

> I would rather not require system admins to know about this labeling if
> possible. If there was a command like
> 
> gluster-tool --share /usr/mybrick
> 
> Then we could have gluster-tool make sure the SELinux labeling is done
> correctly behind the scenes.  This would make a lot of sense if this type of
> tool was also changing ownership and permissions.

Yeah (re: above), I don't see anywhere we are changing ownership/perms, but we do set a volume id xattr for our own purposes.

Let me know if you guys think we should proceed in this manner and whether we should use the existing glusterd_var_lib_t context or set up a new type for brick data. For the purpose of this bug, I assume I can manually relabel the bricks and test for AVC's and whatnot... thanks!
Comment 21 Brian Foster 2013-11-12 08:30:41 EST
Just FYI, out of curiosity I gave a quick try to setting the parent of the brick directories (pre brick creation) to the following:

  security.selinux="system_u:object_r:glusterd_var_lib_t:s0"

... and I do still see the associated AVC's reported in comment #16:

type=AVC msg=audit(1384262607.705:91): avc:  denied  { relabelfrom } for  pid=841 comm="glusterfsd" name="test1" dev="dm-0" ino=1046392 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=dir

type=SYSCALL msg=audit(1384262607.705:91): arch=c000003e syscall=189 success=no exit=-13 a0=7f33a3d01cb0 a1=7f33b09fdf00 a2=7f33b09fded0 a3=28 items=0 ppid=1 pid=841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

type=AVC msg=audit(1384262607.717:92): avc:  denied  { relabelfrom } for  pid=850 comm="glusterfsd" name="test2" dev="dm-0" ino=1046393 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=dir

type=SYSCALL msg=audit(1384262607.717:92): arch=c000003e syscall=189 success=no exit=-13 a0=7f40d7efdcb0 a1=7f40e536fec0 a2=7f40e536fe90 a3=28 items=0 ppid=1 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

So it appears that the policy still requires an update irrespective of the type...? Thanks again.
Comment 22 Brian Foster 2013-11-14 10:44:28 EST
I've now picked up the 3.12.1-100.fc21 selinux policy packages from rawhide. The avc's associated with volume start up (comment #16 and comment #21) appear to be fixed.

I'm running into a failure after starting/mounting the volume and running the Tuxera POSIX fs test suite. I can narrow this down to attempting to create a fifo on the volume. E.g., 'mkfifo fifo' fails with permission denied.

Note that my brick parent directories are still manually labelled with glusterd_var_lib_t here, but I reproduce the same general problem using the default usr_t label. When I set to permissive mode and run the mkfifo test, the following is logged:

type=AVC msg=audit(1384443619.111:193): avc:  denied  { create } for  pid=857 comm="glusterfsd" name="fifo" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=fifo_file
type=SYSCALL msg=audit(1384443619.111:193): arch=c000003e syscall=133 success=yes exit=0 a0=7fb44a1d1af0 a1=11a4 a2=0 a3=2 items=0 ppid=1 pid=857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1384443619.115:194): avc:  denied  { getattr } for  pid=857 comm="glusterfsd" path="/export/test/fifo" dev="vdb" ino=501 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=fifo_file
type=SYSCALL msg=audit(1384443619.115:194): arch=c000003e syscall=6 success=yes exit=0 a0=7fb44a1d1af0 a1=7fb44a1d19b0 a2=7fb44a1d19b0 a3=2 items=0 ppid=1 pid=857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1384443619.116:195): avc:  denied  { setattr } for  pid=857 comm="glusterfsd" name="fifo" dev="vdb" ino=501 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=fifo_file
type=SYSCALL msg=audit(1384443619.116:195): arch=c000003e syscall=189 success=yes exit=0 a0=7fb44a1d1af0 a1=7fb44bab034c a2=7fb4530ff9d0 a3=10 items=0 ppid=1 pid=857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1384443619.129:196): avc:  denied  { link } for  pid=857 comm="glusterfsd" name="fifo" dev="vdb" ino=501 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=fifo_file
type=SYSCALL msg=audit(1384443619.129:196): arch=c000003e syscall=265 success=yes exit=0 a0=ffffffffffffff9c a1=7fb44a1d1af0 a2=ffffffffffffff9c a3=7fb44a1d1840 items=0 ppid=1 pid=857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

Let me know if I can provide anything else on this one, thanks.
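The failing operation above reduces to a single call. A minimal stand-alone sketch against a scratch directory (rather than the real gluster mount, which is assumed here) looks like:

```python
import os
import stat
import tempfile

mnt = tempfile.mkdtemp()              # stand-in for the gluster mount point
fifo_path = os.path.join(mnt, "fifo")

# 'mkfifo fifo' boils down to creating an S_IFIFO node; on the gluster
# volume this returned EACCES because glusterd_t lacked fifo_file create
# permission on the brick's label.
os.mkfifo(fifo_path)
is_fifo = stat.S_ISFIFO(os.stat(fifo_path).st_mode)
```

On a local filesystem this succeeds; on the affected volume the os.mkfifo() call is the one that fails with permission denied.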
Comment 23 Miroslav Grepl 2013-11-14 11:59:43 EST
Yes, I have just fixed it not to blow up on relabelfrom/relabelto with the same labels. But I like the idea of having a label for bricks; then yes, the booleans would go away.
Comment 24 Brian Foster 2013-11-14 12:21:52 EST
... and I just noticed the 3.13.1-1.fc21 policy became available. Just an FYI that I updated and still see the avc's from comment #22.
Comment 25 Miroslav Grepl 2013-11-14 14:30:46 EST
Yes, it has not been fixed.
Comment 26 Brian Foster 2013-11-15 10:56:23 EST
Ok, thanks. I figured, just wanted to be sure. ;)

To get a little ahead on the brick label idea, here's an issue I hit during a self-heal (recovery after some downtime) with the bricks labelled as glusterd_var_lib_t:

type=AVC msg=audit(1384530182.983:398): avc:  denied  { relabelfrom } for  pid=2779 comm="glusterfsd" name="src" dev="vdb" ino=8397492 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1384530182.983:398): avc:  denied  { relabelto } for  pid=2779 comm="glusterfsd" name="src" dev="vdb" ino=8397492 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=dir
type=SYSCALL msg=audit(1384530182.983:398): arch=c000003e syscall=189 success=yes exit=0 a0=7f79e105dc80 a1=7f79ea67aba0 a2=7f79ea689c00 a3=28 items=0 ppid=1 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

I suspect it's probably better to have a new and independent label for bricks, as glusterd_var_lib_t is for configuration info. Note that this doesn't seem to occur with the bricks labelled as the default usr_t, so I'm not totally clear on whether it will be an issue. Let me know if I should start using a new label for bricks.
Comment 27 Miroslav Grepl 2013-11-25 09:31:39 EST
(In reply to Brian Foster from comment #26)
> Ok, thanks. I figured, just wanted to be sure. ;)
> 
> To get a little ahead on the brick label idea, here's an issue I hit during
> a self-heal (recovery after some downtime) with the bricks labelled as
> glusterd_var_lib_t:
> 
> type=AVC msg=audit(1384530182.983:398): avc:  denied  { relabelfrom } for 
> pid=2779 comm="glusterfsd" name="src" dev="vdb" ino=8397492
> scontext=system_u:system_r:glusterd_t:s0
> tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=dir
> type=AVC msg=audit(1384530182.983:398): avc:  denied  { relabelto } for 
> pid=2779 comm="glusterfsd" name="src" dev="vdb" ino=8397492
> scontext=system_u:system_r:glusterd_t:s0
> tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=dir
> type=SYSCALL msg=audit(1384530182.983:398): arch=c000003e syscall=189
> success=yes exit=0 a0=7f79e105dc80 a1=7f79ea67aba0 a2=7f79ea689c00 a3=28
> items=0 ppid=1 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0
> egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfsd"
> exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

This denial is on a directory; we currently only allow relabeling of files.

> 
> I suspect it's probably better to have a new and independent label for
> bricks, as glusterd_var_lib_t is for configuration info. Note that this
> doesn't seem to occur with the bricks labelled as the default usr_t, so I'm
> not totally clear on whether it will be an issue. Let me know if I should
> start using a new label for bricks.

Because of the boolean.

So what do you think about gluster_brick_t, as Dan suggested?
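For reference, a dedicated brick type would look roughly like the following in policy. This is only a sketch reconstructed from the denials above; the interfaces used are assumptions, not the shipped policy:

```
# Sketch only -- the interfaces chosen here are assumptions, not the final policy
type glusterd_brick_t;
files_type(glusterd_brick_t)

# let the gluster daemons fully manage brick contents,
# including the dir relabels denied during self-heal
manage_dirs_pattern(glusterd_t, glusterd_brick_t, glusterd_brick_t)
manage_files_pattern(glusterd_t, glusterd_brick_t, glusterd_brick_t)
allow glusterd_t glusterd_brick_t:dir { relabelfrom relabelto };
allow glusterd_t glusterd_brick_t:file { relabelfrom relabelto };
```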
Comment 28 Brian Foster 2013-11-25 09:40:51 EST
(In reply to Miroslav Grepl from comment #27)
> (In reply to Brian Foster from comment #26)
...
> > 
> > I suspect it's probably better to have a new and independent label for
> > bricks, as glusterd_var_lib_t is for configuration info. Note that this
> > doesn't seem to occur with the bricks labelled as the default usr_t, so I'm
> > not totally clear on whether it will be an issue. Let me know if I should
> > start using a new label for bricks.
> 
> Because of the boolean.
> 
> So what do you think about gluster_brick_t, as Dan suggested?

That sounds fine to me. Let me know when something is available to test and I'll give it a shot. :)
Comment 29 Brian Foster 2013-12-03 15:54:02 EST
I've run some tests against the selinux-policy-3.13.1-7.fc21 bits using the new glusterd_brick_t type. I still see the fifo-related AVCs from comment #22. I created a local policy as suggested by mgrepl, which makes those failures stop.
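For reference, the local policy module was along these lines. This is a sketch reconstructed from the fifo denials; the module name and exact permission set are assumptions:

```
module glusterd_local 1.0;

require {
        type glusterd_t;
        type glusterd_brick_t;
        class fifo_file { create getattr open read write unlink };
}

# permit the fifo operations that were being denied on the bricks
allow glusterd_t glusterd_brick_t:fifo_file { create getattr open read write unlink };
```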

Next, I hit a few AVCs when running a rebalance. The following occurred during the initial fix-layout stage:

type=AVC msg=audit(1386099129.634:152): avc:  denied  { create } for  pid=9441 comm="glusterfs" name="d69d1eca-6c33-4032-8285-5bef363414ae.sock" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1386099129.634:152): arch=c000003e syscall=49 success=yes exit=0 a0=6 a1=7f059ca0e088 a2=6e a3=7f059a9ba700 items=0 ppid=9440 pid=9441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfs" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1386099134.610:153): avc:  denied  { write } for  pid=610 comm="glusterd" name="d69d1eca-6c33-4032-8285-5bef363414ae.sock" dev="dm-0" ino=667978 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1386099134.610:153): arch=c000003e syscall=42 success=yes exit=0 a0=16 a1=7f4943670630 a2=6e a3=7f4941c5eec0 items=0 ppid=1 pid=610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1386099161.716:154): avc:  denied  { unlink } for  pid=9443 comm="glusterfs" name="d69d1eca-6c33-4032-8285-5bef363414ae.sock" dev="dm-0" ino=667978 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1386099161.716:154): arch=c000003e syscall=87 success=yes exit=0 a0=7f059c9fd8c0 a1=7f0598a2e694 a2=7f059ca29550 a3=529e31d9 items=0 ppid=1 pid=9443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfs" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

... and the following during the actual rebalance:

type=AVC msg=audit(1386099432.139:174): avc:  denied  { create } for  pid=9566 comm="glusterfs" name="d69d1eca-6c33-4032-8285-5bef363414ae.sock" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1386099432.139:174): arch=c000003e syscall=49 success=yes exit=0 a0=6 a1=7f6fa07e5088 a2=6e a3=7f6f9efa5700 items=0 ppid=9565 pid=9566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfs" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1386099437.115:175): avc:  denied  { write } for  pid=610 comm="glusterd" name="d69d1eca-6c33-4032-8285-5bef363414ae.sock" dev="dm-0" ino=667978 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1386099437.115:175): arch=c000003e syscall=42 success=yes exit=0 a0=16 a1=7f49436704f0 a2=6e a3=7f4941c5eec0 items=0 ppid=1 pid=610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1386101336.724:185): avc:  denied  { unlink } for  pid=9569 comm="glusterfs" name="d69d1eca-6c33-4032-8285-5bef363414ae.sock" dev="dm-0" ino=667978 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:glusterd_var_lib_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1386101336.724:185): arch=c000003e syscall=87 success=yes exit=0 a0=7f6fa07d48c0 a1=7f6f9d01ba70 a2=7f6fa0800550 a3=529e3a58 items=0 ppid=1 pid=9569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="glusterfs" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)

I will upload the full audit log that covers these failures shortly.
Comment 30 Brian Foster 2013-12-03 15:56:52 EST
Created attachment 832291 [details]
audit.log with regard to comment #29

This is the audit log that covers the testing described in comment #29 (mkfifo tests and gluster rebalance). This testing started with selinux in enforcing mode. Permissive mode was subsequently enabled to capture the complete content associated with the particular test(s).
Comment 31 Daniel Walsh 2013-12-04 09:14:32 EST
Miroslav, the policy has the following commented out:

#manage_sock_files_pattern(glusterd_t, glusterd_var_lib_t, glusterd_var_lib_t)


Any idea why?
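For reference, that macro expands (roughly) to the rules that would cover the sock_file denials in comment #29:

```
# rough expansion of manage_sock_files_pattern(glusterd_t, glusterd_var_lib_t, glusterd_var_lib_t)
allow glusterd_t glusterd_var_lib_t:dir { open read getattr lock search ioctl add_name remove_name write };
allow glusterd_t glusterd_var_lib_t:sock_file { create open getattr setattr read write rename link unlink };
```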
Comment 32 Miroslav Grepl 2013-12-04 09:54:18 EST
No idea. 

Brian,
shouldn't we see

glusterd_brick_t

?
Comment 33 Miroslav Grepl 2013-12-04 09:55:04 EST
(In reply to Miroslav Grepl from comment #32)
> No idea. 
> 
> Brian,
> shouldn't we see
> 
> glusterd_brick_t
> 
> ?

Ah, I see it in comment #30.
Comment 34 Brian Foster 2014-01-02 09:33:21 EST
Created attachment 844575 [details]
Latest audit log for gluster core tests

I've been away from this for a couple weeks so I've run through the core tests with the latest rawhide bits:

selinux-policy-3.13.1-10.fc21.noarch
glusterfs-3.4.2-0.1.qa5.fc21.x86_64

The attached log contains the AVCs encountered. At a high level:

- execmem/execstack issues from various operations, such as restart, peer probe, volume start, add-brick, etc. These also appear to occur via a few other executables, such as ssh, python, rpc.statd. (new issue)

- fifo file issues via the posix fs test suite (known issue)

- sock file issues via rebalance (known issue)

This testing was performed using the glusterd_brick_t brick label. I think we can get this bug closed and move on to getting the policy backported if we can get these last few issues resolved. Thanks.

NOTE: Also FWIW, the latest rawhide couldn't run dhclient in enforcing mode so I ran the entire test in permissive mode. The (somewhat unrelated to gluster) data should also be included in the audit log, if that is of any interest to the selinux folks.
Comment 35 Brian Foster 2014-01-02 09:38:49 EST
Created attachment 844578 [details]
VM 2 log (associated with comment #34)
Comment 36 Daniel Walsh 2014-01-02 12:29:52 EST
The execmem/execstack denials seem to be a Kerberos issue:

 https://bugzilla.redhat.com/show_bug.cgi?id=1047947


Commit be75c8399eef7131f291e95048b843057b06b633 in git fixes the other issues.
Comment 37 Brian Foster 2014-01-08 12:24:16 EST
I've tested the latest policy bits (3.13.1-11.fc21) and the last documented AVCs appear to be addressed. I hit some denials when using a couple of automatic labelling scripts I'm proposing for gluster, but I'll file a separate bug for that. I also noticed a couple of denials from sshd when running geo-replication. It's not clear to me whether these are general problems or not; here is the audit.log output:

type=AVC msg=audit(1389198980.498:289): avc:  denied  { dyntransition } for  pid=29573 comm="sshd" scontext=system_u:system_r:init_t:s0 tcontext=system_u:system_r:sshd_net_t:s0 tclass=process
type=SYSCALL msg=audit(1389198980.498:289): arch=c000003e syscall=1 success=yes exit=32 a0=6 a1=7f67c2044680 a2=20 a3=7fff43178da0 items=0 ppid=29572 pid=29573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:sshd_net_t:s0 key=(null)
...
type=AVC msg=audit(1389198980.744:301): avc:  denied  { dyntransition } for  pid=29572 comm="sshd" scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=SYSCALL msg=audit(1389198980.744:301): arch=c000003e syscall=1 success=yes exit=42 a0=5 a1=7f67c2064a70 a2=2a a3=666e6f636e753a72 items=0 ppid=715 pid=29572 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=9 tty=(none) comm="sshd" exe="/usr/sbin/sshd" subj=unconfined_u:unconfined_r:unconfined_t:s0 key=(null)
Comment 38 Miroslav Grepl 2014-01-10 02:36:06 EST
http://koji.fedoraproject.org/koji/taskinfo?taskID=6381693

the latest rawhide scratch build.
Comment 39 Brian Foster 2014-01-10 09:42:07 EST
(In reply to Miroslav Grepl from comment #38)
> http://koji.fedoraproject.org/koji/taskinfo?taskID=6381693
> 
> the latest rawhide scratch build.

Thanks! This appears to fix all of the AVCs. The glusterfs executable works fine again and I'm not hitting anything during geo-replication. This bug can probably be closed once this policy build is released. Thanks again.
