Bug 1406470 - SELinux "preventing rpc.mountd from ioctl access on the blk_file /dev/rbd1" when sharing rbd device over NFS and mounting as v3 on a client
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.3
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 7.3
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks: 1477664
 
Reported: 2016-12-20 15:43 UTC by Kyle Squizzato
Modified: 2020-06-11 13:09 UTC (History)
14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 09:59:46 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2818611 None None None 2016-12-20 15:45:56 UTC
Red Hat Product Errata RHBA-2018:3111 None None None 2018-10-30 10:00:50 UTC

Description Kyle Squizzato 2016-12-20 15:43:50 UTC
Description of problem:
Customer is attempting to export an rbd device formatted with xfs over NFSv3 but SELinux is denying the request with: 

~~~
SELinux is preventing /usr/sbin/rpc.mountd from ioctl access on the blk_file /dev/rbd1.

*****  Plugin device (91.4 confidence) suggests   ****************************

If you want to allow rpc.mountd to have ioctl access on the rbd1 blk_file
Then you need to change the label on /dev/rbd1 to a type of a similar device.
Do
# semanage fcontext -a -t SIMILAR_TYPE '/dev/rbd1'
# restorecon -v '/dev/rbd1'

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that rpc.mountd should be allowed ioctl access on the rbd1 blk_file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'rpc.mountd' --raw | audit2allow -M my-rpcmountd
# semodule -i my-rpcmountd.pp


Additional Information:
Source Context                system_u:system_r:nfsd_t:s0
Target Context                system_u:object_r:device_t:s0
Target Objects                /dev/rbd1 [ blk_file ]
Source                        rpc.mountd
Source Path                   /usr/sbin/rpc.mountd
Port                          <Unknown>
Host                          ceph-admin-01.foo.bar
Source RPM Packages           nfs-utils-1.3.0-0.21.el7_2.1.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.13.1-102.el7_3.4.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Permissive
Host Name                     ceph-admin-01.foo.bar
Platform                      Linux ceph-admin-01.foo.bar
                              3.10.0-327.28.3.el7.x86_64 #1 SMP Fri Aug 12
                              13:21:05 EDT 2016 x86_64 x86_64
Alert Count                   12
First Seen                    2016-12-02 15:15:04 CST
Last Seen                     2016-12-02 17:57:25 CST
Local ID                      72ba71b9-8e1a-414c-bc8c-797612f4af3a

Raw Audit Messages
type=AVC msg=audit(1480723045.27:207768): avc:  denied  { ioctl } for  pid=155606 comm="rpc.mountd" path="/dev/rbd1" dev="devtmpfs" ino=13674609 scontext=system_u:system_r:nfsd_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=blk_file


type=SYSCALL msg=audit(1480723045.27:207768): arch=x86_64 syscall=ioctl success=yes exit=0 a0=e a1=80081272 a2=7fe5c6ec8310 a3=7ffec9987170 items=0 ppid=1 pid=155606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=rpc.mountd exe=/usr/sbin/rpc.mountd subj=system_u:system_r:nfsd_t:s0 key=(null)

Hash: rpc.mountd,nfsd_t,device_t,blk_file,ioctl
~~~
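As a side note, the ioctl request number in the raw SYSCALL record (a1=80081272) can be decoded with Linux's standard _IOC bit layout; a minimal sketch (the interpretation below is my reading of the decoded fields, not stated in the report):

```shell
# Decode the ioctl request number from the SYSCALL record above (a1=80081272).
# Linux packs ioctl numbers as dir(2 bits) | size(14 bits) | type(8) | nr(8).
req=$((0x80081272))
dir=$(( (req >> 30) & 0x3 ))      # 2 == _IOC_READ
size=$(( (req >> 16) & 0x3FFF ))  # 8-byte argument
type=$(( (req >> 8) & 0xFF ))     # 18 == 0x12, the block-device ioctl group
nr=$(( req & 0xFF ))              # 114
echo "dir=$dir size=$size type=$type nr=$nr"
```

This prints dir=2 size=8 type=18 nr=114, which corresponds to _IOR(0x12, 114, size_t), i.e. BLKGETSIZE64 -- rpc.mountd querying the device size. Note success=yes in the record because the host is in Permissive mode.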

Disabling SELinux resolves this issue.  This occurs when an NFS client attempts to mount one of the rbd shares: 

Dec  2 17:57:25 ceph-admin-01 rpc.mountd[155606]: authenticated mount request from 172.20.0.10:616 for /storage/ceph/rstore/user/testuser2/default (/storage/ceph/rstore/user/testuser2/default)
Dec  2 17:57:25 ceph-admin-01 setroubleshoot: SELinux is preventing /usr/sbin/rpc.mountd from read access on the blk_file rbd1. For complete SELinux messages. run sealert -l 005961f1-ead1-4d9a-b6d6-a9f20788a80d

# mount
/dev/rbd0 on /storage/ceph/rstore/user/testuser/default type xfs (rw,relatime,seclabel,attr2,inode64,sunit=8192,swidth=8192,noquota)
/dev/rbd1 on /storage/ceph/rstore/user/testuser2/default type xfs (rw,relatime,seclabel,attr2,inode64,sunit=8192,swidth=8192,noquota)

------

Version-Release number of selected component (if applicable):
ceph-common-10.2.3-13.el7cp.x86_64
ceph-selinux-10.2.3-13.el7cp.x86_64

How reproducible:
I was not able to reproduce this, but I have provided steps for how it could be reproduced if you're willing to try; it is as simple as exporting an RBD device and restarting nfs-server.  For whatever reason I was able to configure an NFS export of an RBD-backed XFS filesystem just fine; details can be found in the attached case in comment #8.

Steps to Reproduce:

The steps to reproduce this require a Ceph cluster that is already configured to provide an rbd client.

1. Map an rbd to a ceph client which will be configured as your NFS server
2. Format the rbd with xfs
3. Mount the rbd to /test/nfsexport
4. Export the rbd device by appending the following to /etc/exports: 

 /test/nfsexport *(rw,sync)

5. Restart the nfs-server.service 

 # systemctl restart nfs-server 

6. Inspect the status of nfs-mountd.service to see if mountd actually started: 

 # systemctl status nfs-mountd.service

7. Mount the NFS share on an NFS client using -o vers=3 with: 

 # mount -t nfs -o vers=3 nfs-server.example.com:/test/nfsexport /mnt/nfs 

8. Wait to see if issue is reproduced on the NFS server.  The customer stated that the mount from the client side usually timed out/failed after about 15 seconds (as rpc.mountd was denied). 
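The steps above can be sketched as one script; the pool/image name (rbdpool/nfsimage) is an illustrative assumption, and the commands require root plus an already-configured Ceph cluster, so the function is defined but not invoked:

```shell
# Sketch of reproduction steps 1-6; names are examples, not from the report.
reproduce_rbd_nfs_export() {
    rbd map rbdpool/nfsimage                           # 1. map the image -> /dev/rbdN
    mkfs.xfs /dev/rbd0                                 # 2. format with xfs
    mkdir -p /test/nfsexport
    mount /dev/rbd0 /test/nfsexport                    # 3. mount the rbd
    echo '/test/nfsexport *(rw,sync)' >> /etc/exports  # 4. export it
    systemctl restart nfs-server                       # 5. restart nfs-server
    systemctl status nfs-mountd.service                # 6. check that mountd started
}
# Step 7, from a client: mount -t nfs -o vers=3 <server>:/test/nfsexport /mnt/nfs
```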

Actual results:
rpc.mountd is unable to start when SELinux is enabled; SELinux must be disabled for it to start

Expected results:
rpc.mountd is able to start when SELinux is enabled 

Additional info:
* The ceph-selinux package is installed on the customer's system, but from my research the ceph-selinux package doesn't really touch any of the rbd blk_files and primarily affects the daemons.
* The customer's selinux-policy versions match mine:

# rpm -qa | grep selinux-policy
selinux-policy-targeted-3.13.1-102.el7_3.4.noarch
selinux-policy-3.13.1-102.el7_3.4.noarch

Comment 1 Boris Ranto 2017-01-04 12:25:12 UTC
Hi,

thanks for the detailed report. It is true that we do not set a specific label for the rbd devices. This just means that they will inherit the label from their parent directory (i.e. they will be device_t). We could change this and provide a new label for these files so that people can fine-tune their policy specifically for the rbd files but we did not do it simply because we did not need it so far. Even if we do that, I do not think we want to allow the generic ioctl access to these files for rpc/nfs daemons in our policy (we are not the ones that need it).

If I understand it correctly, the customer is building an application on top of our product and the application itself needs this permission. Therefore, it is their application that should handle these SELinux permissions in its own SELinux policy -- they can easily generate the policy with audit2allow from the avc denials or they can simply manage these rules (semi-)manually as described in the 'Plugin device' suggestion.
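For reference, a local module generated from just the AVC shown in the description would look roughly like this (a hypothetical sketch of what audit2allow would emit, not a recommended default policy):

```
module my-rpcmountd 1.0;

require {
        type nfsd_t;
        type device_t;
        class blk_file ioctl;
}

#============= nfsd_t ==============
allow nfsd_t device_t:blk_file ioctl;
```

It would be built and installed with the ausearch | audit2allow -M / semodule -i commands quoted in the sealert output above.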

btw: The fact that you could not reproduce this can very well be related to the fact that you are running these commands from the terminal, i.e. in an unconfined mode, while they might be running them in a confined mode.

[1] probably we even should change the label at some point in the future -- we did not do it simply because we did not need it so far


Regards,
Boris

Comment 2 Kyle Squizzato 2017-01-04 22:46:09 UTC
(In reply to Boris Ranto from comment #1)
> If I understand it correctly, the customer is building an application on top
> of our product and the application itself needs this permission.

From my understanding, it's not any specialized application; it's just the kernel NFS that we ship, which has its own SELinux policy.

Comment 4 Boris Ranto 2017-01-25 10:06:26 UTC
(In reply to Kyle Squizzato from comment #2)
> (In reply to Boris Ranto from comment #1)
> > If I understand it correctly, the customer is building an application on top
> > of our product and the application itself needs this permission.
> 
> From my understanding, it's not any specialized application; it's just the
> kernel NFS that we ship, which has its own SELinux policy.

Well, kinda. SELinux does not allow the rpc.mountd daemon to access the files in /dev, but I believe this is by design. NFS usually exports regular directories (and files), not /dev (or files under /dev). The whole point of SELinux is to not give hacked daemons too much control over the system -- if we allowed this access by default, then a hacked rpc.mountd could access the block devices, which is definitely not good.

However, the customer seems to be building an application where exporting /dev is a common scenario. It is their application that requires this access and hence, it is that application that should handle it by a policy.

Comment 5 Kyle Squizzato 2017-01-31 17:14:19 UTC
(In reply to Boris Ranto from comment #4)
> (In reply to Kyle Squizzato from comment #2)
> > (In reply to Boris Ranto from comment #1)
> > > If I understand it correctly, the customer is building an application on top
> > > of our product and the application itself needs this permission.
> > 
> > From my understanding, it's not any specialized application; it's just the
> > kernel NFS that we ship, which has its own SELinux policy.
> 
> Well, kinda. SELinux does not allow the rpc.mountd daemon to access the
> files in /dev, but I believe this is by design. NFS usually exports
> regular directories (and files), not /dev (or files under /dev). The whole
> point of SELinux is to not give hacked daemons too much control over the
> system -- if we allowed this access by default, then a hacked rpc.mountd
> could access the block devices, which is definitely not good.
> 
> However, the customer seems to be building an application where exporting
> /dev is a common scenario. It is their application that requires this access
> and hence, it is that application that should handle it by a policy.

Well, in reality they're not really exporting /dev; they're utilizing rbd in the same way that you'd use a traditional block device.  They've mapped the rbd device to the NFS server as /dev/rbd0, formatted it with XFS, mounted it at a directory such as /mnt/export, and then exported /mnt/export.

That's what makes it odd (and my SELinux-fu is poor, so maybe I'm just missing something): I would expect NFS's services to be none the wiser and to treat it like any other block device.

Comment 6 Boris Ranto 2017-02-01 11:55:41 UTC
Now, that makes much more sense. It could be that this is caused by us not labelling the /dev/rbdX files properly -- they get the generic device_t label instead of fixed_disk_device_t (the type that most mountable devices like sd or dm use) or instead of a label of our own (probably preferred).

Will running something like this

# chcon -t fixed_disk_device_t /dev/rbd*

help you in this case (if it does, please note that it is just a temporary fix/workaround)?

In any case, since this is kernel/client-side and we ship the kernel/client bits in RHEL, we might need to fix it there (reassigning).
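If the chcon test helps, a persistent variant would go through semanage so the label survives relabels; a sketch, assuming fixed_disk_device_t is the right target type and that the regex below is how the devices should be matched:

```shell
# Hypothetical persistent version of the chcon workaround (requires root).
persist_rbd_label() {
    # The file-context pattern is an assumption; semanage patterns are regexes.
    semanage fcontext -a -t fixed_disk_device_t '/dev/rbd[0-9]+'
    restorecon -v /dev/rbd*   # re-apply labels to the existing device nodes
}
# Function is defined but not invoked here; a plain chcon is lost on the
# next full relabel, while a semanage fcontext rule persists.
```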

Comment 8 Milos Malik 2017-02-01 12:36:00 UTC
SELinux policy already contains correct file labeling patterns:

# rpm -qa selinux\*
selinux-policy-3.13.1-102.el7.noarch
selinux-policy-targeted-3.13.1-102.el7.noarch
# matchpathcon /dev/rdb0
/dev/rdb0	system_u:object_r:fixed_disk_device_t:s0
# matchpathcon /dev/rdb1
/dev/rdb1	system_u:object_r:fixed_disk_device_t:s0
# 

How are the /dev/rdb* devices created? Do they exist since the boot?

Comment 9 Boris Ranto 2017-02-06 22:45:06 UTC
@Milos: They are created at runtime through the CLI or via an init script at boot time (the script runs the rbdmap command, which reads /etc/ceph/rbdmap and maps the described devices). In both cases the devices are created well after the kernel has initialized.
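For context, an /etc/ceph/rbdmap entry looks roughly like this (pool, image, and keyring names are illustrative, not taken from the customer's setup):

```
# /etc/ceph/rbdmap: one "pool/image  options" entry per device to map at boot
rbdpool/nfsimage    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
```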

Comment 10 John-Paul Robinson 2017-02-07 17:56:24 UTC
Here is the matchpathcon info for some of the relevant files (I included /home for comparison; we are not currently sharing /home).


$ matchpathcon /dev/rbd0 /storage/ceph/rstore/user/testuser/default /dev/mapper/rhel_ceph--admin--01-home /home
/dev/rbd0	system_u:object_r:device_t:s0
/storage/ceph/rstore/user/testuser/default	system_u:object_r:default_t:s0
/dev/mapper/rhel_ceph--admin--01-home	system_u:object_r:device_t:s0
/home	system_u:object_r:home_root_t:s0

And the df:

Filesystem                             Size  Used Avail Use% Mounted on
/dev/rbd0                              1.0T   34M  1.0T   1% /storage/ceph/rstore/user/testuser/default
/dev/mapper/rhel_ceph--admin--01-home  1.1T  133M  1.1T   1% /home


And the relevant lines from mount:

nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/rhel_ceph--admin--01-home on /home type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/rbd0 on /storage/ceph/rstore/user/testuser/default type xfs (rw,relatime,seclabel,attr2,inode64,sunit=8192,swidth=8192,noquota)

The rbd device gets mapped at boot.  During creation it is formatted and then mounted.

The problem I see is only with the rpc.mountd TCP listener.  The UDP listener appears to start fine.  Here's the output from rpcinfo:

$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  10002  status
    100005    1   udp  10004  mountd
    100005    2   udp  10004  mountd
    100005    3   udp  10004  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100021    1   udp  10006  nlockmgr
    100021    3   udp  10006  nlockmgr
    100021    4   udp  10006  nlockmgr
    100021    1   tcp  10006  nlockmgr
    100021    3   tcp  10006  nlockmgr
    100021    4   tcp  10006  nlockmgr


I'll test the proposed workaround.

Comment 12 Lukas Vrabec 2017-09-05 12:18:46 UTC
[root@rhel7 ~]# sesearch -A -s nfsd_t -t fixed_disk_device_t -c blk_file 
Found 3 semantic av rules:
   allow nfsd_t fixed_disk_device_t : blk_file { ioctl read getattr lock open } ; 
   allow nfsd_t device_node : blk_file getattr ; 
   allow nfsd_t device_node : blk_file getattr ; 

[root@rhel7 ~]# matchpathcon /dev/rdb0
/dev/rdb0	system_u:object_r:fixed_disk_device_t:s0
[root@rhel7 ~]# matchpathcon /dev/rdb1
/dev/rdb1	system_u:object_r:fixed_disk_device_t:s0

This issue looks fixed from my POV. Could somebody test this?

Comment 14 Milos Malik 2018-02-22 07:05:25 UTC
# rpm -qa selinux\*
selinux-policy-targeted-3.13.1-189.el7.noarch
selinux-policy-3.13.1-189.el7.noarch
#

There is a typo in the file context patterns:

# matchpathcon /dev/rbd0
/dev/rbd0	system_u:object_r:device_t:s0
# matchpathcon /dev/rdb0
/dev/rdb0	system_u:object_r:fixed_disk_device_t:s0
#

Please note that "rbd" and "rdb" are different strings. This bug is NOT fixed.
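A quick local check of the two spellings against the pattern the policy should contain; the regex below is my assumption about the intended (fixed) file-context pattern:

```shell
# The real device is "rbd" (RADOS block device); "rdb" is the transposed typo.
pattern='^/dev/rbd[0-9]+$'
echo /dev/rbd0 | grep -Eq "$pattern" && echo "rbd0: matches"
echo /dev/rdb0 | grep -Eq "$pattern" || echo "rdb0: does not match"
```

This prints "rbd0: matches" and "rdb0: does not match", i.e. a correct pattern covers the real device name and not the typo'd one -- the inverse of what the shipped policy does.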

Comment 20 errata-xmlrpc 2018-10-30 09:59:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3111

