Bug 1566609 - [GSS][RFE] Add security policy to client mounts of glusterfs volumes (CNS)
Summary: [GSS][RFE] Add security policy to client mounts of glusterfs volumes (CNS)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: CNS-deployment
Version: ocs-3.11
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Patric Uebele
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks: 1622458
 
Reported: 2018-04-12 15:28 UTC by Cal Calhoun
Modified: 2019-11-06 14:17 UTC (History)
21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-06 14:17:34 UTC
Target Upstream Version:



Description Cal Calhoun 2018-04-12 15:28:27 UTC
Description of problem:

  Customer would like some kind of security policy applied to the gluster volumes created by CNS. Currently, a privileged pod in project A is able to mount every gluster volume present in the OCP cluster, regardless of which project it belongs to. To do so, they only need to know the gluster volume name and mount it (root user and glusterfs-fuse required), as in the following traces:

[root@ocp-master ~]# oc get pod
NAME                     READY     STATUS    RESTARTS   AGE
httpd-<pod>   1/1       Running   1          12m

[root ~]# oc get pod httpd-<pod> -o jsonpath={.spec.containers[*].securityContext}
map[privileged:true runAsUser:0]

[root ~]# oc rsh httpd-<pod>

sh-4.2# id                                          
uid=0(root) gid=0(root) groups=0(root),2000         

sh-4.2# df                                          
Filesystem                                                                      1K-blocks    Used Available Use% Mounted on                                                                                        
overlay                                                                          10474476 6583872   3890604  63% /                                                                                                 
tmpfs                                                                               65536       0     65536   0% /dev                                                                                              
shm                                                                                 65536       0     65536   0% /dev/shm                                                                                          
tmpfs                                                                             1940792       0   1940792   0% /sys/fs/cgroup                                                                                    
tmpfs                                                                             1940792    2660   1938132   1% /run/secrets                                                                                      
192.168.xxx.xxx:test-vol_<volid>   1039616   33408   1006208   4% /mnt                                                                                              
/dev/vda1                                                                        10474476 6583872   3890604  63% /etc/hosts                                                                                        
tmpfs                                                                             1940792      16   1940776   1% /run/secrets/kubernetes.io/serviceaccount                                                         

sh-4.2# mount -t glusterfs 192.168.xxx.xxx:test-vol_<volid> /media/                                                                                                

sh-4.2# echo "new" > /media/new_file                                                                                                                                                                               

sh-4.2# cat /media/secret 
Top secret

sh-4.2# df -h  /media/
Filesystem                                                                        Size  Used Avail Use% Mounted on
192.168.xxx.xxx:test-vol_<volid> 1016M   33M  983M   4% /media

  Besides mounting the volume to manipulate data, they can also access other volumes without the mount command (and therefore without a privileged container) by using containers with tools like glfs-cli (https://github.com/gluster/glusterfs-coreutils).

  They would like a way to mitigate these security gaps, which are critical in their environment.
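For illustration, a gfapi-based client such as glusterfs-coreutils can read and write volume contents with no mount and no privileged container; all it needs is network reachability and the volume name. The host IP, volume name, and file paths below are placeholders, not values from this report:

```shell
# Hypothetical sketch: direct volume access via glusterfs-coreutils (gfapi).
# No mount(8), no CAP_SYS_ADMIN, no privileged securityContext required.
# Host, volume, and paths are placeholders.
gfcat glfs://192.168.0.10/test-vol/secret            # read a file from the volume
gfcp ./payload glfs://192.168.0.10/test-vol/new_file # write a file into the volume
gfls glfs://192.168.0.10/test-vol/                   # list the volume root
```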

Comment 7 John Strunk 2018-10-11 13:09:23 UTC
This can be mitigated by enabling Gluster's management and data TLS. By doing so, only processes w/ access to the keys will be able to mount (or use gfapi-based tools).

In the case of PVs, mount is handled by kubelet, so the TLS keys need not be exposed to any pods, only the kubelet itself.
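For reference, a sketch of the mitigation described above: enabling management and data-path (I/O) TLS on a volume. The volume name and the CN list are placeholders; certificates (glusterfs.pem, glusterfs.key, glusterfs.ca) must already be in place under /etc/ssl/ on the servers and on every node allowed to mount:

```shell
# On every server (and every node permitted to mount): enable management TLS.
# glusterd switches to TLS when this marker file exists.
touch /var/lib/glusterd/secure-access

# Enable TLS on the I/O (data) path for one volume (volume name is a placeholder).
gluster volume set test-vol client.ssl on
gluster volume set test-vol server.ssl on

# Restrict mounts to certificates with these Common Names (placeholder list).
gluster volume set test-vol auth.ssl-allow 'kubelet-node1,kubelet-node2'
```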

Comment 8 Raul Sevilla 2018-11-07 13:49:49 UTC
Gluster performance will be impacted if TLS encryption is enabled; the customer is looking for another solution without this performance impact.

Comment 9 Yaniv Kaul 2019-01-02 12:05:23 UTC
(In reply to rsevilla from comment #8)
> Gluster performance will be impacted if enabling TLS encryption, customer is
> looking for another solution without this performance impact.

I wonder if we can enable it with just authentication and not encryption, and whether we can enable it only on the mgmt path.

Comment 10 John Strunk 2019-01-02 16:15:10 UTC
Enabling on the mgmt path will stop clients w/o the key from retrieving the volfile from glusterd, but I don't think it will prevent direct access to the bricks.
Preventing access to glusterd will increase the level of difficulty, but the info in the volfile can be guessed (via trial and error) such that a direct brick connection could still be established.

Atin, Can you confirm if management TLS affects whether the client <=> brick communication link can be established?

Comment 11 Atin Mukherjee 2019-01-04 04:17:03 UTC
To the best of my knowledge, management TLS will not add any security layer between client and brick communication unless client/server SSL is enabled.

Milind/Mohit - Looping you folks in for your comments. It might be worth giving a couple of lines about mgmt and client/server SSL, their functionality, and how the flow is established.

Comment 14 John Strunk 2019-01-04 20:28:03 UTC
Mohit - By "ACL based option", do you mean the allow/deny volume options?

I don't think this will be sufficient for containerized deployments as gluster is currently deployed using the node's host network, meaning traffic that arrives at the server will have exited the SDN and appear to be originating from the client node. There would be no way to distinguish traffic originating from a legitimate fuse mount (started by kubelet) from requests sent by a rogue pod... they'd have the same IP.
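For context, the allow/deny volume options referenced above are gluster's auth.allow/auth.reject settings. A minimal sketch (volume name and subnet are placeholders) shows both how they are set and why they fall short here:

```shell
# Reject all clients by default, then allow only the node subnet
# (volume name and subnet are placeholders).
gluster volume set test-vol auth.reject '*'
gluster volume set test-vol auth.allow '192.168.100.*'
# Limitation noted above: with gluster on the node's host network, a
# kubelet-initiated fuse mount and a rogue pod on the same node both
# present the node's IP, so IP-based allow/deny cannot tell them apart.
```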

Comment 29 Pedro Amoedo 2019-07-01 11:06:29 UTC
FYI, I've requested a documentation review via BZ#1725731 suggesting to include an SSL encryption recommendation note.

Regards.

