Bug 1368575 - GlusterFS mount point fails to mount, can't continue past 2D
Summary: GlusterFS mount point fails to mount, can't continue past 2D
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Quickstart Cloud Installer
Classification: Red Hat
Component: WebUI
Version: 1.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ga
Target Release: 1.0
Assignee: Derek Whatley
QA Contact: Sudhir Mallamprabhakara
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-19 20:21 UTC by Dylan Murray
Modified: 2021-01-19 14:42 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-19 14:42:02 UTC
Target Upstream Version:




Links:
System ID: Red Hat Product Errata RHEA-2016:1862
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Quickstart Installer 1.0
Last Updated: 2016-09-13 20:18:48 UTC

Description Dylan Murray 2016-08-19 20:21:35 UTC
Description of problem:
SELinux violations are causing the GlusterFS mount to fail.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Launch a deployment with the latest ISO
2. Select GlusterFS to be used with RHV storage
3. Enter the GlusterFS mount point information and click Next

Actual results:
"Failed to mount storage domain 'my_storage'. Please make sure it is a valid mount point"

Expected results:
The mount should succeed and allow the user to continue.

Additional info:

From /var/log/audit/audit.log:

type=SYSCALL msg=audit(1471633806.894:2317): arch=c000003e syscall=42 success=no exit=-13 a0=3 a1=7ffe8a34b870 a2=10 a3=7ffe8a34b870 items=0 ppid=994 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="umount.nfs" exe="/usr/sbin/mount.nfs" subj=system_u:system_r:passenger_t:s0 key=(null)
type=AVC msg=audit(1471633806.911:2318): avc:  denied  { name_connect } for  pid=997 comm="mount.nfs" dest=38465 scontext=system_u:system_r:passenger_t:s0 tcontext=system_u:object_r:gluster_port_t:s0 tclass=tcp_socket
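
The denial shows passenger_t being blocked from connecting to gluster_port_t (TCP port 38465). As a hedged sketch of a temporary local workaround, a policy module could be generated straight from these denials (assuming audit2allow and semodule from the policycoreutils tooling are installed):

# Build a local policy module from the passenger_t denials in the audit log
grep passenger_t /var/log/audit/audit.log | audit2allow -M passenger_gluster
# Load it; it can be removed later with "semodule -r passenger_gluster"
semodule -i passenger_gluster.pp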

Comment 2 bmorriso 2016-08-24 15:27:24 UTC
I ran into this as well on the QCI-1.0-RHEL-7-20160819.t.0 compose

Setting the Fusor server's SELinux mode to "Permissive" appeared to work around the issue and allowed the deployment wizard to continue past page 2D.
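
For reference, a minimal sketch of that workaround, assuming shell access to the Fusor server (setenforce does not persist across reboots):

# Check the current SELinux mode, then switch to permissive
getenforce
setenforce 0   # revert with: setenforce 1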

Comment 3 Dylan Murray 2016-08-24 18:12:18 UTC
https://github.com/fusor/fusor-selinux/pull/31

Comment 4 Dylan Murray 2016-08-25 15:04:12 UTC
https://github.com/fusor/fusor-selinux/pull/31

Comment 5 Dylan Murray 2016-08-26 14:46:49 UTC
This fix made it into QCI-1.0-RHEL-7-20160825.t.0

Comment 7 Dylan Murray 2016-08-26 20:38:14 UTC
Correct link: https://github.com/fusor/fusor-selinux/pull/33. I posted the wrong link previously; marking it private.

Comment 8 bmorriso 2016-09-01 17:55:42 UTC
Verified with compose QCI-1.0-RHEL-7-20160831.t.1

I was able to mount a Gluster share on the RHV Storage page without error and also did not receive any error messages on the Deployment Summary page.

Comment 10 errata-xmlrpc 2016-09-13 16:37:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1862

Comment 11 Antonin Pagac 2016-12-20 16:45:09 UTC
Hit this with QCI-1.1-RHEL-7-20161215.t.0 upgraded from QCI 1.0.

From production.log:

"2016-12-20 08:58:36 [app] [E] Error running command: sudo safe-mount.sh '4' 'rhv' '10.8.58.116' 'gv0' 'nfs'
2016-12-20 08:58:36 [app] [E] Status code: 1
2016-12-20 08:58:36 [app] [E] Command output: sudo: unable to send audit message: Permission denied
 | mount.nfs: Connection timed out
 | Failed to mount gv0 share"


In permissive mode it also fails:

"2016-12-20 10:24:41 [app] [I] Completed 200 OK in 125269ms (Views: 5.1ms | ActiveRecord: 1.8ms)
2016-12-20 10:24:41 [app] [E] Error running command: sudo safe-mount.sh '4' 'rhv' '10.8.58.116' 'gv0' 'nfs'
2016-12-20 10:24:41 [app] [E] Status code: 1
2016-12-20 10:24:41 [app] [E] Command output: mount.nfs: Connection timed out
 | Failed to mount gv0 share"


I'm able to manually mount the share on the fusor machine with:
"mount -t glusterfs <IP>:/gv0 /mnt/"

The 'Gluster' option is selected in the WebUI when trying to mount, but it seems to use 'nfs' instead.
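
For comparison, a hedged sanity check using the same host and volume as above (Gluster's built-in NFS server speaks NFSv3 only, so vers=3 is assumed here): the FUSE mount succeeds while the NFS path times out, which points at the mount type being wrong rather than at the network.

# FUSE mount works:
mount -t glusterfs 10.8.58.116:/gv0 /mnt && umount /mnt
# NFS mount times out, matching the log above:
mount -t nfs -o vers=3 10.8.58.116:/gv0 /mnt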

Comment 12 Derek Whatley 2016-12-20 18:52:41 UTC
It seems the UI was providing 'glusterfs' as the mount type while the back-end was looking for the string 'GFS', and otherwise defaulting to NFS as the mount type.

https://github.com/fusor/fusor/pull/1315
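
For illustration only, a sketch of the kind of dispatch defect described above; the actual safe-mount.sh source is not shown in this bug, so the variable names and structure here are hypothetical:

# Hypothetical: the back-end only recognizes "GFS", so the UI's
# "glusterfs" value falls through to the NFS default.
case "$MOUNT_TYPE" in
  GFS) mount -t glusterfs "$HOST:/$SHARE" "$MOUNT_DIR" ;;
  *)   mount -t nfs "$HOST:/$SHARE" "$MOUNT_DIR" ;;  # reached for "glusterfs"
esac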

Comment 13 Dylan Murray 2017-01-11 15:31:15 UTC
This fix is in QCI-1.1-RHEL-7-20170111.t.1

Comment 14 James Olin Oden 2017-01-11 18:31:18 UTC
Compose: QCI-1.1-RHEL-7-20170106.t.0

I could enter Gluster and NFS mount info on the RHV Storage screen and on the OCP screen where mount information may be entered.

