Bug 1375546

Summary: Failed to start VM on the host with the sanlock-3.2.4-3.el7_2.x86_64
Product: Red Hat Enterprise Linux 7
Reporter: Artyom <alukiano>
Component: selinux-policy
Assignee: Miroslav Grepl <mgrepl>
Status: CLOSED NOTABUG
QA Contact: BaseOS QE Security Team <qe-baseos-security>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 7.2
CC: agk, alukiano, cluster-maint, eedri, lvrabec, mgrepl, mmalik, plautrba, pvrabec, ssekidde
Target Milestone: rc
Keywords: Regression
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-14 15:33:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments: logs (flags: none)

Description Artyom 2016-09-13 11:10:21 UTC
Created attachment 1200469 [details]
logs

Description of problem:
Failed to start VM on the host with the sanlock-3.2.4-3.el7_2.x86_64
In /var/log/messages I can see:
Sep 13 13:21:39 lynx24 kernel: type=1400 audit(1473762099.264:334898): avc:  denied  { read write } for  pid=72305 comm="sanlock" name="ids" dev="fuse" ino=11628941369935594429 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
Sep 13 13:21:39 lynx24 kernel: type=1400 audit(1473762099.264:334899): avc:  denied  { open } for  pid=72305 comm="sanlock" path="/rhev/data-center/mnt/glusterSD/10.35.65.26:_GE5__volume03/af2c7ffc-24a2-4523-ac74-0cd9e80dc81b/dom_md/ids" dev="fuse" ino=11628941369935594429 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
Sep 13 13:21:39 lynx24 kernel: type=1400 audit(1473762099.264:334900): avc:  denied  { getattr } for  pid=72305 comm="sanlock" path="/rhev/data-center/mnt/glusterSD/10.35.65.26:_GE5__volume03/af2c7ffc-24a2-4523-ac74-0cd9e80dc81b/dom_md/ids" dev="fuse" ino=11628941369935594429 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:fusefs_t:s0 tclass=file
Sep 13 13:21:39 lynx24 kernel: type=1400 audit(1473762099.398:334901): avc:  denied  { read write } for  pid=72307 comm="sanlock" name="ids" dev="0:40" ino=67639752 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=file
Sep 13 13:21:39 lynx24 kernel: type=1400 audit(1473762099.398:334902): avc:  denied  { open } for  pid=72307 comm="sanlock" path="/rhev/data-center/mnt/10.35.118.113:_nas01_ge__5__nfs__2/6fa8735a-7970-4c9f-9f04-566c662a21b6/dom_md/ids" dev="0:40" ino=67639752 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=file
Sep 13 13:21:39 lynx24 kernel: type=1400 audit(1473762099.398:334903): avc:  denied  { getattr } for  pid=72307 comm="sanlock" path="/rhev/data-center/mnt/10.35.118.113:_nas01_ge__5__nfs__2/6fa8735a-7970-4c9f-9f04-566c662a21b6/dom_md/ids" dev="0:40" ino=67639752 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=file
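
All six denials share scontext sanlock_t, with tcontext fusefs_t (the GlusterFS mount) or nfs_t (the NFS mount). A first diagnostic step is to pull the full denial records and check the sanlock SELinux booleans; the commands below are a sketch, assuming the sanlock_use_nfs and sanlock_use_fusefs booleans shipped with the RHEL 7 selinux-policy:

# list recent AVC denials for the sanlock process
ausearch -m avc -c sanlock -ts recent

# check the booleans that let sanlock touch NFS- and FUSE-backed files
# (both are expected to be off by default)
getsebool sanlock_use_nfs sanlock_use_fusefs

# if they are off, enabling them persistently should cover the
# nfs_t/fusefs_t denials above
setsebool -P sanlock_use_nfs on
setsebool -P sanlock_use_fusefs on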

Version-Release number of selected component (if applicable):
3.10.0-327.22.2.el7.x86_64
sanlock-python-3.2.4-3.el7_2.x86_64
libvirt-lock-sanlock-1.2.17-13.el7_2.5.x86_64
sanlock-3.2.4-3.el7_2.x86_64
sanlock-lib-3.2.4-3.el7_2.x86_64
vdsm-4.18.13-1.el7ev.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Start a VM on a host with sanlock-3.2.4-3.el7_2.x86_64 installed

Actual results:
The VM fails to start with the following error in vdsm.log:
Thread-70826::ERROR::2016-09-13 13:17:26,773::vm::765::virt.vm::(_startUnderlyingVm) vmId=`53c725ee-78b0-43b4-a171-d6bdfcca4f8e`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 706, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1996, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: Process exited prior to exec: libvirt: Lock Driver error : Failed to open socket to sanlock daemon: Permission denied

Thread-70826::INFO::2016-09-13 13:17:26,775::vm::1308::virt.vm::(setDownStatus) vmId=`53c725ee-78b0-43b4-a171-d6bdfcca4f8e`::Changed state to Down: internal error: Process exited prior to exec: libvirt: Lock Driver error : Failed to open socket to sanlock daemon: Permission denied
 (code=1)
Thread-70826::INFO::2016-09-13 13:17:26,775::guestagent::415::virt.vm::(stop) vmId=`53c725ee-78b0-43b4-a171-d6bdfcca4f8e`::Stopping connection
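
The "Failed to open socket to sanlock daemon: Permission denied" error is raised by libvirt's sanlock lock driver before the domain process is exec'd. A quick sanity check (a sketch; the socket path and label are assumptions based on the stock sanlock packaging) is to confirm the daemon is up and that its control socket carries the expected context:

# confirm the sanlock daemon is running
systemctl status sanlock

# the control socket should exist and carry a sanlock label
# (sanlock_var_run_t on a correctly labeled host)
ls -lZ /var/run/sanlock/sanlock.sock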

Expected results:
The VM starts without any errors.

Additional info:

Comment 1 David Teigland 2016-09-13 15:21:29 UTC
If this is a regression and it worked before, then I'm guessing that selinux is misconfigured. But if this is an attempt to use sanlock in a new way, e.g. on storage it has not been used with before, then perhaps the selinux policy has not been updated to allow that kind of access.
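
For the second case, one way to see exactly which policy rules the logged denials would require (a diagnostic sketch, not a recommended permanent fix; "sanlock_local" is an arbitrary module name chosen here) is to feed them through audit2allow:

# print the allow rules implied by the logged denials
ausearch -m avc -c sanlock | audit2allow

# optionally build and load a local policy module from them
ausearch -m avc -c sanlock | audit2allow -M sanlock_local
semodule -i sanlock_local.pp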

Comment 2 Artyom 2016-09-14 08:42:08 UTC
I tried to run the VM with the host's SELinux in permissive mode and the VM started successfully, so I believe the bug relates to the second scenario.
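
For reference, the permissive-mode check described above amounts to the following (a sketch; setenforce only changes the runtime mode and does not persist across reboots):

# record the current mode, then switch to permissive
getenforce
setenforce 0

# start the VM from the engine; with SELinux permissive it runs

# switch back to enforcing afterwards
setenforce 1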