Bug 1467765

Summary: With live migration configured on shared NFS storage, nova instance creation only works if SELinux is in permissive mode
Product: Red Hat OpenStack
Component: openstack-selinux
Version: 10.0 (Newton)
Target Release: 10.0 (Newton)
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: high
Priority: high
Reporter: Michael Jarrett <mjarrett>
Assignee: Lon Hohberger <lhh>
QA Contact: Udi Shkalim <ushkalim>
CC: ftaylor, mburns, mgrepl, mjarrett, mweetman, owalsh, rhallise, rlocke, sclewis, srevivo
Keywords: ZStream
Target Milestone: ---
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2017-08-04 12:23:56 UTC

Description Michael Jarrett 2017-07-05 04:34:30 UTC
Description of problem:
When live migration is configured using NFS shared storage, nova cannot create an instance unless SELinux is in permissive mode on the compute nodes or a local policy module is installed, e.g.:
ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
semodule -i my-virtlogd.pp
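
For reference, the policy module generated by the audit2allow command above would likely look something like the following (illustrative only, not captured from this system; the exact require block depends on the denials present in audit.log):

# my-virtlogd.te (sketch of expected audit2allow output)
module my-virtlogd 1.0;

require {
        type virtlogd_t;
        type nfs_t;
        class dir search;
}

#============= virtlogd_t ==============
allow virtlogd_t nfs_t:dir search;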

openstack-selinux 0.7.13-3.el7ost

How reproducible:


Steps to Reproduce:
1. Configure live migration using NFS shared storage, following the "Red Hat OpenStack Platform 10 Migrating Instances" documentation (an example mount is sketched after these steps).
2. Create a nova instance.
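
As referenced in step 1, the shared instance store is an NFS export mounted at /var/lib/nova/instances on each compute node, roughly as follows (server name and export path are placeholders, not taken from this report):

# mount -t nfs server.example.com:/export/nova_instances /var/lib/nova/instances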


Actual results:
Instance creation fails and displays "Error creating server:".

Expected results:
Instance is created and becomes active.

Additional info:
After the instance fails, sealert -a /var/log/audit/audit.log displays the following:
SELinux is preventing /usr/sbin/virtlogd from search access on the directory /var/lib/nova/instances.

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that virtlogd should be allowed search access on the instances directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -i my-virtlogd.pp


Additional Information:
Source Context                system_u:system_r:virtlogd_t:s0-s0:c0.c1023
Target Context                system_u:object_r:nfs_t:s0
Target Objects                /var/lib/nova/instances [ dir ]
Source                        virtlogd
Source Path                   /usr/sbin/virtlogd
Port                          <Unknown>
Host                          <Unknown>
Source RPM Packages           libvirt-daemon-2.0.0-10.el7_3.4.x86_64
Target RPM Packages           openstack-nova-common-14.0.3-8.el7ost.noarch
Policy RPM                    selinux-policy-3.13.1-102.el7_3.13.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     overcloud-compute-1.localdomain
Platform                      Linux overcloud-compute-1.localdomain
                              3.10.0-514.6.2.el7.x86_64 #1 SMP Fri Feb 17
                              19:21:31 EST 2017 x86_64 x86_64
Alert Count                   1
First Seen                    2017-07-04 05:07:23 UTC
Last Seen                     2017-07-04 05:07:23 UTC
Local ID                      d0015c00-a56c-462b-8a4c-97f38a2e47be

Raw Audit Messages
type=AVC msg=audit(1499144843.894:1252): avc:  denied  { search } for  pid=85319 comm="virtlogd" name="/" dev="0:42" ino=1379158 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=dir


type=SYSCALL msg=audit(1499144843.894:1252): arch=x86_64 syscall=open success=no exit=EACCES a0=7fc888000d30 a1=80441 a2=180 a3=7fc888000d90 items=0 ppid=1 pid=85319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=virtlogd exe=/usr/sbin/virtlogd subj=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 key=(null)

Hash: virtlogd,virtlogd_t,nfs_t,dir,search
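
The nfs_t target context reported in the alert can be confirmed directly on a compute node by checking the label of the instances directory on the unfixed mount:

# ls -Zd /var/lib/nova/instances
(expected to show system_u:object_r:nfs_t:s0 on the directory, matching the Target Context above)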

Importing the module per the instructions in the alert fixes the issue, and an instance is created without error. The following is the nova-conductor.log from before the SELinux fix:

2017-07-04 05:07:29.552 286042 ERROR nova.scheduler.utils [req-b0bc5119-2e2f-423c-aae4-ce3df5426bfd 95cc191596d246698aad16ef770f66f8 d8bdac2a38624261aac2939dee28514f - - -] [instance: 1d0debd2-0a3a-463f-9acc-3abf2a31b777] Error from last host: overcloud-compute-1.localdomain (node overcloud-compute-1.localdomain): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1779, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1976, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 1d0debd2-0a3a-463f-9acc-3abf2a31b777 was re-scheduled: Unable to open file: /var/lib/nova/instances/1d0debd2-0a3a-463f-9acc-3abf2a31b777/console.log: Permission denied\n']
2017-07-04 05:10:47.709 286037 ERROR nova.scheduler.utils [req-b0bc5119-2e2f-423c-aae4-ce3df5426bfd 95cc191596d246698aad16ef770f66f8 d8bdac2a38624261aac2939dee28514f - - -] [instance: 1d0debd2-0a3a-463f-9acc-3abf2a31b777] Error from last host: overcloud-compute-0.localdomain (node overcloud-compute-0.localdomain): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1779, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1976, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 1d0debd2-0a3a-463f-9acc-3abf2a31b777 was re-scheduled: Unable to open file: /var/lib/nova/instances/1d0debd2-0a3a-463f-9acc-3abf2a31b777/console.log: Permission denied\n']
2017-07-04 05:10:48.534 286037 WARNING nova.scheduler.utils [req-b0bc5119-2e2f-423c-aae4-ce3df5426bfd 95cc191596d246698aad16ef770f66f8 d8bdac2a38624261aac2939dee28514f - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 199, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2017-07-04 05:10:48.543 286037 WARNING nova.scheduler.utils [req-b0bc5119-2e2f-423c-aae4-ce3df5426bfd 95cc191596d246698aad16ef770f66f8 d8bdac2a38624261aac2939dee28514f - - -] [instance: 1d0debd2-0a3a-463f-9acc-3abf2a31b777] Setting instance to ERROR state.

Comment 1 Lon Hohberger 2017-07-11 14:51:31 UTC
I believe you need to set a context when mounting, here:

# mount -o context='system_u:object_r:nova_var_lib_t:s0' server:/mntpoint /var/lib/nova/instances
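
To make this persist across reboots, the same context option can presumably be added to /etc/fstab on each compute node (server and export path are placeholders, matching the mount command above):

server:/mntpoint /var/lib/nova/instances nfs context="system_u:object_r:nova_var_lib_t:s0" 0 0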

Comment 2 Mike Burns 2017-08-04 12:23:56 UTC
Closing as NOTABUG.  If Lon's comment 1 is not true, then please reopen.

Comment 3 Red Hat Bugzilla 2023-09-14 04:00:36 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days