Description of problem:

[root@compute-1 heat-admin]# docker exec -ti nova_libvirt /bin/bash
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
()[root@compute-1 /]# ssh -i /etc/nova/migration/identity -p 2022 nova_migration touch /var/lib/nova/instances/foo.txt
/bin/bash: Permission denied
()[root@compute-1 /]# exit

[root@compute-1 heat-admin]# journalctl
...
Sep 26 10:29:17 compute-1 sshd[31798]: ssh_selinux_change_context: setcon system_u:system_r:sshd_net_t:s0 from system_u:system_r:spc_t:s0 failed with Permission denied [preauth]
Sep 26 10:29:18 compute-1 sshd[31798]: Accepted publickey for nova_migration from 172.17.1.20 port 52586 ssh2: RSA SHA256:8SAe9gdnBj+WiaEAE+3JnOD4IuP7w/fBHb7DtslAtjc
Sep 26 10:29:18 compute-1 systemd[1]: Created slice User Slice of saslauth.
Sep 26 10:29:18 compute-1 systemd[1]: Starting User Slice of saslauth.
Sep 26 10:29:18 compute-1 systemd-logind[738]: New session c14 of user saslauth.
Sep 26 10:29:18 compute-1 systemd[1]: Failed to start Session c14 of user saslauth.
Sep 26 10:29:18 compute-1 sshd[31798]: pam_systemd(sshd:session): Failed to create session: Start job for unit session-c14.scope failed with 'failed'
Sep 26 10:29:18 compute-1 sshd[31798]: pam_unix(sshd:session): session opened for user nova_migration by (uid=0)
Sep 26 10:29:18 compute-1 sshd[31801]: sshd_selinux_copy_context: setcon failed with Permission denied
Sep 26 10:29:18 compute-1 systemd-logind[738]: Removed session c14.
Sep 26 10:29:18 compute-1 systemd[1]: Removed slice User Slice of saslauth.
Sep 26 10:29:18 compute-1 systemd[1]: Stopping User Slice of saslauth.
Sep 26 10:29:18 compute-1 sshd[31801]: Received disconnect from 172.17.1.20 port 52586:11: disconnected by user
Sep 26 10:29:18 compute-1 sshd[31801]: Disconnected from 172.17.1.20 port 52586
Sep 26 10:29:18 compute-1 sshd[31798]: pam_unix(sshd:session): session closed for user nova_migration

[root@compute-1 heat-admin]# ausearch -m avc -ts recent
...
time->Tue Sep 26 10:29:17 2017
type=PROCTITLE msg=audit(1506421757.952:650): proctitle=737368643A205B61636365707465645D
type=SYSCALL msg=audit(1506421757.952:650): arch=c000003e syscall=1 success=no exit=-13 a0=6 a1=555cb401fd40 a2=20 a3=7ffd1f19acf0 items=0 ppid=31798 pid=31799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:spc_t:s0 key=(null)
type=AVC msg=audit(1506421757.952:650): avc: denied { dyntransition } for pid=31799 comm="sshd" scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:sshd_net_t:s0 tclass=process
----
time->Tue Sep 26 10:29:18 2017
type=PROCTITLE msg=audit(1506421758.266:666): proctitle=737368643A206E6F76615F6D6967726174696F6E205B707269765D
type=SYSCALL msg=audit(1506421758.266:666): arch=c000003e syscall=1 success=no exit=-13 a0=6 a1=555cb4021400 a2=2a a3=666e6f636e753a72 items=0 ppid=31798 pid=31801 auid=996 uid=996 gid=994 euid=996 suid=996 fsuid=996 egid=994 sgid=994 fsgid=994 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:spc_t:s0 key=(null)
type=AVC msg=audit(1506421758.266:666): avc: denied { dyntransition } for pid=31801 comm="sshd" scontext=system_u:system_r:spc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
----
time->Tue Sep 26 10:29:18 2017
type=PROCTITLE msg=audit(1506421758.308:669): proctitle=737368643A206E6F76615F6D6967726174696F6E406E6F747479
type=SYSCALL msg=audit(1506421758.308:669): arch=c000003e syscall=59 success=no exit=-13 a0=555cb40223e0 a1=7ffd1f19a3d0 a2=555cb40379c0 a3=6 items=0 ppid=31801 pid=31802 auid=996 uid=996 gid=994 euid=996 suid=996 fsuid=996 egid=994 sgid=994 fsgid=994 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:spc_t:s0 key=(null)
type=AVC msg=audit(1506421758.308:669): avc: denied { transition } for pid=31802 comm="sshd" path="/usr/bin/bash" dev="vda2" ino=29360726 scontext=system_u:system_r:spc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process

Version-Release number of selected component (if applicable):
OSP12
openstack-selinux-0.8.10-0.20170914195211.e16a8f8.2.el7ost.noarch
docker-images - 2017-09-22.5

How reproducible:
Always

Steps to Reproduce:
1. Deploy an OSP12 cluster and launch an instance.
2. Scale up a compute node.
3. nova live-migration <instance> <new_host>

Actual results:
After the migration the instance did not change hypervisor, due to an SELinux problem in the nova_migration_target container.
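For reference, the proctitle= values in the ausearch output above are hex-encoded process titles; they decode with stock xxd (standard tooling, nothing specific to this bug):

$ echo 737368643A205B61636365707465645D | xxd -r -p; echo
sshd: [accepted]
$ echo 737368643A206E6F76615F6D6967726174696F6E205B707269765D | xxd -r -p; echo
sshd: nova_migration [priv]
$ echo 737368643A206E6F76615F6D6967726174696F6E406E6F747479 | xxd -r -p; echo
sshd: nova_migration@notty

So all three denials come from the migration sshd, matching the journalctl trace above.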
w/a - SELinux permissive mode in the nova_migration_target container (a sketch of one way to apply this follows the output below):

(overcloud) [stack@undercloud-0 ~]$ nova live-migration after_deploy compute-1.localdomain
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+--------------+-----------+------------+-------------+--------------------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                             |
+--------------------------------------+--------------+-----------+------------+-------------+--------------------------------------+
| e38ac801-0108-4350-b12a-f35d1727ccd9 | after_deploy | MIGRATING | migrating  | Running     | tenantvxlan=192.168.32.6, 10.0.0.194 |
+--------------------------------------+--------------+-----------+------------+-------------+--------------------------------------+
[four further identical "nova list" polls with Task State still "migrating" omitted]
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+--------------+--------+------------+-------------+--------------------------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks                             |
+--------------------------------------+--------------+--------+------------+-------------+--------------------------------------+
| e38ac801-0108-4350-b12a-f35d1727ccd9 | after_deploy | ACTIVE | -          | Running     | tenantvxlan=192.168.32.6, 10.0.0.194 |
+--------------------------------------+--------------+--------+------------+-------------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ nova show after_deploy
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL |
| OS-EXT-AZ:availability_zone          | nova |
| OS-EXT-SRV-ATTR:host                 | compute-1.localdomain |
| OS-EXT-SRV-ATTR:hostname             | after-deploy |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-1.localdomain |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002 |
| OS-EXT-SRV-ATTR:kernel_id            | |
| OS-EXT-SRV-ATTR:launch_index         | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id           | |
| OS-EXT-SRV-ATTR:reservation_id       | r-32rb64sz |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda |
| OS-EXT-SRV-ATTR:user_data            | - |
| OS-EXT-STS:power_state               | 1 |
| OS-EXT-STS:task_state                | - |
| OS-EXT-STS:vm_state                  | active |
| OS-SRV-USG:launched_at               | 2017-09-26T08:38:29.000000 |
| OS-SRV-USG:terminated_at             | - |
| accessIPv4                           | |
| accessIPv6                           | |
| config_drive                         | |
| created                              | 2017-09-26T08:38:15Z |
| description                          | - |
| flavor:disk                          | 1 |
| flavor:ephemeral                     | 0 |
| flavor:extra_specs                   | {} |
| flavor:original_name                 | m1.tiny |
| flavor:ram                           | 512 |
| flavor:swap                          | 0 |
| flavor:vcpus                         | 1 |
| hostId                               | e81b9fc5ee423f3216e3aaaca2fc2e962383bb01887658efb4de8bb6 |
| host_status                          | UP |
| id                                   | e38ac801-0108-4350-b12a-f35d1727ccd9 |
| image                                | cirros (00029d37-bef3-440d-8f63-e7e3fd686daf) |
| key_name                             | oskey |
| locked                               | False |
| metadata                             | {} |
| name                                 | after_deploy |
| os-extended-volumes:volumes_attached | [{"id": "0bbdee18-0e82-4f0b-93b5-937d637aa7b6", "delete_on_termination": false}] |
| progress                             | 0 |
| security_groups                      | default |
| status                               | ACTIVE |
| tags                                 | [] |
| tenant_id                            | c134544c13144892b64e5e9c351e75ed |
| tenantvxlan network                  | 192.168.32.6, 10.0.0.194 |
| updated                              | 2017-09-26T10:50:04Z |
| user_id                              | dd0f81920c0c4859b7362862ad2a1002 |
+--------------------------------------+----------------------------------------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+--------------+--------+------------+-------------+--------------------------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks                             |
+--------------------------------------+--------------+--------+------------+-------------+--------------------------------------+
| e38ac801-0108-4350-b12a-f35d1727ccd9 | after_deploy | ACTIVE | -          | Running     | tenantvxlan=192.168.32.6, 10.0.0.194 |
+--------------------------------------+--------------+--------+------------+-------------+--------------------------------------+
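A minimal sketch of the permissive workaround mentioned above, assuming root on the compute node. Note that relaxing spc_t specifically is an inference from the source domain in the AVCs, not something stated in this report; setenforce 0 is the blunter host-wide variant:

# make only the container domain the denials came from permissive
[root@compute-1 ~]# semanage permissive -a spc_t

# or: put the whole node into permissive mode
[root@compute-1 ~]# setenforce 0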
Do we have this working on the 0.4 build? I ask because there are quite a few changes there concerning SELinux and instances.
Hi Lon, I'll check that!
I'm still seeing this issue during the upgrade process:

openstack-selinux-0.8.11-0.20171013192233.ce13ba7.el7ost.noarch

type=AVC msg=audit(1508350132.299:4682): avc: denied { dyntransition } for pid=90266 comm="sshd" scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:sshd_net_t:s0 tclass=process
type=AVC msg=audit(1508350132.589:4698): avc: denied { dyntransition } for pid=90268 comm="sshd" scontext=system_u:system_r:spc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1508350132.631:4701): avc: denied { transition } for pid=90269 comm="sshd" path="/usr/bin/bash" dev="vda1" ino=12738088 scontext=system_u:system_r:spc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1508350136.936:4721): avc: denied { read } for pid=90307 comm="iptables-restor" name="xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350136.936:4723): avc: denied { read } for pid=90307 comm="iptables-restor" name="xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350136.945:4725): avc: denied { read } for pid=90309 comm="ip6tables-resto" name="xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350136.946:4727): avc: denied { read } for pid=90309 comm="ip6tables-resto" name="xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350228.949:4770): avc: denied { read } for pid=90589 comm="iptables-restor" name="xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350228.949:4770): avc: denied { open } for pid=90589 comm="iptables-restor" path="/run/xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350228.950:4771): avc: denied { lock } for pid=90589 comm="iptables-restor" path="/run/xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350229.892:4800): avc: denied { dyntransition } for pid=90598 comm="sshd" scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:sshd_net_t:s0 tclass=process
type=AVC msg=audit(1508350230.196:4816): avc: denied { dyntransition } for pid=90600 comm="sshd" scontext=system_u:system_r:spc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
type=AVC msg=audit(1508350329.184:4859): avc: denied { read } for pid=91066 comm="iptables-restor" name="xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350329.184:4859): avc: denied { open } for pid=91066 comm="iptables-restor" path="/run/xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350329.185:4860): avc: denied { lock } for pid=91066 comm="iptables-restor" path="/run/xtables.lock" dev="tmpfs" ino=43117 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
type=AVC msg=audit(1508350331.020:4889): avc: denied { dyntransition } for pid=91077 comm="sshd" scontext=system_u:system_r:spc_t:s0 tcontext=system_u:system_r:sshd_net_t:s0 tclass=process
type=AVC msg=audit(1508350331.318:4905): avc: denied { dyntransition } for pid=91079 comm="sshd" scontext=system_u:system_r:spc_t:s0 tcontext=unconfined_u:unconfined_r:unconfined_t:s0 tclass=process
The first two AVCs shouldn't have appeared. The analysis says:

    Was caused by: Unknown - would be allowed by active policy.
    Possible mismatch between this policy and the one under which the audit message was generated.
    Possible mismatch between current in-memory boolean settings vs. permanent ones.

I'll look more into the rest.
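For anyone re-checking: the "Was caused by" text above is the format audit2why emits, so the analysis should be reproducible by piping the raw AVCs through the standard audit tooling:

[root@compute-1 ~]# ausearch -m avc -ts recent | audit2why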
Rechecked with container-selinux 2.30; no change to the other AVCs apart from the ones noted in comment #7. Will follow up.
After discussing with the SELinux team, it sounds like /sys/fs/selinux may be accessible from within the nova-libvirt container, which is incorrect: the SELinux store should not be accessible from inside containers. Since /sys/fs/selinux is available, is_selinux_enabled() returns true and sshd tries to do a domain transition, which fails.
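A quick way to confirm that from the compute node; the container names are the ones used elsewhere in this report, and the ls target is just a generic selinuxfs visibility probe:

[root@compute-1 ~]# for c in nova_libvirt nova_migration_target; do echo "== $c =="; docker exec "$c" ls /sys/fs/selinux/enforce 2>&1; done

Wherever /sys/fs/selinux is visible, libselinux's is_selinux_enabled() reports SELinux as enabled, which is what drives sshd's setcon() attempt.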
From Oliver Walsh:

https://review.openstack.org/502681
https://review.openstack.org/502656
So it seems /sys/fs/selinux is needed by the libvirt container but causes problems for the sshd container.
Reproduced.
(In reply to Lon Hohberger from comment #9)
> After discussing with the SELinux team, it sounds like /sys/fs/selinux may
> be accessible from within the nova-libvirt container, which is incorrect -
> the SELinux store should not be accessible from inside containers.

That should be the nova_migration_target container, which runs an sshd daemon to tunnel libvirt. Mounting /sys/fs/selinux is necessary for the nova_libvirt container so that SELinux contexts are applied to images, logs, etc. (https://bugzilla.redhat.com/show_bug.cgi?id=1488503).
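For context, a rough sketch of how such a mount reaches the libvirt container; the real wiring lives in the tripleo-heat-templates reviews linked above, so treat this docker invocation (and the image placeholder) as illustrative only:

# bind-mount the host's selinuxfs into the container so libselinux
# inside it can label instance disks/logs with the right contexts
docker run ... \
    -v /sys/fs/selinux:/sys/fs/selinux \
    ... <nova_libvirt image>

The trade-off discussed in this bug is exactly that mount: nova_libvirt needs selinuxfs to label instance files, while the sshd in nova_migration_target must not see it, or it attempts the domain transitions denied above.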
*** Bug 1508341 has been marked as a duplicate of this bug. ***
https://review.openstack.org/517125 has merged
VERIFIED

(undercloud) [stack@undercloud-0 ~]$ sudo rpm -qa "*templates*"
openstack-tripleo-heat-templates-7.0.3-9.el7ost.noarch

Check SELinux inside the docker container nova_libvirt:

()[root@compute-0 /]# sudo sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28
()[root@compute-0 /]#

()[root@compute-1 /]# sudo sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

Check SELinux on the compute nodes:

[heat-admin@compute-0 ~]$ sudo sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

[heat-admin@compute-1 ~]$ sudo sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

Perform live migration from compute-0 to compute-1:

(overcloud) [stack@undercloud-0 ~]$ nova show after_deploy | grep hyp
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-0.localdomain
(overcloud) [stack@undercloud-0 ~]$ nova show after_deploy | grep hyp
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-1.localdomain
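A small convenience for repeating this check: polling the hypervisor field while the migration runs, using plain watch over the same nova command as above:

(overcloud) [stack@undercloud-0 ~]$ watch -n 5 'nova show after_deploy | grep hypervisor_hostname'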
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462