Created attachment 882531 [details]
overcloud controller node avcs

Description of problem:
On the overcloud instances, in permissive mode, I'm seeing denials from sshd, crond, libvirtd, qemu-system-x86, and systemd. The policies should be updated to allow these services to complete their tasks.

Version-Release number of selected component (if applicable):
selinux-policy-3.12.1-149.fc20.noarch
selinux-policy-targeted-3.12.1-149.fc20.noarch
openssh-server-6.4p1-3.fc20.x86_64
libvirt-1.1.3.4-4.fc20.x86_64
qemu-system-x86-1.6.2-1.fc20.x86_64
systemd-208-15.fc20.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install instack-undercloud using the instructions at https://github.com/agroup; use README-virt to set up your baremetal nodes and an instack VM. Afterwards I use the package-based install with the instack-undercloud source instead of the rpm. I will attach instack.answers and deployrc, which are additional configuration files.

The commands I run to install the undercloud and then deploy the overcloud are:

sudo yum -y install http://repos.fedorapeople.org/repos/openstack-m/openstack-m/openstack-m-release-icehouse-2.noarch.rpm
sudo yum -y install yum-utils git
sudo yum-config-manager --enable fedora-openstack-m-testing
[ -d instack-undercloud ] || git clone https://github.com/agroup/instack-undercloud -b selinux
sed -i 's/\/usr\/share\/instack-undercloud\/json/instack-undercloud\/json/g' instack-undercloud/scripts/instack-install-undercloud-packages
sed -i 's/\/usr\/share\/instack-undercloud/instack-undercloud\/elements/g' instack-undercloud/scripts/instack-install-undercloud-packages
export PATH=instack-undercloud/scripts:$PATH
sudo yum -y install instack
instack-install-undercloud-packages
sudo tuskar-dbsync --config-file /etc/tuskar/tuskar.conf
sudo service openstack-tuskar-api restart
instack-prepare-for-overcloud
source deployrc
instack-deploy-overcloud

Actual results:
avc denials

Expected results:
no avc denials

Additional info:
Created attachment 882532 [details] overcloud nova compute node avcs
Created attachment 882533 [details] overcloud cinder volume avcs
Created attachment 884393 [details] custom policies to fix avc denials on the overcloud I'm still having trouble booting user instances on the overcloud, so there may be another update.
The problem is the system is mislabeled.
Hi Miroslav. Can you provide more details on how the system is mislabeled?
Miroslav means that your files have the wrong labels. Use restorecon to fix it.
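For anyone following along, a minimal sketch of the relabeling step being suggested here (standard SELinux tooling; the paths are examples, not taken from this report):

```shell
# Relabel the filesystem so files get their policy-default contexts.
# -R recurses, -v prints each file whose label changes.
sudo restorecon -R -v /

# Alternative: schedule a full relabel on the next boot instead.
sudo touch /.autorelabel
```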
Ok, but which avcs are due to a mislabel? Wondering which files I should be relabeling.
You are running in kernel_t, which means systemd is mislabeled. It looks like there is a problem with how the policy is installed. If you execute

# semodule -B

does it blow up?
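A quick way to confirm the kernel_t diagnosis (a sketch using standard tools; the expected contexts are the usual Fedora defaults, not quoted from this bug):

```shell
# If PID 1 runs as kernel_t rather than init_t, systemd (or the
# policy store) is mislabeled.
ps -Z 1                           # show systemd's SELinux context
ls -Z /usr/lib/systemd/systemd    # expected file type: init_exec_t
```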
Looks like it does blow up. [root@overcloud-notcompute0-u7s5bt5k5brs ~]# semodule -B Killed
Were you in permissive mode? setenforce 0 semodule -B
Not sure before. But now on the overcloud block storage and nova compute nodes, "semodule -B" does not blow up in permissive or enforcing modes. So for the kernel_t errors, are they corrected by running "semodule -B"?
I've updated the startup script to run:

semodule -B
restorecon -R /

and the same avcs appear in permissive mode. The kernel_t errors don't affect overcloud deployment as far as I can tell.

What is interesting here is that when I set the instances to enforcing mode, additional avcs appear that aren't shown in permissive mode. It seems those are the relevant avcs. I have cleaned up the policies and created a consolidated custom policy from the ones I attached above that were found while the instance was in enforcing mode. I left out the ones seen in permissive mode, which included the kernel_t errors; so far they don't seem to be relevant to the operation of the overcloud, as the overcloud controller booted up and os-collect-config ran to success. But more testing is needed there, because I see the dhcp-interface and systemd-sysctl services had failed. I will attach the cleaned-up policies below.

Do you need the audit logs to incorporate policy changes into the next selinux package updates? If so I will need to rerun the previous tests, as I did not save the logs, only the custom policies generated from audit2allow.
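A sketch of the startup additions described above (where exactly this lands in the instack-undercloud element scripts is not shown here, so treat the placement as an assumption):

```shell
#!/bin/sh
# Rebuild and reload the policy modules, then relabel the whole
# filesystem, before the OpenStack services start.
semodule -B
restorecon -R /
```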
Created attachment 886758 [details] combined enforcing avcs into consolidated and permissive into kernelt
I recreated the overcloud custom policy to get the audit.log for you to review. Basically the avcs only appear in enforcing mode, so I had to restart os-refresh-config, run instack-test-overcloud, run "ausearch -m avc | audit2allow -M new", and repeat until no more avcs were logged.

Below you will find audit.log and the custom policy, new15.pp.
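The iterate-until-clean procedure described above can be sketched roughly as follows (the loop structure and the stop condition are my own framing; the module name "new" and the commands are from the comment, while the systemd unit name for os-refresh-config is an assumption):

```shell
# Repeat: collect denials, build a module from them, load it,
# re-exercise the services, until a pass produces no new AVCs.
setenforce 1
while ausearch -m avc --start recent 2>/dev/null | grep -q denied; do
    ausearch -m avc | audit2allow -M new   # generate new.te / new.pp
    semodule -i new.pp                     # load the generated module
    systemctl restart os-refresh-config    # assumed unit name
    instack-test-overcloud
done
```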
Created attachment 887005 [details] enforcing mode, repeat os-refresh-config, instack-test-overcloud, etc..
Created attachment 887006 [details] new15.pp custom policy
Created attachment 887007 [details] new15.te
(In reply to Richard Su from comment #14) > I recreated the overcloud custom policy to get the audit.log for you to > review. Basically the avcs only appear in enforcing mode, so I had to > restart os-refresh-config, run instack-test-overcloud, run "ausearch -m avc > |audit2allow -M new", and repeat until no more avcs are logged. > > Below you will find audit.log and the custom policy, new15.pp. Ok and did it work with this policy in enforcing mode?
I added fixes.

commit 1e7ebaa91ceab4a5b439f6f2278ba8af295f1662
Author: Miroslav Grepl <mgrepl>
Date:   Thu Apr 17 15:03:25 2014 +0200

    Additional fixes for instack overcloud
Hi Miroslav, yes it did work in enforcing mode with the new15 policy. Please let me know when the next selinux policy rpm is released to testing. Thanks.
Hi, I have tested selinux-policy-targeted-3.12.1-156.fc20.noarch which contains commit 1e7ebaa91ceab4a5b439f6f2278ba8af295f1662. I am still seeing a subset of the same avc denials as before. I will attach the audit.log and custom policy to work around them.
Created attachment 889535 [details] audit.log
Created attachment 889547 [details] mypol7.pp custom policy
Created attachment 889548 [details] mypol7.te
#!!!! This avc is allowed in the current policy
allow swift_t xserver_port_t:tcp_socket name_bind;

do you know a reason for this?

and

type=AVC msg=audit(1398391607.897:4548): avc:  denied  { name_bind } for  pid=25715 comm="neutron-ns-meta" src=9697 scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket

is tcp/9697 used by default?
allow swift_t xserver_port_t:tcp_socket name_bind;

is used by the core swift services:

openstack-swift-account on port 6002
openstack-swift-container on port 6001
openstack-swift-object on port 6000

---

type=AVC msg=audit(1398391607.897:4548): avc:  denied  { name_bind } for  pid=25715 comm="neutron-ns-meta" src=9697 scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket

is tcp/9697 used by default?

This is from the neutron-l3-agent when it creates the metadata service for the namespace. It uses 9697 as the default port.

https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L163
(In reply to Richard Su from comment #26)
> allow swift_t xserver_port_t:tcp_socket name_bind;
>
> is used by the core swift services
> openstack-swift-account on port 6002
> openstack-swift-container on port 6001
> openstack-swift-object on port 6000

Why are xserver ports used for it?

> type=AVC msg=audit(1398391607.897:4548): avc: denied { name_bind } for
> pid=25715 comm="neutron-ns-meta" src=9697
> scontext=system_u:system_r:neutron_t:s0
> tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
>
> is tcp/9697 used by default?

Fixed.

commit d140fa7d72e31443fd27b66ba1fa25238365cb0c
Author: Miroslav Grepl <mgrepl>
Date:   Wed Apr 30 12:33:24 2014 +0200

    add support for tcp/9697

> This is from the neutron-l3-agent when it creates the metadata service for
> the namespace. It uses 9697 as the default port.
>
> https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L163
They are set up to use ports 6000, 6001, and 6002 by default. Is it a problem to add such a rule?

[root@overcloud-notcompute0-s4mr26drmxnc bin]# pwd
/usr/bin
[root@overcloud-notcompute0-s4mr26drmxnc bin]# grep 600 swift*server
swift-account-server:                        'account-server', default_port=6002, **options))
swift-container-server:                        'container-server', default_port=6001, **options))
swift-object-server:    sys.exit(run_wsgi(conf_file, 'object-server', default_port=6000,

Also, what about the other avcs in audit.log? The custom policy mypol7.te contained a lot of workarounds. Can any of those be incorporated into the default policy?
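If the defaults need to change, the compiled-in default_port values can be overridden in the swift server configs rather than patching the scripts. This is a sketch only: the file paths are the usual Fedora locations, and the replacement ports are illustrative, not decided in this bug.

```shell
# Override the compiled-in bind ports in the [DEFAULT] section of
# each swift server config (crudini assumed available; plain editing
# of the ini files works equally well).
sudo crudini --set /etc/swift/account-server.conf   DEFAULT bind_port 6202
sudo crudini --set /etc/swift/container-server.conf DEFAULT bind_port 6201
sudo crudini --set /etc/swift/object-server.conf    DEFAULT bind_port 6200
```

Note the ring files must be rebuilt with the new ports as well, or the services will advertise the old ones.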
Richard, it is just weird to use the X ports for this. We can allow it, but it would allow a hacked swift domain to connect to any open X port and read the X traffic.

I added a couple of rules for neutron, but we see it trying to bind to port 9697. Should we define this as a neutron port?
> I added a couple of rules for neutron, but we see it trying to bind to port
> 9697? Should we define this as a neutron port?

commit d140fa7d72e31443fd27b66ba1fa25238365cb0c
Author: Miroslav Grepl <mgrepl>
Date:   Wed Apr 30 12:33:24 2014 +0200

    add support for tcp/9697
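Until a policy build containing that commit reaches the repos, the same effect can be had locally with a port labeling rule (a sketch; it assumes the policy already defines a neutron port type, which may differ in this policy version):

```shell
# Label tcp/9697 so neutron_t may name_bind to it instead of
# hitting unreserved_port_t.
sudo semanage port -a -t neutron_port_t -p tcp 9697

# Verify the labeling took effect.
sudo semanage port -l | grep neutron
```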
Dan, let me try moving the swift services to a different port number. Do you foresee any issues with using the ephemeral port range?

I have no issues with assigning 9697 as a neutron port. But what are the criteria for assigning ports to specific services?
I've tested moving the swift services from ports 6000-6002 to 6201-6203, and that cleared a lot of the avcs I was seeing before. Two avcs remain: one denial for name_bind and another for name_connect on unreserved_port_t. I will attach the audit logs and custom policies.

I also tested using ephemeral ports 60000-60002 and got the same results, except the two denials were on ephemeral_port_t. Either set of ports would appear to work for us. I think ports 6201-6203 would be easier to implement in upstream tripleo, as using ephemeral ports would require additional changes to reserve those ports there. We will need to see if upstream swift is ok with moving the default ports, or we may need to carry patches in the fedora package to change them.

What are your thoughts on using ports 6201-6203 for swift? How should this be implemented in the upstream selinux policy and swift packages? Or should this be done via a custom policy within tripleo?
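If the policy side goes the port-labeling route rather than a custom .pp module, the remaining name_bind denial on 6201-6203 could be addressed like this (a sketch; it assumes the policy defines a swift port type, which may not be true in this policy release):

```shell
# Label the proposed replacement ports for swift so name_bind is
# allowed without reusing xserver_port_t.
sudo semanage port -a -t swift_port_t -p tcp 6201-6203

# Verify.
sudo semanage port -l | grep swift
```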
Created attachment 893055 [details] audit.log swift name_bind
Created attachment 893056 [details] swift-620x.pp name_bind custom policy
Created attachment 893057 [details] audit.log swift name_connect
Created attachment 893058 [details] swift-name-connect.pp name_connect custom policy
Is this still an issue?
This message is a reminder that Fedora 20 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 20. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '20'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 20 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 20 changed to end-of-life (EOL) status on 2015-06-23. Fedora 20 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.