Bug 1084310 - avc denials seen in instack overcloud
Summary: avc denials seen in instack overcloud
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: selinux-policy
Version: 20
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Miroslav Grepl
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-04-04 07:02 UTC by Richard Su
Modified: 2015-06-30 00:58 UTC
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-06-30 00:58:37 UTC
Type: Bug
Embargoed:


Attachments
overcloud controller node avcs (429.72 KB, text/x-log)
2014-04-04 07:02 UTC, Richard Su
no flags Details
overcloud nova compute node avcs (102.35 KB, text/x-log)
2014-04-04 07:03 UTC, Richard Su
no flags Details
overcloud cinder volume avcs (277.97 KB, text/x-log)
2014-04-04 07:03 UTC, Richard Su
no flags Details
custom policies to fix avc denials on the overcloud (8.38 KB, application/zip)
2014-04-09 06:43 UTC, Richard Su
no flags Details
combined enforcing avcs into consolidated and permissive into kernelt (3.41 KB, application/x-compressed-tar)
2014-04-16 06:27 UTC, Richard Su
no flags Details
enforcing mode, repeat os-refresh-config, instack-test-overcloud, etc.. (2.47 MB, text/x-log)
2014-04-16 22:00 UTC, Richard Su
no flags Details
new15.pp custom policy (11.43 KB, application/octet-stream)
2014-04-16 22:02 UTC, Richard Su
no flags Details
new15.te (8.37 KB, text/plain)
2014-04-16 22:03 UTC, Richard Su
no flags Details
audit.log (1.47 MB, text/x-log)
2014-04-25 06:54 UTC, Richard Su
no flags Details
mypol7.pp custom policy (2.07 KB, application/octet-stream)
2014-04-25 06:55 UTC, Richard Su
no flags Details
mypol7.te (958 bytes, text/plain)
2014-04-25 06:55 UTC, Richard Su
no flags Details
audit.log swift name_bind (402.05 KB, text/x-log)
2014-05-07 02:49 UTC, Richard Su
no flags Details
swift-620x.pp name_bind custom policy (965 bytes, application/octet-stream)
2014-05-07 02:50 UTC, Richard Su
no flags Details
audit.log swift name_connect (234.98 KB, text/x-log)
2014-05-07 02:50 UTC, Richard Su
no flags Details
swift-name-connect.pp name_connect custom policy (976 bytes, application/octet-stream)
2014-05-07 02:51 UTC, Richard Su
no flags Details

Description Richard Su 2014-04-04 07:02:33 UTC
Created attachment 882531 [details]
overcloud controller node avcs

Description of problem:
On the overcloud instances and in permissive mode, I'm seeing denials from sshd, crond, libvirtd, qemu-system-x86, and systemd. The policies should be updated to allow these services to complete their tasks.

Version-Release number of selected component (if applicable):
selinux-policy-3.12.1-149.fc20.noarch
selinux-policy-targeted-3.12.1-149.fc20.noarch
openssh-server-6.4p1-3.fc20.x86_64
libvirt-1.1.3.4-4.fc20.x86_64
qemu-system-x86-1.6.2-1.fc20.x86_64
systemd-208-15.fc20.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install instack-undercloud using the instructions on https://github.com/agroup; use README-virt to set up your baremetal nodes and an instack VM. Afterwards I use the package-based install with the instack-undercloud source instead of the RPM. I will attach instack.answers and deployrc, which are additional configuration files. The commands I run to install the undercloud and then deploy the overcloud are:
sudo yum -y install http://repos.fedorapeople.org/repos/openstack-m/openstack-m/openstack-m-release-icehouse-2.noarch.rpm
sudo yum -y install yum-utils git
sudo yum-config-manager --enable fedora-openstack-m-testing
[ -d instack-undercloud ] || git clone https://github.com/agroup/instack-undercloud -b selinux
sed -i 's/\/usr\/share\/instack-undercloud\/json/instack-undercloud\/json/g' instack-undercloud/scripts/instack-install-undercloud-packages
sed -i 's/\/usr\/share\/instack-undercloud/instack-undercloud\/elements/g' instack-undercloud/scripts/instack-install-undercloud-packages
export PATH=instack-undercloud/scripts:$PATH
sudo yum -y install instack
instack-install-undercloud-packages
sudo tuskar-dbsync --config-file /etc/tuskar/tuskar.conf
sudo service openstack-tuskar-api restart
instack-prepare-for-overcloud
source deployrc
instack-deploy-overcloud


Actual results:
avc denials

Expected results:
no avc denials

Additional info:

Comment 1 Richard Su 2014-04-04 07:03:05 UTC
Created attachment 882532 [details]
overcloud nova compute node avcs

Comment 2 Richard Su 2014-04-04 07:03:39 UTC
Created attachment 882533 [details]
overcloud cinder volume avcs

Comment 3 Richard Su 2014-04-09 06:43:50 UTC
Created attachment 884393 [details]
custom policies to fix avc denials on the overcloud

I'm still having trouble booting user instances on the overcloud, so there may be another update.

Comment 4 Miroslav Grepl 2014-04-09 12:46:51 UTC
The problem is the system is mislabeled.

Comment 5 Richard Su 2014-04-09 14:09:21 UTC
Hi Miroslav. Can you provide more details on how the system is mislabeled?

Comment 6 Lukas Vrabec 2014-04-09 14:13:43 UTC
Miroslav means that your files have the wrong label.

Use restorecon to fix it.
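
For example (a generic relabeling sketch; the path shown is only illustrative), you can either relabel specific trees or schedule a full relabel on the next boot:

# relabel one tree, printing every change it makes
restorecon -Rv /etc/swift

# or relabel the whole filesystem at the next reboot
touch /.autorelabel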

Comment 7 Richard Su 2014-04-09 14:21:52 UTC
OK, but which AVCs are due to a mislabel? I'm wondering which files I should be relabeling.

Comment 8 Miroslav Grepl 2014-04-10 10:55:08 UTC
You are running in kernel_t, which means systemd is mislabeled. It looks like there is a problem with how the policy was installed.

If you execute

# semodule -B

does it blow up?
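
A quick way to confirm the mislabel (a generic check, not taken from this bug's logs) is to compare the context PID 1 runs with and the on-disk label of the systemd binary against what the policy expects:

# context of the running init process; init_t is expected, kernel_t means it is mislabeled
ps -Zp 1

# on-disk label vs. the label the policy would assign
ls -Z /usr/lib/systemd/systemd
matchpathcon /usr/lib/systemd/systemd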

Comment 9 Richard Su 2014-04-10 19:27:06 UTC
Looks like it does blow up.
[root@overcloud-notcompute0-u7s5bt5k5brs ~]# semodule -B
Killed

Comment 10 Daniel Walsh 2014-04-14 16:07:29 UTC
Were you in permissive mode?

setenforce 0
semodule -B

Comment 11 Richard Su 2014-04-15 15:59:37 UTC
I'm not sure what mode it was in before. But now, on the overcloud block storage and nova compute nodes, "semodule -B" does not blow up in either permissive or enforcing mode.

So for the kernel_t errors, are they corrected by running "semodule -B"?

Comment 12 Richard Su 2014-04-16 04:29:55 UTC
I've updated the startup script to run
semodule -B
restorecon -R /
and the same avcs appear in permissive mode.

The kernel_t errors don't affect overcloud deployment as far as I can tell.

What is interesting here is that when I set the instances to enforcing mode, additional AVCs appear that aren't shown in permissive mode. Those seem to be the relevant AVCs. I have cleaned up the policies and created a consolidated custom policy from the ones attached above that were found while the instance was in enforcing mode. I left out the ones that were seen only in permissive mode, which include the kernel_t errors; so far they don't seem to be relevant to the operation of the overcloud, as the overcloud controller booted up and os-collect-config ran to success. But more testing is needed there, because I see that the dhcp-interface and systemd-sysctl services failed.

I will attach the cleaned-up policies below. Do you need the audit logs to incorporate policy changes into the next selinux-policy package update? If so, I will need to rerun the previous tests, as I did not save the logs, only the custom policies generated from audit2allow.

Comment 13 Richard Su 2014-04-16 06:27:57 UTC
Created attachment 886758 [details]
combined enforcing avcs into consolidated and permissive into kernelt

Comment 14 Richard Su 2014-04-16 21:58:45 UTC
I recreated the overcloud custom policy to get the audit.log for you to review. Basically the AVCs only appear in enforcing mode, so I had to restart os-refresh-config, run instack-test-overcloud, run "ausearch -m avc | audit2allow -M new", and repeat until no more AVCs were logged.

Below you will find audit.log and the custom policy, new15.pp.
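
For reference, the iteration I ran looks roughly like this (a sketch of the loop; "new" is the module name passed to audit2allow, which ended up as new15.pp after several rounds):

# reproduce the denials in enforcing mode, then:
ausearch -m avc | audit2allow -M new    # writes new.te and new.pp from the logged AVCs
semodule -i new.pp                      # load the generated module
# restart os-refresh-config, run instack-test-overcloud, and repeat until
# ausearch reports no new AVCs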

Comment 15 Richard Su 2014-04-16 22:00:02 UTC
Created attachment 887005 [details]
enforcing mode, repeat os-refresh-config, instack-test-overcloud, etc..

Comment 16 Richard Su 2014-04-16 22:02:31 UTC
Created attachment 887006 [details]
new15.pp custom policy

Comment 17 Richard Su 2014-04-16 22:03:06 UTC
Created attachment 887007 [details]
new15.te

Comment 18 Miroslav Grepl 2014-04-17 06:05:21 UTC
(In reply to Richard Su from comment #14)
> I recreated the overcloud custom policy to get the audit.log for you to
> review. Basically the avcs only appear in enforcing mode, so I had to
> restart os-refresh-config, run instack-test-overcloud, run "ausearch -m avc 
> |audit2allow -M new", and repeat until no more avcs are logged. 
> 
> Below you will find audit.log and the custom policy, new15.pp.

Ok and did it work with this policy in enforcing mode?

Comment 19 Miroslav Grepl 2014-04-17 13:03:46 UTC
I added fixes.

commit 1e7ebaa91ceab4a5b439f6f2278ba8af295f1662
Author: Miroslav Grepl <mgrepl>
Date:   Thu Apr 17 15:03:25 2014 +0200

    Additional fixes for  instack overcloud

Comment 20 Richard Su 2014-04-17 15:09:25 UTC
Hi Miroslav, yes it did work in enforcing mode with the new15 policy. 

Please let me know when the next selinux policy rpm is released to testing. Thanks.

Comment 21 Richard Su 2014-04-25 06:53:39 UTC
Hi, I have tested selinux-policy-targeted-3.12.1-156.fc20.noarch, which contains commit 1e7ebaa91ceab4a5b439f6f2278ba8af295f1662. I am still seeing a subset of the same AVC denials as before. I will attach the audit.log and a custom policy that works around them.

Comment 22 Richard Su 2014-04-25 06:54:25 UTC
Created attachment 889535 [details]
audit.log

Comment 23 Richard Su 2014-04-25 06:55:05 UTC
Created attachment 889547 [details]
mypol7.pp custom policy

Comment 24 Richard Su 2014-04-25 06:55:39 UTC
Created attachment 889548 [details]
mypol7.te

Comment 25 Miroslav Grepl 2014-04-25 11:23:34 UTC
#!!!! This avc is allowed in the current policy
allow swift_t xserver_port_t:tcp_socket name_bind;


do you know a reason for this?

and
type=AVC msg=audit(1398391607.897:4548): avc:  denied  { name_bind } for  pid=25715 comm="neutron-ns-meta" src=9697 scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket


is tcp/9697 used by default?

Comment 26 Richard Su 2014-04-28 23:34:35 UTC
allow swift_t xserver_port_t:tcp_socket name_bind;

is used by the core swift services
openstack-swift-account on port 6002
openstack-swift-container on port 6001
openstack-swift-object on port 6000

---
type=AVC msg=audit(1398391607.897:4548): avc:  denied  { name_bind } for  pid=25715 comm="neutron-ns-meta" src=9697 scontext=system_u:system_r:neutron_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket

is tcp/9697 used by default?

This is from the neutron-l3-agent when it creates the metadata service for the namespace. It uses 9697 as the default port.

https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L163
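
For context, ports 6000-6002 fall inside the X server port range in the targeted policy (typically tcp 6000-6020), which is why the swift rule above is against xserver_port_t. The labeling can be checked directly, assuming semanage from policycoreutils-python is installed:

semanage port -l | grep -w xserver_port_t     # shows the tcp range reserved for X
semanage port -l | grep -w swift_port_t       # may or may not be defined in this policy version
semanage port -l | grep 9697                  # no match means the port falls under unreserved_port_t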

Comment 27 Miroslav Grepl 2014-04-30 10:36:04 UTC
(In reply to Richard Su from comment #26)
> allow swift_t xserver_port_t:tcp_socket name_bind;
> 
> is used by the core swift services
> openstack-swift-account on port 6002
> openstack-swift-container on port 6001
> openstack-swift-object on port 6000

Why are xserver ports used for it?

> 
> ---
> type=AVC msg=audit(1398391607.897:4548): avc:  denied  { name_bind } for 
> pid=25715 comm="neutron-ns-meta" src=9697
> scontext=system_u:system_r:neutron_t:s0
> tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
> 
> is tcp/9697 used by default?

Fixed.

commit d140fa7d72e31443fd27b66ba1fa25238365cb0c
Author: Miroslav Grepl <mgrepl>
Date:   Wed Apr 30 12:33:24 2014 +0200

    add support for tcp/9697

> 
> This is from the neutron-l3-agent when it creates the metadata service for
> the namespace. It uses 9697 as the default port.
> 
> https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.
> py#L163

Comment 28 Richard Su 2014-05-01 02:13:40 UTC
They are set up to use ports 6000, 6001, and 6002 by default. Is it a problem to add such a rule?

[root@overcloud-notcompute0-s4mr26drmxnc bin]# pwd
/usr/bin
[root@overcloud-notcompute0-s4mr26drmxnc bin]# grep 600 swift*server
swift-account-server:                      'account-server', default_port=6002, **options))
swift-container-server:                      'container-server', default_port=6001, **options))
swift-object-server:    sys.exit(run_wsgi(conf_file, 'object-server', default_port=6000,


Also, what about the other AVCs in the audit.log? The custom policy mypol7.te contained a lot of workarounds. Can any of those be incorporated into the default policy?
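
For completeness, this is roughly how that single rule could be carried as a local module while the question is being settled (the allow rule is the one quoted above; the module name is made up):

cat > swift_xserver_bind.te <<'EOF'
module swift_xserver_bind 1.0;

require {
    type swift_t;
    type xserver_port_t;
    class tcp_socket name_bind;
}

# workaround: let swift bind its default ports 6000-6002,
# which the targeted policy labels xserver_port_t
allow swift_t xserver_port_t:tcp_socket name_bind;
EOF

checkmodule -M -m -o swift_xserver_bind.mod swift_xserver_bind.te
semodule_package -o swift_xserver_bind.pp -m swift_xserver_bind.mod
semodule -i swift_xserver_bind.pp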

Comment 29 Daniel Walsh 2014-05-01 12:45:28 UTC
Richard, it is just weird to use the X ports for this. We can allow it, but that would allow a hacked swift domain to connect to any open X port and read the X traffic.

I added a couple of rules for neutron, but we see it trying to bind to port 9697. Should we define this as a neutron port?
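
If it does get defined as a neutron port, a local workaround is also possible without waiting for a policy build, assuming the installed policy already defines a neutron_port_t type and lets neutron_t bind it:

# label tcp/9697 as a neutron port (requires policycoreutils-python)
semanage port -a -t neutron_port_t -p tcp 9697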

Comment 30 Miroslav Grepl 2014-05-02 08:32:22 UTC
> I added a couple of rules for neutron, but we see it trying to bind to port
> 9697. Should we define this as a neutron port?

commit d140fa7d72e31443fd27b66ba1fa25238365cb0c
Author: Miroslav Grepl <mgrepl>
Date:   Wed Apr 30 12:33:24 2014 +0200

    add support for tcp/9697

Comment 31 Richard Su 2014-05-06 01:40:23 UTC
Dan,

Let me try moving the swift services to a different port number. Do you foresee any issues with using the ephemeral port range?

I have no issue with assigning that as a neutron port. But what are the criteria for assigning ports to specific services?
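
For what it's worth, the two ranges in play can be checked directly; the exact numbers depend on the policy build and kernel defaults:

semanage port -l | grep -w ephemeral_port_t   # ports the policy treats as ephemeral
sysctl net.ipv4.ip_local_port_range           # ports the kernel hands out for outgoing connections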

Comment 32 Richard Su 2014-05-07 02:45:46 UTC
I've tested moving the swift services from ports 6000-6002 to 6201-6203, and that cleared a lot of the AVCs I was seeing before. Two AVCs remain: one denial for name_bind and another for name_connect on unreserved_port_t. I will attach the audit logs and custom policies.

I also tested using ephemeral ports 60000-60002 and got the same results, except that the two denials were on ephemeral_port_t.

Either set of ports would appear to work for us. I think ports 6201-6203 would be easier to implement in upstream tripleo, as using ephemeral ports would require additional changes in tripleo to reserve those ports.

We will need to see whether upstream swift is OK with moving the default ports, or we may need to carry patches in the Fedora package to change them.

What are your thoughts on using ports 6201-6203 for swift? How should this be implemented in the upstream selinux-policy and swift packages? Or should this be done via a custom policy within tripleo?
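
For reference, here is a guess at what the two attached modules amount to, based on the remaining denials (the real attachments split this into swift-620x.pp and swift-name-connect.pp):

cat > swift_unreserved_ports.te <<'EOF'
module swift_unreserved_ports 1.0;

require {
    type swift_t;
    type unreserved_port_t;
    class tcp_socket { name_bind name_connect };
}

# the two denials left after moving swift to 6201-6203:
# binding and connecting on ports the policy leaves as unreserved_port_t
allow swift_t unreserved_port_t:tcp_socket { name_bind name_connect };
EOF
# build and load with checkmodule -M -m, semodule_package, and semodule -i as in the earlier sketch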

Comment 33 Richard Su 2014-05-07 02:49:16 UTC
Created attachment 893055 [details]
audit.log swift name_bind

Comment 34 Richard Su 2014-05-07 02:50:20 UTC
Created attachment 893056 [details]
swift-620x.pp name_bind custom policy

Comment 35 Richard Su 2014-05-07 02:50:50 UTC
Created attachment 893057 [details]
audit.log swift name_connect

Comment 36 Richard Su 2014-05-07 02:51:26 UTC
Created attachment 893058 [details]
swift-name-connect.pp name_connect custom policy

Comment 38 Miroslav Grepl 2014-11-07 13:16:18 UTC
Is this still an issue?

Comment 39 Fedora End Of Life 2015-05-29 11:27:15 UTC
This message is a reminder that Fedora 20 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 20. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '20'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to
fix it before Fedora 20 reached end of life. If you would still like to see
this bug fixed and are able to reproduce it against a later version of Fedora,
you are encouraged to change the 'version' to a later Fedora version before
this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 40 Fedora End Of Life 2015-06-30 00:58:37 UTC
Fedora 20 changed to end-of-life (EOL) status on 2015-06-23. Fedora 20 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora, please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

