From audit.log:
type=AVC msg=audit(1403735226.090:476): avc: denied { name_connect } for pid=6033 comm="glance-api" dest=6800 scontext=system_u:system_r:glance_api_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
The Ceph client uses TCP to talk to the Ceph daemons, which typically use ports 6789-6900.
Cinder is not affected by this, I think because it is not listed in any of the SELinux configuration in /etc/selinux/targeted. Nova does have SELinux policies applied to it, so it may be affected as well.
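To see why the connection lands on unreserved_port_t, the current port labeling can be inspected before changing anything; a minimal check, assuming the policycoreutils tooling is installed:

```shell
# List the SELinux port labels and look for the Ceph ports.
# If nothing matches, connections to 6800 fall back to
# unreserved_port_t, exactly as the AVC above shows.
semanage port -l | grep -E '6789|6800'
```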
(In reply to Josh Durgin from comment #1)
> From audit.log:
>
> type=AVC msg=audit(1403735226.090:476): avc: denied { name_connect } for
> pid=6033 comm="glance-api" dest=6800
> scontext=system_u:system_r:glance_api_t:s0
> tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
>
> The ceph client uses tcp to talk to ceph daemons, which typically use ports
> 6789-6900.
>
> Cinder is not affected by this, I think because it is not listed in any of
> the selinux configuration in /etc/selinux/targeted. Nova does have selinux
> policies applied to it, so it may be affected as well.
So we want to define
6789-6900
as glance_port_t?
If you execute
# semanage port -a -t glance_port_t -p tcp 6789-6900
does it work then?
(In reply to Miroslav Grepl from comment #3)
> If you execute
>
> # semanage port -a -t glance_port_t -p tcp 6789-6900
>
> does it work then?
No, it's still denied:
type=AVC msg=audit(1403805393.007:9315): avc: denied { name_connect } for pid=15716 comm="glance-api" dest=6800 scontext=system_u:system_r:glance_api_t:s0 tcontext=system_u:object_r:glance_port_t:s0 tclass=tcp_socket
Looking into it more, my earlier estimate was based on old behavior. These days ceph-osd uses the next 5 available ports starting from 6800. There is usually one ceph-osd daemon per disk, so with a typical deployment using < 30 disks per node, plus some headroom for ports still in use after daemon restarts, a good range would be 6800-7000, with 6789 for ceph-mon. Production setups will only have one ceph-mon per node.
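Since the denial above persists even with the port labeled glance_port_t, the loaded policy apparently lacks an allow rule for that type; until the base policy ships one, a common workaround is to generate a local module from the recorded denials. A sketch, assuming the AVC records above are in /var/log/audit/audit.log (the module name glance_ceph is arbitrary):

```shell
# Translate the recorded AVC denials for glance-api into allow
# rules, package them as a local policy module, and load it.
grep glance-api /var/log/audit/audit.log | audit2allow -M glance_ceph
semodule -i glance_ceph.pp
```

This only unblocks the denials already seen in the log; it is a local workaround, not the fix eventually shipped in the erratum.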
#============= glance_api_t ==============
#!!!! This avc can be allowed using the boolean 'glance_use_execmem'
allow glance_api_t self:process execmem;
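The audit2allow output above names a boolean covering this execmem denial; a sketch of enabling it, where -P writes the value to the policy store so it survives reboots:

```shell
# Allow glance_api_t to create executable memory mappings,
# as suggested by the AVC; -P makes the change persistent.
setsebool -P glance_use_execmem on
```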
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0458.html
Comment 21, Red Hat Bugzilla, 2023-09-14 02:10:39 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.