Description of problem:
Upgrading from CentOS 6.5 to 6.6 breaks the osad service.

Version-Release number of selected component (if applicable):
spacewalk-client-repo-2.1-2.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Upgrade to CentOS 6.6
2. Restart the osad service - it reports "Error connecting to jabber server: Unable to connect to the host and port specified"
3. OSA status shows as 'Offline' in the Spacewalk GUI.

Actual results:
The audit log shows errors as follows:

type=AVC msg=audit(1415305418.751:811): avc: denied { name_connect } for pid=21388 comm="osad" dest=5222 scontext=unconfined_u:system_r:osad_t:s0 tcontext=system_u:object_r:jabber_client_port_t:s0 tclass=tcp_socket

The messages log shows the following:

setroubleshoot: SELinux is preventing /usr/bin/python from name_connect access on the tcp_socket. For complete SELinux messages, run sealert -l ba494312-3e90-4a85-b159-31be6212a5af

Expected results:
osad should be functional so jobs can be scheduled and completed successfully from Spacewalk.

Additional info:
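For anyone less used to reading AVC records, the denial above is easier to digest field by field. This is just an illustrative sketch (not from the report itself): the avc variable holds the denial line verbatim, and the field list is my own choice of the interesting keys.

```shell
# The AVC denial line from the audit log, verbatim.
avc='type=AVC msg=audit(1415305418.751:811): avc: denied { name_connect } for pid=21388 comm="osad" dest=5222 scontext=unconfined_u:system_r:osad_t:s0 tcontext=system_u:object_r:jabber_client_port_t:s0 tclass=tcp_socket'

# Pull out the command, destination port, source/target contexts and class:
# osad (running as osad_t) was denied name_connect to port 5222, which is
# labelled jabber_client_port_t.
for field in comm dest scontext tcontext tclass; do
    echo "$avc" | grep -o "${field}=[^ ]*"
done
```

Reading it this way makes the cause clear: the osad_t domain lacks an allow rule for connecting to jabber_client_port_t TCP ports, which is exactly what the policy update later in this thread adds.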
I've experienced the same issue. Additionally, the osad_t type seems to be missing, so we can't compile a custom module:

root# cat osad.te
module osad 1.0;

require {
        type osad_t;
        type jabber_client_port_t;
        class tcp_socket name_connect;
}

#============= osad_t ==============
allow osad_t jabber_client_port_t:tcp_socket name_connect;

root# semodule -i osad.pp
libsepol.print_missing_requirements: osad's global requirements were not met: type/attribute osad_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
semodule: Failed!

Until this is fixed, I've changed the context on /usr/sbin/osad:

root# chcon -t unconfined_exec_t /usr/sbin/osad
I'm also affected by this, as are others on the Spacewalk mailing list. Another suggested workaround is as follows:

semanage permissive -a osad_t
Is this a problem with Spacewalk 2.2 clients as well? Generally it's not recommended to use 2.2 clients against a 2.1 Spacewalk, but the clients are generally backwards-compatible, so everything /should/ work fine. If the 2.2 clients work correctly then I would say this bug should be closed as CurrentRelease; generally only security-related fixes are backported to old versions of Spacewalk.
And as an additional note: even without osad working on the clients, you should still be able to schedule actions and have them picked up and completed successfully. The only difference is that instead of happening instantly, as with osad, you have to wait until rhnsd wakes up and runs rhn_check. The default is every 4 hours (configurable in /etc/sysconfig/rhn/rhnsd).
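As a sketch of that rhnsd tuning (the file contents below are a minimal stand-in for /etc/sysconfig/rhn/rhnsd; the real file carries more comments, and the example works on a temp copy so it is safe to run anywhere):

```shell
# Stand-in for /etc/sysconfig/rhn/rhnsd, using a temp file for safety.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# how often rhnsd wakes up and runs rhn_check (in minutes)
INTERVAL=240
EOF

# Drop the polling interval from the 4-hour default to 60 minutes, so
# scheduled actions are picked up faster while osad is broken.
sed -i 's/^INTERVAL=.*/INTERVAL=60/' "$conf"
grep '^INTERVAL' "$conf"

# On a real client you would edit /etc/sysconfig/rhn/rhnsd itself and then
# restart the daemon: service rhnsd restart
```

Note that rhnsd enforces a minimum interval, so don't set the value too low.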
Hi,

Yes, CentOS 6.6 with Spacewalk 2.2 still exhibits the bug.

# rpm -qa -last | grep osad
osad-5.11.43-1.el6.noarch                     Tue 02 Sep 2014 11:10:56 AM BST

# rpm -qa -last | grep spacew*
spacewalk-client-repo-2.2-1.el6.noarch        Tue 02 Sep 2014 11:07:47 AM BST

# rpm -qa -last | grep rhn*
rhn-check-2.2.7-1.el6.noarch                  Tue 02 Sep 2014 11:10:56 AM BST
yum-rhn-plugin-2.2.7-1.el6.noarch             Tue 02 Sep 2014 11:10:55 AM BST
rhn-setup-2.2.7-1.el6.noarch                  Tue 02 Sep 2014 11:10:55 AM BST
rhnsd-5.0.14-1.el6.x86_64                     Tue 02 Sep 2014 11:10:54 AM BST
rhn-client-tools-2.2.7-1.el6.noarch           Tue 02 Sep 2014 11:10:52 AM BST
rhnlib-2.5.72-1.el6.noarch                    Tue 02 Sep 2014 11:10:45 AM BST

# service osad restart
Shutting down osad:                                        [  OK  ]
Starting osad: Error connecting to jabber server: Unable to connect to the host and port specified
2014-11-21 14:41:51 jabber_lib.main: Unable to connect to jabber servers, sleeping 94 seconds
                                                           [  OK  ]

type=1400 audit(1416581098.262:584): avc: denied { name_connect } for pid=17560 comm="osad" dest=5222 scontext=unconfined_u:system_r:osad_t:s0 tcontext=system_u:object_r:jabber_client_port_t:s0 tclass=tcp_socket

# semanage permissive -a osad_t
# service osad restart
Shutting down osad:                                        [  OK  ]
Starting osad:                                             [  OK  ]
Same result here as well with the 2.2 client:

audit: type=AVC msg=audit(1416588874.705:20084): avc: denied { name_connect } for pid=13031 comm="osad" dest=5222 scontext=unconfined_u:system_r:osad_t:s0 tcontext=system_u:object_r:jabber_client_port_t:s0 tclass=tcp_socket
Not sure if this will be helpful at all. Some observations of my current SELinux contexts:

CentOS 5.11:
ls -Z /usr/sbin/osad
-rwxr-xr-x  root root system_u:object_r:sbin_t:s0      /usr/sbin/osad

CentOS 6.6:
ls -Z /usr/sbin/osad
-rwxr-xr-x. root root system_u:object_r:osad_exec_t:s0 /usr/sbin/osad

The guys at nginx also appear to have a similar issue, where they suggest relabeling has occurred during yum update: http://forum.nginx.org/read.php?2,254456,254511#msg-254511
The proper way to fix this issue is for an updated selinux-policy rpm to be released into CentOS 6. I believe work on that is underway, but it's not something the Spacewalk team has direct control over, so we may have to wait a bit. For now the recommendation is to run:

# semanage permissive -a osad_t
We've been fighting this as well, and have been constantly pushing out an ever-changing policy to get around the various selinux violations. I'm a bit stumped why more people aren't hitting this. It seems that anyone running osad would have problems?

In our case, I've seen hundreds of osad processes running simultaneously. Presumably one fires up, hits a violation, and sits there. Something fires up a duplicate process. Unfortunately, the out-of-memory detector eventually kicks in, but since each osad process is extremely low memory, it decides to kill something else instead. MySQL is a common target. Kinda nasty.

Can anyone confirm whether an updated policy is in the works from RedHat?
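A quick way to check for the runaway-process symptom described above. This is a generic sketch, not something from the thread; the threshold of one process is my own assumption about how many osad instances a healthy client should run.

```shell
# Count running osad processes. A healthy client should have exactly one;
# a growing count suggests osad is respawning behind a blocked connection.
count=$(ps -C osad --no-headers 2>/dev/null | wc -l)
echo "osad processes: $count"

if [ "$count" -gt 1 ]; then
    echo "WARNING: duplicate osad processes detected"
fi
```

Running this from cron and alerting on the warning would catch the pile-up before the OOM killer starts picking victims.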
(In reply to Norman Elton from comment #9) > We've been fighting this as well, and have been constantly pushing out an > ever-changing policy to get around the various selinux violations. I'm a > bit stumped why more people aren't hitting this. It seems that anyone > running osad would have problems? > > In our case, I've seen hundreds of osad processes running simultaneously. > Presumably one fires up, hits a violation, and sits there. Something fires > up a duplicate process. Unfortunately, the out-of-memory detector eventually > kicks in, but since each osad process is extremely low memory, it decides to > kill something else instead. MySQL is a common target. Kinda nasty. > > Can anyone confirm whether an updated policy is in the works from RedHat? One is in the works but I have no control or comment on when it will be made available. If this issue is important to you then you need to contact Red Hat support to complain about it, commenting in a Spacewalk bug won't cause them to move any faster.
It appears the fix for this just hit the repos.
Agreed. Fixed in selinux-policy-3.7.19-260. Closing CurrentRelease.