Description of problem:
When starting fail2ban I got failures of this kind:

type=AVC msg=audit(1205593665.838:287): avc: denied { connectto } for pid=10365 comm="fail2ban-server" path=002F746D702F66616D2D726F6F742D000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 scontext=unconfined_u:system_r:fail2ban_t:s0 tcontext=unconfined_u:system_r:fail2ban_t:s0 tclass=unix_stream_socket
type=SYSCALL msg=audit(1205593665.838:287): arch=c000003e syscall=42 success=no exit=-13 a0=6 a1=413febc0 a2=6e a3=0 items=0 ppid=1 pid=10365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) comm="fail2ban-server" exe="/usr/bin/python" subj=unconfined_u:system_r:fail2ban_t:s0 key=(null)

I'm not quite sure what the strange value of "path" means, but it apparently blocked fail2ban from working properly. None of the iptables chains I expected were created, and on a status request it replied that it had 0 jails.

Version-Release number of selected component (if applicable):
I've taken selinux-policy from updates-testing:
selinux-policy-3.0.8-93.fc8
selinux-policy-targeted-3.0.8-93.fc8
fail2ban-0.8.1-11.fc8

How reproducible:
Every time.

Additional info:
I used audit2allow to create a module "fail2banextra" with only this additional allow rule in it:

allow fail2ban_t self:unix_stream_socket connectto;

After adding this to the policy with semodule and restarting fail2ban, it seems to work correctly. (It hasn't actually blocked anything yet, but everything looks right at least.)
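For reference, a one-rule local module like the "fail2banextra" described above would look like this as a .te source file. This is a sketch of what audit2allow typically generates for a single rule, not the reporter's exact file:

```
module fail2banextra 1.0;

require {
        type fail2ban_t;
        class unix_stream_socket connectto;
}

#============= fail2ban_t ==============
allow fail2ban_t self:unix_stream_socket connectto;
```

It can be compiled and packaged with checkmodule/semodule_package, or produced directly with audit2allow -M as shown in the next comment.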
You can allow this for now by executing:

# audit2allow -M mypol -i /var/log/audit/audit.log
# semodule -i mypol.pp

Fixed in selinux-policy-3.0.8-94.fc8
Dan, this doesn't actually seem to be fixed:

Summary:
SELinux is preventing fail2ban-server (fail2ban_t) "connectto" to 002F746D702F66616D2D726F6F742D000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 (rpm_t).

Detailed Description:
SELinux denied access requested by fail2ban-server. It is not expected that this access is required by fail2ban-server and this access may signal an intrusion attempt. It is also possible that the specific version or configuration of the application is causing it to require additional access.

Allowing Access:
You can generate a local policy module to allow this access - see FAQ (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385) Or you can disable SELinux protection altogether. Disabling SELinux protection is not recommended. Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this package.
Additional Information:

Source Context        unconfined_u:system_r:fail2ban_t:s0
Target Context        system_u:system_r:rpm_t:s0
Target Objects        002F746D702F66616D2D726F6F742D000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 [ unix_stream_socket ]
Source                fail2ban-server
Source Path           /usr/bin/python
Port                  <Unknown>
Host                  withnail.phys.ucl.ac.uk
Source RPM Packages   python-2.5.1-15.fc8
Target RPM Packages
Policy RPM            selinux-policy-3.0.8-95.fc8
Selinux Enabled       True
Policy Type           targeted
MLS Enabled           True
Enforcing Mode        Enforcing
Plugin Name           catchall
Host Name             withnail.phys.ucl.ac.uk
Platform              Linux withnail.phys.ucl.ac.uk 2.6.24.3-50.fc8 #1 SMP Thu Mar 20 13:39:08 EDT 2008 x86_64 x86_64
Alert Count           26
First Seen            Thu 27 Mar 2008 12:02:00 GMT
Last Seen             Thu 27 Mar 2008 12:02:01 GMT
Local ID              31212865-5b89-4e07-967b-0863dc7decd6
Line Numbers

Raw Audit Messages:

host=withnail.phys.ucl.ac.uk type=AVC msg=audit(1206619321.672:55): avc: denied { connectto } for pid=3499 comm="fail2ban-server" path=002F746D702F66616D2D726F6F742D000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 scontext=unconfined_u:system_r:fail2ban_t:s0 tcontext=system_u:system_r:rpm_t:s0 tclass=unix_stream_socket
host=withnail.phys.ucl.ac.uk type=SYSCALL msg=audit(1206619321.672:55): arch=c000003e syscall=42 success=no exit=-13 a0=6 a1=7fff6d1a7790 a2=6e a3=0 items=0 ppid=1 pid=3499 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) comm="fail2ban-server" exe="/usr/bin/python" subj=unconfined_u:system_r:fail2ban_t:s0 key=(null)

# rpm -qa | grep selinux
libselinux-devel-2.0.43-1.fc8
libselinux-2.0.43-1.fc8
selinux-policy-targeted-3.0.8-95.fc8
libselinux-python-2.0.43-1.fc8
libselinux-2.0.43-1.fc8
selinux-policy-devel-3.0.8-95.fc8
selinux-policy-3.0.8-95.fc8

# rpm -qa | grep fail2ban
fail2ban-0.8.2-13.fc8
It's not quite the same as mine. I had fail2ban_t connectto fail2ban_t. You have fail2ban_t connectto rpm_t.

I haven't had time to update to more than selinux-policy-3.0.8-93.fc8 yet, but I have made a module which just adds

allow fail2ban_t self:unix_stream_socket connectto;

That would not allow connection to an rpm_t socket, I believe, but I still don't see your avc.

The path looks the same (/tmp/fam-root- if I decode it correctly; why are socket paths displayed so strangely?). Any idea why yours would have type rpm_t? Have you activated any non-standard jails that could explain it?
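Incidentally, the path looks strange because the kernel hex-encodes sun_path when it contains non-printable bytes, and here the name starts with a NUL byte (an abstract Unix socket). A short Python snippet (mine, just for illustration) decodes it:

```python
# The AVC "path" field is the whole sun_path buffer, hex-encoded
# because it starts with a NUL byte (abstract socket namespace);
# the trailing zeros are just the unused rest of the buffer.
hexpath = "002F746D702F66616D2D726F6F742D" + "00" * 93  # trailing NULs abbreviated
raw = bytes.fromhex(hexpath)
print(repr(raw.rstrip(b"\x00")))  # b'\x00/tmp/fam-root-'
```

The leading NUL survives the rstrip, showing that this is an abstract socket named "/tmp/fam-root-".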
(In reply to comment #3)
> It's not quite the same as mine. I had fail2ban_t connectto fail2ban_t. You
> have fail2ban_t connectto rpm_t.
>
> I haven't had time to update to more than selinux-policy-3.0.8-93.fc8 yet. But
> I have made a module which just adds
>
> allow fail2ban_t self:unix_stream_socket connectto;
>
> That would not allow connection to an rpm_t socket I believe, but I still don't
> see your avc.

That's strange. I don't understand why mine is rpm_t actually.

> The path looks the same. (/tmp/fam-root- if I decode it correctly. Why are
> socket paths displayed so strange?) Any idea why your would have type rpm_t?

Nope.

> Have you activated any non-standard jails that could explain it?

I have a slightly customised shorewall/ssh jail rather than the default iptables/ssh jail, but I don't think it should cause anything to change to rpm_t. I need to do some more reading on selinux to fathom this.
The fail2ban_t self:unix_stream_socket connectto is fixed in selinux-policy-3.0.8-96.fc8.

I have no idea why you would get rpm_t for this. rpm_script_t might make more sense, or it is connecting to a service that is running as rpm_t (yum-updatesd or packagekitd)?
(In reply to comment #5)
> I have no idea why you would get rpm_t for this? rpm_script_t might make more
> sense or it is connecting to a service that is running as rpm_t (yumupdated or
> packagekitd)?

No, it's baffling. I did just reproduce the exact same thing on a completely different F8 machine, though. Is there any way to glean more information about what operation is actually triggering the avc denial?
The string /tmp/fam-, which seems to be part of that strange path, is a string in my gam_server binary and in the libgamin-1.so library. I have a gam_server running that was started at the same time my fail2ban-server was started, and it is running in the fail2ban_t domain.

Jonathan, do you also have a gam_server running which was started at the same time as fail2ban-server? And if so, in what domain is it running?
[root@withnail jgu]# /sbin/service fail2ban start
Starting fail2ban:                                         [  OK  ]
[root@withnail jgu]# ps auxZ | grep fail2ban
unconfined_u:system_r:fail2ban_t:s0 root 20174 0.0 0.2 151772 5312 ? S 23:47 0:00 /usr/bin/python /usr/bin/fail2ban-server -b -s /var/run/fail2ban.sock -x
unconfined_u:system_r:unconfined_t:s0-s0:c0.c1023 root 20179 0.0 0.0 82236 748 pts/1 R+ 23:48 0:00 grep fail2ban
[root@withnail jgu]# ps auxZ | grep gam
system_u:system_r:rpm_t:s0 root 2544 0.0 0.0 8984 1132 ? SN Mar27 0:00 /usr/libexec/gam_server
unconfined_u:system_r:unconfined_t:s0-s0:c0.c1023 root 20183 0.0 0.0 82236 740 pts/1 R+ 23:49 0:00 grep gam

So gam_server is running in the rpm_t domain....
Perhaps relevant: http://www.redhat.com/archives/fedora-extras-commits/2008-March/msg02026.html

where I see...

+@@ -353,6 +372,11 @@
+ ')
+
+ optional_policy(`
++ gamin_domtrans(rpm_t)
++ gamin_stream_connect(rpm_t)
++')
++
Another "perhaps relevant":

Your gam_server in comment 8 was not started at the same time as your fail2ban. Was it started together with something else? ("ps -O lstart" is useful to get more precise time stamps.) If you stop your fail2ban server, will the gam_server exit? (Mine does, after a short delay.)

I looked a bit at the documentation that comes with gamin, and I think I have a hypothesis about what is happening here. It states that it opens the socket "\0/tmp/fam-$USER-$GAM_CLIENT_ID". Now, assume there is more than one SELinux-constrained service that uses gamin. None of them sets GAM_CLIENT_ID. (Which makes sense; it is mentioned as a debugging tool in the documentation.) The first one, something that runs in rpm_t, will make the gam_server run in rpm_t and open "\0/tmp/fam-root-". After that, fail2ban starts and also tries to use the same socket. Since it already exists, it doesn't start any new gam_server but tries to connect to the running one. For different applications run by a desktop user that makes sense. For two services in different security domains it does not.

Jonathan, do you have any other process running in rpm_t? Some service that was started at the same time as your gam_server?

If this hypothesis is correct, one way to solve this would be to set GAM_CLIENT_ID in the startup script for any SELinux-constrained service using gamin. Minimalistically, it could be set to the domain of the service, but in practice it's probably just as easy to use the service name.

Alternatively, we could bring it up with the gamin developers: would it be a good idea to change the socket name to contain the domain too?

Thoughts about this?
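To make the collision concrete, here is a small Python model of the documented "/tmp/fam-$USER-$GAM_CLIENT_ID" naming scheme (the function name is mine; the real socket is abstract, i.e. prefixed with a NUL byte):

```python
def gamin_socket_name(user, client_id=""):
    # Mirrors the scheme from the gamin documentation:
    # /tmp/fam-$USER-$GAM_CLIENT_ID.  An unset GAM_CLIENT_ID
    # leaves a bare trailing dash, identical for every service.
    return "/tmp/fam-%s-%s" % (user, client_id)

# yum-updatesd and fail2ban, both running as root and neither setting
# GAM_CLIENT_ID, derive the same name, so the second one connects to
# the first one's gam_server across a domain boundary:
print(gamin_socket_name("root"))              # /tmp/fam-root-

# Setting GAM_CLIENT_ID per service in the init script avoids the clash:
print(gamin_socket_name("root", "fail2ban"))  # /tmp/fam-root-fail2ban
```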
Yes, this looks like gamin was started in the postinstall or by rpm itself (packagekit or yum-updatesd)?

Confined applications should not be allowed to share the output of gam_server. I would guess the application should either start a gam_server which matches its name or security context, or, if it fails to be able to connect, it should start up another one with a different name. We do not want one confined domain to be able to see the files that another domain is seeing through gamin.
(In reply to comment #10)
> Another "perhaps relevant":
>
> Your gam_server in comment 8 is not started at the same time as your fail2ban.
> Was it started together with something else? ("ps -O lstart" is useful to get
> more precise time stamps.)

Aha, yes, you're right. The gam_server I see was started at the same time as yum-updatesd.

> If you stop your fail2ban server, will the
> gam_server exit? (Mine does, after a short delay.)

Nope, it persists, presumably because it's still in use by yum-updatesd.

> I looked a bit at the documentation that comes with gamin, and I think I have an
> hypothesis what is happening here. It states that it opens the socket
> "\0/tmp/fam-$USER-$GAM_CLIENT_ID". Now, assume there are more than one
> SELinux-constrained server that uses gamin. None of them sets GAM_CLIENT_ID.
> (Which makes sense, it is mentioned as a debugging tool in the documentation.)
> The first one, something that runs in rpm_t, will make the gam_server run in
> rpm_t, and open "\0/tmp/fam-root-". After that, fail2ban starts and also tries
> to use the same socket. Since it already exists, it doesn't start any new
> gam_server but tries to connect to the running one. For different applications
> run by a desktop user that makes sense. For two services in different security
> domains it does not.

I think this analysis is spot on.

> Jonathan, do you have any other process running in rpm_t? Some service that was
> started at the same time as your gam_server?

[root@withnail jgu]# ps Zaux | grep rpm_t
system_u:system_r:rpm_t:s0 root 2541 0.0 0.3 250088 7868 ? SN Mar27 0:00 /usr/bin/python -tt /usr/sbin/yum-updatesd
system_u:system_r:rpm_t:s0 root 2544 0.0 0.0 8984 1132 ? SN Mar27 0:00 /usr/libexec/gam_server
unconfined_u:system_r:unconfined_t:s0-s0:c0.c1023 root 23458 0.0 0.0 82236 732 pts/1 R+ 15:27 0:00 grep rpm_t
[root@withnail jgu]# ps -O lstart 2541 2544
  PID                 STARTED S TTY          TIME COMMAND
 2541 Thu Mar 27 12:00:18 2008 S ?        00:00:00 /usr/bin/python -tt /usr/sbin
 2544 Thu Mar 27 12:00:18 2008 S ?        00:00:00 /usr/libexec/gam_server

So, yes, yum-updatesd.

> If this hypothesis is correct, one way to solve this would be to set
> GAM_CLIENT_ID in the startup script for any SELinux-constrained service using
> gamin. Minimalistically, it could be set to the domain of the service, but in
> practice it's probably just as easy to just use the service name.
>
> Alternatively, we could bring it up with the gamin developers, if it would be a
> good idea to change the socket name to contain the domain too?
>
> Thoughts about this?

As Dan says :).
> So, yes, yum-updatesd.

And since I don't run yum-updatesd, I don't see this problem. It all fits together now.
Looking a bit at the documentation for gamin (and fam), it seems that it is designed to have only one gam_server running, with different processes connecting to it through different sockets. The problem then seems to be twofold:

1) gam_server should be running in a domain that allows other processes to connect to it from different domains
2) Each socket that gam_server creates should have a domain which constrains it to only allow the process that initiated that connection/socket to connect.

I have a feeling my understanding may be a bit naive, though. What determines the security of a socket created by gam_server? Would gam_server itself need to be patched in order to create sockets with specific SELinux contexts?
If I understand you correctly, it seems like a very complicated solution. Are you saying that the gam_server should open different sockets with different SELinux types? And then, based on which way a request arrives, decide whether that should be allowed or not according to SELinux?

If so, I disagree. The case of different domains is very analogous to the case of different user IDs. One could imagine having one single gam_server for all users. This server would then have to keep track of the uid of each connecting application, and decide based on that whether the process should be allowed to watch a particular file. But that would mean reimplementing the access controls done in the kernel. Very error-prone and dangerous. So the current behaviour, where a new server is started for each user, is much simpler and safer.

In the same way, it would be a bad idea to try to reimplement the SELinux access controls in the gam_server. Much better to start one for each domain used, and let the kernel do what the kernel does best.
Yes, the problem here is that gam_server is only paying attention to the UID. It should spawn a new instance if it gets a permission denied, and then each confined domain would get its own gam_server. I think the way gam_server works is that it tries to connect to the gam_server for the current UID; if SELinux causes this to fail, it should start up another gam_server.
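That fallback could be sketched in Python. This is a Linux-only illustration using the abstract socket namespace; the function and the spawn step are hypothetical, not gamin's actual code:

```python
import errno
import socket

def connect_or_spawn(name):
    """Try the shared per-UID socket first; if the connect fails
    (EACCES from an SELinux denial, or nothing listening), fall back
    instead of giving up.  Returns the connected socket, or None
    meaning "spawn your own gam_server on a differently named socket"."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect("\0" + name)  # leading NUL: abstract namespace
        return s
    except OSError as e:
        s.close()
        if e.errno in (errno.EACCES, errno.ECONNREFUSED, errno.ENOENT):
            return None  # caller would start a private gam_server here
        raise

print(connect_or_spawn("tmp/fam-nobody-no-such-socket"))  # None
```

The key point is that a connectto denial surfaces as EACCES on the connect() call, so the client can detect it and recover without any policy change.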
OK, thanks both, now I understand. I had sent myself off on a tangent earlier, due to misunderstanding something I'd seen in the fam docs.
gamin was really designed to have multiple processes running under the same UID share the same server for file modifications. The server is started on the fly by the first app running for that user, and they share the same socket:

snprintf(path, MAXPATHLEN, "/tmp/fam-%s-%s", user, fam_client_id);

fam_client_id can be a user-specified value defined in the GAM_CLIENT_ID environment variable. I have no idea what it would take to expand that to include some SELinux context, and more importantly, what would break as a result. Maybe that would just work, but if applications on the desktop start to have separate contexts (firefox, nautilus, yada, yada) this will soon become extremely painful from a desktop perspective.

Daniel
In reply to comment 18: Of course I know nothing about the original design ideas; I wasn't involved. Looking at gamin as a user today, I see a scheme where one server is started for each access privilege type in use. In a traditional Unix setting, that means one server per user. In a security-enhanced setting, it would mean one server per user/domain combination. It will mean a few more gamin servers, sure, but not really THAT many.

The implementation looks pretty simple. Do a getcon() before the snprintf(), and include a third field with the SELinux type from the getcon() call in the snprintf() format.

Is there any reasonable alternative? The policy could allow all process domains to connect to the same gamin server. But then we have a problem. The fail2ban_t domain process should not be allowed to know about file A. The rpm_t domain process should be. How would you make the gamin server, talking to both, ensure that? I'm not saying it is impossible. But it seems much more complicated than running one server per process domain.
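The getcon() idea can be sketched in Python. In gamin itself this would be a libselinux getcon() call before the snprintf(); to keep the sketch runnable without SELinux, the context is passed in as a string, and the function name and naming scheme are mine, not gamin code:

```python
def socket_name_with_domain(user, context, client_id=""):
    """Hypothetical variant of gamin's socket naming that folds the
    SELinux type into the path, so each user/domain pair gets its
    own gam_server.  The type is the third colon-separated field of
    a context, e.g. "unconfined_u:system_r:fail2ban_t:s0"."""
    domain = context.split(":")[2]
    return "/tmp/fam-%s-%s-%s" % (user, domain, client_id)

print(socket_name_with_domain("root", "unconfined_u:system_r:fail2ban_t:s0"))
# /tmp/fam-root-fail2ban_t-
print(socket_name_with_domain("root", "system_u:system_r:rpm_t:s0"))
# /tmp/fam-root-rpm_t-
```

With this scheme, the fail2ban and yum-updatesd instances in this bug would derive different names and never try to share a server, and the kernel policy keeps doing the enforcement.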
Reported in upstream fail2ban bug tracker:
https://sourceforge.net/tracker/index.php?func=detail&aid=1971871&group_id=121032&atid=689044

My general feeling, having followed these discussions, is that gamin is the wrong thing for fail2ban to be using, and that moving fail2ban to use python-inotify is probably the best solution.
If you have direct inotify access and don't need FAM compatibility/portability, then yes, I would suggest not using gamin.

Daniel
This message is a reminder that Fedora 8 is nearing its end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 8. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '8'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 8's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 8 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
I think this should be closed WONTFIX, following these discussions.