Description of problem:
With the current autofs code I occasionally seem to see a leaked file
descriptor which manifests itself as an SELinux problem (see below). I don't
know what triggers it. I use autofs's /net mount point. Besides that I have a
removable MMC on that machine, but that's handled by HAL, I think.

Summary:
SELinux is preventing mount (mount_t) "read write" to socket (automount_t).

Detailed Description:
SELinux denied access requested by mount. It is not expected that this access
is required by mount and this access may signal an intrusion attempt. It is
also possible that the specific version or configuration of the application is
causing it to require additional access.

Allowing Access:
You can generate a local policy module to allow this access - see FAQ
(http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385) Or you can disable
SELinux protection altogether. Disabling SELinux protection is not recommended.
Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi)
against this package.

Additional Information:

Source Context                system_u:system_r:mount_t:s0
Target Context                system_u:system_r:automount_t:s0
Target Objects                socket [ udp_socket ]
Source                        mount
Source Path                   /bin/mount
Port                          <Unknown>
Host                          x61.akkadia.org
Source RPM Packages           util-linux-ng-2.13.1-6.fc9
Target RPM Packages
Policy RPM                    selinux-policy-3.3.1-35.fc9
Selinux Enabled               True
Policy Type                   targeted
MLS Enabled                   True
Enforcing Mode                Enforcing
Plugin Name                   catchall
Host Name                     x61.akkadia.org
Platform                      Linux x61.akkadia.org 2.6.25-1.fc9.x86_64 #1 SMP
                              Thu Apr 17 01:11:31 EDT 2008 x86_64 x86_64
Alert Count                   2
First Seen                    Fri 18 Apr 2008 09:22:34 PM PDT
Last Seen                     Fri 18 Apr 2008 09:22:34 PM PDT
Local ID                      5465b1ed-8782-4c68-87d3-28e6baa23a6a
Line Numbers

Raw Audit Messages
host=x61.akkadia.org type=AVC msg=audit(1208578954.848:63): avc: denied
{ read write } for pid=27338 comm="mount" path="socket:[272869]" dev=sockfs
ino=272869 scontext=system_u:system_r:mount_t:s0
tcontext=system_u:system_r:automount_t:s0 tclass=udp_socket

host=x61.akkadia.org type=SYSCALL msg=audit(1208578954.848:63): arch=c000003e
syscall=59 success=yes exit=0 a0=4198b810 a1=4198b720 a2=7fa08df17330
a3=4198aaa0 items=0 ppid=2145 pid=27338 auid=4294967295 uid=0 gid=0 euid=0
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mount"
exe="/bin/mount" subj=system_u:system_r:mount_t:s0 key=(null)

Version-Release number of selected component (if applicable):
autofs-5.0.3-13

How reproducible:
don't know how

Steps to Reproduce:
1. well...
2.
3.

Actual results:
above message

Expected results:
no leaked descriptor, no message

Additional info:
(In reply to comment #0)
> Description of problem:
> With the current autofs code I occasionally seem to see a leaked file
> descriptor which manifests itself as an SELinux problem (see below). I don't
> know what triggers it. I use autofs's /net mount point. Besides that I have
> a removable MMC on that machine, but that's handled by HAL, I think.

We know about this.

I believe the problem is that a descriptor can sometimes be open but not yet
have close-on-exec set when we fork a mount(8) or umount(8). The approach of
sequentially closing a bunch of descriptors on every fork isn't a good idea
because the highest numbered descriptor could be quite high.

I'm aware that it's possible to set close-on-exec atomically at open with
recent kernels (and I guess glibc supports this) but I can't be sure that is
available on all kernels and glibc versions that autofs may be used with.

Have a look at RHEL-5 bug 233481, where the problem has been discussed and a
couple of patches posted. The patch in that bug, which uses a mutex around
open and set close-on-exec, was present in Rawhide for a while but attracted
some criticism regarding performance and concern over not using atfork
handlers. Personally, I don't think I need to use these handlers, provided
I'm careful, since we always do an exec following a fork.

Do you think using atfork handlers with a mutex would give better performance?
I'd appreciate your opinion on this.

Ian
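For reference, a minimal sketch of the mutex approach being discussed here
(this is not the actual patch from bug 233481; the helper names
open_cloexec() and fork_for_exec() are illustrative only). Holding the same
mutex across open()+fcntl() and across the fork that precedes the exec closes
the window in which a child could inherit a descriptor without close-on-exec:

/* Sketch only: serialize open()+FD_CLOEXEC against the fork that
 * precedes the exec of mount(8)/umount(8). */
#include <pthread.h>
#include <fcntl.h>
#include <unistd.h>

static pthread_mutex_t fd_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Open a descriptor and mark it close-on-exec while holding the mutex,
 * so a concurrent fork cannot land between open() and fcntl(). */
static int open_cloexec(const char *path, int flags)
{
        int fd;

        pthread_mutex_lock(&fd_mutex);
        fd = open(path, flags);
        if (fd != -1)
                fcntl(fd, F_SETFD, FD_CLOEXEC);
        pthread_mutex_unlock(&fd_mutex);

        return fd;
}

/* Take the same mutex around the fork; the child is expected to exec
 * mount(8)/umount(8) immediately afterwards, so every descriptor it
 * inherited already has close-on-exec set. */
static pid_t fork_for_exec(void)
{
        pid_t pid;

        pthread_mutex_lock(&fd_mutex);
        pid = fork();
        pthread_mutex_unlock(&fd_mutex);

        return pid;
}

The trade-off mentioned above is that every open in the daemon now contends
on this mutex, which is where the performance criticism comes from.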
(In reply to comment #1)
> I'm aware that it's possible to set close-on-exec atomically at open with
> recent kernels (and I guess glibc supports this) but I can't be sure that is
> available on all kernels and glibc versions that autofs may be used with.

What you should do is use the flag anyway. Older kernels will ignore it. Then
test afterwards whether it works.

In glibc I have a global variable which indicates whether the flag is honored.
I set it after the first use of O_CLOEXEC. This allows the fcntl() calls to be
skipped for later open calls.
/* 0 = unknown, 1 = kernel honors O_CLOEXEC, -1 = it does not */
int have_cloexec;

{
  ...
  fd = open(..., ...|O_CLOEXEC|...);
  if (have_cloexec <= 0) {
    int fl = fcntl(fd, F_GETFD);
    have_cloexec = (fl & FD_CLOEXEC) != 0 ? 1 : -1;
    if (have_cloexec < 0)
      fcntl(fd, F_SETFD, fl|FD_CLOEXEC);
  }
  ...
}
(In reply to comment #2)
> (In reply to comment #1)
> > I'm aware that it's possible to set close-on-exec atomically at open with
> > recent kernels (and I guess glibc supports this) but I can't be sure that
> > is available on all kernels and glibc versions that autofs may be used
> > with.
>
> What you should do is use the flag anyway. Older kernels will ignore it.
> Then test afterwards whether it works.

Right, that's a good start. I'll do that.

Ian
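A small addition to the sketch in comment #2, assuming the code should also
build against glibc headers that predate the flag (this is not from the
actual autofs change):

/* Sketch only: if the headers do not define O_CLOEXEC, make it a no-op
 * so the same code compiles everywhere; the have_cloexec runtime check
 * above then simply falls back to fcntl() on such systems. */
#ifndef O_CLOEXEC
#define O_CLOEXEC 0
#endif

With this guard the open()+have_cloexec pattern covers old headers, old
kernels, and new kernels with the same code path.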
Changing version to '9' as part of upcoming Fedora 9 GA. More information and
the reason for this action are here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
See http://udrepper.livejournal.com/20407.html for more information on the new
functionality. The calls are now in the rawhide kernel and glibc.

To create a socket, for instance, use this:

#ifdef SOCK_CLOEXEC
  fd = socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, 0);
  if (fd == -1 && errno == EINVAL)
#endif
    {
      fd = socket(AF_INET, SOCK_STREAM, 0);
      fcntl(fd, F_SETFD, FD_CLOEXEC);
    }
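A sketch only of how the same pattern might apply to pipe descriptors (an
assumption on my part that autofs has pipes to protect as well; the helper
name pipe_cloexec() is illustrative, and it assumes a glibc new enough to
declare pipe2(), as in rawhide). On kernels without the new syscall the call
fails, so fall back to pipe() plus fcntl():

#define _GNU_SOURCE       /* for pipe2() in recent glibc */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static int pipe_cloexec(int pipefd[2])
{
        if (pipe2(pipefd, O_CLOEXEC) == 0)
                return 0;
        if (errno != ENOSYS && errno != EINVAL)
                return -1;
        /* older kernel: create the pipe and set close-on-exec by hand */
        if (pipe(pipefd) == -1)
                return -1;
        fcntl(pipefd[0], F_SETFD, FD_CLOEXEC);
        fcntl(pipefd[1], F_SETFD, FD_CLOEXEC);
        return 0;
}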
*** This bug has been marked as a duplicate of bug 390591 ***