security_port_sid could certainly be optimized, although I assume that if secmark were enabled, it would vanish from the profile.

****

Current refpolicy has 255 entries in the port contexts list, and it is presently just a flat list ordered (by hand) from most specific to least, taking the first match with the same protocol and a port range that contains the port being looked up, so the common case likely walks the entire list on each lookup. No surprise it is slow. We don't need to change the policy representation; we can just have the kernel load it into a different in-memory representation for fast lookup.

****************************

Hmm, the test was done with:

CONFIG_NETWORK_SECMARK=y
CONFIG_SECURITY_SELINUX_ENABLE_SECMARK_DEFAULT=y

and /selinux/compat_net = 0. This is curious. Although security_port_sid would still be called on bind(2) and connect(2) - but for TCP only in the latter case.

> Even with this setting, you'll be hitting security_port_sid() via
> connect(2) and bind(2). We need to fix it.

Yes, this was supposed to be addressed long ago by {*not Red Hat*} (a port cache and a node cache), although I never had much confidence in them. It seems simpler to optimize security_port_sid directly rather than add a caching layer: just replace the flat list with a tree or similar structure, which only needs to handle port ranges correctly and match more specific entries before less specific ones.

-------- Forwarded Message --------
From: Sami Farin <safari-kernel.fi>
To: linux-kernel Mailing List <linux-kernel.org>
Subject: oprofile / selinux / security_port_sid
Date: Tue, 27 Mar 2007 13:06:53 +0300

Is there room for improvement in security_port_sid()?
A little test with DNS queries: dnsfilter (the client) on the local host using poll(), and dnscache (the server) using epoll(), at max 4000 concurrent queries. (Stats for vmlinux only.)

CPU: P4 / Xeon, speed 2797.32 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 45000
Counted FSB_DATA_ACTIVITY events (DRDY or DBSY events on the front side bus) with a unit mask of 0x03 (multiple flags) count 45000
Counted BRANCH_RETIRED events (retired branches) with a unit mask of 0x05 (multiple flags) count 45000
Counted BRANCH_RETIRED events (retired branches) with a unit mask of 0x0a (multiple flags) count 45000

samples  %        samples  %        samples  %        samples  %        symbol name
220663   10.2181    6704   17.9737    5735    7.5171      27    1.1989  datagram_poll
140086    6.4869    3239    8.6839    3786    4.9624      24    1.0657  sock_poll
119636    5.5399    2172    5.8232    7168    9.3954      24    1.0657  do_poll
101512    4.7006    3987   10.6893     812    1.0643      14    0.6217  udp_get_port
 71008    3.2881    1017    2.7266    2694    3.5311     397   17.6288  security_port_sid
 64350    2.9798     144    0.3861    1912    2.5061       6    0.2664  add_wait_queue
 60815    2.8161     187    0.5014    3246    4.2546       2    0.0888  remove_wait_queue
 47456    2.1975    1823    4.8875     476    0.6239      31    1.3766  udp_v4_lookup_longway

If dnsfilter had used epoll, security_port_sid would probably (?) have been the number one (or two or three) CPU user in the kernel. Also note that 17.6% of mispredicted branches occur in security_port_sid.
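The flat-list lookup described above amounts to a first-match linear scan on every call, which explains both the sample count and the branch mispredictions. A minimal user-space sketch of those semantics (the struct and field names here are illustrative; the kernel's actual entries live on the policydb's port-context ocontext list):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for a portcon entry (not the kernel's
 * real struct ocontext layout). */
struct portcon {
    unsigned char protocol;   /* e.g. 6 = TCP, 17 = UDP */
    unsigned short low, high; /* inclusive range; low == high for one port */
    unsigned int sid;         /* SID returned on a match */
    struct portcon *next;
};

/* The first entry whose protocol matches and whose range contains
 * the port wins, so the list must stay hand-ordered most-specific
 * first -- and a port matching only a late entry (or none at all,
 * the common case for ephemeral ports) walks all N entries: O(N). */
static unsigned int port_sid(const struct portcon *list,
                             unsigned char protocol,
                             unsigned short port,
                             unsigned int unlabeled_sid)
{
    const struct portcon *c;

    for (c = list; c; c = c->next)
        if (c->protocol == protocol && c->low <= port && port <= c->high)
            return c->sid;
    return unlabeled_sid;
}
```

With ~255 entries and a lookup on every bind(2)/connect(2), the data-dependent branch in the loop body is also a plausible source of the 17.6% misprediction share seen in the profile.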
Of the (now 261) portcon entries in refpolicy, only two currently use a port range rather than a single port, and those are the fallback definitions that match any otherwise unspecified reserved port and map it to reserved_port_t. So we could put all of the single-port (i.e. low == high) entries into a simple hash table and look there first, then fall back to walking the list of ranged entries.
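That split could be sketched as follows (a hedged user-space mock-up, not kernel code: the type names, bucket count, and hash function are all made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

#define PORT_HASH_BUCKETS 256 /* illustrative size */

struct port_entry {
    unsigned char protocol;
    unsigned short low, high;
    unsigned int sid;
    struct port_entry *next;
};

struct port_table {
    struct port_entry *hash[PORT_HASH_BUCKETS]; /* low == high entries */
    struct port_entry *ranges;                  /* the few ranged entries */
};

static unsigned int port_hash(unsigned short port)
{
    return port % PORT_HASH_BUCKETS; /* placeholder hash */
}

/* Single-port entries go into the hash; ranged entries go on a short
 * fallback list (in refpolicy today, just the two reserved_port_t
 * catch-alls, so its order barely matters). */
static void port_insert(struct port_table *t, struct port_entry *e)
{
    struct port_entry **head = (e->low == e->high)
        ? &t->hash[port_hash(e->low)]
        : &t->ranges;

    e->next = *head;
    *head = e;
}

static unsigned int port_table_sid(const struct port_table *t,
                                   unsigned char protocol,
                                   unsigned short port,
                                   unsigned int unlabeled_sid)
{
    const struct port_entry *e;

    /* Exact-port entries first: O(1) expected. */
    for (e = t->hash[port_hash(port)]; e; e = e->next)
        if (e->protocol == protocol && e->low == port)
            return e->sid;

    /* Then the short list of ranged entries. */
    for (e = t->ranges; e; e = e->next)
        if (e->protocol == protocol && e->low <= port && port <= e->high)
            return e->sid;

    return unlabeled_sid;
}
```

Checking exact entries before ranged ones also preserves the existing most-specific-first semantics without any hand ordering, since a single port is by definition more specific than a containing range.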