Description of problem: I have seen the following bug in several updates of selinux:

1. The selinux update installs. For some unclear reason it decides to install the file_contexts and policy.18 files as .rpmnew versions.
2. It then loads the wrong policy.
3. It then runs around correcting file attributes using the wrong policy.
4. Given how opaque the documentation is, it isn't remotely obvious how to recover after I rename the files.

Suggested behavior:

1. Handle this by clobbering the files and moving the old ones to .rpmold.
2. Provide some straightforward mechanism by which file labels can be restored.
3. Document this mechanism someplace really obvious, like man selinux_recovery, and include a pointer to this man page in the selinux man page.
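For reference, the manual recovery described here amounts to promoting the .rpmnew copies over the live files and then reloading and relabeling. A minimal sketch, run against a throwaway directory; on a real system TREE would be /etc/selinux/targeted, and you would follow it with `load_policy` and `fixfiles relabel` as root (not shown here):

```shell
#!/bin/sh
# Sketch: promote stray .rpmnew files over the live copies, keeping the
# old files as *.old so there is a consistent set to fall back to.
# TREE is a throwaway demo directory standing in for /etc/selinux/targeted.
TREE=$(mktemp -d)
mkdir -p "$TREE/policy"
echo old > "$TREE/policy/policy.18"
echo new > "$TREE/policy/policy.18.rpmnew"

find "$TREE" -name '*.rpmnew' | while read -r f; do
    live=${f%.rpmnew}
    [ -e "$live" ] && mv "$live" "$live.old"   # preserve the old version
    mv "$f" "$live"
    echo "promoted $f -> $live"
done
```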
First off, what kernel are you running? You should be using policy.19, I believe, if everything is updated correctly (latest policy, kernel, policycoreutils, and checkpolicy). You should modify file_contexts.local for local changes. We should update man selinux to talk about this. Do you have the policy sources installed? If yes, the update should be rebuilding your customized policy and reloading it. You say it is loading the wrong policy.
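For local customizations, an entry in file_contexts.local pairs a path regex with a security context; a hypothetical example (the path and type below are made up for illustration, not taken from this system):

```
/opt/myapp(/.*)?        system_u:object_r:httpd_sys_content_t
```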
Some RPM versions:

selinux-policy-targeted-sources-1.17.30-3.16
libselinux-devel-1.19.1-8
selinux-policy-targeted-1.17.30-3.16
selinux-doc-1.14.1-1
libselinux-1.19.1-8

Kernel version: 2.6.12-1.1372_FC3
Current policy file is policy.18.

In answer to your question, yes, I have the policy sources installed, but purely as a matter of curiosity. I have not modified the policy in any way. It is possible that at some point I did a make install of the unmodified sources in an attempt to recover from an upgrade failure. This may have impacted timestamps on the installed policy, triggering the .rpmnew problem. I think the problem is more complicated than loading the wrong policy. The problem is that the new policy is unsuccessfully installed and then fails to load. I don't think that it is the sources RPM that is causing the problem here, but I haven't had a chance to look at the postinstall scripts. I do know that I did not see any errors during the upgrade that would suggest a policy compile failure.
Ok, you marked this bug as FC4, not FC3, hence my confusion. FC4 uses policy.19; FC3 uses policy.18, although the kernel you have installed will gladly use policy.19. I am not sure file_contexts.local will work on FC3.
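As a quick way to check which binary policy version a given kernel will accept, selinuxfs exposes a policyvers node. A small sketch; the fallback covers systems where selinuxfs is not mounted (e.g. SELinux disabled):

```shell
#!/bin/sh
# Print the highest policy version the running kernel accepts, or
# "unknown" when selinuxfs is not available. /selinux is the FC3-era
# mount point; /sys/fs/selinux is the modern one.
max_policy_version() {
    for f in /sys/fs/selinux/policyvers /selinux/policyvers; do
        [ -r "$f" ] && { cat "$f"; return; }
    done
    echo unknown
}
max_policy_version
```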
The good news is that I don't plan on updating policy in FC3 any more (knock on wood). I believe most of the problems you have seen are fixed in FC4, including the policy-sources and policy interaction.
My apologies. Just to clarify, I've seen the upgrade failure occur in both FC3 and FC4. I'm about to upgrade this particular machine anyway...
I had problems too with FC3, when up2date updated my targeted policy to selinux-policy-targeted-1.17.30-3.16 (I hadn't updated for a while, so it's possible I skipped a version or two). I had previously modified the targeted policy to disable protection for httpd, so some of the policy files were not in their pristine state. When the upgrade was done, the following files had been installed as *.rpmnew:

/etc/selinux/targeted/booleans.rpmnew
/etc/selinux/targeted/contexts/files/file_contexts.rpmnew
/etc/selinux/targeted/policy/policy.18.rpmnew

Upon rebooting, I got many warning/error messages, which are not recorded in /var/log/{messages,dmesg,boot.log,secure}. Upon logging in, I found that I could not connect to TCP ports on remote machines, e.g.:

Aug 10 17:40:25 sardine kernel: audit(1123695625.487:9): avc: denied { name_connect } for pid=3936 comm="kio_http" dest=80 scontext=user_u:system_r:unconfined_t tcontext=system_u:object_r:http_port_t tclass=tcp_socket

The problem was easily fixed by moving the relevant files and rebooting. However, this does raise the question of what the 'right' way is for RPM to do a policy upgrade. It appears that in my case the mixture of old and new files was inconsistent and, most alarmingly, this had unexpected results. Using 'rpmnew' is perhaps intended to preserve the host's old configuration, but it fails in this case and has further unpleasant consequences. 'rpmold' is better, because at least the set of files will be consistent; but it is still not ideal (on my system, httpd would have failed). One way or another, the upgrade appears to require subsequent sysadmin action: it cannot be completely automated with rpm.

Therefore, a better solution might be to store different versions of a policy in separate directories (e.g. /etc/selinux/targeted.18.a), "upgrade" as we do for the kernel, using rpm -i instead of rpm -U, and continue using the old policy by default until the sysadmin explicitly tells the system to migrate to the new policy (which will usually be after manually applying whatever changes are needed). This way, the system is least likely to be put into a non-functioning state; and if it is, the sysadmin will likely be expecting to have to test the new policy, so the system will not be down for too long.
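The side-by-side layout proposed above could look roughly like this. The "current" symlink is my own stand-in for whatever selection mechanism would actually be used (e.g. SELINUXTYPE in /etc/selinux/config); demonstrated in a throwaway directory:

```shell
#!/bin/sh
# Sketch: keep policy versions in side-by-side directories and switch
# between them explicitly, kernel-style. ROOT is a throwaway demo
# directory standing in for /etc/selinux.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/targeted.18" "$ROOT/targeted.19"

# rpm -i of a new policy adds targeted.19 but leaves the old one active:
ln -s targeted.18 "$ROOT/current"

# Later, the sysadmin migrates explicitly, after testing:
ln -sfn targeted.19 "$ROOT/current"
echo "active policy tree: $(readlink "$ROOT/current")"
```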
I could have this wrong, but an offline discussion with dwalsh leads me to think that SOME of the policy upgrades depend on kernel upgrades. Lately, I have adopted a new update strategy: if "yum update" shows me *both* a kernel update and an selinux policy update, I proceed by installing the kernel and rebooting, and *then* installing the new policy. This has significantly reduced the number of errors that I encounter in the postinstall phase (I also update libsepol first).

The issue is that *some* of the policy updates depend on kernel changes (e.g. the introduction of new booleans), and the automatic relabeling simply doesn't work if the new kernel isn't actually running. This means that (1) the policy update needs to identify the kernel version as a dependency, (2) the relabeling process needs to be deferred until the new kernel is actually running, and (3) there is a downgrade problem -- Fedora is now shipping some FC3 kernels that just don't work at all on one of my legacy SMP motherboards, which means that if the selinux policy update happens automatically and irreversibly, I have a real problem.

I do not know if the following revised strategy would work, but let me throw it out there in case it is useful:

1. Do NOT do the relabel at RPM postinstall time.
2. Record at shutdown time the version of the kernel that was running at last shutdown.
3. Record somewhere within the /etc/selinux tree the least kernel version that is required to support the current policy.
4. On reboot, if we are now running a kernel that can support the current policy, and we were not doing so at shutdown, bring up selinux as follows:
4.1: Load the policy and enable selinux in PERMISSIVE mode (needed so that restorecon can be run).
4.2: Run the necessary restorecon to update labels appropriately -- this can be done by executing an appropriate version-specific recovery script that lives in the /etc/selinux tree.
4.3: If and only if the script ran successfully, switch selinux to ENFORCING mode. Otherwise, run a user-provided script to perform recovery.
5. On reboot, if we are running a kernel that is too old to support the policy, proceed directly to the recovery script in PERMISSIVE mode.

The default recovery script should issue a complaint and ensure that we do not come up past single user mode. HOWEVER, the local administrator needs to be able to revise this to support remote administration -- a reasonable answer in some environments would be to come up to initstate S, start the network, touch /etc/nologin, and then enable sshd. My goal in laying the mechanism out this way is that (a) it shouldn't fail open as configured out of the box, but (b) I should be able to adapt it so that I don't have to do a 90-minute drive in to work just to reboot.

A nice side benefit is that there would be a script lying around in /etc/selinux that I can run to repeat the current postinstall behavior even if I do not have the sources installed. I often notice a small number of errors in the policy postinstall output, but I have no straightforward way to re-execute the postinstall action so that I can find those errors and correct them by hand.
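A hedged sketch of the decision in steps 4 and 5 above: the version comparison that decides whether the running kernel can support the installed policy. The minimum-version value and the `sort -V` approach are my own illustration; no such hook exists in the FC3/FC4 init scripts.

```shell
#!/bin/sh
# policy_kernel_ok RUNNING MINIMUM -- succeed if the running kernel
# version is at least the minimum the installed policy requires.
policy_kernel_ok() {
    # version-sort the two and check that the minimum comes out first
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

min=2.6.12   # would be read from a file in the /etc/selinux tree (step 3)
if policy_kernel_ok "$(uname -r | cut -d- -f1)" "$min"; then
    echo "kernel ok: load policy permissive, relabel, then go enforcing"
else
    echo "kernel too old: stay permissive, run the recovery script"
fi
```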
With the update to reference policy, and with the kernel now able to downgrade a policy to a version it can understand, I am closing this bug.