Description of problem:
I can't install libvirt-daemon.

Version-Release number of selected component (if applicable):
Fedora 17, x86_64

How reproducible:
yum install libvirt-daemon

Steps to Reproduce:
1. yum install libvirt-daemon

Actual results:
Running Transaction
Error in PREIN scriptlet in rpm package libvirt-daemon-0.9.11.4-3.fc17.x86_64
error: %pre(libvirt-daemon-0.9.11.4-3.fc17.x86_64) scriptlet failed, exit status 6
  Verifying  : libvirt-daemon-0.9.11.4-3.fc17.x86_64    1/1
Failed:
  libvirt-daemon.x86_64 0:0.9.11.4-3.fc17
Complete!

Expected results:
It should install without problems.

Additional info:
The groups kvm and qemu and the user qemu aren't created (see below; the log was partly in German and is translated here). When I create the groups/user manually, the installation works fine.

D: ========== +++ libvirt-daemon-0.9.11.4-3.fc17 x86_64-linux 0x2
D: Expected size: 1898517 = lead(96)+sigs(1284)+pad(4)+data(1897133)
D:   Actual size: 1898517
D: libvirt-daemon-0.9.11.4-3.fc17.x86_64: Header V3 RSA/SHA256 Signature, key ID 1aca3465: OK
D: install: libvirt-daemon-0.9.11.4-3.fc17 has 61 files
D: %pre(libvirt-daemon-0.9.11.4-3.fc17.x86_64): scriptlet start
D: %pre(libvirt-daemon-0.9.11.4-3.fc17.x86_64): execv(/bin/sh) pid 18035
+ getent group kvm
+ groupadd -g 36 -r kvm
groupadd: failure while writing changes to /etc/group
+ getent group qemu
+ groupadd -g 107 -r qemu
groupadd: failure while writing changes to /etc/group
+ getent passwd qemu
+ useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin -c 'qemu user' qemu
useradd: group 'qemu' does not exist
D: %pre(libvirt-daemon-0.9.11.4-3.fc17.x86_64): waitpid(18035) rc 18035 status 600
error: %pre(libvirt-daemon-0.9.11.4-3.fc17.x86_64) scriptlet failed, exit status 6
error: libvirt-daemon-0.9.11.4-3.fc17.x86_64: install failed
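The manual workaround can be sketched as a small script; the groupadd/useradd invocations are mirrored from the %pre scriptlet trace above (GIDs 36/107 and the qemu account settings come from that trace). This is a dry-run sketch: it only prints the commands unless you set DRYRUN=0 and run it as root.

```shell
#!/bin/sh
# Dry-run sketch of manually creating the kvm/qemu groups and the qemu user.
# The commands mirror the failing %pre scriptlet; set DRYRUN=0 (as root) to apply.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "+ $*"        # print what would run (argument quoting is not preserved)
    else
        "$@"
    fi
}
run groupadd -g 36 -r kvm
run groupadd -g 107 -r qemu
run useradd -r -u 107 -g qemu -G kvm -d / -s /sbin/nologin -c 'qemu user' qemu
```

After the groups and user exist, `yum install libvirt-daemon` gets past the %pre scriptlet.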
This is very easy to reproduce: just remove the kvm and qemu lines from /etc/group, then upgrade/install libvirt. You'll see the following in audit.log:

type=ADD_GROUP msg=audit(1344125374.213:6611): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op=adding group to /etc/gshadow acct="kvm" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=failed'
type=ADD_GROUP msg=audit(1344125374.213:6612): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op=adding group to /etc/group acct="kvm" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=failed'
type=ADD_GROUP msg=audit(1344125374.213:6613): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op= acct="kvm" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=failed'
type=ADD_GROUP msg=audit(1344125374.310:6614): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op=adding group to /etc/gshadow acct="qemu" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=failed'
type=ADD_GROUP msg=audit(1344125374.311:6615): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op=adding group to /etc/group acct="qemu" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=failed'
type=ADD_GROUP msg=audit(1344125374.311:6616): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op= acct="qemu" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=failed'

As Tobias says, if you run the same commands from a root shell, they succeed (in that case, the following messages are logged to audit.log):

type=ADD_GROUP msg=audit(1344125549.998:6617): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=adding group to /etc/group id=107 exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=success'
type=ADD_GROUP msg=audit(1344125550.188:6618): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=adding group to /etc/gshadow id=107 exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=success'
type=ADD_GROUP msg=audit(1344125550.190:6619): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op= id=107 exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=success'
type=ADD_GROUP msg=audit(1344125563.213:6620): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=adding group to /etc/group id=36 exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=success'
type=ADD_GROUP msg=audit(1344125563.362:6621): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=adding group to /etc/gshadow id=36 exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=success'
type=ADD_GROUP msg=audit(1344125563.364:6622): pid=0 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op= id=36 exe="/usr/sbin/groupadd" hostname=? addr=? terminal=pts/0 res=success'

I guess it has to do with running as "subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023" (fails) vs. "subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023" (succeeds). Moving to selinux-policy so that the policy can be changed to allow groupadd during %pre scripts.
None of these are AVC messages; they are all ordinary (and correct) audit messages. To see actual AVC denials, run:

# ausearch -m avc
Well, I am not able to reproduce it; it works for me. Did it work in permissive mode? I would say this is not an SELinux issue in this case. Dennis, what do these show?

# ausearch -m avc
# ls -lZ /etc/group
I'm having the same problem with the "screen" and "wireshark" packages. The "screen" package has this %pre script:

preinstall scriptlet (using /bin/sh):
/usr/sbin/groupadd -g 84 -r -f screen
:

These dontaudit AVCs appear when installing the package via yum, and the group doesn't get created:

# semodule -DB
# yum install screen
...
Running Transaction
  Installing : screen-4.1.0-0.9.20120314git3c2946.fc17.x86_64    1/1
warning: group screen does not exist - using root
warning: group screen does not exist - using root

# grep -i avc audit/audit.log
type=AVC msg=audit(1344982418.400:148): avc: denied { read } for pid=5725 comm="groupadd" path="/tmp/tmpdH4tic" dev="dm-5" ino=942811 scontext=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:rpm_tmp_t:s0 tclass=file
type=AVC msg=audit(1344982418.400:148): avc: denied { read } for pid=5725 comm="groupadd" path="/tmp/tmpdH4tic" dev="dm-5" ino=942811 scontext=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:rpm_tmp_t:s0 tclass=file
type=AVC msg=audit(1344982418.445:149): avc: denied { search } for pid=5725 comm="groupadd" name="contexts" dev="dm-5" ino=672610 scontext=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:default_context_t:s0 tclass=dir
type=AVC msg=audit(1344982418.445:150): avc: denied { search } for pid=5725 comm="groupadd" name="contexts" dev="dm-5" ino=672610 scontext=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:default_context_t:s0 tclass=dir
type=AVC msg=audit(1344982418.445:151): avc: denied { search } for pid=5725 comm="groupadd" name="contexts" dev="dm-5" ino=672610 scontext=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:default_context_t:s0 tclass=dir

Everything works correctly if I "setenforce 0" first.
In my case, the file contexts look fine:

# ls -lZ /etc/passwd* /etc/group*
-rw-r--r--. root root system_u:object_r:passwd_file_t:s0 /etc/group
-rw-r--r--. root root system_u:object_r:passwd_file_t:s0 /etc/group-
-rw-r--r--. root root system_u:object_r:passwd_file_t:s0 /etc/passwd
-rw-------. root root system_u:object_r:passwd_file_t:s0 /etc/passwd-
-rw-r--r--. root root system_u:object_r:passwd_file_t:s0 /etc/passwd.OLD
-rw-r--r--. root root system_u:object_r:etc_t:s0 /etc/passwdqc.conf
*** Bug 848148 has been marked as a duplicate of this bug. ***
*** Bug 845671 has been marked as a duplicate of this bug. ***
And the shadow file contexts look fine:

# ls -lZ /etc/*shadow*
----------. root root system_u:object_r:shadow_t:s0 /etc/gshadow
----------. root root system_u:object_r:shadow_t:s0 /etc/gshadow-
----------. root root system_u:object_r:shadow_t:s0 /etc/shadow
----------. root root system_u:object_r:shadow_t:s0 /etc/shadow-

And there were no relevant changes on a restorecon run:

# restorecon -R -v /etc
restorecon reset /etc/sysctl.conf~ context system_u:object_r:system_conf_t:s0->system_u:object_r:etc_t:s0
restorecon reset /etc/mail/sendmail.cf context system_u:object_r:etc_aliases_t:s0->system_u:object_r:etc_mail_t:s0
Running the AVCs through "audit2allow -R -M groupadd" and loading the result with "semodule -i groupadd.pp" successfully allows the %pre script to run groupadd:

policy_module(groupadd, 1.0)

require {
	type groupadd_t;
}

#============= groupadd_t ==============
rpm_manage_tmp_files(groupadd_t)
seutil_read_file_contexts(groupadd_t)
seutil_search_default_contexts(groupadd_t)

# yum install wireshark
...
Downloading Packages:
wireshark-1.6.9-1.fc17.x86_64.rpm    | 10 MB 00:00
Running Transaction Check
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : wireshark-1.6.9-1.fc17.x86_64    1/1
  Verifying  : wireshark-1.6.9-1.fc17.x86_64    1/1

Installed:
  wireshark.x86_64 0:1.6.9-1.fc17

# getent group wireshark
wireshark:x:989:
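For reference, the whole local-module workflow from this comment can be written as one script. This is a dry-run sketch (it only prints commands unless DRYRUN=0, and actually applying it needs root plus the audit and policycoreutils packages); the `-c groupadd` filter for ausearch is my addition to limit output to groupadd denials.

```shell
#!/bin/sh
# Dry-run sketch of building a local policy module from the groupadd AVCs.
# Set DRYRUN=0 and run as root to actually apply it.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}
run semodule -DB    # disable dontaudit rules so the hidden denials get logged
run sh -c 'ausearch -m avc -c groupadd | audit2allow -R -M groupadd'
run semodule -i groupadd.pp    # load the generated local module
run semodule -B     # rebuild the policy with dontaudit rules re-enabled
```

Note that audit2allow's -M flag takes the module name (groupadd), and it writes groupadd.te and groupadd.pp itself.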
Possibly related to this is the infamous F18 bug: https://bugzilla.redhat.com/show_bug.cgi?id=841451 It seems quite commonly the case that, when yum-updating from F17 to F18, the %pre script from the polkit package, which should create a user/group named 'polkitd', doesn't work, but subsequently doing 'yum reinstall polkit' fixes it. I don't know if anyone has tried testing with SELinux on/off to see if it makes a difference.
In F18 we currently have:

#============= groupadd_t ==============
#!!!! This avc is allowed in the current policy
allow groupadd_t default_context_t:dir search;
#!!!! This avc has a dontaudit rule in the current policy
allow groupadd_t rpm_tmp_t:file read;

This would allow groupadd_t to search the default_context_t directory to read the file_contexts file; probably some built-in labeling is going on.
*** Bug 844977 has been marked as a duplicate of this bug. ***
dwalsh: so we need to add that to f17?
I guess so.
Me too. It installs fine in permissive mode, but I cannot see any AVC denial messages.
Orion, so it is not working in enforcing mode for you too?
Correct. I had to switch to permissive to get it to install, otherwise it failed.
The only question is why I am not able to reproduce it.
mgrepl: are you testing with F17? We currently believe that to hit this you need to upgrade from the F17 libvirt package to the F18 one (the %pre script in question was added between F17 and F18), with F17 selinux-policy and enforcing mode. If you don't meet all those criteria, I wouldn't expect you to see the bug.
I was not upgrading to the F18 libvirt package, just F17 updates-testing, and this seems to happen for other packages as well (comment 10 - wireshark). I use LDAP/Kerberos, so perhaps that is a factor too. I couldn't reproduce it on a VM of mine either, though. Perhaps yum-cron comes into play?
It appears that this also occurs in screen (bug 845671). I use ldap/kerb as well.
Discussed at the 2012-08-22 Fedora 18 alpha go/no-go meeting. This was rejected as a blocker for Fedora 18 alpha because it is an upgrade issue, which is not covered under the Fedora 18 alpha release criteria [1]. Upgrade issues are generally considered for beta and later. This can be reproposed as a blocker for Fedora 18 beta. [1] http://fedoraproject.org/wiki/Fedora_18_Alpha_Release_Criteria
I added rules to selinux-policy-3.10.0-148.fc17 Will build today.
I updated to selinux-policy-3.10.0-148.fc17 and, even after a relabeling, I get:

Running Transaction
Error in PREIN scriptlet in rpm package libvirt-daemon-0.9.11.5-3.fc17.x86_64
error: %pre(libvirt-daemon-0.9.11.5-3.fc17.x86_64) scriptlet failed, exit status 6
  Installing : libvirt-0.9.11.5-3.fc17.x86_64    2/2
error: libvirt-daemon-0.9.11.5-3.fc17.x86_64: install failed
  Verifying  : libvirt-0.9.11.5-3.fc17.x86_64    1/2
  Verifying  : libvirt-daemon-0.9.11.5-3.fc17.x86_64    2/2

Installed:
  libvirt.x86_64 0:0.9.11.5-3.fc17
Failed:
  libvirt-daemon.x86_64 0:0.9.11.5-3.fc17
*** Bug 853100 has been marked as a duplicate of this bug. ***
*** Bug 856548 has been marked as a duplicate of this bug. ***
selinux-policy-3.10.0-149.fc17 has been submitted as an update for Fedora 17. https://admin.fedoraproject.org/updates/selinux-policy-3.10.0-149.fc17
Package selinux-policy-3.10.0-149.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.

Update it with:
# su -c 'yum update --enablerepo=updates-testing selinux-policy-3.10.0-149.fc17'
as soon as you are able to. Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-14301/selinux-policy-3.10.0-149.fc17
then log in and leave karma (feedback).
selinux-policy-3.10.0-149.fc17 has been pushed to the Fedora 17 stable repository. If problems still persist, please make note of it in this bug report.
I ran into this problem today. I set SELinux to permissive mode and libvirt-daemon installed without complaint. I have selinux-policy-3.10.0-150.fc17.noarch. I'm not sure whether relabeling took place, though I do know updates took rather a long time a few days ago.
*** Bug 841451 has been marked as a duplicate of this bug. ***
> Tim Flink 2012-08-22 17:30:17 EDT Comment 23 > > Discussed at the 2012-08-22 Fedora 18 alpha go/no-go meeting. This was rejected > as a blocker for Fedora 18 alpha because it is an upgrade issue, which is not > covered under the Fedora 18 alpha release criteria [1]. > > Upgrade issues are generally considered for beta and later. This can be > reproposed as a blocker for Fedora 18 beta. … which I'm hereby doing. :-)
Discussed at 2012-10-04 blocker review meeting: http://meetbot.fedoraproject.org/fedora-qa/2012-10-04/f18-beta-blocker-review-2.1.2012-10-04-16.00.log.txt . As the new upgrade tool still isn't available for testing, we can't decide the status of this yet - it'll be a blocker if fedup hits it, not a blocker if it doesn't (as yum upgrades are 'not supported'). wwoods says there is a chance fedup will hit this, so we will leave it pending until we can test.
Why can't we finally start supporting and recommending the most reliable upgrade method available? (Not that I'd personally hit this bug because I have SELinux disabled, but we also insist on enabling that broken misfeature by default for some reason.)
the new upgrade unicorn will apparently poop rainbows and vomit sparkles, so I'd say we give it a couple of release cycles or so to prove itself. it at least has the merit that we don't have _two_ upgrade methods to maintain. if it turns out to be more trouble than yum on a regular basis, you can sign me up for the yum army.
Bastion, did you see any new AVC messages? Orion, do you still see the problem with the latest policy?
Should be fixed with selinux-policy-3.10.0-154.fc17
Still fails in selinux-policy-3.10.0-153.fc17, but fixed in -154.
selinux-policy-3.10.0-156.fc17 has been submitted as an update for Fedora 17. https://admin.fedoraproject.org/updates/selinux-policy-3.10.0-156.fc17
Package selinux-policy-3.10.0-156.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.

Update it with:
# su -c 'yum update --enablerepo=updates-testing selinux-policy-3.10.0-156.fc17'
as soon as you are able to. Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-16347/selinux-policy-3.10.0-156.fc17
then log in and leave karma (feedback).
selinux-policy-3.10.0-156.fc17 was pushed stable, closing. Please reopen if you still see the problem.
*** Bug 861565 has been marked as a duplicate of this bug. ***
I installed F17 into 2 VMs and updated to the latest F17 with selinux-policy-3.10.0-156.fc17.

On the first VM, I just upgraded to F18 according to the instructions on:
https://fedoraproject.org/wiki/Upgrading_Fedora_using_yum?rd=YumUpgradeFaq#Fedora_17_-.3E_Fedora_18

On the second VM, I had run "restorecon -R /" before I upgraded to F18.

On both VMs polkit.service fails to start on boot. See the duplicate of this bug for more details: #841451

"yum reinstall polkit" after the upgrade to F18 works around this issue.
(In reply to comment #44) > I installed F17 into 2 VMs and updated to latest F17 with > selinux-policy-3.10.0-156.fc17. > > On first VM, I just upgraded to F18 according to instructions on: > https://fedoraproject.org/wiki/ > Upgrading_Fedora_using_yum?rd=YumUpgradeFaq#Fedora_17_-.3E_Fedora_18 > > On second VM, I had run "restorecon -R /" before I upgraded to F18. > > On both VMs polkit.service start fails on boot. See duplicate of this bug > for more details: #841451 > > "yum reinstall polkit" after upgrade to F18 workarounds this issue. See these reports with included AVCs: https://bugzilla.redhat.com/show_bug.cgi?id=870090 https://bugzilla.redhat.com/show_bug.cgi?id=870087
So you see these AVC msgs if you are trying to do this?
Well, /etc/passwd is mislabeled.
Could you fix the labeling with

# restorecon -R -v /etc/passwd

and try to re-test it?
*** Bug 870090 has been marked as a duplicate of this bug. ***
[root@dhcp131-120 test]# ls -Z /etc/passwd
-rw-r--r--. root root system_u:object_r:passwd_file_t:s0 /etc/passwd
[root@dhcp131-120 test]# restorecon -R -v /etc/passwd
[root@dhcp131-120 test]# ls -Z /etc/passwd
-rw-r--r--. root root system_u:object_r:passwd_file_t:s0 /etc/passwd

Anyway, a new yum distro-sync is in progress.
/var/log/secure:
Oct 25 19:21:04 dhcp131-120 groupadd[2707]: failed to add group polkitd to /etc/gshadow
Oct 25 19:21:04 dhcp131-120 groupadd[2707]: failed to add group polkitd to /etc/group
Oct 25 19:21:04 dhcp131-120 groupadd[2707]: failed to add group polkitd

/var/log/audit/audit.log:
type=ADD_GROUP msg=audit(1351185664.524:96): pid=2707 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op=adding group to /etc/gshadow acct="polkitd" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=? res=failed'
type=ADD_GROUP msg=audit(1351185664.524:97): pid=2707 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op=adding group to /etc/group acct="polkitd" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=? res=failed'
type=ADD_GROUP msg=audit(1351185664.524:98): pid=2707 uid=0 auid=1000 ses=2 subj=unconfined_u:system_r:groupadd_t:s0-s0:c0.c1023 msg='op= acct="polkitd" exe="/usr/sbin/groupadd" hostname=? addr=? terminal=? res=failed'

/var/log/messages:
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: dbus[615]: [system] Reloaded configuration
Oct 25 19:21:04 dhcp131-120 dbus[615]: [system] Reloaded configuration
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: dbus[615]: [system] Reloaded configuration
Oct 25 19:21:04 dhcp131-120 dbus[615]: [system] Reloaded configuration
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: dbus[615]: [system] Reloaded configuration
Oct 25 19:21:04 dhcp131-120 dbus[615]: [system] Reloaded configuration
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:04 dhcp131-120 dbus-daemon[615]: dbus[615]: [system] Reloaded configuration
Oct 25 19:21:04 dhcp131-120 dbus[615]: [system] Reloaded configuration
Oct 25 19:21:05 dhcp131-120 yum[1885]: Updated: polkit-0.107-4.fc18.x86_64
Oct 25 19:21:05 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:05 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:05 dhcp131-120 dbus-daemon[615]: dbus[615]: [system] Reloaded configuration
Oct 25 19:21:05 dhcp131-120 dbus[615]: [system] Reloaded configuration
Oct 25 19:21:05 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:05 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:05 dhcp131-120 dbus-daemon[615]: dbus[615]: [system] Reloaded configuration
Oct 25 19:21:05 dhcp131-120 dbus[615]: [system] Reloaded configuration
Oct 25 19:21:05 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:05 dhcp131-120 dbus-daemon[615]: Unknown username "polkitd" in message bus configuration file
Oct 25 19:21:06 dhcp131-120 dbus-daemon[615]: dbus[615]: [system] Reloaded configuration
Oct 25 19:21:06 dhcp131-120 dbus[615]: [system] Reloaded configuration
Could it be a shadow-utils issue? Martin, did you test it in permissive mode?
What is the current labelling of /etc/group and /etc/gshadow on the system? It would be necessary to run the update in permissive with dontaudit rules disabled in SELinux policy to see what breaks the groupadd. If it is a bug in shadow-utils, it is related to SELinux anyway.
Yes, I agree with you. I believe this relates to bad labeling on these files, so groupadd does not work as expected. Martin, could you run these steps

# auditctl -w /etc/shadow -p w
# semodule -DB
# setenforce 0

and re-test it?
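Those diagnostic steps can be sketched as a dry-run script (auditctl comes from the audit package; set DRYRUN=0 and run as root before re-running the failing update):

```shell
#!/bin/sh
# Dry-run sketch of the diagnostic setup suggested above.
# Set DRYRUN=0 and run as root to actually apply it.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}
run auditctl -w /etc/shadow -p w    # watch for writes to /etc/shadow
run semodule -DB                    # disable dontaudit rules so hidden denials are logged
run setenforce 0                    # switch SELinux to permissive mode
```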
(In reply to comment #53)
> What is the current labelling of /etc/group and /etc/gshadow on the system?
> It would be necessary to run the update in permissive with dontaudit rules
> disabled in SELinux policy to see what breaks the groupadd.
>
> If it is a bug in shadow-utils, it is related to SELinux anyway.

----------. root root system_u:object_r:shadow_t:s0 /etc/shadow
----------. root root system_u:object_r:shadow_t:s0 /etc/gshadow
Martin, any chance you tried the steps from comment #54?
Also, what does this show?

# grep invalid /var/log/messages
Oct 22 18:27:50 dhcp131-120 gnome-session[749]: DEBUG(+): Cannot use session '/var/lib/gdm/.config/gnome-session/sessions/gdm-shell.session': non-existing or invalid file.
Oct 22 18:27:50 dhcp131-120 gnome-session[749]: DEBUG(+): Cannot use session '/etc/xdg/gnome-session/sessions/gdm-shell.session': non-existing or invalid file.
Oct 22 18:27:50 dhcp131-120 gnome-session[749]: DEBUG(+): Cannot use session '/usr/share/gdm/greeter/gnome-session/sessions/gdm-shell.session': non-existing or invalid file.
Oct 22 18:27:50 dhcp131-120 gnome-session[749]: DEBUG(+): Cannot use session '/usr/local/share/gnome-session/sessions/gdm-shell.session': non-existing or invalid file.
Oct 22 18:48:33 dhcp131-120 kernel: [ 1321.871845] SELinux: Context system_u:unconfined_r:ncftool_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1321.888903] SELinux: Context system_u:unconfined_r:vbetool_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1321.930775] SELinux: Context unconfined_u:unconfined_r:ncftool_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1321.946615] SELinux: Context unconfined_u:unconfined_r:vbetool_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1321.954351] SELinux: Context system_u:unconfined_r:vpnc_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1322.007169] SELinux: Context unconfined_u:unconfined_r:vpnc_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1322.008791] SELinux: Context system_u:system_r:obex_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1322.050254] SELinux: Context unconfined_u:system_r:obex_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1322.132314] SELinux: Context system_u:unconfined_r:prelink_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1322.183511] SELinux: Context unconfined_u:unconfined_r:prelink_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1322.308527] SELinux: Context system_u:unconfined_r:brctl_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 22 18:48:33 dhcp131-120 kernel: [ 1322.360157] SELinux: Context unconfined_u:unconfined_r:brctl_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.302090] SELinux: Context unconfined_u:unconfined_r:telepathy_logger_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.347472] SELinux: Context unconfined_u:unconfined_r:telepathy_stream_engine_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.391174] SELinux: Context system_u:system_r:matahari_sysconfigd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.480242] SELinux: Context unconfined_u:system_r:matahari_sysconfigd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.496280] SELinux: Context system_u:unconfined_r:telepathy_msn_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.538292] SELinux: Context system_u:unconfined_r:telepathy_gabble_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.562883] SELinux: Context system_u:unconfined_r:telepathy_sofiasip_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.568122] SELinux: Context system_u:system_r:matahari_hostd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.584427] SELinux: Context system_u:system_r:matahari_serviced_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.589590] SELinux: Context unconfined_u:unconfined_r:telepathy_msn_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.628699] SELinux: Context unconfined_u:unconfined_r:telepathy_gabble_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.637836] SELinux: Context unconfined_u:system_r:matahari_serviced_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.650735] SELinux: Context unconfined_u:unconfined_r:telepathy_sofiasip_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.655140] SELinux: Context unconfined_u:system_r:matahari_hostd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.677962] SELinux: Context system_u:system_r:rhnsd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.689388] SELinux: Context system_u:unconfined_r:telepathy_idle_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.694624] SELinux: Context system_u:unconfined_r:telepathy_mission_control_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.715042] SELinux: Context system_u:system_r:matahari_netd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.719825] SELinux: Context system_u:system_r:matahari_rpcd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.748052] SELinux: Context unconfined_u:system_r:rhnsd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.769704] SELinux: Context unconfined_u:unconfined_r:telepathy_idle_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.774922] SELinux: Context unconfined_u:unconfined_r:telepathy_mission_control_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.794942] SELinux: Context unconfined_u:system_r:matahari_netd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.799748] SELinux: Context unconfined_u:system_r:matahari_rpcd_t:s0-s0:c0.c1023 became invalid (unmapped).
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.824049] SELinux: Context system_u:unconfined_r:telepathy_salut_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.904649] SELinux: Context unconfined_u:unconfined_r:telepathy_salut_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.907300] SELinux: Context system_u:unconfined_r:telepathy_sunshine_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.928119] SELinux: Context system_u:unconfined_r:telepathy_logger_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.970575] SELinux: Context system_u:unconfined_r:telepathy_stream_engine_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:42:38 dhcp131-120 kernel: [ 2264.988779] SELinux: Context unconfined_u:unconfined_r:telepathy_sunshine_t:s0-s0:c0.c1023 would be invalid if enforcing
Oct 30 12:59:30 dhcp131-120 udevd[335]: invalid rule '/usr/lib/udev/rules.d/73-seat-late.rules:15'
Oct 30 12:59:30 dhcp131-120 udevd[335]: invalid rule '/usr/lib/udev/rules.d/80-drivers.rules:10'
We added some fixes to selinux-policy-3.11.1-48.fc18 selinux-policy-3.10.0-158.fc17 which I am testing now.
(In reply to comment #59)
> We added some fixes to
>
> selinux-policy-3.11.1-48.fc18
> selinux-policy-3.10.0-158.fc17
>
> which I am testing now.

Basically, if I test it just with a new F17 build, the problem is:

type=AVC msg=audit(1351690531.433:125): avc: denied { getattr } for pid=446 comm="dbus-daemon" path="/etc/passwd" dev="dm-1" ino=143181 scontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:shadow_t:s0 tclass=file
type=PATH msg=audit(1351690531.391:120): item=1 name="/etc/passwd.lock" inode=139249 dev=fd:01 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:shadow_t:s0

The same goes for the group file. This means the {passwd,group} file is labeled shadow_t, which causes this issue. We need to test an upgrade also with the latest F18 build.

Tomas, I guess the label is derived from /etc/{group,passwd}.lock. We have in the latest builds:

# matchpathcon /etc/{passwd,group}\.lock
/etc/passwd.lock    system_u:object_r:passwd_file_t:s0
/etc/group.lock     system_u:object_r:passwd_file_t:s0
Discussed at 2012-10-31 blocker review meeting: http://meetbot.fedoraproject.org/fedora-qa/2012-10-31/f18beta-blocker-review-6.2012-10-31-16.00.log.txt . Once again we punt on this because we need to know if it affects fedup. Tim will attempt to test this once fedup is working well enough.
> Which means the {passwd,group} file is labeled as shadow_t which causes this
> issue.

Hmmm, wait, wasn't there some glitch/limitation in the kernel's SELinux code where, when the same process opens files in the same directory at the same time, they'd always get the same label or something like that? (I have some vague reminiscence of a really funky glitch like that.) If that hasn't been fixed since, it might be what's happening here. (It sounds awfully similar to what's happening here, in any case.)
Discussed at 2012-11-05 QA meeting acting as a blocker review meeting. tflink is working to check whether this affects fedup, but it's currently too buggy to be sure.
selinux-policy-3.10.0-159.fc17 has been submitted as an update for Fedora 17. https://admin.fedoraproject.org/updates/selinux-policy-3.10.0-159.fc17
OK, I tried testing this with fedup and while I'm still not certain, I don't think this is a problem right now. I started with an updated F17 install on bare metal that had a working VM in it. I upgraded to F18 using fedup and after working around all the bugs in that process, I have an F18 install that mostly works. I'm seeing some issues with the VM in the upgraded system but none of them are related to missing users/groups or SELinux. At the moment, I suspect the issues I'm seeing are related to the fedup upgrade process.
Discussed 2012-11-07 blocker review meeting: http://meetbot.fedoraproject.org/fedora-qa/2012-11-07/f18beta-blocker-review-7.2012-11-07-17.03.log.txt . As it does not appear to affect fedup, this is rejected as a blocker. Of course, if further testing indicates it does affect fedup upgrades, it can be re-proposed...but we hope not. (Note that the 'upgrading with yum' wiki page specifically mentions this bug and the workaround of upgrading with selinux in permissive, so we have yum upgraders covered.)
Package selinux-policy-3.10.0-159.fc17: * should fix your issue, * was pushed to the Fedora 17 testing repository, * should be available at your local mirror within two days. Update it with: # su -c 'yum update --enablerepo=updates-testing selinux-policy-3.10.0-159.fc17' as soon as you are able to. Please go to the following url: https://admin.fedoraproject.org/updates/FEDORA-2012-17782/selinux-policy-3.10.0-159.fc17 then log in and leave karma (feedback).
The update has not fixed the issue with screen (comment #5): https://bugzilla.redhat.com/show_bug.cgi?id=844167#c5

$ screen
Cannot make directory '/var/run/screen': Permission denied

$ yum list installed | grep selinux-policy
selinux-policy.noarch             3.10.0-159.fc17   @updates-testing
selinux-policy-devel.noarch       3.10.0-156.fc17   @updates-testing
selinux-policy-targeted.noarch    3.10.0-159.fc17   @updates-testing

Not sure if it was supposed to, but the bug specific to the screen issue was marked as a duplicate of this bug...
Gavin, is the /var/run/screen directory owned by the wrong user? It should look like this:

# ls -ldZ /var/run/screen/
drwxrwxr-x. root screen system_u:object_r:screen_var_run_t:s0 /var/run/screen/
Daniel: that directory doesn't even exist.

$ ls -ldZ /var/run/screen/
ls: cannot access /var/run/screen/: No such file or directory

$ yum list installed | grep screen
...
screen.x86_64    4.1.0-0.9.20120314git3c2946.fc17
(In reply to comment #70)
> Daniel:
>
> That directory doesn't even exist
>
> $ ls -ldZ /var/run/screen/
> ls: cannot access /var/run/screen/: No such file or directory
>
> $ yum list installed | grep screen
> ...
> screen.x86_64 4.1.0-0.9.20120314git3c2946.fc17

Gavin, you need to re-install the screen package AFTER updating to the new selinux-policy package:

yum reinstall screen

See also https://bugzilla.redhat.com/show_bug.cgi?id=845671#c4 and the following comments. You may need to restart systemd-tmpfiles-setup.service AFTER reinstalling screen so that it can create the /var/run/screen directory.
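That recovery sequence can be sketched as a script. The reinstall step and the systemd-tmpfiles-setup.service restart come from this comment; the final `ls` is just a hypothetical verification step. Dry-run by default; set DRYRUN=0 and run as root to apply.

```shell
#!/bin/sh
# Dry-run sketch of the screen recovery steps described above.
# Set DRYRUN=0 and run as root to actually execute them.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}
run yum reinstall -y screen                            # re-runs %pre, creating the screen group
run systemctl restart systemd-tmpfiles-setup.service   # recreates /var/run/screen
run ls -ldZ /var/run/screen/                           # verify owner, group, and SELinux label
```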
Thanks Charles, I did wonder about that but the day job intervened and your reply arrived before I had a chance to try that. Running `screen` now works for me.