Bug 892293 - SELinux is preventing /usr/libexec/gdm-session-worker from write access on the directory /home/<user>/.cache
Summary: SELinux is preventing /usr/libexec/gdm-session-worker from write access on th...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: selinux-policy
Version: 18
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Miroslav Grepl
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-01-06 12:05 UTC by Ralf Corsepius
Modified: 2013-01-08 14:25 UTC
CC: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2013-01-08 14:25:25 UTC
Type: Bug
Embargoed:


Attachments
full SEalert (5.10 KB, application/octet-stream)
2013-01-06 12:05 UTC, Ralf Corsepius

Description Ralf Corsepius 2013-01-06 12:05:15 UTC
Created attachment 673324 [details]
full SEalert

Description of problem:

SELinux issues the warning quoted in the subject after each bootup.

Version-Release number of selected component (if applicable):

selinux-policy-targeted-3.11.1-69.fc18.noarch.rpm

How reproducible:
Always.

Steps to Reproduce:
No idea.
  
Actual results:
An SEalert appears, even though the SELinux context of the directory the alert complains about seems correct:
# ls -lZd .cache
drwxr-xr-x. rtems rtems unconfined_u:object_r:cache_home_t:s0 .cache

# restorecon -v -R /home/rtems/.cache
likewise does not report changing anything.


Expected results:
- Normal function; no SEalert.

Comment 1 Miroslav Grepl 2013-01-07 10:41:59 UTC
# ls -lZd .cache
drwxr-xr-x. rtems rtems unconfined_u:object_r:cache_home_t:s0 .cache

is correct. So you are saying you still get the same alert when you re-test?

Comment 2 Ralf Corsepius 2013-01-07 12:21:04 UTC
(In reply to comment #1)
> So you are saying you can get the same alert now if you re-test
> it?

Exactly. 

Let me try to provide the "whole sequence of the story":

1. I boot up.
2. gdm appears.
3. I log in as user "user1" (xfce).
4. setroubleshoot's "bulb" appears, reporting the sealert for user "rtems" above.
5. I log in as root;
check the secontext of /home/rtems/.cache, but can't spot anything special.
Run restorecon -v -R /home/rtems/.cache, which doesn't report anything special either.
6. In "user1"'s setroubleshoot, I delete the sealert.
7. Log out, shut down.

After rebooting, the story repeats. Another, _new_ sealert on the same issue pops up ... etc.

I am absolutely clueless about what could be going on, especially because I can't spot anything wrong with the secontexts.

Also noteworthy:
- /home is on a separate partition, shared between f17 and f18 in a multiboot configuration.
- /home hosts several other users' homes. For reasons I do not understand, I am not seeing such sealerts for them.

Comment 3 Daniel Walsh 2013-01-07 19:52:57 UTC
Have you logged in to the other OS in the meantime?

If you log in as root and run restorecon on the directory, does anything change?

What does matchpathcon say for ~/.cache on F17 versus F18?
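The label checks suggested above can be scripted roughly as follows. This is only a sketch: /tmp/demo-cache is a stand-in for /home/&lt;user&gt;/.cache, and matchpathcon/restorecon (from policycoreutils) are only meaningful on an SELinux-enabled system, so the script falls back gracefully where they are absent.

```shell
# Compare the label policy expects with what is on disk, without changing anything.
# /tmp/demo-cache stands in for /home/<user>/.cache in this sketch.
dir=/tmp/demo-cache
mkdir -p "$dir"
if command -v matchpathcon >/dev/null 2>&1; then
  matchpathcon "$dir"            # label the loaded policy expects for this path
  restorecon -n -v -R "$dir"     # -n: dry run; only prints files it WOULD relabel
else
  echo "SELinux tools not available; skipping label check for $dir"
fi
```

A silent `restorecon -n -v -R` run means the on-disk labels already match policy, which is exactly the puzzling situation in this report.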

Comment 4 Ralf Corsepius 2013-01-08 06:41:56 UTC
I think I've found the cause.

User "rtems"'s passwd/group entries are hosted on a remote yp-server. In my f18 installation, for unknown reasons, ypbind wasn't running (it runs in my f17 installation), which meant the user "rtems" did not have a valid uid/gid. Bringing yp up resolved the issue.
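A quick sanity check in a situation like this is to ask the NSS stack directly whether the account resolves at all. A minimal sketch (check_user is a hypothetical helper, and "root" is used only so the example runs anywhere; substitute the NIS-served account, e.g. "rtems"):

```shell
# Ask NSS (local files, NIS, ...) whether an account has a passwd entry.
check_user() {
  if getent passwd "$1" >/dev/null; then
    echo "$1: resolves"
  else
    echo "$1: no passwd entry (ypbind down or map missing?)"
  fi
}

check_user root   # substitute the NIS-served account, e.g. rtems
```

If the NIS-served user prints the "no passwd entry" branch while local users resolve, ypbind (or the yp maps) is the first thing to check.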

Though I do not fully understand what is going on, my explanation is that gdm keeps a cache/list of all accounts that have ever logged into the system and, for whatever reason, traverses this list when it starts up.

Apparently, when a user's home directory is present (/home/<user> [1]) but the uid/gid is invalid (something which occasionally happens with yp-based passwd/group maps), this triggers the SEalert.

To conclude, I think this BZ can be closed (for now).

[1] It seems to physically access /home/<user> when setting up its log-in screen (I'd guess it tries to access /home/<user>/.cache). This is horrible behavior, because it triggers automount to mount remote homes when homes are automounted on demand.

