Description of problem:
It happened while adding a key to the ssh-agent. I run as staff_u; should I create the agent socket in ~/.ssh? I would prefer to keep it on a tmpfs.

SELinux is preventing /usr/bin/ssh-add from 'connectto' accesses on the unix_stream_socket /tmp/ssh-h6hRzTfjkHPZ/agent.1237.

***** Plugin catchall (100. confidence) suggests ***************************

If you believe that ssh-add should be allowed connectto access on the agent.1237 unix_stream_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# grep ssh-add /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

Additional Information:
Source Context                staff_u:staff_r:staff_t:s0-s0:c0.c1023
Target Context                staff_u:unconfined_r:unconfined_t:s0
Target Objects                /tmp/ssh-h6hRzTfjkHPZ/agent.1237 [ unix_stream_socket ]
Source                        ssh-add
Source Path                   /usr/bin/ssh-add
Port                          <Unknown>
Host                          (removed)
Source RPM Packages           openssh-clients-6.2p2-5.fc19.x86_64
Target RPM Packages
Policy RPM                    selinux-policy-3.12.1-73.fc19.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     (removed)
Platform                      Linux (removed) 3.10.10-200.fc19.x86_64 #1 SMP Thu Aug 29 19:05:45 UTC 2013 x86_64 x86_64
Alert Count                   1
First Seen                    2013-09-06 07:39:33 CEST
Last Seen                     2013-09-06 07:39:33 CEST
Local ID                      8d75930e-5018-45bd-8c19-01c23c93a872

Raw Audit Messages
type=AVC msg=audit(1378445973.487:615): avc: denied { connectto } for pid=2846 comm="ssh-add" path="/tmp/ssh-h6hRzTfjkHPZ/agent.1237" scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=staff_u:unconfined_r:unconfined_t:s0 tclass=unix_stream_socket

type=SYSCALL msg=audit(1378445973.487:615): arch=x86_64 syscall=connect success=no exit=EACCES a0=3 a1=7ffffc96f2b0 a2=6e a3=fffffffffffffb47 items=0 ppid=2845 pid=2846 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 ses=4 tty=(none) comm=ssh-add exe=/usr/bin/ssh-add subj=staff_u:staff_r:staff_t:s0-s0:c0.c1023 key=(null)

Hash: ssh-add,staff_t,unconfined_t,unix_stream_socket,connectto

Additional info:
reporter:       libreport-2.1.6
hashmarkername: setroubleshoot
kernel:         3.10.10-200.fc19.x86_64
type:           libreport
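(For a quick look at where the mismatch is, using only the path from the alert above:

# ls -lZ /tmp/ssh-h6hRzTfjkHPZ/agent.1237
# ps -eZ | grep ssh-agent

The first command shows the context of the agent socket, the second the domain of the running agent process; the denial says the staff_t ssh-add is trying to connect to a socket held by an unconfined_t process.)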
What is your output of the following?

# ps -efZ | grep unconfined
# ps -efZ | grep unconfined
staff_u:unconfined_r:unconfined_t:s0 juan 1240 1 0 07:29 ? 00:00:00 ssh-agent
staff_u:unconfined_r:unconfined_t:s0 juan 1241 1 0 07:29 ? 00:00:00 gpg-agent --daemon --write-env-file /home/juan/.gnupg/gpg-agent-info
staff_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 root 14100 14096 0 10:31 pts/7 00:00:00 /bin/bash
staff_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 root 15467 14100 0 11:09 pts/7 00:00:00 ps -efZ
staff_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 root 15468 14100 0 11:09 pts/7 00:00:00 grep --color=auto unconfined

I launch the agent at reboot with cron. How can I launch it so that it runs as staff_t?
Ok, the problem is that ssh-agent is running as staff_u:unconfined_r:unconfined_t:s0. Any chance to re-run the agent? On my system:

$ ps -eZ | grep ssh
system_u:system_r:sshd_t:s0-s0:c0.c1023 800 ? 00:00:00 sshd
staff_u:staff_r:staff_ssh_agent_t:s0 2741 ? 00:00:00 ssh-agent
staff_u:staff_r:ssh_t:s0 4481 pts/1 00:00:00 ssh
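For example (just a sketch, adjust to your setup):

$ pkill -U $LOGNAME ssh-agent
$ ssh-agent
$ ps -eZ | grep ssh-agent

Started from a login shell, the agent should end up in the staff_ssh_agent_t domain as above.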
Now I'm launching ssh-agent from .bashrc and the process is labeled as staff_ssh_agent_t; everything works alright. Why does it not get labeled correctly when launched from the user crontab? Thank you, and sorry for the inconvenience.
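For reference, the .bashrc block is roughly the following (illustrative, not the exact snippet; it mirrors the crontab entry quoted further down):

# SSH Agent
if ! pgrep -U $LOGNAME ssh-agent >/dev/null 2>&1; then
    umask 0077
    ssh-agent | sed 's/^echo/#echo/' > ${HOME}/.ssh/environment
fi
[ -r ${HOME}/.ssh/environment ] && . ${HOME}/.ssh/environment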
Ok, so it looks like your cron job is not running in the staff_t domain as expected. What does the following show?

# grep -r crond_t /etc/selinux/targeted/contexts/users/
# grep -r crond_t /etc/selinux/targeted/contexts/users/
/etc/selinux/targeted/contexts/users/guest_u:system_r:crond_t:s0 guest_r:guest_t:s0
/etc/selinux/targeted/contexts/users/root:system_r:crond_t:s0 unconfined_r:unconfined_t:s0 sysadm_r:sysadm_t:s0 staff_r:staff_t:s0 user_r:user_t:s0
/etc/selinux/targeted/contexts/users/staff_u:system_r:crond_t:s0 staff_r:staff_t:s0
/etc/selinux/targeted/contexts/users/unconfined_u:system_r:crond_t:s0 unconfined_r:unconfined_t:s0
/etc/selinux/targeted/contexts/users/user_u:system_r:crond_t:s0 user_r:user_t:s0
/etc/selinux/targeted/contexts/users/xguest_u:system_r:crond_t:s0 xguest_r:xguest_t:s0
It is OK. What does your user crontab configuration look like?
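You can also cross-check that your account really maps to the staff_u SELinux user (semanage comes with policycoreutils):

# semanage login -l

The entry for your login should list staff_u as the SELinux User.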
I have in the user crontab:

# SSH Agent
@reboot umask 0077; rm -f ${HOME}/.ssh/environment; pgrep -U $LOGNAME ssh-agent >/dev/null 2>&1 || ssh-agent | sed 's/^echo/#echo/' > ${HOME}/.ssh/environment
*/5 * * * * umask 0077; pgrep -U $LOGNAME ssh-agent >/dev/null 2>&1 || ssh-agent | sed 's/^echo/#echo/' > ${HOME}/.ssh/environment
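(A throwaway test entry can show which domain the cron job actually gets, e.g.:

*/5 * * * * id -Z >> /tmp/cron-context.txt

Per the staff_u contexts file above, this should print a staff_u:staff_r:staff_t context; if it prints an unconfined_t context instead, the crond_t transition is not being applied as configured.)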
This message is a notice that Fedora 19 is now at end of life. Fedora has stopped maintaining and issuing updates for Fedora 19. It is Fedora's policy to close all bug reports from releases that are no longer maintained. Approximately 4 (four) weeks from now this bug will be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 19 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to this bug being closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.