Bug 1569071 - `oc` client not in sudo secure_path when deployed to /usr/local/bin/
Summary: `oc` client not in sudo secure_path when deployed to /usr/local/bin/
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.9.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 3.9.z
Assignee: Scott Dodson
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-18 14:52 UTC by Brian J. Beaudoin
Modified: 2018-05-17 15:09 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-20 19:00:26 UTC
Target Upstream Version:
Embargoed:



Description Brian J. Beaudoin 2018-04-18 14:52:57 UTC
Description of problem:

When OpenShift is deployed on RHEL with `containerized=true` or on RHEL Atomic, the `oc` client is not found in the default sudo path, whether invoked as `sudo oc <args>` or from an interactive `sudo -i` session.

The reason is the sudo `secure_path` statement in /etc/sudoers:

~~~
[user@master1 ~]$ sudo grep secure_path /etc/sudoers
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin
~~~

Version-Release number of selected component (if applicable):

openshift-ansible-3.9.14-1.git.3.c62bc34.el7.noarch
ansible-2.4.3.0-1.el7ae.noarch
ansible 2.4.3.0
  config file = /home/bbeaudoin/.ansible.cfg
  configured module search path = [u'/home/bbeaudoin/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

How reproducible:

Deploy OpenShift with `containerized=true` set in the inventory for the Advanced Installation method. Post-install, log into the first master and attempt to run commands using `sudo oc` or in an interactive shell created by `sudo -i`.

Steps to Reproduce:
1. Deploy OpenShift with `containerized=true`
2. Log into the first master
3. Execute `sudo oc`

Actual results:
The binary is not found unless the full path to `oc` is given. 

~~~
[user@master1 ~]$ sudo which oc
which: no oc in (/sbin:/bin:/usr/sbin:/usr/bin)

[user@master1 ~]$ sudo oc version
sudo: oc: command not found

[user@master1 ~]$ which oc
/usr/local/bin/oc

[user@master1 ~]$ sudo -i
[root@master1 ~]# which oc
/usr/bin/which: no oc in (/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
[root@master1 ~]# 

~~~

Expected results:

~~~
[user@master1 ~]$ sudo oc version
oc v3.9.14
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://master1.example.com
openshift v3.9.14
kubernetes v1.9.1+a0ce1bc657
~~~

Additional info:

When the `oc` client is used as the installation user, it works as expected. The path /usr/local/bin is compiled into the default PATH used by `sshd`, so direct `root` logins are unaffected (as noted by the comment in /etc/ssh/sshd_config).

When commands and command aliases are defined in /etc/sudoers, the full path must be used on the sudo command line, and that path may vary between clusters depending on the installation type.
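
For example, a sudoers rule that restricts a user to the client has to spell out whichever path applies to that cluster; a minimal sketch (the alias and group names are illustrative, not from any shipped configuration):

~~~
# Hypothetical /etc/sudoers.d/oc-operators entry; alias and group names
# are placeholders. The full path depends on the install type:
# /usr/local/bin/oc on containerized/Atomic hosts, /usr/bin/oc for RPM installs.
Cmnd_Alias OC_CLIENT = /usr/local/bin/oc
%operators ALL = (root) OC_CLIENT
~~~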

Workarounds:
1. Adding /usr/local/bin to the secure_path line in /etc/sudoers (a drop-in sketch follows this list).
2. Always using the full path to invoke the command (/usr/local/bin/oc)
3. Using `sudo su -` or `sudo su - -c oc` (the '-', '-l', or '--login' option is required to reinitialize the PATH environment variable).
4. Install atomic-openshift-clients using yum or rpm-ostree
5. `sudo ln -sf /usr/local/bin/oc /usr/local/sbin` (this only works with sudo when the '-i' or '--login' option is used, regardless of whether the session is interactive).
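
A minimal sketch of workaround #1 as a sudoers drop-in (the file name is hypothetical; this assumes the stock #includedir at the end of /etc/sudoers, and should be validated with `visudo -cf` before relying on it):

~~~
# Hypothetical drop-in: /etc/sudoers.d/openshift-secure-path
# Because /etc/sudoers.d is included last, this later Defaults entry
# overrides the one in /etc/sudoers and prepends /usr/local/bin
# without editing the main file.
Defaults    secure_path = /usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
~~~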

Comment 1 Juan Vallejo 2018-04-18 19:39:08 UTC
Moving to the installer team.

Comment 2 Scott Dodson 2018-04-18 19:58:22 UTC
Can you elaborate on why you want to use `oc` via sudo? That shouldn't be necessary. I'm not sure this is something we should fix but I'd like to understand why you think it's necessary.

Comment 3 Brian J. Beaudoin 2018-04-18 21:29:02 UTC
Since 3.7 we've replaced direct calls to `oc` with `{{ openshift_client_binary }}`, which has helped avoid playbook failures (caused by ansible_ssh_user != root with ansible_become=yes).
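
As a rough illustration of that pattern (the task itself is a sketch; only the `openshift_client_binary` variable name comes from openshift-ansible):

~~~
# Sketch of a playbook task that resolves the client through the
# installer-provided variable instead of hard-coding a path.
- name: Check cluster version with the installer-selected client
  command: "{{ openshift_client_binary }} version"
  become: yes
  changed_when: false
~~~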

Post-installation, I've seen existing scripts, run through Ansible or other third-party automation with sudo privilege escalation, fail for the same reason the previous installer did.

Arguably better options are:

1. Place the 'oc' binary in the secure path (not a mutable path).
2. Update automation to extend the PATH variable or check for the valid client path.
3. Don't log into the nodes to run the `oc` command (run it remotely).

There are downsides to #1 during installation, namely that on Atomic hosts the secure_path directories are immutable, so a reboot would be required to use the new package layer. The other two options just take time for better practices to be communicated, reinforced, and adopted by users.
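
A minimal sketch of what #2 looks like for shell-based automation (purely illustrative):

~~~
#!/bin/bash
# Illustrative only: tolerate either install location by extending PATH
# and failing early if no client can be found.
export PATH="/usr/local/bin:${PATH}"
if ! OC_BIN="$(command -v oc)"; then
    echo "oc client not found in PATH" >&2
    exit 1
fi
"${OC_BIN}" version
~~~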

Comment 4 Scott Dodson 2018-04-20 19:00:26 UTC
It seems to me that this should be at the admin's discretion. If they'd like to add /usr/local/bin to the sudoers secure_path then they should do so. I don't think we should exert control here.

