Bug 2186186 - Leapp fails to find grubby with error "[Errno 2] No such file or directory" when executing the jobs from Satellite WebUI
Summary: Leapp fails to find grubby with error "[Errno 2] No such file or directory" when executing the jobs from Satellite WebUI
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: leapp
Version: 7.9
Hardware: All
OS: All
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 7.9
Assignee: Leapp Notifications Bot
QA Contact: upgrades-and-conversions
URL:
Whiteboard:
Duplicates: 2240788 (view as bug list)
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2023-04-12 11:12 UTC by Sayan Das
Modified: 2023-10-03 06:28 UTC (History)
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-12 12:20:57 UTC
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OAMG-9254 0 None None None 2023-06-06 09:09:37 UTC
Red Hat Issue Tracker   RHEL-3293 0 None Migrated None 2023-09-12 12:19:54 UTC
Red Hat Issue Tracker RHELPLAN-159054 0 None None None 2023-06-06 09:09:23 UTC
Red Hat Issue Tracker SAT-17119 0 None None None 2023-04-13 14:07:58 UTC
Red Hat Knowledge Base (Solution) 7006724 0 None None None 2023-04-12 11:15:55 UTC

Description Sayan Das 2023-04-12 11:12:37 UTC
Description of problem:

When executing the leapp preupgrade or upgrade from the Satellite WebUI using the predefined job templates, and the remote_execution_ssh_user is a non-root user, the job runs for a while and then Leapp fails to find the grubby command.

Summary : {"details": "Command ['grubby', '--info', 'ALL'] failed with exit code 1.", "stderr": "Process Process-237:\nTraceback (most recent call last):\n  File \"/usr/lib64/python2.7/multiprocessing/process.py\", line 258, in _bootstrap\n    self.run()\n  File \"/usr/lib64/python2.7/multiprocessing/process.py\", line 114, in run\n    self._target(*self._args, **self._kwargs)\n  File \"/usr/lib/python2.7/site-packages/leapp/repository/actor_definition.py\", line 72, in _do_run\n    actor_instance.run(*args, **kwargs)\n  File \"/usr/lib/python2.7/site-packages/leapp/actors/__init__.py\", line 289, in run\n    self.process(*args)\n  File \"/usr/share/leapp-repository/repositories/system_upgrade/el7toel8/actors/sourcebootloaderscanner/actor.py\", line 18, in process\n    scan_source_boot_loader_configuration()\n  File \"/usr/share/leapp-repository/repositories/system_upgrade/el7toel8/actors/sourcebootloaderscanner/libraries/sourcebootloaderscanner.py\", line 87, in scan_source_boot_loader_configuration\n    entries=scan_boot_entries()\n  File \"/usr/share/leapp-repository/repositories/system_upgrade/el7toel8/actors/sourcebootloaderscanner/libraries/sourcebootloaderscanner.py\", line 41, in scan_boot_entries\n    grubby_output = run(CMD_GRUBBY_INFO_ALL, split=True)\n  File \"/usr/lib/python2.7/site-packages/leapp/libraries/stdlib/__init__.py\", line 181, in run\n    stdin=stdin, env=env, encoding=encoding)\n  File \"/usr/lib/python2.7/site-packages/leapp/libraries/stdlib/call.py\", line 217, in _call\n    os.execvpe(command[0], command, env=environ)\n  File \"/usr/lib64/python2.7/os.py\", line 353, in execvpe\n    _execvpe(file, args, env)\n  File \"/usr/lib64/python2.7/os.py\", line 380, in _execvpe\n    func(fullname, *argrest)\nOSError: [Errno 2] No such file or directory\n"}


Version-Release number of selected component (if applicable):

Satellite 6.10/6.11/6.12

How reproducible:

In the customer's environment 


Steps to Reproduce (this may or may not work):

1. Set up a Satellite 6.12 for REX + the Leapp plugin (prepare it for performing RHEL 7 to 8 conversions on client hosts)

2. On a RHEL 7.9 system, create a non-root user, give it sudo privileges, and ensure that its PATH environment variable does not include /usr/sbin (see the sketch after these steps).

3. Register that system with the satellite.

4. Remote Execution SSH User should be set to that non-root user and the effective user should be root.

5. Run the Preupgrade job on that host from Satellite UI --> Hosts --> All Hosts --> Select Actions dropdown

   If it succeeds, then run the Upgrade job in the same way.
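
A rough sketch of step 2 (the user name, sudoers entry, and shell-init tweak below are illustrative only; exactly how the customer's user ended up without /usr/sbin in PATH was not determined):

--------------------------
# as root on the RHEL 7.9 client
useradd rexuser
passwd rexuser
echo 'rexuser ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/rexuser
chmod 0440 /etc/sudoers.d/rexuser
# mimic a PATH without /usr/sbin for that user's sessions
echo 'export PATH=/usr/local/bin:/usr/bin' >> /home/rexuser/.bashrc
--------------------------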


Actual results:

On either the Preupgrade or the Upgrade job, leapp fails to find the grubby binary when listing the boot information.


Expected results:

No such issues.


Additional info:

After running strace on sshd on the client system, we found that when the SSH user becomes the effective user, the wrong PATH variable is inherited in the environment, and hence leapp cannot find the grubby command later.
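
A quick diagnostic to see the difference on the client (a sketch only; run as the non-root REX user, and note the output will depend on the sudoers configuration):

--------------------------
echo "$PATH"                      # PATH of the non-root session itself
sudo sh -c 'echo "$PATH"'         # PATH a script sees through plain sudo
sudo -i sh -c 'echo "$PATH"'      # PATH of a root login shell, normally includes /usr/sbin
sudo sh -c 'command -v grubby || echo "grubby not found"'   # grubby lives in /usr/sbin on RHEL 7
--------------------------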

The way we worked around it:

For "Check Leapp" template
--------------------------

export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
if ! command -v leapp > /dev/null
then
  echo "Leapp is not installed."
  exit 1
fi

----------------------------

The above automatically fixes the execution of the "Run preupgrade via Leapp" template as well.

For "Run upgrade via Leapp" template
------------------------------------

---
- hosts: all
  tasks:
    - name: Run Leapp Upgrade
      environment:
        PATH: '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin'
      command: leapp upgrade
<%- if input('Reboot') == "true" -%>
    - name: Reboot the machine
      reboot:
        reboot_timeout: 1800
<%- end -%>


------------------------------------

Analysis of the strace:

The PATH variable used during leapp preupgrade is "PATH=/usr/local/bin:/usr/bin"; as a result leapp is unable to find the grubby binary (which lives in /usr/sbin) and the preupgrade fails with the error "Failed to call grubby".


83547 10:46:18.547227 execve("/usr/bin/leapp", ["leapp", "preupgrade"], ["SMIT_QUOTE=n", "SHELL=/bin/bash", "TERM=xterm", "USER=root", "LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg"..., "SUDO_USER=decoy", "SUDO_UID=1011", "USERNAME=root", "MAIL=/var/mail/root", "PATH=/usr/local/bin:/usr/bin", "SMIT_SEMI_COLON=n", "PWD=/home/decoy", "LANG=en_US.UTF-8", "SHLVL=1", "SUDO_COMMAND=/var/tmp/foreman-ssh-cmd-56a92adc-0cfb-44a2-a49f-f78411d29de0/script", "HOME=/root", "LOGNAME=root", "SMIT_SHELL=n", "SUDO_GID=1011", "_=/usr/bin/leapp"]) = 0 <0.002122>

This is where it all starts:


83463 10:46:16.764980 execve("/bin/bash", ["bash", "-c", "mkdir -p /var/tmp/foreman-ssh-cmd-56a92adc-0cfb-44a2-a49f-f78411d29de0"], ["LANG=en_US.UTF-8", "USER=decoy", "LOGNAME=decoy", "HOME=/home/decoy", "PATH=/usr/local/bin:/usr/bin", "MAIL=/var/mail/decoy", "SHELL=/bin/bash", "SSH_CLIENT=10.13.XX.XX 53502 22", "SSH_CONNECTION=10.13.XX.XX 53502 10.241.XX.XX 22", "XDG_SESSION_ID=1900", "XDG_RUNTIME_DIR=/run/user/1011"]) = 0 <0.001512>



I personally was never able to reproduce the issue, but without the workaround mentioned above we have seen it happen 100% of the time for two users.

Comment 1 Sayan Das 2023-04-12 11:13:43 UTC
Since we probably have no control over the user's OS environment, can we not simply force the basic environment PATH in the job templates, or could that have a different side effect here?

Comment 2 Sayan Das 2023-04-12 11:18:02 UTC
CC'ing Dhaval and Pradeep from Support and Leos from the Satellite Engineering team.

Comment 7 Leos Stejskal 2023-06-06 08:58:14 UTC
We were discussing the issue and we are not sure whether this is something we should fix in our templates or whether it should be fixed in the Leapp tool itself.

Moving to the Leapp component to see what the Leapp team thinks about that.

Comment 11 Petr Stodulka 2023-06-06 11:56:31 UTC
Hi guys, from our point of view we clearly state that leapp must be executed by the root user, which should have /usr/sbin in the PATH variable. So from my POV this does not seem to be something I would consider a bug on our side. What is blocking you from executing leapp from your templates in a way like this?:
~~~
  PATH=$PATH:/usr/sbin leapp <subcmd> [options]
~~~
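For example, the templates could prefix the call (only a sketch; the exact template wording is up to the Satellite side):
~~~
PATH="$PATH:/usr/sbin:/sbin" leapp preupgrade
PATH="$PATH:/usr/sbin:/sbin" leapp upgrade
~~~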

I am not saying it's impossible for us to update the commands in the actors; however, I would not expect this to happen any time soon - if it ever happens, it would still need to be discussed. To me this is like a case where /usr/bin is not defined in the PATH envar.

Comment 12 Petr Stodulka 2023-06-08 12:50:22 UTC
P.S.: just to be more specific, in the case of sudo we would expect sudo to be run with the --login option (or -i), as other environment variables are also most likely different for the root user compared to a normal user with sudo privileges. Try:
~~~
$ sudo env
$ sudo -i env
~~~
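In other words, when the template goes through sudo, an invocation along these lines picks up root's own environment (a sketch, not the current template):
~~~
sudo -i leapp preupgrade   # login shell: root's PATH, including /usr/sbin
sudo leapp preupgrade      # keeps the caller's environment unless env_reset/secure_path intervenes
~~~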

Comment 13 Leos Stejskal 2023-06-12 07:19:12 UTC
Hi Sayan,
As per Petr's comment in https://bugzilla.redhat.com/show_bug.cgi?id=2186186#c11,
the fix should be to run the job either as root or as a user with /usr/sbin in the PATH variable.

Can you check and confirm the solution with the CU?

Comment 14 Sayan Das 2023-06-12 09:04:49 UTC
Hello Leos,

A) I don't think any support cases are active today where I can ask the CU to check the PATH of the non-root user.

B) I cannot force a customer to add anything to the PATH variable that they don't already have without the reason being documented in RH Docs/KB. So if the proposal is to ensure that the non-root user has some specific set of paths in the PATH env var, then that should be documented somewhere. And yes, that would solve the issue as well.

C) As we have seen issues like this only when customers use a non-root user as the SSH user, apart from the PATH issue, can we confirm that Satellite initiates sudo in the right manner (whether it's normal REX or Ansible-based REX)?


-- Sayan

Comment 15 Leos Stejskal 2023-06-19 12:24:37 UTC
For the documentation, I suggest creating a new BZ.

For the C) point pinging @

Comment 16 Leos Stejskal 2023-06-19 12:25:28 UTC
Sorry, for the C) point: Adam, do you know how we run it?

Comment 17 Adam Ruzicka 2023-06-19 12:35:23 UTC
For sudo[1] we do "sudo -p '#{LOGIN_PROMPT}' -u #{effective_user} $path_to_script".
For su[2] we do "su - #{effective_user} -c $path_to_script"

I'm not completely sure what ansible does.

> Can we confirm that the satellite initiates the sudo in the right manner

What is the right manner in this case?

[1] - https://github.com/theforeman/smart_proxy_remote_execution_ssh/blob/master/lib/smart_proxy_remote_execution_ssh/runners/script_runner.rb#L49
[2] - https://github.com/theforeman/smart_proxy_remote_execution_ssh/blob/master/lib/smart_proxy_remote_execution_ssh/runners/script_runner.rb#L73
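
For illustration, with root as the effective user those interpolations expand to something along these lines (the prompt string and script path are placeholders, not taken from the linked code):

sudo -p 'rex-login-prompt' -u root /var/tmp/foreman-ssh-cmd-<job-id>/script
su - root -c /var/tmp/foreman-ssh-cmd-<job-id>/script

Note that "su -" starts a login shell (so root's normal PATH), while sudo without -i/--login keeps the calling user's environment subject to the sudoers policy, which matches the reduced PATH captured in the strace above.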

Comment 18 RHEL Program Management 2023-09-12 11:55:07 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 19 RHEL Program Management 2023-09-12 12:20:57 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

Comment 20 Leos Stejskal 2023-10-03 06:28:15 UTC
*** Bug 2240788 has been marked as a duplicate of this bug. ***

