Bug 1465336 - Segmentation fault when trying to input any command in 7.3.6-4 atomic host of vSphere
Summary: Segmentation fault when trying to input any command in 7.3.6-4 atomic host of vSphere
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: rhel-server-atomic
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: davis phillips
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Duplicates: 1464321 (view as bug list)
Depends On:
Blocks:
 
Reported: 2017-06-27 09:09 UTC by Alex Jia
Modified: 2018-07-03 14:41 UTC (History)
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-03 14:41:54 UTC
Target Upstream Version:
Embargoed:


Attachments
segfault (22.55 KB, image/jpeg) - 2017-06-27 09:09 UTC, Alex Jia

Description Alex Jia 2017-06-27 09:09:26 UTC
Created attachment 1292223 [details]
segfault

Description of problem:
After deploying rhel-atomic-vsphere-7.3.6-4.x86_64.vsphere.ova to vCenter with the vSphere client, booting it, and logging in, any command you enter results in a segmentation fault.

Version-Release number of selected component (if applicable):

rhel-atomic-vsphere-7.3.6-4.x86_64.vsphere.ova
ESX 6.0
vSphere Client 6.0.0

How reproducible:
always

Steps to Reproduce:
1. Deploy rhel-atomic-vsphere-7.3.6-4.x86_64.vsphere.ova to vCenter with the vSphere client, boot the VM, and log in
2. ls


Actual results:

Please check the attached screenshot

Expected results:


Additional info:

Comment 2 davis phillips 2017-06-27 15:35:16 UTC
Hey Alex, 

I'm attempting to recreate this now. 

Thanks!
Davis

Comment 3 Jonathan Lebon 2017-06-27 15:41:03 UTC
FWIW, I can't reproduce this on the qcow2 image of the same batch. Looks like something specific to vSphere.

Comment 4 Micah Abbott 2017-06-27 15:44:27 UTC
FWIW, I converted the .vmdk file to .qcow2 and created a VM on my local libvirt stack and reproduced the same issue there.
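
For reference, a rough sketch of that kind of conversion (the extracted disk name here is hypothetical; an .ova is just a tar archive containing the .ovf descriptor and the .vmdk disk):

# unpack the OVA, then convert the VMware disk to qcow2 for libvirt/QEMU
tar -xf rhel-atomic-vsphere-7.3.6-4.x86_64.vsphere.ova
qemu-img convert -f vmdk -O qcow2 rhel-atomic-vsphere-disk1.vmdk rhel-atomic-vsphere.qcow2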

Using non-VMWare 'rhel-atomic-cloud-7.3.6-5.x86_64.qcow2', I did not experience this issue.

Comment 5 Alex Jia 2017-06-27 15:55:46 UTC
Yes, I haven't hit a similar issue with rhel-atomic-cloud-7.3.6-5.x86_64.qcow2 or rhel-atomic-cloud-7.3.6-5.x86_64.vhd.

Comment 6 Colin Walters 2017-06-27 16:39:12 UTC
What does e.g. `ostree fsck` say?

Comment 7 Jonathan Lebon 2017-06-27 16:41:33 UTC
I think the issue is a bad LD_LIBRARY_PATH:

[cloud-user@r7-tmp ~]$ ls -la
Segmentation fault (core dumped)
[cloud-user@r7-tmp ~]$ echo $LD_LIBRARY_PATH 
/var/lib/containers/atomic/open-vm-tools/rootfs/usr/lib64/:
[cloud-user@r7-tmp ~]$ unset LD_LIBRARY_PATH 
[cloud-user@r7-tmp ~]$ ls -la
total 12
drwx------. 3 cloud-user cloud-user  74 Jun 27 16:14 .
drwxr-xr-x. 3 root       root        24 Jun 27 16:14 ..
-rw-r--r--. 1 cloud-user cloud-user  18 Jan  1  1970 .bash_logout
-rw-r--r--. 1 cloud-user cloud-user 193 Jan  1  1970 .bash_profile
-rw-r--r--. 1 cloud-user cloud-user 231 Jan  1  1970 .bashrc
drwx------. 2 cloud-user cloud-user  29 Jun 27 16:14 .ssh

which is due to this bit in the kickstart:

# rhbz 1166465
cat <<EOF > /etc/profile.d/open-vm-tools.sh
export LD_LIBRARY_PATH=/var/lib/containers/atomic/open-vm-tools/rootfs/usr/lib64/:$LD_LIBRARY_PATH
export PATH=/var/lib/containers/atomic/open-vm-tools/rootfs/usr/bin/:$PATH
EOF

Looking at https://bugzilla.redhat.com/show_bug.cgi?id=1166465, I think the goal was to add vmware-toolbox-cmd to the PATH. The issue is that LD_LIBRARY_PATH will be used by *all* processes, which is clearly not what we want here. (Also $PATH at kickstart time includes things like /mnt/sysimage/usr/bin, which doesn't make sense).

How about just dropping a helper in /usr/local/bin that runs the utility in a container?
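
A minimal sketch of what such a helper could look like, assuming the open-vm-tools system container keeps its rootfs at the path quoted above and ships vmware-toolbox-cmd under its usr/bin; the wrapper name and location are illustrative, not the fix that was actually merged:

#!/bin/sh
# Hypothetical /usr/local/bin/vmware-toolbox-cmd wrapper (illustration only).
# Scope the container's library and binary paths to this one invocation
# instead of exporting LD_LIBRARY_PATH globally from /etc/profile.d.
ROOTFS=/var/lib/containers/atomic/open-vm-tools/rootfs
exec env \
    LD_LIBRARY_PATH="$ROOTFS/usr/lib64" \
    "$ROOTFS/usr/bin/vmware-toolbox-cmd" "$@"

This keeps the container's libraries visible only to the VMware utility itself, while every other process keeps the host's default linker search path.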

Comment 8 Jonathan Lebon 2017-06-27 16:44:51 UTC
In the meantime: https://code.engineering.redhat.com/gerrit/110218

Comment 9 Jonathan Lebon 2017-06-27 16:49:06 UTC
*** Bug 1464321 has been marked as a duplicate of this bug. ***

Comment 10 davis phillips 2017-06-27 16:53:47 UTC
Yes. I'm booting into rescue mode to change it on the filesystem.

/var/lib/containers/atomic/open-vm-tools/rootfs/usr/lib64/:

I think the trailing : is causing the issue.

Comment 11 Colin Walters 2017-06-27 16:56:54 UTC
It's not going to work in general to add container content to the host's paths.  As @jlebon says, this needs to be a wrapper script that does an exec into the container.  

Or alternatively we generate host RPMs, but that bit isn't ready.

Comment 12 davis phillips 2017-06-27 17:00:42 UTC
No problem. I'll add a wrapper script that execs into the container. Thanks for the help!

Comment 13 davis phillips 2017-06-27 18:06:22 UTC
Here we go:

https://code.engineering.redhat.com/gerrit/#/c/110224/

Comment 14 Alex Jia 2017-06-28 04:48:32 UTC
This issue has been fixed in rhel-atomic-vsphere-7.3.6-5.x86_64.vsphere.ova, but the open-vm-tools container can't be started successfully when booting the atomic host on vSphere; I will file a separate bug for that.

[cloud-user@atomic-host-test ~]$ atomic host status
State: idle
Deployments:
● rhel-atomic-host:rhel-atomic-host/7/x86_64/standard
             Version: 7.3.6 (2017-06-23 16:20:45)
              Commit: e073a47baa605a99632904e4e05692064302afd8769a15290d8ebe8dbfd3c81b

