Bug 1661916 - uninstall playbook umounts /var/lib/origin/openshift.local.volumes
Summary: uninstall playbook umounts /var/lib/origin/openshift.local.volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Severity: low
Priority: low
Target Milestone: ---
Target Release: 3.11.z
Assignee: Jeremiah Stuever
QA Contact: sheng.lao
URL:
Whiteboard:
Depends On:
Blocks: 1679760
 
Reported: 2018-12-24 12:27 UTC by Ravi Trivedi
Modified: 2019-03-14 02:18 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1679760
Environment:
Last Closed: 2019-03-14 02:17:59 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Ansible uninstall log (5.69 MB, application/gzip)
2018-12-24 12:27 UTC, Ravi Trivedi


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0407 0 None None None 2019-03-14 02:18:07 UTC

Description Ravi Trivedi 2018-12-24 12:27:21 UTC
Created attachment 1516541 [details]
Ansible uninstall log

Description of problem:

As per [1], if the /var/lib/origin/openshift.local.volumes mount is created separately before installing OpenShift (without setting the 'container_runtime_extra_storage' variable), the uninstall playbook unmounts this mount point as well. When the installation is then re-run, new pods are created on the parent /var filesystem instead.

[1] - https://docs.openshift.com/container-platform/3.11/day_two_guide/environment_health_checks.html#day-two-guide-storage

Version-Release number of the following components:

# rpm -q openshift-ansible
openshift-ansible-3.11.43-1.git.0.fa69a02.el7.noarch
# rpm -q ansible
ansible-2.6.7-1.el7ae.noarch
# ansible --version
ansible 2.6.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Sep 12 2018, 05:31:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

How reproducible:

Always

Steps to Reproduce:
1. Create /var/lib/origin/openshift.local.volumes mount and install Openshift without setting 'container_runtime_extra_storage' var.
2. Run Uninstall playbook 

Actual results:

As per the following playbook and lines:

PLAYBOOK: /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall_openshift.yml

   188    - shell: find /var/lib/origin/openshift.local.volumes -type d -exec umount {} \; 2>/dev/null || true
   189      changed_when: False

Running this command on nodes where /var/lib/origin/openshift.local.volumes is mounted unmounts the /var/lib/origin/openshift.local.volumes filesystem itself too, because 'find ... -type d' matches the top-level directory as well as its subdirectories.
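The root cause can be reproduced without OpenShift at all: 'find DIR -type d' emits DIR itself, so the 'umount {}' in the task above also runs against the top-level mount point. A minimal sketch (the pod directory names are made up for illustration):

```shell
# Show that 'find DIR -type d' includes DIR itself, which is why the
# uninstall task also unmounts /var/lib/origin/openshift.local.volumes.
dir=$(mktemp -d)
mkdir -p "$dir/pod-a/volume" "$dir/pod-b/volume"

# Prints the top-level directory first, then the subdirectories:
find "$dir" -type d

# Restricting to subdirectories skips the top-level mount point:
find "$dir" -mindepth 1 -type d

rm -rf "$dir"
```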


Expected results:

The customer expects that if the playbook did not create any mounts, it should ideally not perform any umounts either. If the 'container_runtime_extra_storage' variable is defined for both the install and uninstall playbook runs, then it performs as expected.

It is unclear whether this qualifies as a doc bug (add a note to set the 'container_runtime_extra_storage' variable when creating this mount) or whether the playbook should handle this logic itself. The intention is to have idempotent install and uninstall playbook runs with regard to the /var/lib/origin/openshift.local.volumes mount.
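If the fix lands in the playbook rather than the docs, one possible shape (a sketch only, not necessarily what an eventual PR implements) is to restrict the find to subdirectories so the mount point itself is preserved:

```yaml
# Hypothetical variant of the uninstall task: '-mindepth 1' keeps
# /var/lib/origin/openshift.local.volumes itself mounted while still
# unmounting the per-pod volume directories beneath it.
- shell: find /var/lib/origin/openshift.local.volumes -mindepth 1 -type d -exec umount {} \; 2>/dev/null || true
  changed_when: False
```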

Comment 1 Scott Dodson 2019-01-02 14:46:25 UTC
Workaround: reboot the host, which should restore any defined mounts and ensure the host is fully reset.

Comment 14 sheng.lao 2019-02-27 08:24:03 UTC
PR-11208 is not in the latest release RPM, so it will be tested again later.

Comment 15 sheng.lao 2019-03-04 05:40:58 UTC
Verified the bug using openshift-ansible-3.11.88-1.git.0.42d1b9a.el7.noarch

After uninstalling OCP, the mount point is still present:
# mount |grep vol
/dev/vdb on /var/lib/origin/openshift.local.volumes type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

Comment 17 errata-xmlrpc 2019-03-14 02:17:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0407

