Bug 1720310 - RHV-H post-installation scripts failing, due to existing tags
Summary: RHV-H post-installation scripts failing, due to existing tags
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: imgbased
Version: 4.3.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ovirt-4.3.5
Target Release: 4.3.5
Assignee: Yuval Turgeman
QA Contact: Qin Yuan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-13 16:02 UTC by Steffen Froemer
Modified: 2022-07-09 14:14 UTC
CC List: 13 users

Fixed In Version: imgbased-1.1.8
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-12 11:54:27 UTC
oVirt Team: Infra
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-47452 0 None None None 2022-07-09 14:14:07 UTC
Red Hat Product Errata RHSA-2019:2437 0 None None None 2019-08-12 11:54:48 UTC
oVirt gerrit 101480 0 master MERGED init: check if imgbased tags exist target volumes 2020-05-17 12:46:34 UTC
oVirt gerrit 101633 0 ovirt-4.3 MERGED init: check if imgbased tags exist target volumes 2020-05-17 12:46:34 UTC

Description Steffen Froemer 2019-06-13 16:02:18 UTC
Description of problem:
The installation runs into an error during the post-installation script if a disk/logical volume is present that already has the appropriate tags assigned, even if that disk is not used for the installation.


Version-Release number of selected component (if applicable):
rhv-h-4.3 installer

How reproducible:
always

Steps to Reproduce:
1. Use a system with two disks
2. Boot the rhvh-installer image and install RHVH successfully on disk1
3. Boot the rhvh-installer image and install RHVH on disk2 (leave disk1 untouched)

Actual results:
The installation fails in the post-installation script part.

Expected results:
The installation should complete successfully, or exit with a better error message that names the disk and logical volume on which the tags were already present.

Even better, this condition could be checked before the installation begins. Running into an error after everything has been configured and finalized does not increase user acceptance.

Additional info:
This issue happened on a system where the installation disk 'sda' was cleared, but existing LUNs were still attached to the host.
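
For reference, whether any attached disk already carries imgbased tags can be checked manually before starting the installation. A minimal sketch using the same LVM queries imgbased itself runs (the tag names imgbased:vg, imgbased:pool and imgbased:init appear in the debug log in comment 1 below; run as root):

# volume groups tagged as imgbased VGs
vgs --noheadings --select 'vg_tags = imgbased:vg' -o vg_name
# thin pool and init LVs tagged by imgbased
vgs --noheadings @imgbased:pool -o lv_full_name
vgs --noheadings @imgbased:init -o lv_full_name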

Comment 1 Qin Yuan 2019-06-18 10:03:55 UTC
Following the steps in comment #0, the issue is reproducible with RHVH-4.3-20190512.3-RHVH-x86_64-dvd1.iso; the error is:

2019-06-18 15:52:27,227 [DEBUG] (MainThread) Version: imgbased-1.1.7
2019-06-18 15:52:27,242 [DEBUG] (MainThread) Arguments: Namespace(bases=False, command='layout', debug=True, experimental=False, free_space=False, init=True, init_nvr=None, layers=False, size=None, source='/', stream='Image', units='m')
2019-06-18 15:52:27,243 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7fb6973a11e0>}
2019-06-18 15:52:27,243 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7fb6973a11e0>}
2019-06-18 15:52:27,342 [DEBUG] (MainThread) Returned: rhvh_dell-per510-01
2019-06-18 15:52:27,344 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7fb6973a11e0>}
2019-06-18 15:52:27,344 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7fb6973a11e0>}
2019-06-18 15:52:27,418 [DEBUG] (MainThread) Returned: rhvh_dell-per510-01/pool00
2019-06-18 15:52:27,419 [DEBUG] (MainThread) Calling binary: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:init', '-o', 'lv_full_name'],) {'stderr': <open file '/dev/null', mode 'w' at 0x7fb6973a11e0>}
2019-06-18 15:52:27,419 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:init', '-o', 'lv_full_name'],) {'close_fds': True, 'stderr': <open file '/dev/null', mode 'w' at 0x7fb6973a11e0>}
2019-06-18 15:52:27,489 [DEBUG] (MainThread) Returned: rhvh_dell-per510-01/root
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 53, in <module>
    CliApplication()
  File "/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line 173, in post_argparse
    layout.initialize(args.source, args.init_nvr)
  File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line 220, in initialize
    self.app.imgbase.init_layout_from(source, init_nvr)
  File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 261, in init_layout_from
    "Looks like the system already has imgbase working properly.\n"
imgbased.imgbase.ExistingImgbaseWithTags: Looks like the system already has imgbase working properly.
However, imgbase was called with --init. If this was intentional, please untag the existing volumes and try again.


The exception must occur when there is already an imgbase; giving QE ack for the error message improvement.
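
For anyone who hits this before a fix lands, untagging the leftover volumes as the error message suggests could look roughly like the sketch below. The VG/LV names are the ones from the debug log above and must be replaced with whatever the vgs queries report on the affected system; note that this permanently strips the imgbased tags from the old installation, which will likely break imgbased rollbacks into it:

vgchange --deltag imgbased:vg rhvh_dell-per510-01
lvchange --deltag imgbased:pool rhvh_dell-per510-01/pool00
lvchange --deltag imgbased:init rhvh_dell-per510-01/root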

Comment 2 Qin Yuan 2019-06-20 00:32:48 UTC
This bug is similar to bug 1376607.

Comment 3 Yuval Turgeman 2019-07-03 08:34:56 UTC
This is an interesting problem: we need imgbased to detect the disk that anaconda is installing on, and run init on that disk only. Ryan, what do you think?

Comment 4 Yuval Turgeman 2019-07-03 09:20:16 UTC
Nevermind, I'll extend our check on existing tags instead.
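
Roughly, the idea (an assumption about the shape of the eventual patch, not the actual code) would be to restrict the existing-tags query to the volume group the current installation is targeting, so tags left over on other disks no longer abort --init. In LVM selection terms:

# current behavior: fail if any VG carries the tag
vgs --noheadings --select 'vg_tags = imgbased:vg' -o vg_name
# proposed: only consider the VG backing the new installation
# (TARGET_VG is a hypothetical placeholder for the VG created by anaconda)
TARGET_VG=rhvh_newhost
vgs --noheadings --select "vg_tags = imgbased:vg && vg_name = $TARGET_VG" -o vg_name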

Comment 5 Martin Tessun 2019-07-10 09:03:23 UTC
I have trouble seeing why this is a bug. I don't see a sensible use case for installing RHVH on two different disks of the same host.

If someone can give me a reasonable scenario that cannot be solved with the existing install/upgrade and disk mirroring, I am happy to hear it.

Comment 6 Sandro Bonazzola 2019-07-10 09:43:25 UTC
(In reply to Martin Tessun from comment #5)
> I have trouble seeing why this is a bug. I don't see a sensible use case
> for installing RHVH on two different disks of the same host.
> 
> If someone can give me a reasonable scenario that cannot be solved with the
> existing install/upgrade and disk mirroring, I am happy to hear it.

A possible use case that comes to my mind:
- Disk A, installed with RHV-H, starts reporting SMART errors and needs replacement.
- Disk B is added and installed with RHV-H, keeping disk A in the system in case a rollback to the previous version is needed.

Comment 9 Qin Yuan 2019-07-13 12:43:29 UTC
Tested with redhat-virtualization-host-4.3.5-20190710.2.el7_7; the second installation on disk2 completed successfully. The bug is fixed, moving to VERIFIED.

Comment 12 errata-xmlrpc 2019-08-12 11:54:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2437

