Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1887726

Summary: Provision of RHVH fails
Product: Red Hat Enterprise Virtualization Manager
Reporter: Roni <reliezer>
Component: redhat-virtualization-host
Assignee: Nir Levy <nlevy>
Status: CLOSED DUPLICATE
QA Contact: cshao <cshao>
Severity: urgent
Priority: urgent
Docs Contact:
Version: 4.4.2
CC: aefrat, cshao, jmacku, khakimi, lsvaty, mavital, mburman, michal.skrivanek, nlevy, peyu, reliezer, sbonazzo, shlei, weiwang, yaniwang
Target Milestone: ovirt-4.4.3
Keywords: AutomationBlocker, Regression
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-10-14 12:12:44 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Node
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Roni 2020-10-13 08:16:07 UTC
Description of problem:
Provision of RHVH image fails

Version-Release number of selected component (if applicable):
RHVH-4.4-20201008

How reproducible:
100%

Steps to Reproduce:
1. Run the reprovision job
2. Or download the RHVH image: RHVH-4.4-20201008.0-RHVH-x86_64-dvd1.iso
3. And install it on a host

Actual results:
Reprovision fails:

node status: DEGRADED
Please check the status manually using `nodectl check`
See additional info below.

Expected results:
Reprovision should succeed

Additional info:

[root@lynx17 ~]# imgbase w
2020-10-13 09:58:57,323 [ERROR] (MainThread) The root volume does not look like an image
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/imgbased/naming.py", line 278, in parse
    nvrtuple = re.match("^(^.*)-([^-]*)-([^-]*)$", nvr).groups()
AttributeError: 'NoneType' object has no attribute 'groups'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.6/site-packages/imgbased/__main__.py", line 53, in <module>
    CliApplication()
  File "/usr/lib/python3.6/site-packages/imgbased/__init__.py", line 82, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/usr/lib/python3.6/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/usr/lib/python3.6/site-packages/imgbased/plugins/core.py", line 164, in post_argparse
    msg = "You are on %s" % app.imgbase.current_layer()
  File "/usr/lib/python3.6/site-packages/imgbased/imgbase.py", line 409, in current_layer
    return self.image_from_path(lv)
  File "/usr/lib/python3.6/site-packages/imgbased/imgbase.py", line 168, in image_from_path
    return Image.from_lv_name(name)
  File "/usr/lib/python3.6/site-packages/imgbased/naming.py", line 340, in from_lv_name
    return cls.from_nvr(lv_name)
  File "/usr/lib/python3.6/site-packages/imgbased/naming.py", line 335, in from_nvr
    return Base(nvr)
  File "/usr/lib/python3.6/site-packages/imgbased/naming.py", line 402, in __init__
    self.nvr = NVR.parse(nvr)  # For convenience: Parse if necessary
  File "/usr/lib/python3.6/site-packages/imgbased/naming.py", line 280, in parse
    raise RuntimeError("Failed to parse NVR: %s" % nvr)
RuntimeError: Failed to parse NVR: root
[root@lynx17 ~]# 
[root@lynx17 ~]# 
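The traceback above boils down to NVR.parse() receiving the plain LV name "root", which has no name-version-release structure, so re.match() returns None and the .groups() call raises AttributeError. A minimal sketch of the failing path (the regex is copied from the traceback; the example layer name is illustrative, not the exact LV name on this host):

```python
import re

# Pattern from imgbased/naming.py line 278, as shown in the traceback.
NVR_RE = r"^(^.*)-([^-]*)-([^-]*)$"

def parse_nvr(nvr):
    """Parse a name-version-release string, mirroring NVR.parse():
    raise RuntimeError when the string does not look like an NVR."""
    m = re.match(NVR_RE, nvr)
    if m is None:
        # re.match() returns None when the name has no "-" separators,
        # which is exactly what happens for the plain "root" volume.
        raise RuntimeError("Failed to parse NVR: %s" % nvr)
    return m.groups()

# A well-formed layer name parses cleanly:
print(parse_nvr("rhvh-4.4.2-0.20201008"))  # ('rhvh', '4.4.2', '0.20201008')

# ...while "root" raises RuntimeError("Failed to parse NVR: root"),
# matching the final line of the traceback above.
```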
[root@lynx17 ~]# nodectl check
Status: FAILED
Bootloader ... FAILED - It looks like there are no valid bootloader entries. Please ensure this is fixed before rebooting.
  Layer boot entries ... FAILED - No bootloader entries which point to imgbased layers
  Valid boot entries ... OK
Mount points ... FAILED - This can happen if the installation was performed incorrectly
  Separate /var ... FAILED - /var got unmounted, or was not setup to use a separate volume
  Discard is used ... FAILED - 'discard' mount option was not added or got removed
Basic storage ... OK
  Initialized VG ... OK
  Initialized Thin Pool ... OK
  Initialized LVs ... OK
Thin storage ... FAILED - It looks like the LVM layout is not correct. The reason could be an incorrect installation.
  Checking available space in thinpool ... OK
  Checking thinpool auto-extend ... FAILED - In order to enable thinpool auto-extend,activation/thin_pool_autoextend_threshold needs to be set below 100 in lvm.conf
vdsmd ... OK
[root@lynx17 ~]# 
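The thinpool auto-extend failure above refers to LVM's activation settings. A sketch of the relevant lvm.conf stanza (the specific values are illustrative, not what the RHVH installer ships):

```
# /etc/lvm/lvm.conf (excerpt)
activation {
    # Must be below 100 for thin pool auto-extension to be active;
    # auto-extend triggers once the pool passes this usage percentage.
    thin_pool_autoextend_threshold = 80
    # How much to grow the pool each time the threshold is crossed.
    thin_pool_autoextend_percent = 20
}
```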


Complete image info:
--------------------
{
  "compose_id": "RHVH-4.4-20201008.0",
  "compose_label": "RC-20201008.0",
  "compose_path": "/mnt/redhat/rhel-8/devel/candidate-trees/RHV/RHVH-4.4-20201008.0",
  "compose_respin": 0,
  "compose_date": "20201008",
  "variant": "RHVH",
  "release_version": "4.4",
  "location": "/mnt/redhat/rhel-8/devel/candidate-trees/RHV/RHVH-4.4-20201008.0/compose",
  "file": "/mnt/redhat/rhel-8/devel/candidate-trees/RHV/RHVH-4.4-20201008.0/compose/RHVH/x86_64/iso/RHVH-4.4-20201008.0-RHVH-x86_64-dvd1.iso",
  "compose_type": "production",
  "release_is_layered": false,
  "release_name": "RHVH",
  "arch": "x86_64",
  "release_short": "RHVH",
  "release_type": "ga"
}

Comment 2 Michal Skrivanek 2020-10-14 05:40:35 UTC
Please do not open RHV bugs unless it is a RHV specific issue not relevant to oVirt
Please make description public, this way the bug is useless to community

please fix

Comment 3 Sandro Bonazzola 2020-10-14 07:12:12 UTC
(In reply to Michal Skrivanek from comment #2)
> Please do not open RHV bugs unless it is a RHV specific issue not relevant
> to oVirt

This is RHV-H specific, RHV-H image was broken due to build system issues.

> Please make description public, this way the bug is useless to community

done

> please fix

we are looking into it.


Nir, I know the root cause of the error is the broken RHV-H ISO, but imgbased could avoid printing the traceback and fail more gracefully.
It may be worth opening a separate bug on imgbased to track this.
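For the graceful-failure suggestion, a minimal sketch of how the current_layer() call site could be guarded (the function names here are hypothetical stand-ins for the imgbased internals, not an actual patch):

```python
import sys

def current_layer_name():
    # Stand-in for app.imgbase.current_layer(); on the broken image it
    # raises RuntimeError("Failed to parse NVR: root"), as in the traceback.
    raise RuntimeError("Failed to parse NVR: root")

def post_argparse():
    # Guarded version of the "You are on ..." message from
    # imgbased/plugins/core.py: report the problem instead of crashing.
    try:
        return "You are on %s" % current_layer_name()
    except RuntimeError as e:
        sys.stderr.write("imgbase: root volume is not an imgbased layer "
                         "(%s)\n" % e)
        return None

result = post_argparse()  # None on the broken image; one-line error on stderr
```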

Comment 4 Sandro Bonazzola 2020-10-14 12:12:44 UTC
Closing this as duplicate of bug #1886695

*** This bug has been marked as a duplicate of bug 1886695 ***