Bug 1852721 - Installation of node will not quit when mountpoint has existing domain (VMs)
Summary: Installation of node will not quit when mountpoint has existing domain (VMs)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: redhat-virtualization-host
Version: 4.4.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ovirt-4.4.2
Target Release: 4.4.2
Assignee: Nir Levy
QA Contact: peyu
URL:
Whiteboard:
Depends On: 1850378 1863045
Blocks:
 
Reported: 2020-07-01 07:10 UTC by Nir Levy
Modified: 2022-08-18 08:22 UTC
CC List: 15 users

Fixed In Version: redhat-virtualization-host-4.4.2-20200812.3.el8_2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1850378
Environment:
Last Closed: 2020-10-05 13:09:40 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-37384 0 None None None 2022-08-18 08:22:56 UTC
Red Hat Product Errata RHSA-2020:4172 0 None None None 2020-10-05 13:10:23 UTC

Comment 17 peyu 2020-08-28 03:22:40 UTC
QE tested this issue and it has been resolved.

Test version:
redhat-virtualization-host-4.4.2-20200812.3.el8_2


Test 1: 

Test steps: Refer to Comment 7

Test results:
1. Host upgrade was successfully blocked.
On the RHVM side, the host status is "InstallFailed".
On the host side, the output is as follows:

~~~~~~
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Red Hat update to latest                                                                             257 kB/s | 1.1 kB     00:00    
Dependencies resolved.
=====================================================================================================================================
 Package                                             Architecture       Version                             Repository          Size
=====================================================================================================================================
Installing:
 redhat-virtualization-host-image-update             noarch             4.4.2-20200812.3.el8_2              update             779 M
     replacing  redhat-virtualization-host-image-update-placeholder.noarch 4.4.1-1.el8ev

Transaction Summary
=====================================================================================================================================
Install  1 Package

Total download size: 779 M
Is this ok [y/N]: y
Downloading Packages:
redhat-virtualization-host-image-update-latest.rpm                                                    81 MB/s | 779 MB     00:09    
-------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                 81 MB/s | 779 MB     00:09     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                             1/1 
  Running scriptlet: redhat-virtualization-host-image-update-4.4.2-20200812.3.el8_2.noarch                                       1/2 
Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will lose the VMs
See: https://bugzilla.redhat.com/show_bug.cgi?id=1550205#c3
Storage domains were found in:
	/local-storage/e44a1753-31d8-444b-8053-25b8777713df/dom_md
error: %prein(redhat-virtualization-host-image-update-4.4.2-20200812.3.el8_2.noarch) scriptlet failed, exit status 1

Error in PREIN scriptlet in rpm package redhat-virtualization-host-image-update
  Obsoleting       : redhat-virtualization-host-image-update-placeholder-4.4.1-1.el8ev.noarch                                    2/2 
error: redhat-virtualization-host-image-update-4.4.2-20200812.3.el8_2.noarch: install failed

  Verifying        : redhat-virtualization-host-image-update-4.4.2-20200812.3.el8_2.noarch                                       1/2 
  Verifying        : redhat-virtualization-host-image-update-placeholder-4.4.1-1.el8ev.noarch                                    2/2 
Unpersisting: redhat-virtualization-host-image-update-placeholder-4.4.1-1.el8ev.noarch.rpm
Installed products updated.

Failed:
  redhat-virtualization-host-image-update-4.4.2-20200812.3.el8_2.noarch                                                              

Error: Transaction failed
~~~~~~
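
For reference, the blocking behavior shown above comes from the image-update %pre scriptlet detecting storage domain metadata (dom_md directories) on the same filesystem as /. A minimal shell sketch of that idea follows; it is illustrative only and is not the actual scriptlet shipped in the package (the search root and messages are assumptions):

~~~~~~
#!/bin/bash
# Sketch: refuse to proceed if any storage domain metadata directory
# (dom_md) is found on the root filesystem. Because of -xdev, find only
# walks the filesystem that holds /, so any hit means a local storage
# domain shares that filesystem and the upgrade would lose the VMs.
found=$(find / -xdev -type d -name dom_md 2>/dev/null)
if [ -n "$found" ]; then
    echo "Local storage domains were found on the same filesystem as / !" >&2
    echo "Storage domains were found in:" >&2
    echo "$found" >&2
    exit 1
fi
exit 0
~~~~~~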



Test 2: 

Test steps: 
1. Install redhat-virtualization-host-4.4.1-20200722.0.el8_2
2. Set up local repos on the host pointing to "redhat-virtualization-host-4.4.2-20200812.3.el8_2"
3. Add host to RHVM
4. Log in to the host, create the local storage directory, and mount it (see the verification sketch after these steps)
   # mkdir /data
   # lvcreate -L 20G rhvh -n data
   # mkfs.ext4 /dev/mapper/rhvh-data
   # echo "/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2" >> /etc/fstab
   # mount /data
   # mount -a
   # chown 36:36 /data
   # chmod 0755 /data
5. Add Local Storage via RHVM
6. Create a VM on local storage
7. Upgrade host via RHVM
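
Because /data is mounted from its own logical volume (rhvh-data), the storage domain created in step 5 does not share a filesystem with /, so the pre-install check does not block the upgrade. A quick way to confirm this on the host before upgrading (a sketch; device numbers will vary):

~~~~~~
# Compare device numbers: /data must differ from / for the upgrade to proceed.
stat -c '%n %d' / /data
# Show the source of the /data mount; it should be /dev/mapper/rhvh-data.
findmnt /data
~~~~~~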

Test results:
1. The host upgrade was successful, and local storage started normally after the upgrade.


When the bug status is "ON_QA", QE will move it to "VERIFIED".

Comment 20 cshao 2020-08-30 10:43:55 UTC
Verified this bug according to #c17.

Comment 22 errata-xmlrpc 2020-10-05 13:09:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Virtualization security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:4172

Comment 23 cshao 2022-08-18 08:17:22 UTC
Due to QE capacity, we are not going to cover this issue in our automation; we will handle this case manually.

