Description of problem:
The RPM host installation relies on a deprecated implementation: the '%pre' install scriptlet searches for __DIRECT_IO_TEST__, which is no longer set by vdsm (gerrit.ovirt.org/#/c/103526/).

Version-Release number of selected component (if applicable):
ovirt-node-ng-image-update-4.4.1-0.5.rc4.el8.noarch.rpm

How reproducible:
Installation on a disk that has a pre-existing storage domain will overwrite it.

Steps to Reproduce:
1. Use a disk that has an existing storage domain
2. Install Node on this disk
3.

Actual results:
Installation succeeds.

Expected results:
Installation fails with a proper message.

Additional info:
The current VDSM implementation relies on the existence of <mountpoint>/*/dom_md. We can either mimic this, or use a stateless non-vdsmd API if one is available or can easily be made available.
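For illustration, a minimal sketch of a replacement check that mimics vdsm by scanning mounted filesystems for */dom_md directories. This is not the merged patch; the mount-point enumeration, variable names, and error text are assumptions.

~~~~~~
#!/bin/bash
# Sketch: refuse to proceed when any mounted filesystem already contains a
# storage domain, detected the same way vdsm does -- a <mountpoint>/*/dom_md
# directory.
found=""
for mnt in $(awk '$2 ~ /^\// {print $2}' /proc/mounts | sort -u); do
    for dom in "$mnt"/*/dom_md; do
        [ -d "$dom" ] && found="$found $dom"
    done
done

if [ -n "$found" ]; then
    echo "Pre-existing storage domains were found in:$found" >&2
    echo "Aborting to avoid overwriting them." >&2
    exit 1
fi
~~~~~~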
*** Bug 1850278 has been marked as a duplicate of this bug. ***
Proposing this as a blocker for 4.4.1, since an upgrade from 4.4.0 may end in data loss.
Is that a concern for local SDs? Gluster? Something else?
The patch attached to this bug is merged. Is it the last one? If so, please move the bug to MODIFIED.
So upgrades with a local storage SD are going to fail? Then it should be in Known Issues.
(In reply to Michal Skrivanek from comment #8)
> So upgrades with a local storage SD are going to fail? Then it should be in
> Known Issues.

With a locally mounted storage SD. And yes, it should be in Known Issues. Steve, can you handle this?
Please review doc_text for the Known Issue. Is there a workaround? Can you detach a locally mounted storage domain before upgrading? Would that work around the problem?
After speaking with Nir, I understand that the issue is that nothing prevents you from upgrading if you have data on your root (/) LV. This bug fixes that by blocking such an upgrade. So the Known Issue is not for this bug, which targets 4.4.2. The Known Issue should appear in the RHV 4.4.1 (GA) release notes, so a separate bug is required targeting 4.4.1 GA.
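Regarding the workaround question above: one possible manual approach is to migrate the domain data to a dedicated LV before upgrading. A sketch only, assuming an 'onn' VG with free space and a domain under /local-storage; the names, sizes, and paths are illustrative, and the domain should be put into maintenance first.

~~~~~~
# Sketch: move a local storage domain off the root filesystem onto its own LV
# before upgrading (illustrative names and sizes; domain in maintenance first).
lvcreate -L 20G onn -n data
mkfs.ext4 /dev/mapper/onn-data
mkdir /mnt/newdata
mount /dev/mapper/onn-data /mnt/newdata
cp -a /local-storage/. /mnt/newdata/      # preserves ownership and permissions
umount /mnt/newdata
rm -rf /local-storage/*                   # remove the copy that lives on /
echo "/dev/mapper/onn-data /local-storage ext4 defaults,discard 1 2" >> /etc/fstab
mount /local-storage
chown 36:36 /local-storage
chmod 0755 /local-storage
~~~~~~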
This issue is solved, but an engine bug (already solved downstream) was found in Test 1.

Test version:
ovirt-node-ng-installer-4.4.1-2020072310.el8.iso
ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch.rpm
oVirt Open Virtualization Manager: 4.4.1.4-1.el8

Test 1:
Test steps:
Refer to Comment 7
Test results:
1. The host is not upgraded, but the engine shows success and the host status is Up after reboot. This issue refers to Bug 1770893.

Test 2:
Test steps:
1. Install ovirt-node-ng-installer-4.4.1-2020072310.el8.iso
2. Set up local repos on the host pointing to "ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch.rpm"
3. Add the host to an upstream engine
4. Log in to the host, create a local storage directory under /, and modify the permissions and ownership
   # mkdir /local-storage
   # chown 36:36 /local-storage
   # chmod 0755 /local-storage
5. Add Local Storage via the engine
6. Create a VM on the local storage
7. Upgrade the host from the host side
   # yum update
Test results:
1. The host upgrade was successfully blocked. The output is as follows:
~~~~~~
# yum update
Red Hat update to latest                                                              164 kB/s | 1.0 kB     00:00
Dependencies resolved.
=====================================================================================================================================
 Package                                    Architecture   Version                Repository   Size
=====================================================================================================================================
Installing:
 ovirt-node-ng-image-update                 noarch         4.4.2-0.5.rc5.el8      update       782 M
     replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.1.4-1.el8

Transaction Summary
=====================================================================================================================================
Install  1 Package

Total download size: 782 M
Is this ok [y/N]: y
Downloading Packages:
ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch.rpm                                84 MB/s | 782 MB     00:09
-------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                  84 MB/s | 782 MB     00:09
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                               1/1
  Running scriptlet: ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch                           1/2
Local storage domains were found on the same filesystem as / !
Please migrate the data to a new LV before upgrading, or you will lose the VMs
See: https://bugzilla.redhat.com/show_bug.cgi?id=1550205#c3
Storage domains were found in:
/local-storage/45b21993-bc2f-4c71-a442-c5982d8b3113/dom_md
error: %prein(ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package ovirt-node-ng-image-update
  Obsoleting       : ovirt-node-ng-image-update-placeholder-4.4.1.4-1.el8.noarch                   2/2
error: ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch: install failed
  Verifying        : ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch                           1/2
  Verifying        : ovirt-node-ng-image-update-placeholder-4.4.1.4-1.el8.noarch                   2/2
Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.1.4-1.el8.noarch.rpm

Failed:
  ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch

Error: Transaction failed
~~~~~~

Test 3:
Test steps:
1. Install ovirt-node-ng-installer-4.4.1-2020072310.el8.iso
2. Set up local repos on the host pointing to "ovirt-node-ng-image-update-4.4.2-0.5.rc5.el8.noarch.rpm"
3. Add the host to RHVM
4. Log in to the host, create a local storage directory, and mount it
   # mkdir /local-storage
   # lvcreate -L 20G onn -n data
   # mkfs.ext4 /dev/mapper/onn-data
   # echo "/dev/mapper/onn-data /local-storage ext4 defaults,discard 1 2" >> /etc/fstab
   # mount /local-storage
   # mount -a
   # chown 36:36 /local-storage
   # chmod 0755 /local-storage
5. Add Local Storage via RHVM
6. Create a VM on the local storage
7. Upgrade the host via RHVM
Test results:
1. The host upgrade is successful, and local storage starts normally after the upgrade.

QE will move the bug status to "VERIFIED".
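On the difference between Test 2 and Test 3: the guard only objects to storage domains that live on the same filesystem as /, so a domain on its own LV passes. A sketch of how such a check can be expressed (not the actual scriptlet code; the glob depth and message are assumptions):

~~~~~~
# Sketch: flag only storage domains that share a filesystem with /.
root_dev=$(stat -c %d /)
for dom in /*/*/dom_md; do
    [ -d "$dom" ] || continue
    if [ "$(stat -c %d "$dom")" = "$root_dev" ]; then
        echo "Storage domain on the root filesystem: $dom"
    fi
done
~~~~~~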
This bugzilla is included in oVirt 4.4.2 release, published on September 17th 2020. Since the problem described in this bug report should be resolved in oVirt 4.4.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days