Currently, the host-deploy upgrade of the host's packages only updates the vdsm and vdsm-cli packages (and their dependencies). However, a few more packages are needed for oVirt to work correctly, e.g. mom, ovirt-hosted-engine-ha, ioprocess, etc. We need to make sure that when you upgrade the host from the UI, all related packages are updated. This will also help to minimize version bumps in the vdsm spec file.
The assumption was that the vdsm spec indeed covers all of this. I guess ovirt-hosted-engine-ha isn't there, but what about mom? Also, in the future, please don't set the target milestone.
(In reply to Oved Ourfali from comment #1)
> The assumption was that the vdsm spec indeed covers all of this.
> I guess ovirt-hosted-engine-ha isn't there, but what about mom?

It's a widespread problem. Currently we handle it ad hoc: some packages we bump, others we ignore and rely on people running yum update from time to time, which is wrong. This comprehensive approach would actually make our lives easier too, since we wouldn't need to update the spec file so often.

> Also, in the future, please don't set the target milestone.

This is a global Bugzilla policy: the submitter is supposed to propose a target release by setting the release flag, and then our bot will add the target milestone anyway ;)

Note this issue is severe, as we are getting outdated and presumably buggy setups using outdated oVirt components like mom, while we claim a bug there is fixed. We can perhaps "solve" that via documentation for the time being and decrease severity/priority then.
So, just to answer the spec file question: we bump a dependency in vdsm when a new API is introduced or something significant changes. The meaning of Requires is, for example, "this package (vdsm) won't work at all without mom >= 0.5.3". On the other hand, bug fixes or performance improvements in mom or hosted engine that do not affect compatibility are not reflected in the vdsm spec file, because vdsm will happily work with the older version. We still expect those packages to be updated, though.
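To make that concrete, a versioned Requires only encodes the minimum compatible version, never the latest available one (an illustrative excerpt, not the actual spec contents):

    # vdsm.spec (illustrative excerpt)
    # Hard compatibility floor: vdsm will not work with anything older.
    Requires: mom >= 0.5.3
    # A later mom release that only fixes bugs still satisfies this,
    # so 'yum update vdsm' alone never forces it to be installed.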
The dependency should be part of VDSM. You can upgrade through the machine itself. However, we will extend the list to give a better experience in case VDSM isn't updated. It isn't high severity.
(In reply to Oved Ourfali from comment #4)
> The dependency should be part of VDSM.

The absolutely required version needed for compatibility is. A minor bug fix or enhancement that does not affect VDSM compatibility isn't, and that is the only way this has ever worked in the whole RPM ecosystem. Think about QEMU, for example: nobody from the QEMU team (or kernel, or libvirt) will update the VDSM spec file (or even notify us) when a small bug is fixed or performance is improved. They will just release a new version of their package and expect it to be updated whenever the user wishes.

> You can upgrade through the machine itself.

Which is not a good user experience when we do not expose to the user the fact that we only run yum update vdsm. As a user, I would expect the Upgrade host button to update all (reasonable) packages, the same as my desktop does; the normal desktop flow just informs you whether an update requires a reboot or not.

> However, we will extend the list to give a better experience in case VDSM
> isn't updated.

Thanks.

> It isn't high severity.

But it should be at least documented.
(In reply to Martin Sivák from comment #5)
> > It isn't high severity.
>
> But it should be at least documented.

That's what I asked for in comment #2; please provide some explanation if you think this is not a severe situation. One random example: RHBA-2016:23718
I actually think we should fix it ASAP and backport it to 3.6.z. We are constantly getting cases from customers with mixed versions, where VMs cannot migrate, among other issues, because the mixed versions were never tested together.
So let's check for upgrades of / upgrade all packages that we currently ship in our own repos and which are relevant for hosts. If I haven't missed anything, we ship the following in our upstream repositories:

imgbased
ioprocess
libcacard-ev
libcacard-tools-ev
mom
ovirt-imageio-common
ovirt-imageio-daemon
ovirt-vmconsole
ovirt-vmconsole-host
python-ioprocess
qemu-img-ev
qemu-kvm-common-ev
qemu-kvm-ev
qemu-kvm-tools-ev
vdsm
vdsm-cli
vhostmd
vm-dump-metrics
Plus possibly ovirt-hosted-engine-ha and -setup.
(In reply to Martin Sivák from comment #9)
> Plus possibly ovirt-hosted-engine-ha and -setup.

Are those always installed? Or only on demand?
How do we handle libvirt and sanlock?
We depend on those in the vdsm spec file. It might be problematic to upgrade to a libvirt that vdsm wasn't tested with. Same for sanlock.
*** Bug 1352610 has been marked as a duplicate of this bug. ***
Ravi, make sure to test the ones mentioned in bug 1352610, if possible.
Uploaded a patch that checks the following packages for updates:

ioprocess, mom, libvirt, ovirt-imageio-common,
ovirt-imageio-daemon, ovirt-vmconsole,
ovirt-vmconsole-host, python-ioprocess, sanlock, vdsm, vdsm-cli
(In reply to Ravi Nori from comment #15)
> Uploaded a patch that checks the following packages for updates:
>
> ioprocess, mom, libvirt, ovirt-imageio-common,
> ovirt-imageio-daemon, ovirt-vmconsole,
> ovirt-vmconsole-host, python-ioprocess, sanlock, vdsm, vdsm-cli

qemu-kvm and lvm2 are missing, and both are more important than almost all of the above... Please add them.
Updated the patch and verified it with the following packages:

ioprocess, mom, libvirt, lvm2, ovirt-imageio-common,
ovirt-imageio-daemon, ovirt-vmconsole, ovirt-vmconsole-host,
python-ioprocess, qemu-kvm, qemu-img, sanlock, vdsm, vdsm-cli
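For context, the host-side check boils down to something like the sketch below. This is a minimal illustration only, assuming plain yum and Python 3 on the host; the script, function, and constant names are made up, not taken from the actual patch:

    # check_host_updates.py - illustrative sketch, not the real host-deploy code.
    import subprocess

    PACKAGES = [
        "ioprocess", "mom", "libvirt", "lvm2",
        "ovirt-imageio-common", "ovirt-imageio-daemon",
        "ovirt-vmconsole", "ovirt-vmconsole-host",
        "python-ioprocess", "qemu-kvm", "qemu-img",
        "sanlock", "vdsm", "vdsm-cli",
    ]

    def updates_available(packages):
        """Return True if yum reports pending updates for any listed package.

        'yum check-update' exits with 100 when updates are available,
        0 when everything is current, and 1 on error.
        """
        result = subprocess.run(
            ["yum", "-q", "check-update"] + list(packages),
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        )
        if result.returncode == 100:
            return True
        if result.returncode == 0:
            return False
        raise RuntimeError(result.stderr.decode(errors="replace"))

    if __name__ == "__main__":
        print(updates_available(PACKAGES))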
Moving back to POST, as we need a backport to ovirt-engine-4.0.
Moving back to POST again :-) as we need to backport to ovirt-engine-4.0.2
(In reply to Ravi Nori from comment #19)
> Updated the patch and verified it with the following packages:
>
> ioprocess, mom, libvirt, lvm2, ovirt-imageio-common,
> ovirt-imageio-daemon, ovirt-vmconsole, ovirt-vmconsole-host,
> python-ioprocess, qemu-kvm, qemu-img, sanlock, vdsm, vdsm-cli

Sorry for jumping in here late: what about multipath?
(In reply to Moran Goldboim from comment #22)
> Sorry for jumping in here late: what about multipath?

VDSM depends on ~80 other packages. Why multipath and not lvm, or numactl for that matter?
Currently we can check for upgrades of / upgrade only packages which fulfil the following criteria:

1. The package is always installed on the host.
2. The package name (or Provides name) is the same on all supported platforms (Fedora and CentOS 7 upstream, RHEL 7 downstream).
3. We want to upgrade only our own packages, or platform packages which are absolutely necessary for our functionality.

That's why the above list was selected and tested.
Moving to 4.0.4, as we agreed this should be more thoroughly tested.
We accidentally forgot to remove the patch from the ovirt-engine-4.0.2 branch, and the patch was included in the 4.0.2 release because of the high number of 4.0.2 rebuilds we had :-( Since the fix is included in the 4.0.2 release, retargeting again and moving to ON_QA.
Moving back to ASSIGNED: although the fix is included in 4.0.2, it was not properly tested, so let's do that in 4.0.4.

Also, we have found one issue: we added libvirt to the package list, but VDSM doesn't depend on libvirt directly, only on some of its subpackages:

libvirt-daemon-config-nwfilter
libvirt-daemon-kvm
libvirt-lock-sanlock
libvirt-client
libvirt-python

So we will include this fix in 4.0.4.
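For illustration, the dependency has this shape (a sketch, not the exact vdsm spec contents), which is why checking the libvirt metapackage for updates can match nothing on the host:

    # vdsm.spec (illustrative excerpt)
    Requires: libvirt-daemon-config-nwfilter
    Requires: libvirt-daemon-kvm
    Requires: libvirt-lock-sanlock
    Requires: libvirt-client
    Requires: libvirt-python
    # Note: no 'Requires: libvirt' - the metapackage may not be
    # installed at all, so it must not be on the update-check list.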
Hi Tahlia, we have changed the package list as described in comment 27, but I forgot to update the Doc Text, so I'm doing that now. Thanks, Martin
Thanks for updating that, Martin. I'm also going to update the relevant section in the Upgrade Guide[1], which currently says "On Red Hat Enterprise Linux hosts, the upgrade manager checks for updates to the vdsm and vdsm-cli packages", to say "On Red Hat Enterprise Linux hosts, the upgrade manager checks for updates to Red Hat Virtualization packages". [1] https://access.redhat.com/documentation/en/red-hat-virtualization/4.0/paged/upgrade-guide/22-updating-virtualization-hosts
Verified in ovirt-engine-4.0.4.1-0.1.el7ev.noarch.
Our team came back to this bug today, and we raised the same concerns we raised earlier that are not addressed by the fix in this bug. Specifically:
- what about a new kernel?
- what about security fixes for packages that are not on the list?
- etc.

It was agreed by all of us at that meeting, with PM and Engineering, that the host needs to be upgraded as a whole, by running yum update, not by picking separate packages, which can lead to very unexpected results.

We still recommend the following solution: the upgrade manager should list the packages to upgrade (the output of yum list updates on the host) and let the user know that those are the packages that will be updated once they click the "Upgrade the host" button. If the user is not interested in this automated process that RHEV-M offers, they may run a manual update of the host themselves, but they need to understand that this is not recommended and may lead to unexpected results, since only the full set of updated packages was tested together. [This last part requires better wording, but this is the idea.]

Yanivs?
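Collecting that list on the host is straightforward; here is a minimal sketch of the proposed step, assuming plain yum and Python 3 (the script name and the simplified parsing are illustrative, not part of any patch):

    # list_pending_updates.py - illustrative sketch only.
    import subprocess

    def pending_updates():
        """Return (name.arch, version-release, repo) tuples from 'yum -q list updates'."""
        # No check=True: yum exits non-zero when there is nothing to list.
        result = subprocess.run(
            ["yum", "-q", "list", "updates"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        )
        rows = []
        for line in result.stdout.decode().splitlines():
            parts = line.split()
            # Data rows look like 'name.arch  version-release  repo';
            # this skips the 'Updated Packages' header. Real yum output
            # can wrap long package names onto a second line, which a
            # production parser would have to handle.
            if len(parts) == 3 and "." in parts[0]:
                rows.append(tuple(parts))
        return rows

    if __name__ == "__main__":
        for name, version, repo in pending_updates():
            print(name, version, repo)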
(In reply to Marina from comment #31)
> We still recommend the following solution: the upgrade manager should list
> the packages to upgrade (the output of yum list updates on the host) and let
> the user know that those are the packages that will be updated once they
> click the "Upgrade the host" button.
> [...]
> Yanivs?

You are confusing RHEV with Satellite.
(In reply to Yaniv Kaul from comment #32)
> You are confusing RHEV with Satellite.

How so? Is this bug for Satellite, i.e. for handling the upgrade of RHEV hosts from Satellite? I was thinking this bug is about host upgrades from the RHEV Manager.
And if this is not going to fully update the host, we should work on a documentation statement, probably to get this bug closed:
https://bugzilla.redhat.com/show_bug.cgi?id=1349149

Just think of the following scenario: a host might end up having the latest vdsm with an older kernel, and eventually will have, for instance, vdsm from 7.6 with a 7.2 kernel.
(In reply to Marina from comment #36)
> Just think of the following scenario: a host might end up having the latest
> vdsm with an older kernel, and eventually will have, for instance, vdsm from
> 7.6 with a 7.2 kernel.

It's a theoretical scenario. Completely.
https://bugzilla.redhat.com/show_bug.cgi?id=1380498