Bug 1033587 - Glusterfs RPM dependencies broken for VDSM 4.13.0-11 in RHEL 6.5 prevent updates
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-installer
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
: 3.3.2
Assignee: Sandro Bonazzola
Whiteboard: integration
 
Reported: 2013-11-22 12:48 UTC by Bob Doolittle
Modified: 2013-12-19 14:24 UTC
CC: 17 users

Last Closed: 2013-12-19 14:24:54 UTC


Attachments (Terms of Use)
output of 'yum repolist' (758 bytes, text/plain), 2013-11-22 12:48 UTC, Bob Doolittle
Output of 'yum update' (48.44 KB, text/plain), 2013-11-22 12:48 UTC, Bob Doolittle
Output of 'yum update --skip-broken' (97.92 KB, text/plain), 2013-11-22 12:49 UTC, Bob Doolittle
Output of "rpm -qa" (40.42 KB, text/plain), 2013-11-22 17:12 UTC, Bob Doolittle


External Trackers
oVirt gerrit 21794: Priority None, Status None, Summary None, Last Updated Never

Description Bob Doolittle 2013-11-22 12:48:01 UTC
Created attachment 827733 [details]
output of 'yum repolist'

Description of problem:

Since the oVirt 3.3.1 release and the RHEL repo update on 11/21, users running RHEL 6 on their KVM hosts have been unable to run a package update due to dependency conflicts.


Version-Release number of selected component (if applicable):
vdsm-4.13.0-11.el6


How reproducible:
100%


Steps to Reproduce:
1. Install RHEL 6.4 on the host, prior to Nov 21
2. Add host to oVirt 3.3.0 engine
3. Execute yum update on host

Actual results:
See the three attached files, which include a repolist, the output of "yum update", and the output of "yum update --skip-broken". Note in particular that the update tries to update glusterfs-cli, but there is no version matching the latest glusterfs provided by the current RHEL 6 repository, and vdsm requires glusterfs-cli.

Expected results:
Clean update

Additional info:

Comment 1 Bob Doolittle 2013-11-22 12:48:44 UTC
Created attachment 827734 [details]
Output of 'yum update'

Comment 2 Bob Doolittle 2013-11-22 12:49:17 UTC
Created attachment 827735 [details]
Output of 'yum update --skip-broken'

Comment 3 Dan Kenigsberg 2013-11-22 16:07:44 UTC
oVirt should not ship vdsm packages for i686; they only create confusion.
According to the logs

         Protected multilib versions: vdsm-python-4.12.1-4.el6.i686 != vdsm-python-4.13.0-11.el6.x86_64

an i686 package blocks the requested update of the x86_64 one.

Similarly, an installed glusterfs-libs-3.2.7-1.el6.i686 from EPEL blocks the update of glusterfs-libs.

Error: Package: glusterfs-cli-3.4.0-8.el6.x86_64 (@glusterfs-epel)
           Requires: glusterfs-libs = 3.4.0-8.el6
           Removing: glusterfs-3.4.0-8.el6.x86_64 (@glusterfs-epel)
               glusterfs-libs = 3.4.0-8.el6
           Updated By: glusterfs-3.4.0.36rhs-1.el6.x86_64 (rhel-x86_64-server-6)
               Not found
           Removing: glusterfs-libs-3.4.0-8.el6.x86_64 (@glusterfs-epel)
               glusterfs-libs = 3.4.0-8.el6
           Updated By: glusterfs-libs-3.4.0.36rhs-1.el6.x86_64 (rhel-x86_64-server-6)
               glusterfs-libs = 3.4.0.36rhs-1.el6
           Available: glusterfs-3.2.7-1.el6.i686 (epel)
               glusterfs-libs = 3.2.7-1.el6

Bottom line: this seems to be a yum configuration issue more than a vdsm dependency bug. Please verify that, Bob.
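The check Dan suggests can be sketched as a one-liner. This is a hypothetical illustration: the `printf` list below stands in for real `rpm -qa` output, with package names taken from the log above.

```shell
# Filter an installed-package list down to i686 packages, which are what
# trigger "Protected multilib versions" conflicts on an x86_64 host.
# The printf list is sample data standing in for `rpm -qa` output.
printf '%s\n' \
  vdsm-python-4.13.0-11.el6.x86_64 \
  glusterfs-libs-3.2.7-1.el6.i686 \
  glusterfs-libs-3.4.0-8.el6.x86_64 \
| grep '\.i686$'
```

On a real host the equivalent check would be `rpm -qa | grep '\.i686$'`; an empty result means no 32-bit packages are installed.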

Comment 4 Bob Doolittle 2013-11-22 17:12:38 UTC
Created attachment 827895 [details]
Output of "rpm -qa"

Comment 5 Bob Doolittle 2013-11-22 17:13:15 UTC
I have no vdsm*.i686 packages on my Host system. I have attached output of "rpm -qa".

Comment 6 Bob Doolittle 2013-11-22 20:42:52 UTC
At Vijay Bellur's suggestion, I edited the glusterfs-epel repo to point to release 3.4.1 instead of 3.4.0 (e.g. http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/EPEL.repo/epel-$releasever/$basearch/). After that, yum was able to update cleanly and things appear to be working (though of course I am not using GlusterFS).
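For reference, the edit amounts to changing the baseurl in /etc/yum.repos.d/glusterfs-epel.repo. A sketch of what the relevant stanza might look like after the change (values other than baseurl are illustrative; the shipped file may differ):

```ini
[glusterfs-epel]
name=GlusterFS 3.4.1 packages for EPEL
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/EPEL.repo/epel-$releasever/$basearch/
enabled=1
gpgcheck=0
```

yum expands $releasever and $basearch at runtime, so the same stanza works on any EL6 host regardless of architecture.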

I still have questions, however:
1. Why/how did glusterfs-cli disappear from the 3.4.0 repository? If due to refactoring, are they missing a Provides in their RPMs? Or is it an oversight?
2. Why is glusterfs-cli in the 3.4.1 repository, while not in 3.4.0?
3. How can we address this in oVirt, in order to avoid admins having to edit their yum config files on every Host?

It has been suggested that #3 can be addressed by the glusterfs team providing a "stable" repository pointer that can be consumed by the oVirt host configuration. This would avoid similar issues in the future (assuming the glusterfs team plans to continue branching repositories for every minor release).

Although this problem showed up first in the RHEL repositories, as part of the upgrade to RHEL 6.5, it is likely to show up for CentOS hosts as well as soon as CentOS produces 6.5. So it is best for us to address this within oVirt quickly, so that the pain for CentOS users is minimized.

Comment 7 Kaleb KEITHLEY 2013-11-25 13:58:36 UTC
> 1. Why/how did glusterfs-cli disappear from the 3.4.0 repository? If due to 
> refactoring, are they missing a Provides in their RPMs? Or is it oversight?

If you're referring to gluster RPMs that were in the EPEL repository, those were withdrawn in concert with the release of RHEL 6.5, which has RHS client-side RPMs. The RHS client-side packages do not include the glusterfs-cli RPM. N.B. Fedora/EPEL policy does not allow shipping packages in EPEL that are also in RHEL.

> 2. Why is glusterfs-cli in the 3.4.1 repository, while not in 3.4.0?

The glusterfs-cli RPM has not disappeared from the 3.4.0 repository on download.gluster.org.

> 3. How can we address this in oVirt, in order to avoid admins having to edit
> their yum config files on every Host?

CentOS is doing their own packaging of community GlusterFS: clones of the download.gluster.org packages, now that GlusterFS packages are no longer in EPEL. Their GlusterFS packages will supplant the RHS packages in RHEL 6.5.

Independent of that, the sample repo file at http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo has baseurl lines in the form http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/...., which, if I'm not mistaken, should simplify things for most people. (I hope all our releases going forward will be backwards compatible.)

Comment 8 Bob Doolittle 2013-11-25 21:46:16 UTC
I have determined that the problem creeps in here:

I just followed the Quick Start Guide: http://www.ovirt.org/Quick_Start_Guide#Install_oVirt_Node
That involves installing an RPM on the host (the Quick Start lists the Fedora variant, so I adapted to use the EL RPM). It contains:

% rpm -qlp http://ovirt.org/releases/ovirt-release-el6-8-1.noarch.rpm
/etc/yum.repos.d/el6-ovirt.repo
/etc/yum.repos.d/glusterfs-epel.repo
/usr/share/doc/ovirt-release-el6-8
/usr/share/doc/ovirt-release-el6-8/ASL 

So there's your culprit. At the moment the delivered glusterfs-epel.repo configures the host/node explicitly to use 3.4.0. One solution is to change it to use LATEST.
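If the shipped glusterfs-epel.repo were changed to track LATEST, the baseurl would follow the URL scheme Kaleb describes in comment 7. A sketch of what that stanza might look like (values other than baseurl are illustrative):

```ini
[glusterfs-epel]
name=GlusterFS latest release packages for EPEL
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
gpgcheck=0
```

Because LATEST is a symlink on download.gluster.org, hosts would pick up each new minor release without anyone editing the repo file again.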

Comment 9 Fabian Deutsch 2013-11-26 14:10:46 UTC
Dan,

so how can I help here? glusterfs is outside of my natural habitat.

Comment 10 Dan Kenigsberg 2013-11-26 15:33:26 UTC
Oh, sorry. Who owns ovirt-release.rpm? I like the idea of changing it to point to a newer glusterfs-epel (either 3.4.1 for stability or LATEST for easy updates).

Comment 11 Kapetanakis Giannis 2013-12-04 13:19:37 UTC
I was able to update by doing this:

# yum update ovirt\*
this one updates ovirt-release to ovirt-release-el6-9-1

# yum clean all

# yum update glusterfs\*

# yum update

Comment 12 Sandro Bonazzola 2013-12-05 14:01:00 UTC
(In reply to Kapetanakis Giannis from comment #11)

Marking as verified

Comment 13 Sandro Bonazzola 2013-12-19 14:24:54 UTC
oVirt 3.3.2 has been released resolving the problem described in this bug report.

