Bug 889132 - 3.1.z only - 'yum install vdsm' fails - libvirt-lock-sanlock: version 0.9.10-21.el6_3.7 is not delivered into the virt channel
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: distribution
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 3.1.2
Assignee: Jiri Denemark
QA Contact: Yaniv Kaul
Whiteboard: external
Depends On:
Reported: 2012-12-20 10:12 UTC by Yaniv Kaul
Modified: 2016-02-10 18:58 UTC
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2013-01-07 17:54:07 UTC
oVirt Team: External
Target Upstream Version:


Description Yaniv Kaul 2012-12-20 10:12:47 UTC
Description of problem:
[root@pluto yum.repos.d]# yum clean metadata ; yum install vdsm

Error: Package: libvirt-lock-sanlock-0.9.10-21.el6_3.6.x86_64 (rhel-x86_64-rhev-mgmt-agent-6)
           Requires: libvirt = 0.9.10-21.el6_3.6
           Available: libvirt-0.8.1-27.el6.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.8.1-27.el6
           Available: libvirt-0.8.1-27.el6_0.3.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.8.1-27.el6_0.3
           Available: libvirt-0.8.1-27.el6_0.5.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.8.1-27.el6_0.5
           Available: libvirt-0.8.1-27.el6_0.6.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.8.1-27.el6_0.6
           Available: libvirt-0.8.7-18.el6.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.8.7-18.el6
           Available: libvirt-0.8.7-18.el6_1.1.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.8.7-18.el6_1.1
           Available: libvirt-0.8.7-18.el6_1.4.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.8.7-18.el6_1.4
           Available: libvirt-0.9.4-23.el6.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.4-23.el6
           Available: libvirt-0.9.4-23.el6_2.1.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.4-23.el6_2.1
           Available: libvirt-0.9.4-23.el6_2.4.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.4-23.el6_2.4
           Available: libvirt-0.9.4-23.el6_2.6.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.4-23.el6_2.6
           Available: libvirt-0.9.4-23.el6_2.7.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.4-23.el6_2.7
           Available: libvirt-0.9.4-23.el6_2.8.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.4-23.el6_2.8
           Available: libvirt-0.9.4-23.el6_2.9.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.4-23.el6_2.9
           Available: libvirt-0.9.10-21.el6.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.10-21.el6
           Available: libvirt-0.9.10-21.el6_3.1.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.10-21.el6_3.1
           Available: libvirt-0.9.10-21.el6_3.3.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.10-21.el6_3.3
           Available: libvirt-0.9.10-21.el6_3.4.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.10-21.el6_3.4
           Available: libvirt-0.9.10-21.el6_3.5.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.10-21.el6_3.5
           Available: libvirt-0.9.10-21.el6_3.6.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.10-21.el6_3.6
           Installing: libvirt-0.9.10-21.el6_3.7.x86_64 (rhel-x86_64-server-6)
               libvirt = 0.9.10-21.el6_3.7
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

[root@pluto yum.repos.d]# rhn-channel -l
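The failure above can be illustrated with a minimal sketch (this is not yum's real depsolver, and the version strings are taken from the transcript): libvirt-lock-sanlock-0.9.10-21.el6_3.6 carries an exact-match requirement "libvirt = 0.9.10-21.el6_3.6", but yum selects the newest libvirt, 0.9.10-21.el6_3.7, for install. Because the rhev-mgmt-agent channel does not deliver a matching libvirt-lock-sanlock at _3.7, the '=' dependency cannot be satisfied:

```python
# Simplified model of an RPM '=' dependency check (illustration only,
# not yum's actual resolver).

# Version-release demanded by libvirt-lock-sanlock-0.9.10-21.el6_3.6
REQUIRED = ("0.9.10", "21.el6_3.6")

# The libvirt package yum picks for install (newest in the channel)
INSTALLING = ("0.9.10", "21.el6_3.7")

def satisfies_exact(required, candidate):
    """An '=' dependency in RPM matches only the identical version-release."""
    return required == candidate

# The strict-equality check fails: _3.6 != _3.7, so the transaction breaks.
print(satisfies_exact(REQUIRED, INSTALLING))   # False
# A libvirt-lock-sanlock rebuilt at _3.7 (the eventual fix: ship the
# matching package into the channel) would satisfy it:
print(satisfies_exact(("0.9.10", "21.el6_3.7"), INSTALLING))   # True
```

This is why the error lists many "Available" libvirt versions yet still fails: availability of older builds does not help once yum has chosen 0.9.10-21.el6_3.7, and the channel lacks the sanlock subpackage at that exact release.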

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
Actual results:

Expected results:

Additional info:

Comment 2 Jiri Denemark 2013-01-07 17:54:07 UTC
This issue was fixed manually. RHEV errata for libvirt will be released only for future libvirt packages.
