Bug 504888 - No module named yum
Summary: No module named yum
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: yum
Version: 11
Hardware: i386
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: Seth Vidal
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-06-09 22:31 UTC by pagina_secunda
Modified: 2014-01-21 23:09 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-06-10 13:53:39 UTC
Type: ---
Embargoed:



Description pagina_secunda 2009-06-09 22:31:12 UTC
Description of problem: I updated from Fedora 10 to Fedora 11 via the installation DVD today.  Yum fails to execute, and the package manager won't work.


Version-Release number of selected component (if applicable):
yum-3.2.23-3.fc10.noarch



How reproducible: Always; run "yum check-update"


Steps to Reproduce:
1.Open terminal
2.Become superuser
3.Type "yum check-update"
  
Actual results:

yum check-update
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

   No module named yum

Please install a package which provides this module, or
verify that the module is installed correctly.

It's possible that the above module doesn't match the
current version of Python, which is:
2.6 (r26:66714, Mar 17 2009, 11:44:21) 
[GCC 4.4.0 20090313 (Red Hat 4.4.0-0.26)]

If you cannot solve this problem yourself, please go to 
the yum faq at:
  http://wiki.linux.duke.edu/YumFaq
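
For context, the message above is not a raw Python traceback: yum's launcher wraps its imports in a guard and prints a friendlier diagnostic when the import fails. A minimal sketch of that pattern (illustrative only, not the actual yum source; the function name here is made up):

```python
import sys

def guarded_import(module_name):
    """Return None on success, or a yum-style diagnostic string on failure.
    A sketch of the launcher's import guard, not yum's real code."""
    try:
        __import__(module_name)
        return None
    except ImportError as e:
        return (
            "There was a problem importing one of the Python modules\n"
            "required to run yum. The error leading to this problem was:\n\n"
            "   %s\n\n"
            "It's possible that the above module doesn't match the\n"
            "current version of Python, which is:\n"
            "%s" % (e, sys.version)
        )

print(guarded_import("yum") or "yum imported fine")
```

The point of the guard is that "No module named yum" means the `yum` Python package is missing from (or mismatched with) the running interpreter's search path, which is exactly what a leftover fc10 package on an fc11 Python can cause.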



Expected results: yum searches for available updates


Additional info:

Comment 1 pagina_secunda 2009-06-09 22:55:00 UTC
I discovered that it considered yum-3.2.23-3.fc10.noarch to be the newest version.  I took the following steps:
1. downloaded yum-3.2.22-4.fc11.noarch.rpm and
2. ran 'rpm -Uvh yum-3.2.22-4.fc11.noarch.rpm --oldpackage'

It seems to be working fine now.
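
Why `--oldpackage` was needed: under RPM's version-comparison rules, 3.2.23-3.fc10 sorts higher than 3.2.22-4.fc11, because the version field (3.2.23 vs 3.2.22) is compared before the release/disttag is ever looked at, so rpm treats the fc11 package as a downgrade. A simplified sketch of that segment-wise comparison (an approximation of rpmvercmp, not the exact librpm code):

```python
import re

def rpmvercmp(a, b):
    """Compare two version strings segment-wise, RPM-style (simplified).
    Returns 1 if a is newer, -1 if b is newer, 0 if equal."""
    seg_a = re.findall(r'\d+|[A-Za-z]+', a)
    seg_b = re.findall(r'\d+|[A-Za-z]+', b)
    for x, y in zip(seg_a, seg_b):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)            # numeric segments compare as numbers
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # a numeric segment beats an alpha one
        if x != y:
            return 1 if x > y else -1
    # More segments (e.g. "3.2.23" vs "3.2") means newer.
    return (len(seg_a) > len(seg_b)) - (len(seg_a) < len(seg_b))

print(rpmvercmp("3.2.23", "3.2.22"))   # prints 1: fc10's yum version sorts newer
```

Note that real rpm also compares the epoch first and handles alpha/numeric ordering with more nuance than this sketch; the illustration only shows why the fc10 build won the comparison.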

Comment 2 James Antill 2009-06-10 13:53:28 UTC
Yes, if you get the latest yum from Fed-10 updates-testing, you should upgrade to the latest yum in Fed-11 updates-testing.

Comment 3 pagina_secunda 2009-06-10 22:16:22 UTC
Alright, thanks.

Comment 4 Ray Todd Stevens 2009-07-13 02:24:19 UTC
I am wondering if this needs to be reopened as a continuing problem: I have a couple of servers that are experiencing this after they were upgraded from fc10 to fc11.   So something is still causing this?

The good news is that for one of these stations I have a more or less complete before-and-after picture of the server system.

Comment 5 Ray Todd Stevens 2009-07-13 03:05:51 UTC
OK, I have been going over the files.   Here is what appears to be the issue: the current net-install upgrade only ships an older yum for fc11, so the fc10 yum stays in place.   I have reported this kind of problem before; I believe there was a similar issue with rpm in the fc8-to-fc9 upgrade.

If you do a preupgrade upgrade you are fine.  But since preupgrade doesn't support /dev/md0 drives (certainly not as the boot device, and largely not at all), many of our servers have to be upgraded via the iso netinstall.   Those that I have to iso-upgrade all seem to have this problem.

I guess this is an echo of the /dev/md0 bugs in fedora.

Comment 6 James Antill 2009-07-13 14:39:59 UTC
 Yeah, it's a known issue. However, it only happens when you do an iso-_only_ update. I've done both a preupgrade update and a netinst update; both worked fine (but both had the updates repo for Fed-11 enabled).

 Also worth noting: the preupgrade I did was LVM on top of md0, and it worked fine (is your info out of date, or does the addition of LVM make it happier?)

Comment 7 James Antill 2009-07-13 15:10:33 UTC
 Also this is the main BZ for this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=506685

...esp. note comment #15, which gives the workaround to make the old yum work until you update it.

Comment 8 Ray Todd Stevens 2009-07-13 18:19:15 UTC
I have avoided LVM, so I don't know about LVM on md0.   I avoid it because I RAID for system redundancy.   I find that all too often with LVM, in various configurations, even if you RAID1 you end up with an actual on-disk layout of

DISK 1                 DISK 2

block 1                block 2
duplicate of b1        duplicate of b2
block 3                block 4
duplicate of b3        duplicate of b4

etc.

Without LVM it stores properly.

I used a netinst and still got the problem.   That was kind of my point.

Comment 9 Ray Todd Stevens 2009-07-13 18:23:00 UTC
And yes, most of my reported md0 bugs are still active and occurring in fc11 production.

