Red Hat Bugzilla – Full Text Bug Listing
|Summary:||yum clean all --noplugins leaves corrupt data in cache.|
|Product:||Red Hat Enterprise Linux 5||Reporter:||wdc|
|Component:||yum||Assignee:||James Antill <james.antill>|
|Status:||CLOSED DUPLICATE||QA Contact:|
|Fixed In Version:||Doc Type:||Bug Fix|
|Doc Text:||Story Points:||---|
|Last Closed:||2008-08-21 20:01:46 EDT||Type:||---|
|oVirt Team:||---||RHEL 7.3 requirements from Atomic Host:|
Description wdc 2008-08-20 18:10:50 EDT
Created attachment 314664 [details] Script output demonstrating the failure

Description of problem:
Background: RHN Satellite Server had a nasty bug that put corrupt data into both the server and the client yum caches. A hot fix remedied the server side. Customers were told to run "yum clean all" to fix up the client-side corruption. Because of BZ 448012, "yum clean all" will not run. So now we are told to run "yum clean all --noplugins". Well, that doesn't work either.

I happen to have a VM lying around that I can roll back to the epoch of the corrupt cache. Attached to this BZ is script output demonstrating the client cache corruption that the hot fix fixed, and demonstrating that "yum clean all --noplugins" didn't help. Also attached is a tarball of the cache directory after execution of "yum clean all --noplugins". This should give you enough information to figure out exactly what is happening.

Version-Release number of selected component (if applicable):

How reproducible: Always.

Steps to Reproduce:
1. Get a system served by the pre-hotfix Satellite Server 4.4.2.
2. Attempt "yum update cups" and watch it blow out as per BZ 447867.
3. Perform "yum clean all" and watch it blow out as per BZ 448012.
4. Perform "yum clean all --noplugins" and watch it blow out too.

Actual results: Inability to update cups because of corrupt yum cache.

Expected results: "yum clean all --noplugins" should have cleaned up the corruption.

Additional info:
Comment 1 wdc 2008-08-20 18:20:36 EDT
Created attachment 314665 [details] Snapshot of yum cache after "yum clean all --noplugins" left all the relevant corruption in place. The original tgz file was too big. Concatenate yum-tgz-00 and yum-tgz-01 to create the yum.tgz file containing the corrupt cache.
Comment 2 wdc 2008-08-20 18:21:26 EDT
Created attachment 314666 [details] Part two of the yum cache tgz file. The original tgz file was too big. Concatenate yum-tgz-00 and yum-tgz-01 to create the yum.tgz file containing the corrupt cache.
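For anyone reassembling the two attachments above: a split tarball can be rejoined with a plain cat of the parts. A minimal self-contained sketch of the round trip (the demo/ directory and the 512-byte split size here are illustrative, not from the attachments):

```shell
# Demonstrate splitting a tarball into numbered parts and rejoining them,
# mirroring the yum-tgz-00 / yum-tgz-01 attachments described above.
mkdir -p demo && echo "cached repo data" > demo/primary.xml
tar -czf yum.tgz demo               # build a sample archive
split -b 512 -d yum.tgz yum-tgz-    # produces yum-tgz-00 (and yum-tgz-01 for larger archives)
cat yum-tgz-* > rejoined.tgz        # concatenate the parts back together, in order
tar -tzf rejoined.tgz               # lists the archive contents to verify integrity
```

The rejoined file is byte-identical to the original, so `tar` extracts it normally.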
Comment 3 Michael Kearey 2008-08-20 19:27:36 EDT
Greetings, just a couple of points to help clarify the issue here:
- The 'cache corruption' is not really corruption in the strict meaning of the term. The cache files have not been damaged; the cache is in fact intact and error free. The data that has been cached is just bad. This is no fault of the yum cache mechanism.
- The problem is that with a third-party repository enabled, and using yum-rhn-plugin, "yum clean all" fails like so:
# yum clean all
Loading "rhnplugin" plugin
Repository epel-debuginfo is listed more than once in the configuration
- In an attempt to avoid that problem we can disable plugins: yum clean all --noplugins. The yum cache is not cleaned in this case because "yum clean all" works on _enabled_ repository files in the cache. It leaves any files in the cache that don't belong to enabled repos untouched.
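The behavior described above can be simulated without touching a real system. A minimal sketch, using a throwaway directory in place of /var/cache/yum and made-up repo names ("base", "rhn-channel"): with --noplugins, the RHN-provided repo is never enabled, so the per-enabled-repo removal that "yum clean all" performs skips its cache directory entirely.

```shell
# Simulate why "yum clean all --noplugins" leaves plugin-managed repo
# caches behind. Paths and repo names are hypothetical.
cache=$(mktemp -d)
mkdir -p "$cache/base" "$cache/rhn-channel"              # two cached repos
touch "$cache/base/primary.xml" "$cache/rhn-channel/primary.xml"

enabled="base"                     # with --noplugins, only 'base' is enabled;
                                   # 'rhn-channel' would be enabled by the plugin
for repo in $enabled; do
    rm -rf "$cache/$repo"          # clean happens per *enabled* repo only
done

ls "$cache"                        # rhn-channel remains: the bad cached data survives
rm -rf "$cache"
```

This is why wiping the cache directory by hand (rather than relying on "yum clean all") removes the stale data regardless of which repos are currently enabled.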
Comment 5 wdc 2008-08-20 19:52:04 EDT
Actually the epel repo CANNOT appear more than once unless someone worked very hard to create a file epel.repo that can somehow appear twice in a directory. In BZ 448012 I opine that the failure is that yum-rhn-plugin is inappropriately scanning the list of repositories twice. I think the bottom line of what you're saying, though, is that telling people to perform "yum clean all --noplugins" cannot work, because yum-rhn-plugin is required to enable the repo.
Comment 6 wdc 2008-08-21 18:55:55 EDT
Never mind. You don't have to answer. I figured out how to incorporate the patch in bug 448012 myself, and confirmed that indeed "--noplugins" CAN'T work. This bug is a duplicate of 448012. Y'all should close this bug with that status.