Bug 245013
Summary: | RHN locks out after a series of lists | | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 5 | Reporter: | Mike McGrath <mmcgrath> |
Component: | yum-rhn-plugin | Assignee: | John Matthews <jmatthew> |
Status: | CLOSED ERRATA | QA Contact: | Red Hat Satellite QA List <satqe-list> |
Severity: | medium | Docs Contact: | |
Priority: | medium | | |
Version: | 5.0 | CC: | bkearney, mspevack, rhn-bugs, vanmeeuwen+fedora, wtogami |
Target Milestone: | --- | | |
Target Release: | --- | | |
Hardware: | All | | |
OS: | Linux | | |
Whiteboard: | | | |
Fixed In Version: | RHBA-2008-0360 | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2008-05-21 14:27:14 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Bug Depends On: | 201286 | | |
Bug Blocks: | | | |
Description
Mike McGrath 2007-06-20 15:22:47 UTC
I suspect it's caused by the virtualization poller. At least this seems to be the case for my systems.

---

I've disabled all the virtualization stuff (and some of these weren't virtualized) and we're still seeing the issue. I don't think we were having these sorts of issues a few weeks back. One of our servers hasn't been able to update for over 4 days now. I can remove the host and add it again for now.

---

(In reply to comment #2)
> I've disabled all the virtualization stuff (and some of these weren't
> virtualized) and we're still seeing the issue. I don't think we were having
> these sorts of issues a few weeks back. One of our servers hasn't been able to
> update for over 4 days now. I can remove the host and add it again for now.

Yeah, I hijacked this one. I've opened a different bug about the virt stuff (#245594).

---

Anyone have any more words on this? My machines can't install or update software and my logs are filling up with messages about it.

---

Isn't this a taskomatic bug? I seem to remember code that would lock your system if you had too much activity that was independent of anything virt-related.

---

(In reply to comment #5)
> Isn't this a taskomatic bug? I seem to remember code that would lock your
> system if you had too much activity that was independent of anything virt-related.

Yeah, this one isn't virt-related. But no, taskomatic involved :) This particular bug is for anything that can be done client side (auth caching or whatnot). Also related are:

- Bug 245794 - Create a better 'abuse' metric
- Bug 201286 - Provide a means to re-enable 'abuse' systems within RHN Support Tools

As a suggested fix, how about pickling loginInfo in up2dateAuth.py in between runs?

---

I'd have to check with some of the puppet guys how to do that. I literally just have:

    package { httpd: ensure => present }

in my configs. Puppet does the rest for me.

---

(In reply to comment #8)
> I'd have to check with some of the puppet guys how to do that.
> I literally just

I meant it to be implemented for the yum plugin :)

---

This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.

---

Implemented client-side cache of the loginInfo in up2dateAuth.py. Checked in svn rev: 135236.

---

I no longer get the abuse error.

---

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2008-0360.html

---

I'm still experiencing this problem, exactly the same way mmcgrath describes. Putting an RHN proxy in between does not help, and apparently with a Puppet run interval of 30 minutes RHN is still contacted over a hundred times a day. Also, how can this bug be closed while it depends on a bug in status NEW? I cannot examine the other bug because of permission errors.
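The fix that closed this bug was a client-side cache of the loginInfo in up2dateAuth.py, so the plugin does not re-authenticate against RHN on every yum run. A rough sketch of that idea is below; the cache path, expiry time, and the shape of the loginInfo dict are illustrative assumptions, not the actual yum-rhn-plugin code.

```python
import os
import pickle
import time

# Hypothetical cache location and lifetime; the real plugin uses its
# own spool directory and expiry logic.
CACHE_FILE = "/var/spool/up2date/loginAuth.pkl"
CACHE_LIFETIME = 3600  # seconds; contact RHN at most once per hour

def save_login_info(login_info, cache_file=CACHE_FILE):
    """Pickle the server-issued loginInfo so later runs can reuse it."""
    with open(cache_file, "wb") as f:
        pickle.dump({"time": time.time(), "loginInfo": login_info}, f)

def load_login_info(cache_file=CACHE_FILE, lifetime=CACHE_LIFETIME):
    """Return the cached loginInfo, or None if it is missing or expired."""
    try:
        with open(cache_file, "rb") as f:
            cached = pickle.load(f)
    except (OSError, pickle.PickleError, KeyError):
        return None
    if time.time() - cached["time"] > lifetime:
        return None
    return cached["loginInfo"]

def get_login_info(do_server_login, cache_file=CACHE_FILE):
    """Use the cached credentials when fresh; only hit RHN when stale."""
    info = load_login_info(cache_file)
    if info is None:
        info = do_server_login()       # the expensive RHN round trip
        save_login_info(info, cache_file)
    return info
```

With a cache like this, a Puppet run every 30 minutes triggers at most one actual login per cache lifetime instead of one per yum invocation, which is what tripped RHN's abuse lockout.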