Bug 245013

Summary: RHN locks out after a series of lists
Product: Red Hat Enterprise Linux 5
Reporter: Mike McGrath <mmcgrath>
Component: yum-rhn-plugin
Assignee: John Matthews <jmatthew>
Status: CLOSED ERRATA
QA Contact: Red Hat Satellite QA List <satqe-list>
Severity: medium
Priority: medium
Version: 5.0
CC: bkearney, mspevack, rhn-bugs, vanmeeuwen+fedora, wtogami
Hardware: All
OS: Linux
Fixed In Version: RHBA-2008-0360
Doc Type: Bug Fix
Last Closed: 2008-05-21 14:27:14 UTC
Bug Depends On: 201286

Description Mike McGrath 2007-06-20 15:22:47 UTC
We've got a puppet setup in which machines are mapped to RHN; part of a puppet
check-in includes a series of yum lists like:


/usr/bin/yum -d 0 -e 0 list available logrotate


This causes the following error:
===================================================
Error Message:
    Abuse of Service detected for server app2.fedora.phx.redhat.com (1007544728)
Error Class Code: 49
Error Class Info:
     You are getting this error because RHN has detected an abuse of
     service from this system and account. This error is triggered when
     your system makes too many connections to Red Hat Network. This
     error can not be triggered under a normal use of the Red Hat Network
     service as configured by default on Red Hat Linux.

     The Red Hat Network services for this system will remain disabled
     until you will reduce the RHN network traffic from your system to
     acceptable limits.

     Please log into RHN and visit https://rhn.redhat.com/help/contact.pxt
     to contact technical support if you think you have received this
     message in error.

======================================================
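
For a sense of scale: each of those yum invocations is an independent process,
and without any client-side caching of the login each one authenticates
against RHN from scratch. A minimal illustration of what a single
configuration-management run amounts to (hypothetical Python script; the
package list is made up for the example):

#!/usr/bin/python
# One independent yum process per managed package; without client-side
# auth caching, each process performs its own RHN login.
import subprocess

managed_packages = ["logrotate", "httpd", "openssh-server", "postfix"]

for pkg in managed_packages:
    subprocess.call(["/usr/bin/yum", "-d", "0", "-e", "0",
                     "list", "available", pkg])

Run every 30 minutes across a fleet, these logins add up quickly.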

Comment 1 James Bowes 2007-06-21 17:44:03 UTC
I suspect it's caused by the virtualization poller. At least this seems to be
the case for my systems.

Comment 2 Mike McGrath 2007-06-25 16:14:50 UTC
I've disabled all the virtualization stuff (and some of these weren't
virtualized) and we're still seeing the issue.  I don't think we were having
these sorts of issues a few weeks back.  One of our servers hasn't been able to
update for over 4 days now.  I can remove the host and add it again for now.

Comment 3 James Bowes 2007-06-25 16:25:25 UTC
(In reply to comment #2)
> I've disabled all the virtualization stuff (and some of these weren't
> virtualized) and we're still seeing the issue.  I don't think we were having
> these sorts of issues a few weeks back.  One of our servers hasn't been able to
> update for over 4 days now.  I can remove the host and add it again for now.

Yeah, I hijacked this one. I've opened a different bug about the virt stuff
(#245594).

Comment 4 Mike McGrath 2007-07-02 13:51:58 UTC
Anyone have any more words on this?  My machines can't install / update software
and my logs are filling up with messages about it.

Comment 5 Max Spevack 2007-07-02 14:12:40 UTC
Isn't this a taskomatic bug?  I seem to remember code that would lock your
system if you had too much activity that was independent of anything virt-related.

Comment 6 James Bowes 2007-07-02 14:24:29 UTC
(In reply to comment #5)
> Isn't this a taskomatic bug?  I seem to remember code that would lock your
> system if you had too much activity that was independent of anything virt-related.

Yeah, this one isn't virt-related. But no, there's no taskomatic involved :)

This particular bug is for anything that can be done client side (auth caching
or whatnot). Also related are:

245794 - Create a better 'abuse' metric
201286 - Provide a means to re-enable 'abuse' systems within RHN Support Tools

Comment 7 James Bowes 2007-07-02 14:38:30 UTC
As a suggested fix, how about pickling loginInfo in up2dateAuth.py in between runs?
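
In the spirit of a rough sketch, not the actual up2dateAuth.py code (the cache
path and the do_login callable standing in for the real login are illustrative
assumptions):

import os
import cPickle

CACHE_FILE = "/var/spool/up2date/loginAuth.pkl"  # hypothetical location

def cached_login(do_login):
    """Return loginInfo, reusing a pickled copy left by an earlier run."""
    if os.path.exists(CACHE_FILE):
        f = open(CACHE_FILE, "rb")
        try:
            return cPickle.load(f)
        finally:
            f.close()
    # No cache yet: log in once and persist the result for later runs.
    login_info = do_login()
    f = open(CACHE_FILE, "wb")
    try:
        cPickle.dump(login_info, f)
    finally:
        f.close()
    return login_info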

Comment 8 Mike McGrath 2007-07-02 14:48:38 UTC
I'd have to check with some of the puppet guys how to do that.  I literally just
have:

package { 'httpd':
  ensure => present,
}

In my configs.  puppet does the rest for me.

Comment 9 James Bowes 2007-07-02 14:51:03 UTC
(In reply to comment #8)
> I'd have to check with some of the puppet guys how to do that.  I literally just

I meant for it to be implemented in the yum plugin :)

Comment 10 RHEL Program Management 2007-10-16 03:57:01 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 11 RHEL Program Management 2007-11-03 01:35:53 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 12 John Matthews 2008-01-04 20:50:47 UTC
Implemented a client-side cache of the loginInfo in up2dateAuth.py.

Checked in as svn rev 135236.
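
For readers without access to that revision, the general shape of such a cache
is roughly the following. This is a sketch under assumptions: the cache
dictionary, the 'expiration' field read as a seconds-from-issue lifetime, and
the do_login callable are illustrative, not the committed code.

import time

def get_login_info(cache, do_login):
    """Reuse the cached loginInfo until it expires, then log in again."""
    info = cache.get("loginInfo")
    if info is not None:
        issued = cache.get("issued", 0)
        lifetime = int(info.get("expiration", 0))
        if time.time() < issued + lifetime:
            return info  # still valid: no round trip to RHN
    # Expired or missing: one real login, then refresh the cache.
    info = do_login()
    cache["loginInfo"] = info
    cache["issued"] = time.time()
    return info

The point is that a fleet of clients then logs in once per cache lifetime
rather than once per yum invocation.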


Comment 14 Cameron Meadors 2008-04-22 14:50:47 UTC
I no longer get the abuse error.

Comment 16 errata-xmlrpc 2008-05-21 14:27:14 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2008-0360.html


Comment 17 Jeroen van Meeuwen 2009-11-22 13:41:11 UTC
I'm still experiencing this problem, exactly the same way mmcgrath describes. Putting an RHN proxy in between does not help, and with a Puppet run interval of 30 minutes (48 runs a day, each issuing a yum query per managed package) RHN is still contacted over a hundred times a day.

Comment 18 Jeroen van Meeuwen 2009-11-22 13:42:26 UTC
Also, how can this bug be closed while it depends on a bug in status NEW? I cannot examine the other bug because of permission errors.