Description of problem:
proxy doesn't handle errors gracefully
Version-Release number of selected component (if applicable):
Errors or announcements posted to RHN site
Steps to Reproduce:
1. Wait for Red Hat to post announcement on RHN site
Time: Tue Jan 20 06:27:47 2004
Exception type xmlrpclib.Fault
Exception while handling function ProxyAuth.login (xmlrpclib.Fault)
Exception Handler Information
Traceback (innermost last):
  File "/var/www/rhns/proxy/broker/rhnProxyAuth.py", line 191, in login
    raise e
Fault: <Fault -1 """IMPORTANT MESSAGE FOLLOWS:
Red Hat Network will be offline for scheduled maintenance for
approximately 1 hour(s), beginning at 6:30 am EST.
Red Hat Network will be brought back online no later than 8 am EST.
We apologize for any inconvenience this outage may cause.
Thank you for using Red Hat Network.
--the RHN team""">
Local variables by frame
Frame login in /var/www/rhns/proxy/broker/rhnProxyAuth.py at line 193
What precisely did you see on the client side?
Adding to 310 proxy tracking bug.
The errors I saw were the ones I posted, though the paste munged the spacing.
The error you showed me was taken from the RHN Proxy log file. I am
curious as to what an up2date client saw.
I will, of course, duplicate this here when I get a chance to set up
the scenario. Just trying to pump as much info from you as possible.
Unfortunately I was not around when these errors happened, now or in
the past, so I'm not sure what the up2date clients saw. I will try to
reproduce, regardless. Thanks for the heads up.
Fixed. In newer version... coming soon to a channel near you!
RHN Proxy's own login borked during scheduled RHN downtimes. That would
be fine if it nicely forwarded the message. It does... but in an ugly way.
I made it a bit better. I.e., the client will now see what the client
expects to see: an outage message. The proxy merely proxies.
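The fix described above amounts to passing the upstream Fault through
untouched rather than letting it explode into a proxy-side traceback.
A minimal sketch of that pattern (function names and the simulated
upstream here are hypothetical; the stdlib module is `xmlrpc.client`,
Python 2's `xmlrpclib` from the log above):

```python
import xmlrpc.client  # Python 2 called this xmlrpclib

def proxy_login(upstream_login, *args):
    """Forward a login call upstream; pass any Fault through verbatim.

    Instead of wrapping or reformatting the upstream xmlrpclib.Fault,
    re-raise it unchanged so the up2date client receives RHN's own
    outage text. The proxy merely proxies.
    """
    try:
        return upstream_login(*args)
    except xmlrpc.client.Fault:
        # Deliberately no wrapping: the client expects the original
        # Fault, outage message and all.
        raise

# Simulated upstream server that is down for maintenance:
def down_for_maintenance(*args):
    raise xmlrpc.client.Fault(
        -1, "IMPORTANT MESSAGE FOLLOWS: Red Hat Network will be offline")

try:
    proxy_login(down_for_maintenance, "user", "password")
except xmlrpc.client.Fault as fault:
    print(fault.faultString)
```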
TEST PLAN (for QA guy):
o point RHN Proxy (new version) at an RHN Satellite.
o outage your RHN Satellite:
- echo "BOO!" > /etc/rhn/message_to_all.txt
- add this to /etc/rhn/rhn.conf:
server.send_message_to_all = 1
o make sure you blow away the rhn_auth_cache:
rm -v /var/up2date/rhn_auth_cache
service rhn_auth_cache restart
o run up2date of some client against that RHN Proxy
* should see the outage message *
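The test plan above can be sketched as a short script. This is a
system-config fragment, not something to run blindly: it touches /etc
on the Satellite, and the Proxy-side service name is copied from the
steps above as-is.

```shell
# On the RHN Satellite: stage the outage banner
echo "BOO!" > /etc/rhn/message_to_all.txt
echo "server.send_message_to_all = 1" >> /etc/rhn/rhn.conf

# On the RHN Proxy: blow away the cached auth token so login re-runs
rm -v /var/up2date/rhn_auth_cache
service rhn_auth_cache restart

# On a client registered against the Proxy: any up2date action should
# now show the "BOO!" outage message instead of completing
up2date -l
```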