Bug 887170 - Conga enables cluster services even if disabled explicitly
Summary: Conga enables cluster services even if disabled explicitly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: conga
Version: 5.8
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ryan McCabe
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 1000522
 
Reported: 2012-12-14 08:49 UTC by Josef Zimek
Modified: 2018-12-03 18:03 UTC
CC List: 4 users

Fixed In Version: conga-0.12.2-68.el5
Doc Type: Bug Fix
Doc Text:
Proposed text: Each time luci, the web-based front end of the Conga cluster-management suite, was used to start or restart a cluster, or to have a previously deactivated node rejoin it, it enabled cluster services such as cman, rgmanager, or clvmd at boot on the respective cluster nodes. This can interfere with the user's preferences, for instance when running a two-node cluster without a quorum disk and keeping the services deliberately disabled on one of the nodes to prevent fence races. To avoid this, Conga was modified so that it no longer changes the service runlevel settings in the cases mentioned above, while still enabling the services when a cluster is created or a new node is added, as before.
Clone Of:
Clones: 1000522
Environment:
Last Closed: 2013-10-01 00:40:20 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
Red Hat Product Errata RHBA-2013:1358 (private: no, priority: normal, status: SHIPPED_LIVE): conga bug fix update; last updated 2013-09-30 21:12:28 UTC

Description Josef Zimek 2012-12-14 08:49:41 UTC
Description of problem:

In a two-node cluster without a quorum disk (qdisk), Red Hat recommends using a fencing delay to prevent fence races and split-brain situations. However, if the cluster is started via Conga, the cluster services (cman, clvmd, rgmanager, scsi_reserve) are enabled at boot again, which contradicts the recommendation in the following knowledgebase article:

Delaying Fencing in a Two Node Cluster to Prevent Fence Races or "Fence Death" Scenarios:
https://access.redhat.com/knowledge/solutions/54829
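
As described above and in the steps to reproduce, the intended setup keeps the cluster services disabled at boot on the node that has no fencing delay set, precisely to avoid fence races. A minimal sketch of that setup, assuming the standard RHEL 5 init scripts and a root shell on that node (service names taken from this report; adjust to the set actually in use):

# disable the cluster services at boot on the node without a fence delay
for svc in cman clvmd rgmanager scsi_reserve; do chkconfig "$svc" off; done

# confirm the boot-time settings
for svc in cman clvmd rgmanager scsi_reserve; do chkconfig --list "$svc"; done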

Version-Release number of selected component (if applicable):
ricci-0.12.2-51.el5
luci-0.12.2-51.el5
cman-2.0.115-96.el5

How reproducible:
Always

Steps to Reproduce:
1. Disable the cluster services at boot on the node that has no fencing delay set
2. Start the cluster using luci
3. Check whether the services from step 1 are enabled at boot again
  
Actual results:
Conga enables explicitly disabled cluster services

Expected results:
Conga does not re-enable cluster services that were explicitly disabled
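
In other words, after starting the cluster through luci, the boot-time settings captured in section A of the additional info below should remain untouched. A quick check, assuming the services were disabled beforehand, might look like this:

for svc in cman clvmd rgmanager scsi_reserve; do chkconfig --list "$svc"; done
# expected with the fix: every runlevel still reports "off", e.g.
# cman            0:off   1:off   2:off   3:off   4:off   5:off   6:off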

Additional info:

A) Executed the following as requested

xsccs3:root > date
Fri Dec 14 16:00:57 EST 2012
xsccs3:root > service ricci stop
Shutting down ricci: [  OK  ]
xsccs3:root > rm -rf /var/lib/ricci/queue/*
xsccs3:root > pkill -9 ricci
xsccs3:root > service ricci start
Starting ricci: [  OK  ]
xsccs3:root > 
xsccs3:root > 
xsccs3:root > chkconfig scsi_reserve --list
scsi_reserve    0:off   1:off   2:off   3:off   4:off   5:off   6:off
xsccs3:root > chkconfig scsi_reserve off

xsccs3:root > chkconfig scsi_reserve --list
scsi_reserve    0:off   1:off   2:off   3:off   4:off   5:off   6:off
xsccs3:root > chkconfig cman --list
cman            0:off   1:off   2:off   3:off   4:off   5:off   6:off
xsccs3:root > chkconfig clvmd --list
clvmd           0:off   1:off   2:off   3:off   4:off   5:off   6:off
xsccs3:root > chkconfig rgmanager --list
rgmanager       0:off   1:off   2:off   3:off   4:off   5:off   6:off

B) Cluster started

Dec 14 16:03:16 xsccs3 ricci[31443]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/948698712'
Dec 14 16:03:16 xsccs3 ricci[31447]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1394570707'
Dec 14 16:03:17 xsccs3 ccsd[31468]: Starting ccsd 2.0.115: 
Dec 14 16:03:17 xsccs3 ccsd[31468]:  Built: Jan 11 2012 09:38:30 
Dec 14 16:03:17 xsccs3 ccsd[31468]:  Copyright (C) Red Hat, Inc.  2004  All rights reserved. 
Dec 14 16:03:17 xsccs3 ccsd[31468]: cluster.conf (cluster name = SCCSCluster2, version = 136) found. 
Dec 14 16:03:17 xsccs3 luci[6558]: Unable to retrieve batch 1394570707 status from xsccs3:11111: module scheduled for execution
Dec 14 16:03:17 xsccs3 luci[6558]: Unable to retrieve batch 399735174 status from xsccs4:11111: module scheduled for execution
Dec 14 16:03:17 xsccs3 ricci[31476]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/408398302'
Dec 14 16:03:19 xsccs3 openais[31478]: [MAIN ] AIS Executive Service RELEASE 'subrev 1887 version 0.80.6' 
Dec 14 16:03:19 xsccs3 openais[31478]: [MAIN ] Copyright (C) 2002-2006 MontaVista Software, Inc and contributors. 
Dec 14 16:03:19 xsccs3 openais[31478]: [MAIN ] Copyright (C) 2006 Red Hat, Inc. 
Dec 14 16:03:19 xsccs3 openais[31478]: [MAIN ] AIS Executive Service: started and ready to provide service. 
Dec 14 16:03:19 xsccs3 openais[31478]: [MAIN ] Using default multicast address of 239.192.213.112 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Token Timeout (10000 ms) retransmit timeout (495 ms) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] token hold (386 ms) retransmits before loss (20 retrans) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] join (60 ms) send_join (0 ms) consensus (2000 ms) merge (200 ms) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] downcheck (1000 ms) fail to recv const (2500 msgs) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] seqno unchanged const (30 rotations) Maximum network MTU 1402 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] window size per rotation (50 messages) maximum messages per rotation (17 messages) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] missed count const (5 messages) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] send threads (0 threads) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] RRP token expired timeout (495 ms) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] RRP token problem counter (2000 ms) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] RRP threshold (10 problem count) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] RRP mode set to none. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] heartbeat_failures_allowed (0) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] max_network_delay (50 ms) 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Receive multicast socket recv buffer size (320000 bytes). 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes). 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] The network interface [192.168.3.153] is now up. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Created or loaded sequence id 416.192.168.3.153 for this ring. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] entering GATHER state from 15. 
Dec 14 16:03:19 xsccs3 openais[31478]: [CMAN ] CMAN 2.0.115 (built Jan 11 2012 09:38:32) started 
Dec 14 16:03:19 xsccs3 openais[31478]: [MAIN ] Service initialized 'openais CMAN membership service 2.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais extended virtual synchrony service' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais cluster membership service B.01.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais availability management framework B.01.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais checkpoint service B.01.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais event service B.01.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais distributed locking service B.01.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais message service B.01.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais configuration service' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais cluster closed process group service v1.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SERV ] Service initialized 'openais cluster config database access v1.01' 
Dec 14 16:03:19 xsccs3 openais[31478]: [SYNC ] Not using a virtual synchrony filter. 
Dec 14 16:03:19 xsccs3 openais[31478]: [MAIN ] Publishing socket for client connections. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Creating commit token because I am the rep. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Storing new sequence id for ring 1a4 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] entering COMMIT state. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] entering RECOVERY state. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] position [0] member 192.168.3.153: 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] previous ring seq 416 rep 192.168.3.153 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] aru 0 high delivered 0 received flag 1 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Did not need to originate any messages in recovery. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Sending initial ORF token 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] CLM CONFIGURATION CHANGE 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] New Configuration: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] Members Left: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] Members Joined: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] CLM CONFIGURATION CHANGE 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] New Configuration: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ]  r(0) ip(192.168.3.153)  
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] Members Left: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] Members Joined: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ]  r(0) ip(192.168.3.153)  
Dec 14 16:03:19 xsccs3 openais[31478]: [SYNC ] This node is within the primary component and will provide service. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] entering OPERATIONAL state. 
Dec 14 16:03:19 xsccs3 openais[31478]: [CMAN ] quorum regained, resuming activity 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] got nodejoin message 192.168.3.153 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] entering GATHER state from 11. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Creating commit token because I am the rep. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Storing new sequence id for ring 1ac 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] entering COMMIT state. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] entering RECOVERY state. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] position [0] member 192.168.3.153: 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] previous ring seq 420 rep 192.168.3.153 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] aru c high delivered c received flag 1 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] position [1] member 192.168.3.154: 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] previous ring seq 424 rep 192.168.3.154 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] aru c high delivered c received flag 1 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Did not need to originate any messages in recovery. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] Sending initial ORF token 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] CLM CONFIGURATION CHANGE 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] New Configuration: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ]  r(0) ip(192.168.3.153)  
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] Members Left: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] Members Joined: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] CLM CONFIGURATION CHANGE 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] New Configuration: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ]  r(0) ip(192.168.3.153)  
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ]  r(0) ip(192.168.3.154)  
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] Members Left: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] Members Joined: 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ]  r(0) ip(192.168.3.154)  
Dec 14 16:03:19 xsccs3 openais[31478]: [SYNC ] This node is within the primary component and will provide service. 
Dec 14 16:03:19 xsccs3 openais[31478]: [TOTEM] entering OPERATIONAL state. 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] got nodejoin message 192.168.3.153 
Dec 14 16:03:19 xsccs3 openais[31478]: [CLM  ] got nodejoin message 192.168.3.154 
Dec 14 16:03:20 xsccs3 ccsd[31468]: Initial status:: Quorate 
Dec 14 16:03:23 xsccs3 luci[6558]: Unable to retrieve batch 1394570707 status from xsccs3:11111: module scheduled for execution
Dec 14 16:03:23 xsccs3 kernel: dlm: Using TCP for communications
Dec 14 16:03:23 xsccs3 clvmd: Cluster LVM daemon started - connected to CMAN
Dec 14 16:03:23 xsccs3 luci[6558]: Unable to retrieve batch 399735174 status from xsccs4:11111: module scheduled for execution
Dec 14 16:03:23 xsccs3 ricci[31551]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/2061422992'
Dec 14 16:03:23 xsccs3 kernel: dlm: connecting to 2
Dec 14 16:03:23 xsccs3 kernel: dlm: got connection from 2
Dec 14 16:03:25 xsccs3 multipathd: dm-2: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-3: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-4: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-5: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-6: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-7: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-8: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-9: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-10: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-11: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-12: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-13: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-14: add map (uevent) 
Dec 14 16:03:25 xsccs3 multipathd: dm-15: add map (uevent) 
Dec 14 16:03:25 xsccs3 scsi_reserve: [error] cluster not configured for scsi reservations
Dec 14 16:03:26 xsccs3 clurgmgrd[31728]: <notice> Resource Group Manager Starting 
Dec 14 16:03:27 xsccs3 multipathd: dm-2: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-2: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-3: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-3: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-4: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-4: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-5: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-5: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-6: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-6: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-6: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-9: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-9: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-7: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-7: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-10: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-10: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-11: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-11: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-12: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-12: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-8: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-8: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-13: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-13: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-14: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-14: devmap not registered, can't remove 
Dec 14 16:03:27 xsccs3 multipathd: dm-15: remove map (uevent) 
Dec 14 16:03:27 xsccs3 multipathd: dm-15: devmap not registered, can't remove 
Dec 14 16:03:29 xsccs3 ricci[32554]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1064862770'
Dec 14 16:03:29 xsccs3 ricci[32599]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1030438132'
Dec 14 16:03:32 xsccs3 clurgmgrd[31728]: <notice> Starting stopped service service:ins3 
Dec 14 16:03:32 xsccs3 multipathd: dm-2: add map (uevent) 
Dec 14 16:03:32 xsccs3 multipathd: dm-3: add map (uevent) 
Dec 14 16:03:32 xsccs3 multipathd: dm-4: add map (uevent) 
Dec 14 16:03:32 xsccs3 multipathd: dm-5: add map (uevent) 
Dec 14 16:03:32 xsccs3 multipathd: dm-6: add map (uevent) 
Dec 14 16:03:32 xsccs3 multipathd: dm-7: add map (uevent) 
Dec 14 16:03:32 xsccs3 multipathd: dm-8: add map (uevent) 
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-2): barriers disabled
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-2): mounted filesystem with ordered data mode
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-3): barriers disabled
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-3): mounted filesystem with ordered data mode
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-4): barriers disabled
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-4): mounted filesystem with ordered data mode
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-5): barriers disabled
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-5): mounted filesystem with ordered data mode
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-6): barriers disabled
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-6): mounted filesystem with ordered data mode
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-7): barriers disabled
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-7): mounted filesystem with ordered data mode
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-8): barriers disabled
Dec 14 16:03:33 xsccs3 kernel: EXT4-fs (dm-8): mounted filesystem with ordered data mode
Dec 14 16:03:36 xsccs3 clurgmgrd[31728]: <notice> Service service:ins3 started


C) Services were enabled including scsi_reserve

xsccs3:root > chkconfig scsi_reserve --list
scsi_reserve    0:off   1:off   2:on    3:on    4:on    5:on    6:off
xsccs3:root > chkconfig cman --list
cman            0:off   1:off   2:on    3:on    4:on    5:on    6:off
xsccs3:root > chkconfig clvmd --list
clvmd           0:off   1:off   2:on    3:on    4:on    5:on    6:off
xsccs3:root > chkconfig rgmanager --list
rgmanager       0:off   1:off   2:on    3:on    4:on    5:on    6:off
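
Until the fixed build (conga-0.12.2-68.el5, per the Fixed In Version field; the luci and ricci packages are built from it) is installed, the boot-time settings presumably have to be restored by hand after every luci-initiated start. A hedged sketch of that cleanup and of checking the installed versions, run as root on each cluster node:

# re-disable the services that luci turned back on
for svc in cman clvmd rgmanager scsi_reserve; do chkconfig "$svc" off; done

# check whether the fixed packages (0.12.2-68.el5 or later) are installed
# (luci is typically present only on the management host)
rpm -q ricci luci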

Comment 3 RHEL Program Management 2013-04-04 12:36:39 UTC
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux release.  Product Management has
requested further review of this request by Red Hat Engineering, for
potential inclusion in a Red Hat Enterprise Linux release for currently
deployed products.  This request is not yet committed for inclusion in
a release.

Comment 13 errata-xmlrpc 2013-10-01 00:40:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1358.html

