Bug 802847 - provider ordering affects the value of hwp1
Status: CLOSED ERRATA
Product: CloudForms Cloud Engine
Classification: Red Hat
Component: aeolus-configure
Version: 1.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: beta6
Target Release: ---
Assigned To: Richard Su
QA Contact: dgao
Whiteboard: Triaged
Depends On:
Blocks:
Reported: 2012-03-13 12:04 EDT by dgao
Modified: 2012-05-15 18:53 EDT (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
In Conductor, four properties are considered during hardware profile matching, in this order: memory, CPU, storage, and architecture. A front-end hardware profile and a back-end hardware profile are deemed a match if all four properties agree. If a property is numeric, a match is found if the front-end value is less than or equal to the back-end value, or to the maximum back-end value if it is specified as a range. If nil is specified as a property of a front-end hardware profile, it matches everything. If nil is specified as a property of a back-end hardware profile, it matches only if the front-end property is also nil. When there are multiple matches, the back-end hardware profile with the least amount of memory is chosen. Previously, a single architecture was listed for each EC2 provider hardware profile. In a few cases, m1.small and c1.medium, only i386 was listed and x86_64 was omitted. Conductor also assumed a single architecture for each hardware profile. To allow smaller EC2 instance sizes to be used for the x86_64 architecture, Deltacloud was changed to list both i386 and x86_64 as architectures for m1.small and c1.medium, and Conductor was changed to consider all architectures listed in a provider hardware profile. Unchanged, Conductor would only see the first architecture listed by Deltacloud. Now Conductor sees x86_64 as an architecture for m1.small during matching, and will pick m1.small (1.7 GB memory) over c1.xlarge (7 GB memory) and m1.large (7.5 GB memory).
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-05-15 18:53:05 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
default_7500_memory (188.16 KB, image/png)
2012-03-21 12:48 EDT, dgao
no flags

Description dgao 2012-03-13 12:04:58 EDT
The default hwp1 memory value differs depending on the ordering of providers passed to aeolus-configure.

If aeolus-configure -p rhevm,ec2,vsphere is run, hwp1 has a memory value of 512.

If aeolus-configure -p ec2,rhevm,vsphere is run, hwp1 has a memory value of 7500. This is incorrect and causes a mismatch against vsphere hardware.
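The reported behavior can be modeled with a toy sketch (this is purely illustrative, not the aeolus-configure code): if each provider profile posts its own hwp1 definition and the first definition created wins, the result depends on provider order whenever one provider's definition differs from the others.

```ruby
# Hypothetical model of the order dependence. Values mirror the report:
# the ec2 profile defined hwp1 with 7500 MB, the others with 512 MB.
HWP1_MEMORY = { "ec2" => 7500, "rhevm" => 512, "vsphere" => 512 }

# The first provider in the list creates hwp1; later definitions are ignored.
def default_hwp1_memory(provider_order)
  HWP1_MEMORY[provider_order.first]
end
```

With rhevm first, hwp1 gets 512; with ec2 first, it gets 7500, which is exactly the order dependence described above.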


[root@dell-pe2900-01 ~]# rpm -qa| grep "aeolus" | sort
aeolus-all-0.8.0-41.el6.noarch
aeolus-conductor-0.8.0-41.el6.noarch
aeolus-conductor-daemons-0.8.0-41.el6.noarch
aeolus-conductor-doc-0.8.0-41.el6.noarch
aeolus-configure-2.5.0-18.el6.noarch
rubygem-aeolus-cli-0.3.0-13.el6.noarch
rubygem-aeolus-image-0.3.0-12.el6.noarch
Comment 1 wes hayutin 2012-03-13 12:30:56 EDT
The HWPs should work out as follows for any combination of providers:

hwp1 memory = 7500
hwp2 memory = 512

Thanks!
Comment 2 Mike Orazi 2012-03-14 09:55:54 EDT
This is related to the change to offer an HWP that matches a more cost-effective EC2 instance type. As requested in the original change, this was only applied to EC2, but doing so had the side effect of making the result order dependent.

Upon testing hwp1 with memory = 7500 for all providers, we ran into several issues. rhevm eventually worked, but there appeared to be an unrelated failure. Additionally, vsphere did not seem to correctly match anything based on hwp1 with memory = 7500. This needs further investigation.

I also noticed that the existing definition for small is limited to i386 and 500 MB of memory, so there were matching issues across the board.
Comment 3 Richard Su 2012-03-16 21:25:58 EDT
There are a few issues going on here. In summary, to fix this properly I propose we patch deltacloud-core, conductor, and configure. There are three patches we need to pull in; they are discussed below.

First, setting memory to 7500 does not work with vsphere: vsphere's provider HWP memory range is 512-2048. We should stay with 512 MB as the default.
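As a concrete check under the range-matching rule (the helper is illustrative; values are the ones quoted in this comment):

```ruby
# A front-end memory value matches a back-end range when it is
# less than or equal to the range maximum.
def memory_in_range?(front_mb, back_range)
  front_mb <= back_range.max
end

VSPHERE_MEMORY = (512..2048) # vsphere provider HWP memory range, in MB
```

Here 7500 fails against 512..2048 (7500 > 2048), while 512 matches, which is why the default has to stay at 512.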

The reason we aren't able to use the small instance with EC2 is that Deltacloud currently lists only i386 as the architecture in the EC2 driver.

To use a smaller instance with ec2, I've added the small i386 hardware profile to all provider configs. 

In order to use the small x86_64 instance, we need to pull in a recent Deltacloud patch. This patch adds x86_64 to m1.small and c1.medium and adds an m1.medium:

https://git-wip-us.apache.org/repos/asf?p=deltacloud.git;a=commitdiff;h=f5b4c017f6d681594d6c7c79c039278825b67d4c

We also need to fix the hardware profile matching logic in Conductor. Currently it only uses the first architecture value when multiple values are listed. I've posted a patch for this:

https://fedorahosted.org/pipermail/aeolus-devel/2012-March/009611.html
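The before/after behavior of the architecture check can be illustrated with a toy comparison (method names are made up for illustration; the real fix is in the patch above):

```ruby
# A provider HWP may list several architectures, e.g. EC2 m1.small
# after the Deltacloud patch lists both i386 and x86_64.

# Old behavior: only the first listed architecture was considered.
def old_arch_match?(front_arch, provider_archs)
  provider_archs.first == front_arch
end

# New behavior: any listed architecture can match.
def new_arch_match?(front_arch, provider_archs)
  provider_archs.include?(front_arch)
end
```

With provider_archs = ["i386", "x86_64"], old_arch_match?("x86_64", ...) returns false (only i386 is seen), while new_arch_match?("x86_64", ...) returns true, letting an x86_64 front-end profile match m1.small.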

For configure, I've posted a patch to revert the change to 7500. It also makes the small i386 512 MB profile available for all configs, and renames the hardware profiles to make them more meaningful. Now we have two:

small-x86_64: memory = 512m, cpu = 1, storage = any
small-i386: memory = 512m, cpu = 1, storage = any

The configure patch is: 

https://fedorahosted.org/pipermail/aeolus-devel/2012-March/009612.html
Comment 4 Hugh Brock 2012-03-19 14:03:49 EDT
Richard, can you write a rel note (in "Technical Notes" field above) describing exactly how the EC2 matching will work with this fix implemented (and the rest of the matching for that matter)?

Thanks.
Comment 5 Richard Su 2012-03-19 18:18:52 EDT
Posted a revised patch for configure to remove the i386 hwp.

The reviewer will need to consider these two patches:
conductor -> https://fedorahosted.org/pipermail/aeolus-devel/2012-March/009611.html
configure -> https://fedorahosted.org/pipermail/aeolus-devel/2012-March/009635.html

Updated technical notes.
Comment 6 Richard Su 2012-03-19 18:18:52 EDT
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
In Conductor, four properties are considered during hardware profile matching, in this order: memory, CPU, storage, and architecture. A front-end hardware profile and a back-end hardware profile are deemed a match if all four properties agree. If a property is numeric, a match is found if the front-end value is less than or equal to the back-end value, or to the maximum back-end value if it is specified as a range. If nil is specified as a property of a front-end hardware profile, it matches everything. If nil is specified as a property of a back-end hardware profile, it matches only if the front-end property is also nil. When there are multiple matches, the back-end hardware profile with the least amount of memory is chosen.

Previously, a single architecture was listed for each EC2 provider hardware profile. In a few cases, m1.small and c1.medium, only i386 was listed and x86_64 was omitted. Conductor also assumed a single architecture for each hardware profile.

To allow smaller EC2 instance sizes to be used for the x86_64 architecture, Deltacloud was changed to list both i386 and x86_64 as architectures for m1.small and c1.medium, and Conductor was changed to consider all architectures listed in a provider hardware profile.

Unchanged, Conductor would only see the first architecture listed by Deltacloud. Now Conductor sees x86_64 as an architecture for m1.small during matching, and will pick m1.small (1.7 GB memory) over c1.xlarge (7 GB memory) and m1.large (7.5 GB memory).
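The matching rules described above can be sketched as a small standalone model (a simplified illustration, not Conductor's actual code; the struct and method names are invented for this sketch):

```ruby
# Simplified model of Conductor's front-end/back-end HWP matching.
FrontHwp = Struct.new(:memory, :cpu, :storage, :arch)
# Back-end numeric properties may be a fixed value or a (min..max) range;
# archs is a list because a provider HWP can offer several architectures.
BackHwp = Struct.new(:name, :memory, :cpu, :storage, :archs)

def numeric_match?(front, back)
  return true if front.nil?  # nil front-end value matches everything
  return false if back.nil?  # nil back-end matches only a nil front-end
  max = back.is_a?(Range) ? back.max : back
  front <= max               # front-end value must not exceed back-end max
end

# Properties are checked in order: memory, cpu, storage, architecture.
def match?(front, back)
  numeric_match?(front.memory, back.memory) &&
    numeric_match?(front.cpu, back.cpu) &&
    numeric_match?(front.storage, back.storage) &&
    (front.arch.nil? || back.archs.include?(front.arch))
end

# Among multiple matches, pick the back-end HWP with the least memory.
def best_match(front, backs)
  backs.select { |b| match?(front, b) }.min_by(&:memory)
end
```

For example, a front-end profile of 512 MB / 1 CPU / x86_64 now matches both m1.small (1.7 GB, architectures [i386, x86_64]) and c1.xlarge (7 GB, [x86_64]), and the least-memory rule selects m1.small.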
Comment 7 Richard Su 2012-03-20 19:07:43 EDT
Available on master
configure -> commit 0a6ef1c0ed88333a32a7cd12b97302237c3eed98
conductor -> commit 37764942498d1d1acf9f9e4c93499f9f668d956c
Comment 9 dgao 2012-03-21 12:48:22 EDT
Created attachment 571770 [details]
default_7500_memory

FAILS_QA

The default hwp1 memory defaults to 7500. This does not match any vsphere hardware.



[root@dell-pem600-01 nodes]# rpm -qa | grep "aeolus"
rubygem-aeolus-cli-0.3.0-14.el6.noarch
aeolus-conductor-doc-0.8.0-43.el6.noarch
aeolus-conductor-0.8.0-43.el6.noarch
rubygem-aeolus-image-0.3.0-12.el6.noarch
aeolus-configure-2.5.0-18.el6.noarch
aeolus-conductor-daemons-0.8.0-43.el6.noarch
aeolus-all-0.8.0-43.el6.noarch
Comment 10 Scott Seago 2012-03-21 13:38:47 EDT
Hmm, you must not have the fix that was pushed. With rwsu's fix, the generated HWP has 512 MB of memory. Also, it's now called "small-x86_64" instead of "hwp1", so if you have hwp1 you're probably still on the older codebase.
Comment 11 Richard Su 2012-03-21 13:59:32 EDT
David, you have old rpms. The changes are available in brew as:

aeolus-configure-2.5.2-1.el6.noarch
aeolus-conductor-0.8.3-1.el6.noarch
deltacloud-core-0.5.0-8.el6.noarch
deltacloud-core-ec2-0.5.0-8.el6.noarch
Comment 12 Richard Su 2012-03-21 14:35:50 EDT
Moving status back to MODIFIED until changes are in the puddle.
Comment 14 dgao 2012-03-30 15:13:10 EDT
[root@dell-pem600-01 ~]# rpm -qa | grep "aeolus"
aeolus-conductor-doc-0.8.7-1.el6.noarch
aeolus-all-0.8.7-1.el6.noarch
rubygem-aeolus-image-0.3.0-12.el6.noarch
aeolus-configure-2.5.2-1.el6.noarch
aeolus-conductor-daemons-0.8.7-1.el6.noarch
aeolus-conductor-0.8.7-1.el6.noarch
rubygem-aeolus-cli-0.3.1-1.el6.noarch

[root@dell-pem600-01 ~]# aeolus-configure -p ec2,rhevm,vsphere
Launching aeolus configuration recipe...
notice: /File[/etc/ntp.conf]/content: content changed '{md5}9d90841a5f98018c6f1bd1e67dc81ccf' to '{md5}2dfb9420ebf32c7d97f86526fe6c21e8'
notice: /Stage[main]/Ntp::Client/Service[ntpd]: Triggered 'refresh' from 1 events
notice: /Stage[main]/Aeolus::Iwhd/Service[mongod]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Iwhd/Service[iwhd]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:                                  Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0create-bucket-aeolus]/returns: 
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Postgres::User[aeolus]/Exec[create_aeolus_postgres_user]/returns: executed successfully
notice: /Stage[main]/Apache/Exec[permit-http-networking]/returns: executed successfully
notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Deltacloud::Core/Service[deltacloud-core]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Deltacloud::Core/Exec[deltacloud-core-startup-wait]/returns: executed successfully
notice: /File[/etc/rsyslog.d/aeolus.conf]/ensure: defined content as '{md5}2d45434a072b4f9d1518ce026b92c547'
notice: /Stage[main]/Aeolus::Conductor/Service[rsyslog]: Triggered 'refresh' from 1 events
notice: /File[/usr/share/aeolus-conductor/config/initializers/secret_token.rb]/content: content changed '{md5}a88a27291fc877ed2ca9d6faeba115ac' to '{md5}7f9dc5eecd100405a767b1bdb2e3018a'
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Rails::Migrate::Db[migrate_aeolus_database]/Exec[migrate_rails_database]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Rails::Seed::Db[seed_aeolus_database]/Exec[seed_rails_database]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Conductor/Rails::Seed::Db[seed_aeolus_database]/Exec[seed_rails_database]/returns: executed successfully
notice: /File[/var/lib/aeolus-conductor]/ensure: created
notice: /File[/var/lib/aeolus-conductor/production.seed]/ensure: created
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Site_admin[admin]/Exec[create_site_admin_user]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Site_admin[admin]/Exec[create_site_admin_user]/returns: User admin registered
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Site_admin[admin]/Exec[create_site_admin_user]/returns: executed successfully
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Site_admin[admin]/Exec[grant_site_admin_privs]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Site_admin[admin]/Exec[grant_site_admin_privs]/returns: Granting administrator privileges for admin...
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Site_admin[admin]/Exec[grant_site_admin_privs]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Service[conductor-dbomatic]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Conductor/Service[aeolus-conductor]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Login[admin]/Web_request[admin-conductor-login]/post: post changed '' to 'https://localhost/conductor/user_session'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: Login counter for user admin updated
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: executed successfully
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Hwp[small-x86_64]/Web_request[hwp-small-x86_64]/post: post changed '' to 'https://localhost/conductor/hardware_profiles'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Provider[ec2-ap-southeast-1]/Web_request[provider-ec2-ap-southeast-1]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Provider[ec2-sa-east-1]/Web_request[provider-ec2-sa-east-1]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Provider[ec2-us-west-2]/Web_request[provider-ec2-us-west-2]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Provider[ec2-us-west-1]/Web_request[provider-ec2-us-west-1]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Provider[ec2-us-east-1]/Web_request[provider-ec2-us-east-1]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Provider[ec2-ap-northeast-1]/Web_request[provider-ec2-ap-northeast-1]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Provider[ec2-eu-west-1]/Web_request[provider-ec2-eu-west-1]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Ec2/Aeolus::Conductor::Logout[admin]/Web_request[admin-conductor-logout]/post: post changed '' to 'https://localhost/conductor/logout'
notice: /Stage[main]/Aeolus::Image-factory/Service[imagefactory]/ensure: ensure changed 'stopped' to 'running'
notice: Finished catalog run in 83.26 seconds


notice: /Stage[main]/Apache/Exec[permit-http-networking]/returns: executed successfully
notice: /Stage[main]/Aeolus::Deltacloud::Core/Exec[deltacloud-core-startup-wait]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: conductor already exists
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Rails::Migrate::Db[migrate_aeolus_database]/Exec[migrate_rails_database]/returns: executed successfully
notice: the RHEV NFS export is on the correct storage domain and has type 'export' => true
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Profiles::Rhevm::Instance[default]/Aeolus::Rhevm::Validate[RHEV NFS export validation for default]/Notify[RHEV NFS export validation for default]/message: defined 'message' as 'the RHEV NFS export is on the correct storage domain and has type 'export' => true'
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Conductor::Login[admin]/Web_request[admin-conductor-login]/post: post changed '' to 'https://localhost/conductor/user_session'
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: Login counter for user admin updated
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: executed successfully
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Conductor::Hwp[small-x86_64]/Web_request[hwp-small-x86_64]/post: post changed '' to 'https://localhost/conductor/hardware_profiles'
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Profiles::Rhevm::Instance[default]/Aeolus::Conductor::Provider[rhevm-default]/Web_request[provider-rhevm-default]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Conductor::Logout[admin]/Web_request[admin-conductor-logout]/post: post changed '' to 'https://localhost/conductor/logout'
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:                                  Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0c[create-bucket-aeolus]/returns: 
notice: /Stage[main]/Aeolus::Profiles::Rhevm/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns: executed successfully
notice: Finished catalog run in 39.04 seconds
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: conductor already exists
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Conductor/Rails::Create::Db[create_aeolus_database]/Exec[create_rails_database]/returns: executed successfully
notice: /Stage[main]/Aeolus::Conductor/Rails::Migrate::Db[migrate_aeolus_database]/Exec[migrate_rails_database]/returns: executed successfully
notice: /Stage[main]/Apache/Exec[permit-http-networking]/returns: executed successfully
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns:                                  Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0xec[create-bucket-aeolus]/returns: 
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Create_bucket[aeolus]/Exec[create-bucket-aeolus]/returns: executed successfully
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Conductor::Login[admin]/Web_request[admin-conductor-login]/post: post changed '' to 'https://localhost/conductor/user_session'
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: Using gem require instead of bundler
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: Login counter for user admin updated
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Conductor::Login[admin]/Exec[decrement_login_counter]/returns: executed successfully
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Conductor::Hwp[small-x86_64]/Web_request[hwp-small-x86_64]/post: post changed '' to 'https://localhost/conductor/hardware_profiles'
notice: /Stage[main]/Aeolus::Deltacloud::Core/Exec[deltacloud-core-startup-wait]/returns: executed successfully
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Profiles::Vsphere::Instance[default]/Aeolus::Conductor::Provider[vsphere-default]/Web_request[provider-vsphere-default]/post: post changed '' to 'https://localhost/conductor/providers'
notice: /Stage[main]/Aeolus::Profiles::Vsphere/Aeolus::Conductor::Logout[admin]/Web_request[admin-conductor-logout]/post: post changed '' to 'https://localhost/conductor/logout'
notice: Finished catalog run in 40.32 seconds
Comment 18 errata-xmlrpc 2012-05-15 18:53:05 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2012-0583.html
