Bug 765901 - Pool overview always shows $x/$x w/ 0 instances in at least one pool
Summary: Pool overview always shows $x/$x w/ 0 instances in at least one pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: CloudForms Cloud Engine
Classification: Retired
Component: aeolus-conductor
Version: 1.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assignee: Scott Seago
QA Contact: wes hayutin
URL: https://qeblade31.rhq.lab.eng.bos.red...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-12-09 16:30 UTC by wes hayutin
Modified: 2012-05-15 21:28 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-05-15 21:28:36 UTC


Attachments
ss (412.43 KB, image/png), 2011-12-09 16:30 UTC, wes hayutin
err (225.32 KB, image/png), 2011-12-30 09:56 UTC, Shveta


Links
Red Hat Product Errata RHEA-2012:0583 (normal, SHIPPED_LIVE): new packages: aeolus-conductor. Last updated: 2012-05-15 22:31:59 UTC

Description wes hayutin 2011-12-09 16:30:38 UTC
Created attachment 544623 [details]
ss

Description of problem:

According to https://bugzilla.redhat.com/show_bug.cgi?id=761509 (Hugh Brock, 2011-12-08 12:45:24 EST):

OK, we (engineering) had not come to our own conclusion about what these
numbers mean. Here is that conclusion:

* The "Overview" section is always specific to the user viewing it. This is the
guiding principle of that area of the UI. Several things follow from that:

1. QUOTA USED should be the number of running instances owned by the user, and
the QUOTA itself should be the user's personal quota.

2. POOLS should be the number of pools the user has permission to launch
instances in.
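
A minimal, self-contained sketch (plain Ruby, not Conductor code) of those two user-scoped numbers, purely for illustration; every name here (User, Pool, Instance, launch_users) is hypothetical:

# Hypothetical sketch of the user-scoped overview numbers defined above.
# None of these names are Conductor's real API.
Instance = Struct.new(:owner, :state)
Pool     = Struct.new(:name, :launch_users)
User     = Struct.new(:name, :quota)

def overview(user, instances, pools)
  # QUOTA USED: running instances owned by this user, shown against
  # the user's personal quota (not a pool or global quota).
  used = instances.count { |i| i.owner == user && i.state == "running" }
  # POOLS: pools this user has permission to launch instances in.
  launchable = pools.count { |p| p.launch_users.include?(user) }
  { quota_used: used, quota: user.quota, pools: launchable }
end

# e.g. a user with quota 10, one running instance, and launch access to
# one of two pools => { quota_used: 1, quota: 10, pools: 1 }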


The overview for pools is not working as designed; see the attached screenshot.

Comment 1 wes hayutin 2011-12-09 16:36:05 UTC
This also does not meet Scott's definition:
Pools:
    before: number of pools the user has view permissions on
    after: number of pools the user has permission to launch deployments in

Comment 2 wes hayutin 2011-12-09 16:36:22 UTC
[root@qeblade31 yum.repos.d]# rpm -qa | grep aeolus
aeolus-conductor-daemons-0.7.0-4.el6.noarch
aeolus-conductor-doc-0.7.0-4.el6.noarch
rubygem-aeolus-cli-0.2.0-3.el6.noarch
rubygem-aeolus-image-0.2.0-1.el6.noarch
aeolus-all-0.7.0-4.el6.noarch
aeolus-configure-2.4.0-3.el6.noarch
aeolus-conductor-0.7.0-4.el6.noarch

Comment 3 wes hayutin 2011-12-09 16:40:21 UTC
Recreated on https://dhcp231-79.rdu.redhat.com/conductor/pools/2, where the quota patch is applied.

Comment 4 Scott Seago 2011-12-13 16:00:55 UTC
See comment at https://bugzilla.redhat.com/show_bug.cgi?id=765893#c2

I suspect the same thing is going on here (user has global pool access, so all are showing up)

Comment 5 Scott Seago 2011-12-13 16:09:03 UTC
Forgot the needinfo flag...

Comment 6 wes hayutin 2011-12-13 19:38:46 UTC
<sseago> i.e. "I have 1 pool available, one of which is in use"
<sseago> I'm not clear how useful that "in use" stat is, but that's how it's defined now
<weshay_hm> oh.. so  in use.. is whether or not there are instances in the pool?
<sseago> well yes, "in use" == "at least one instance"
<weshay_hm> so any empty pool should not be counted
<sseago> weshay_hm, right.
<weshay_hm> sseago, thats a bug.. 
<weshay_hm> er..
<weshay_hm> there is a bug
<weshay_hm> sorry
<sseago> so you're seeing the pool showing up as being used even though there are no instances in it?
<weshay_hm> yes
<weshay_hm> I always see 2/2 or 3/3
<weshay_hm> even w/o instances

Comment 7 wes hayutin 2011-12-13 19:44:05 UTC
To clarify...

After speaking w/ Scott: on the Monitor tab you will see pools $x/$x in the overview.

the first number = number of pools w/ running, pending, or new instances
the second number = total number of pools

<weshay_hm> sseago, to clarify.. do the instances need to be running?
<sseago> well, running or potentially running -- i.e. not stopped/error
<weshay_hm> k
<sseago> specifically new, pending, running
<sseago> i.e. consuming resources

Comment 8 Scott Seago 2011-12-13 23:39:21 UTC
Wes -- almost right, but you had it reversed. The first number is the total number of pools the current user has permission to launch in. The second number (the one that is wrong in the current codebase) is supposed to be the total number of pools with at least one instance that is running or about to start (state in [new, pending, running]). Instead, prior to the bugfix, the second number showed up as a repeat of the total number of pools the user can launch in, which is why the overview always read $x/$x.


Fix is here: https://fedorahosted.org/pipermail/aeolus-devel/2011-December/007405.html
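
For reference, a plain-Ruby sketch of the corrected counts described above (hypothetical names, not the actual patch; the real change is in the aeolus-devel link). Before the fix, the second count effectively repeated the first, hence $x/$x:

# Hypothetical sketch of the two corrected counts; see the patch above
# for the real change. All model names here are made up.
Instance = Struct.new(:state)
Pool     = Struct.new(:name, :launch_users, :instances)

ACTIVE_STATES = %w[new pending running]  # running or about to start

def pool_overview(user, pools)
  # First number: total pools the user has permission to launch in.
  launchable = pools.select { |p| p.launch_users.include?(user) }
  # Second number: of those, pools with at least one active instance.
  # The pre-fix code repeated launchable.size here, hence "$x/$x".
  in_use = launchable.count do |p|
    p.instances.any? { |i| ACTIVE_STATES.include?(i.state) }
  end
  "#{launchable.size}/#{in_use}"
end

# e.g. 4 launchable pools, 2 of them with active instances => "4/2"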

Comment 9 Scott Seago 2011-12-20 18:26:01 UTC
Fix is pushed to master.

Comment 10 Shveta 2011-12-30 09:55:59 UTC
The pool count now shows:
total number of pools / pools with running instances


Verified in:
rpm -qa | grep aeolus
aeolus-conductor-doc-0.8.0-0.20111222233342gitd98cb57.el6.noarch
rubygem-aeolus-image-0.3.0-0.20111222173411gitc13b654.el6.noarch
rubygem-aeolus-cli-0.3.0-0.20111222173356git3cd6277.el6.noarch
aeolus-conductor-0.8.0-0.20111222233342gitd98cb57.el6.noarch
aeolus-configure-2.5.0-0.20111222173430git17b704a.el6.noarch
aeolus-all-0.8.0-0.20111222233342gitd98cb57.el6.noarch
aeolus-conductor-daemons-0.8.0-0.20111222233342gitd98cb57.el6.noarch


Screenshot attached:
I have 4 pools in total, and 2 of them have running instances.

Comment 11 Shveta 2011-12-30 09:56:36 UTC
Created attachment 550047 [details]
err

Comment 12 wes hayutin 2012-01-03 14:30:09 UTC
These bugs are verified; removing from ce-sprint.

Comment 14 errata-xmlrpc 2012-05-15 21:28:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2012-0583.html

