Created attachment 544623 [details]
ss

Description of problem:
According to https://bugzilla.redhat.com/show_bug.cgi?id=761509 (Hugh Brock, 2011-12-08 12:45:24 EST):

  OK, we (engineering) had not come to our own conclusion about what these
  numbers mean. Here is that conclusion:

  * The "Overview" section is always specific to the user viewing it. This is
    the guiding principle of that area of the UI. Several things follow from that:
    1. QUOTA USED should be the number of running instances owned by the user,
       and the QUOTA itself should be the user's personal quota.
    2. POOLS should be the number of pools the user has permission to launch
       instances in.

The overview for pools is not working as designed; see the attached screenshot.
It also does not meet Scott's definition:

Pools:
  before: number of pools the user has view permissions on
  after:  number of pools the user has permission to launch deployments in
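To make the intended per-user semantics concrete, here is a minimal plain-Ruby sketch (the User/Pool/Instance structs and the launch_pool_ids field are hypothetical stand-ins for illustration, not Conductor's actual models):

  # Hypothetical stand-ins for Conductor's models; this only illustrates
  # the intended per-user semantics of the Overview numbers.
  User     = Struct.new(:name, :quota, :launch_pool_ids)
  Pool     = Struct.new(:id, :name)
  Instance = Struct.new(:owner, :state)

  # QUOTA USED: running instances owned by the viewing user.
  def quota_used(user, instances)
    instances.count { |i| i.owner == user.name && i.state == "running" }
  end

  # POOLS: pools the user may launch instances in (launch permission,
  # not merely view permission, per Scott's definition above).
  def launchable_pools(user, pools)
    pools.select { |p| user.launch_pool_ids.include?(p.id) }
  end

  user      = User.new("wes", 10, [1, 2])
  pools     = [Pool.new(1, "default"), Pool.new(2, "qe"), Pool.new(3, "dev")]
  instances = [Instance.new("wes", "running"), Instance.new("other", "running")]

  puts "QUOTA USED: #{quota_used(user, instances)}/#{user.quota}"   # => 1/10
  puts "POOLS: #{launchable_pools(user, pools).size}"               # => 2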
[root@qeblade31 yum.repos.d]# rpm -qa | grep aeolus
aeolus-conductor-daemons-0.7.0-4.el6.noarch
aeolus-conductor-doc-0.7.0-4.el6.noarch
rubygem-aeolus-cli-0.2.0-3.el6.noarch
rubygem-aeolus-image-0.2.0-1.el6.noarch
aeolus-all-0.7.0-4.el6.noarch
aeolus-configure-2.4.0-3.el6.noarch
aeolus-conductor-0.7.0-4.el6.noarch
Recreated on https://dhcp231-79.rdu.redhat.com/conductor/pools/2, where the quota patch is applied.
See the comment at https://bugzilla.redhat.com/show_bug.cgi?id=765893#c2. I suspect the same thing is going on here (the user has global pool access, so all pools are showing up).
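If that suspicion is right, the failure mode looks like this (a hedged sketch; the grant representation is made up for illustration): a global grant short-circuits the per-pool filter, so every pool is counted.

  # Hypothetical permission model: a user holds either per-pool grants or a
  # single :global grant that matches every pool (the situation suspected here).
  def launchable_pools(pools, grants)
    return pools if grants.include?(:global)
    pools.select { |p| grants.include?(p) }
  end

  pools = %w[default qe dev]
  puts launchable_pools(pools, [:global]).size    # => 3 (all pools show up)
  puts launchable_pools(pools, ["default"]).size  # => 1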
Forgot the needinfo flag...
<sseago> i.e. "I have 1 pool available, one of which is in use"
<sseago> I'm not clear how useful that "in use" stat is, but that's how it's defined now
<weshay_hm> oh.. so in use.. is whether or not there are instances in the pool?
<sseago> well yes, "in use" == "at least one instance"
<weshay_hm> so any empty pool should not be counted
<sseago> weshay_hm, right.
<weshay_hm> sseago, thats a bug..
<weshay_hm> er..
<weshay_hm> there is a bug
<weshay_hm> sorry
<sseago> so you're seeing the pool showing up as being used even though there are no instances in it?
<weshay_hm> yes
<weshay_hm> I always see 2/2 or 3/3
<weshay_hm> even w/o instances
To clarify, after speaking with Scott: on the Monitor tab you will see pools $x/$x in the overview, where the first number = number of pools with running, pending, or new instances, and the second number = total number of pools.

<weshay_hm> sseago, to clarify.. do the instances need to be running ?
<sseago> well, running or potentially running -- i.e. not stopped/error
<weshay_hm> k
<sseago> specifically new, pending, running
<sseago> i.e. consuming resources
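The "in use" definition from the transcript boils down to a simple predicate; a minimal plain-Ruby sketch (the instance hashes are hypothetical):

  # States that consume resources, per sseago: new, pending, running.
  ACTIVE_STATES = %w[new pending running].freeze

  # A pool is "in use" iff it holds at least one instance in an active state;
  # an empty pool, or one with only stopped/error instances, does not count.
  def in_use?(pool_instances)
    pool_instances.any? { |i| ACTIVE_STATES.include?(i[:state]) }
  end

  puts in_use?([])                                            # => false (empty pool)
  puts in_use?([{ state: "stopped" }, { state: "error" }])    # => false
  puts in_use?([{ state: "stopped" }, { state: "pending" }])  # => true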
Wes -- almost right, but you had it reversed. The first number is the total number of pools the current user has permission to launch in. The second number (the one that is wrong in the current codebase) is supposed to be the total number of pools with at least one instance that is running or about to start (state in [new, pending, running]). Instead, prior to the bugfix, the second number shows up as a repeat of the total number of pools the user can launch in. Fix is here: https://fedorahosted.org/pipermail/aeolus-devel/2011-December/007405.html
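Putting both numbers together, a sketch of the intended computation versus the pre-fix behavior (the data shapes are hypothetical; the actual patch is at the aeolus-devel link above):

  ACTIVE_STATES = %w[new pending running].freeze

  # pools: hash of pool name => array of instance states, already filtered
  # down to the pools the current user has permission to launch in.
  def overview_counts(pools)
    total  = pools.size
    in_use = pools.count { |_name, states| states.any? { |s| ACTIVE_STATES.include?(s) } }
    [total, in_use]
  end

  # The pre-fix code effectively returned [pools.size, pools.size], which is
  # why the overview always read 2/2 or 3/3 even with no instances running.

  pools = { "default" => ["running"], "qe" => [], "dev" => ["stopped"] }
  total, in_use = overview_counts(pools)
  puts "#{total}/#{in_use}"  # => 3/1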
Fix is pushed to master.
The pool count now shows: total number of pools / pools with running instances.

Verified in:

rpm -qa | grep aeolus
aeolus-conductor-doc-0.8.0-0.20111222233342gitd98cb57.el6.noarch
rubygem-aeolus-image-0.3.0-0.20111222173411gitc13b654.el6.noarch
rubygem-aeolus-cli-0.3.0-0.20111222173356git3cd6277.el6.noarch
aeolus-conductor-0.8.0-0.20111222233342gitd98cb57.el6.noarch
aeolus-configure-2.5.0-0.20111222173430git17b704a.el6.noarch
aeolus-all-0.8.0-0.20111222233342gitd98cb57.el6.noarch
aeolus-conductor-daemons-0.8.0-0.20111222233342gitd98cb57.el6.noarch

Screenshot attached: I have 4 pools in total and 2 of them have running instances.
Created attachment 550047 [details]
err
These bugs are verified; removing from ce-sprint.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2012-0583.html