Bug 997347 - "oo-stats" cannot display correct gears info.
Status: CLOSED NOTABUG
Product: OpenShift Online
Classification: Red Hat
Component: Pod
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Luke Meyer
QA Contact: libra bugs
Reported: 2013-08-15 04:31 EDT by Yujie Zhang
Modified: 2015-05-14 20:19 EDT
CC: 5 users

Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-08-22 05:33:22 EDT

Description Yujie Zhang 2013-08-15 04:31:50 EDT
Description of problem:

Created a multi-node environment with small, medium, and c9 nodes, created 3 small applications, and ran "oo-stats" to check. The result showed 2 small gears used on the small node and another 1 gear used on the medium node, which is not correct. Checking the medium node directly, no gears are in use on that node.

Version-Release number of selected component (if applicable):
fork_ami_origin_ui_69_761

How reproducible:
always

Steps to Reproduce:
1. Create a multi-node env with small, medium, and c9 nodes
2. Create 3 small-gear applications
3. Run "oo-stats" to check
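
For reference, step 2 might look like the sketch below with the rhc client. The app names, the php-5.3 cartridge, and the --gear-size flag are assumptions for illustration, not taken from the report, so the commands are echoed rather than executed:

```shell
# Hypothetical sketch of step 2: creating 3 small-gear applications.
# App names, cartridge, and flags are assumptions; echo the commands
# instead of running them.
for i in 1 2 3; do
  echo rhc app create "app$i" php-5.3 --gear-size small
done
```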

Actual results:

[root@ip-10-152-148-116 ~]# oo-stats node
-------------------------
Profile 'medium' summary:
-------------------------
            District count : 1
         District capacity : 6000
       Dist avail capacity : 5999
           Dist avail uids : 5999
     Lowest dist usage pct : 0.01666666666666572
    Highest dist usage pct : 0.01666666666666572
        Avg dist usage pct : 0.01666666666666572
               Nodes count : 1
              Nodes active : 1
         Gears total count : 1
        Gears active count : 1
    Available active gears : 39
 Effective available gears : 39
Districts:
 Name Nodes DistAvailCapacity GearsActive EffectiveAvailGears AvgActiveUsagePct
----- ----- ----------------- ----------- ------------------- -----------------
dist2     1              5999           1                  39               2.5
------------------------
Profile 'small' summary:
------------------------
            District count : 1
         District capacity : 6000
       Dist avail capacity : 5998
           Dist avail uids : 5998
     Lowest dist usage pct : 0.03333333333333144
    Highest dist usage pct : 0.03333333333333144
        Avg dist usage pct : 0.03333333333333144
               Nodes count : 1
              Nodes active : 1
         Gears total count : 2
        Gears active count : 2
    Available active gears : 88
 Effective available gears : 88
Districts:
 Name Nodes DistAvailCapacity GearsActive EffectiveAvailGears  AvgActiveUsagePct
----- ----- ----------------- ----------- ------------------- ------------------
dist1     1              5998           2                  88 2.2222222222222223
------------------------
Summary for all systems:
------------------------
 Districts : 3
     Nodes : 3
  Profiles : 3

Expected results:

The output should show 3 small gears in use on the small node.

Additional info:
Comment 1 Luke Meyer 2013-08-19 15:37:18 EDT
Is this specific to the fork_ami somehow?

I *suspect* that what you are seeing is that oo-stats bases its stats on mcollective facts, which are only recalculated on the node once per minute. In other words, when you add a gear (or make another change), there is up to a minute of lag before oo-stats can read the new gear statistics. This should resolve if you wait a minute and run it again. Does it?
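
To allow for that lag when checking, one could poll until the expected count appears. A minimal sketch; wait_for_gear_count and its parsing of the "Gears total count" lines are my own illustration, not an existing OpenShift tool:

```shell
# Sketch: poll a stats command until the summed "Gears total count" lines
# report the expected number, allowing for the ~1-minute fact-refresh lag.
# wait_for_gear_count is a hypothetical helper, not part of OpenShift.
wait_for_gear_count() {
  expected="$1"
  tries="${2:-12}"              # 12 tries x 10s sleep = up to ~2 minutes
  stats_cmd="${3:-oo-stats}"    # injectable so the sketch can be tested
  for _ in $(seq 1 "$tries"); do
    count=$("$stats_cmd" | awk '/Gears total count/ {sum += $NF} END {print sum+0}')
    if [ "$count" = "$expected" ]; then
      echo "matched: $count gears"
      return 0
    fi
    sleep 10
  done
  echo "still seeing $count gears after waiting" >&2
  return 1
}
```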
Comment 2 Yujie Zhang 2013-08-20 06:30:55 EDT
Tried on fork_ami_origin_ui_69_772: created 3 small applications on the small node, but the oo-stats result is 2 medium and 1 c9. I waited more than 1 minute and ran oo-admin-broker-cache -c, but still got the same result. Can you check this please? Thanks!
Comment 3 Luke Meyer 2013-08-20 16:29:15 EDT
I think I misunderstood what you were saying. I thought you meant that oo-stats was misreporting the gear numbers (an oo-stats bug). Actually, it sounds like your broker is simply not placing the gears where you expect. I am sorry for missing your meaning.

But this is probably not a bug, just a surprising result when using a devenv. In a devenv, matching node profiles is deliberately disabled to enable user-capability tests that require multiple profiles, so gears are simply placed on any node according to capacity.

The setting is in the mcollective conf and is called NODE_PROFILE_ENABLED. In my launch of this fork_ami, /etc/openshift/development exists (making -dev config in force) and /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf has NODE_PROFILE_ENABLED=false. Set that to true and restart the broker; you should be getting gears landing where you expect now.
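
The change described above can be sketched as follows. The conf path and the rhc-broker service name are taken from this thread but may differ per install; enable_node_profile is a hypothetical helper:

```shell
# Hypothetical helper: flip NODE_PROFILE_ENABLED from false to true in a
# given config file. The path and service name below come from this bug's
# comments and may differ on other installs.
enable_node_profile() {
  conf="$1"
  sed -i 's/^NODE_PROFILE_ENABLED=false$/NODE_PROFILE_ENABLED=true/' "$conf"
}

# On the broker host (not executed here):
#   enable_node_profile /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective-dev.conf
#   service rhc-broker restart   # the setting only takes effect after a restart
```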

Let me know if this helps.
Comment 5 Luke Meyer 2013-08-20 16:32:24 EDT
BTW, you can check the gear numbers yourself: on each node, run "grep gear /etc/mcollective/facts.yaml" or just "ls /var/lib/openshift". oo-stats works entirely from the node facts, so that's the source to check if something is not what you expect.
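
As a sketch of that check, one could grep the gear facts from a node and compare against the gear directories. gear_facts is a hypothetical helper; the exact fact names in facts.yaml are not guaranteed here:

```shell
# Hypothetical helper wrapping the check from the comment: show every
# gear-related fact from a node's facts.yaml (default path per the comment).
gear_facts() {
  grep -i 'gear' "${1:-/etc/mcollective/facts.yaml}"
}

# To cross-check, count the gear home directories actually on the node:
#   ls /var/lib/openshift | wc -l
```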
Comment 6 Luke Meyer 2013-08-20 18:15:38 EDT
Just in case you hit this: at least for now, if a user needs multiple gear profiles, you'll need to recreate the user's domain after adding gear capabilities to them. oo-admin-ctl-user adds the gear capability only to the user, not to their domain, so the user ends up unable to create anything but small gears. I mentioned this to Clayton; master will probably address this before a new fork_ami does.
Comment 7 Jianwei Hou 2013-08-22 05:33:22 EDT
Found why this is happening:
Here is the env: one medium and one small district; each district has a corresponding node with the same profile.
As you can see, according to mongo there are 7 medium apps and 7 small apps, but what oo-stats shows is different. The reason is that the node profile is not enabled (possibly rhc-broker wasn't restarted after she tried to enable the node profile). Therefore, the medium gears and small gears are not restricted to their respective nodes.
This is not a bug, so I'm closing it.

Actual gears:

libra_rs:PRIMARY> db.applications.count({default_gear_size:"small"})
7
libra_rs:PRIMARY> db.applications.count({default_gear_size:"medium"})
7

oo-stats shows:
[root@ip-10-167-4-155 ~]# oo-stats 
-------------------------
Profile 'medium' summary:
-------------------------
            District count : 1
         District capacity : 6,000
       Dist avail capacity : 5,994
           Dist avail uids : 5,994
     Lowest dist usage pct : 0.1
    Highest dist usage pct : 0.1
        Avg dist usage pct : 0.1
               Nodes count : 1
              Nodes active : 1
         Gears total count : 6
        Gears active count : 6
    Available active gears : 34
 Effective available gears : 34

Districts:
      Name Nodes DistAvailCap GearsActv EffAvailGears LoActvUsgPct AvgActvUsgPct
---------- ----- ------------ --------- ------------- ------------ -------------
mediumdist     1        5,994         6            34         15.0          15.0


------------------------
Profile 'small' summary:
------------------------
            District count : 1
         District capacity : 6,000
       Dist avail capacity : 5,992
           Dist avail uids : 5,992
     Lowest dist usage pct : 0.1
    Highest dist usage pct : 0.1
        Avg dist usage pct : 0.1
               Nodes count : 1
              Nodes active : 1
         Gears total count : 8
        Gears active count : 8
    Available active gears : 82
 Effective available gears : 82

Districts:
     Name Nodes DistAvailCap GearsActv EffAvailGears LoActvUsgPct AvgActvUsgPct
--------- ----- ------------ --------- ------------- ------------ -------------
smalldist     1        5,992         8            82          8.9           8.9


------------------------
Summary for all systems:
------------------------
 Districts : 2
     Nodes : 2
  Profiles : 2
