Bug 1158500 - add support for utilization attributes
Summary: add support for utilization attributes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ondrej Mular
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-10-29 14:00 UTC by Tomas Jelinek
Modified: 2017-06-28 12:14 UTC
CC List: 11 users

Fixed In Version: pcs-0.9.152-9.el7
Doc Type: Enhancement
Doc Text:
Support added for configuring Pacemaker utilization attributes. You can now configure Pacemaker utilization attributes with the `pcs` command and the `pcsd` Web UI. This allows you to configure the capacity a particular node provides, the capacity a particular resource requires, and an overall strategy for placement of resources. For information on utilization and placement strategy, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/index.html.
Clone Of:
Environment:
Last Closed: 2016-11-03 20:53:24 UTC
Target Upstream Version:
Embargoed:

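The Doc Text above mentions an overall strategy for placement of resources. Per the Pacemaker documentation, capacities set via utilization attributes only influence placement once the placement-strategy cluster property is set; a minimal hedged sketch (node and resource names are illustrative):

 # declare capacities, then pick a strategy so Pacemaker actually uses them
 pcs node utilization node1 cpu=8 mem=8192
 pcs resource utilization dummy cpu=1 mem=1024
 pcs property set placement-strategy=balanced

Other strategy values documented by Pacemaker are "default", "utilization", and "minimal".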

Attachments
proposed fix (62.13 KB, patch), 2015-11-27 07:52 UTC, Ondrej Mular
proposed fix 2 (1.04 KB, patch), 2016-01-22 15:44 UTC, Tomas Jelinek
proposed fix 3 (13.73 KB, patch), 2016-09-12 11:22 UTC, Tomas Jelinek


Links
System: Red Hat Product Errata
ID: RHSA-2016:2596
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: pcs security, bug fix, and enhancement update
Last Updated: 2016-11-03 12:11:34 UTC

Description Tomas Jelinek 2014-10-29 14:00:18 UTC
Add support for utilization attributes to CLI and GUI.
See the Pacemaker documentation: http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Pacemaker_Explained/_utilization_attributes.html

github issue: https://github.com/feist/pcs/issues/41
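For context, the utilization attributes to be managed live in the CIB as nvpair sets nested under node and resource elements; a minimal sketch based on the linked Pacemaker documentation (element IDs and values are illustrative):

 <node id="1" uname="node1">
   <utilization id="nodes-1-utilization">
     <nvpair id="nodes-1-utilization-cpu" name="cpu" value="8"/>
   </utilization>
 </node>
 <primitive id="dummy" class="ocf" provider="heartbeat" type="Dummy">
   <utilization id="dummy-utilization">
     <nvpair id="dummy-utilization-cpu" name="cpu" value="1"/>
   </utilization>
 </primitive>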

Comment 3 Ondrej Mular 2015-11-27 07:52:04 UTC
Created attachment 1099616 [details]
proposed fix

Comment 4 Ondrej Mular 2015-11-27 08:18:22 UTC
Test:
[root@node1 ~]# pcs cluster setup --name testcluster node1 node2 --start
Destroying cluster on nodes: node1, node2...
node2: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (pacemaker)...
node2: Successfully destroyed cluster
node1: Successfully destroyed cluster

Sending cluster config files to the nodes...
node1: Succeeded
node2: Succeeded

Starting cluster on nodes: node1, node2...
node1: Starting Cluster...
node2: Starting Cluster...

Synchronizing pcsd certificates on nodes node1, node2...
node2: Success
node1: Success

Restarting pcsd on the nodes in order to reload the certificates...
node2: Success
node1: Success
[root@node1 ~]# pcs resource create dummy Dummy

Before fix:
In pcs, there was no way to set or show utilization attributes.

After fix:
[root@node1 ~]# pcs node utilization node1 cpu=8 mem=8192
[root@node1 ~]# pcs node utilization node1
Node Utilization:
 node1: cpu=8 mem=8192
[root@node1 ~]# pcs node utilization node2 cpu=4
[root@node1 ~]# pcs node utilization node2
Node Utilization:
 node2: cpu=4
[root@node1 ~]# pcs node utilization
Node Utilization:
 node1: cpu=8 mem=8192
 node2: cpu=4
[root@node1 ~]# pcs resource utilization dummy cpu=1 mam=1024 net=10
[root@node1 ~]# pcs resource utilization 
Resource Utilization:
 dummy: cpu=1 mam=1024 net=10
[root@node1 ~]# pcs resource utilization dummy net=
[root@node1 ~]# pcs resource utilization dummy
Resource Utilization:
 dummy: cpu=1 mam=1024

Utilization attributes can also be managed in the web UI: for a resource in the resource detail view and for a node in the node detail view.
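To double-check what the CLI writes, the resulting CIB entries can be inspected with the same idiom used in later comments (illustrative command, output omitted):

 [root@node1 ~]# pcs cluster cib | grep -A 3 "<utilization"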

Comment 5 Tomas Jelinek 2016-01-22 15:44:09 UTC
Created attachment 1117244 [details]
proposed fix 2

Comment 7 Mike McCune 2016-03-28 23:15:27 UTC
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune with any questions.

Comment 8 Ivan Devat 2016-05-31 12:19:41 UTC
Setup:
[vm-rhel72-1 ~] $ pcs status|grep Online
Online: [ vm-rhel72-1 vm-rhel72-3 ]

Before fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.143-15.el7.x86_64

In pcs, there was no way to set or show utilization attributes.


After Fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.151-1.el7.x86_64

[vm-rhel72-1 ~] $ pcs node utilization
Node Utilization:
[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-1
Node Utilization:
 vm-rhel72-1:

[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-1 cpu=8 mem=8192
[vm-rhel72-1 ~] $ pcs node utilization
Node Utilization:
 vm-rhel72-1: cpu=8 mem=8192
[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-1
Node Utilization:
 vm-rhel72-1: cpu=8 mem=8192

[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-3
Node Utilization:
 vm-rhel72-3:
[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-3 cpu=4
[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-3
Node Utilization:
 vm-rhel72-3: cpu=4
[vm-rhel72-1 ~] $ pcs node utilization
Node Utilization:
 vm-rhel72-1: cpu=8 mem=8192
 vm-rhel72-3: cpu=4

[vm-rhel72-1 ~] $ pcs resource create dummy Dummy
[vm-rhel72-1 ~] $ pcs resource utilization dummy cpu=1 mam=1024 net=10
[vm-rhel72-1 ~] $ pcs resource utilization
Resource Utilization:
 dummy: cpu=1 mam=1024 net=10
[vm-rhel72-1 ~] $ pcs resource utilization dummy net=
[vm-rhel72-1 ~] $ pcs resource utilization dummy
Resource Utilization:
 dummy: cpu=1 mam=1024


Utilization attributes can also be managed in the web UI: for a resource in the resource detail view and for a node in the node detail view.

Comment 13 Tomas Jelinek 2016-09-12 11:22:08 UTC
Created attachment 1200191 [details]
proposed fix 3

This patch does not fix the name validation issue. We believe it is not specific to utilization attributes and may be quite common throughout the whole pcsd UI. Moreover, we do not know which characters are "forbidden". The issue should be filed as a separate bz and planned, fixed, and tested accordingly.

Comment 14 Ivan Devat 2016-09-14 15:50:45 UTC
Before Fix:

[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.152-8.el7.x86_64

1)
[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-1 \=1 something
[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-1
Node Utilization:
 vm-rhel72-1: =1

2)
[vm-rhel72-1 ~] $ pcs resource create remote-node ocf:pacemaker:remote server="vm-rhel72-2"
[vm-rhel72-1 ~] $ pcs cluster cib | grep "<node "
      <node id="1" uname="vm-rhel72-1"/>
      <node id="2" uname="vm-rhel72-3"/>
[vm-rhel72-1 ~] $ pcs node utilization remote-node a=1
Error: Unable to find a node: remote-node

3)
[vm-rhel72-1 ~] $ pcs node utilization no-node
Node Utilization:


After Fix:

[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.152-9.el7.x86_64

1)
[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-1 \=1
Error: missing key in '=1' option
[vm-rhel72-1 ~] $ pcs node utilization vm-rhel72-1 something
Error: missing value of 'something' option

2)
[vm-rhel72-1 ~] $ pcs resource create remote-node ocf:pacemaker:remote server="vm-rhel72-2"
[vm-rhel72-1 ~] $ pcs cluster cib | grep "<node "
      <node id="1" uname="vm-rhel72-1"/>
      <node id="2" uname="vm-rhel72-3"/>
[vm-rhel72-1 ~] $ pcs node utilization remote-node a=1
[vm-rhel72-1 ~] $ pcs node utilization remote-node
Node Utilization:
 remote-node: a=1

3)
[vm-rhel72-1 ~] $ pcs node utilization no-node
Error: Unable to find a node: no-node

# the same commands on a newly set up cluster
[vm-rhel72-1 ~] $ pcs resource create remote-node ocf:pacemaker:remote server="vm-rhel72-2"
[vm-rhel72-1 ~] $ pcs cluster cib | grep "<node "
      <node id="1" uname="vm-rhel72-1"/>
      <node id="2" uname="vm-rhel72-3"/>
[vm-rhel72-1 ~] $ pcs node utilization remote-node
Node Utilization:

Comment 18 errata-xmlrpc 2016-11-03 20:53:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2596.html

Comment 19 Steven J. Levine 2017-05-02 21:57:17 UTC
Oyvind:

Could you look at the description for the 7.3 release notes that I put in the Doc Text field for this BZ? Once you OK this, I can have it backported to the 7.3 release notes on the Portal.

Thanks.

Steven

Comment 20 Tomas Jelinek 2017-05-03 08:13:04 UTC
There is a typo: a missing "i" in "utlization". Otherwise the doc text is OK.

Comment 21 Steven J. Levine 2017-05-04 13:33:18 UTC
This is now in the release notes on the Portal:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.3_Release_Notes/new_features_clustering.html

