Bug 1970508 - New web UI - add more functionalities to the cluster management
Summary: New web UI - add more functionalities to the cluster management
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pcs
Version: 8.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: 8.6
Assignee: Ivan Devat
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks: 1552470 1996067 1999014 2044409
 
Reported: 2021-06-10 15:29 UTC by Ivan Devat
Modified: 2022-05-10 15:24 UTC
CC: 8 users

Fixed In Version: pcs-0.10.12-4.el8
Doc Type: Technology Preview
Doc Text:
The release note for this feature will be part of the general release note for the new web UI, with the Doc Text from BZ#1552470.
Clone Of:
Clones: 2044409
Environment:
Last Closed: 2022-05-10 14:50:42 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHEA-2022:1978 (last updated 2022-05-10 14:51:15 UTC)

Description Ivan Devat 2021-06-10 15:29:58 UTC
Preliminary list of tasks:
* edit cluster properties
* setup cluster
* add primitive resource to group
* remove primitive resource from group
* remove cluster from web ui
* destroy cluster
* view permissions
* add permissions
* remove permissions
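For reference, most of these web UI tasks have CLI counterparts. A rough sketch assuming pcs 0.10 syntax; cluster, node, group, and resource names are placeholders, and the web UI permission screens have no single CLI equivalent (CIB ACLs via `pcs acl` are the closest analogue). The commands are held in a variable and printed rather than executed, since they require a live cluster:

```shell
# Approximate pcs 0.10 CLI equivalents of the web UI tasks above.
# Printed as a dry-run list; all names are placeholders.
plan='pcs cluster setup mycluster node1 node2   # setup cluster
pcs property set maintenance-mode=false   # edit cluster properties
pcs resource group add g1 d1              # add primitive resource to group
pcs resource group remove g1 d1           # remove primitive resource from group
pcs cluster destroy --all                 # destroy cluster on all nodes
pcs acl                                   # view permissions (CIB ACLs)'
printf '%s\n' "$plan"
```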

Comment 2 Ivan Devat 2021-10-26 08:46:40 UTC
Implemented features:
* edit cluster properties (press button "Edit properties" in cluster properties)
* add primitive resource to group (press button "Change group" in primitive resource detail)
* remove primitive resource from group (press button "Change group" in primitive resource detail)
* remove cluster from web ui (select "Remove" in clusters drop-down menu in dashboard)
* destroy cluster (select "Destroy" in clusters drop-down menu in dashboard)

Comment 3 Miroslav Lisik 2021-11-02 09:25:42 UTC
DevTestResults:

[root@r8-node-01 ~]# rpm -q pcs
pcs-0.10.11-1.el8.x86_64

* edit cluster properties (press button "Edit properties" in cluster properties) - [OK]
* add primitive resource to group (press button "Change group" in primitive resource detail) - [OK]
* remove primitive resource from group (press button "Change group" in primitive resource detail) - [OK]
* remove cluster from web ui (select "Remove" in clusters drop-down menu in dashboard) - [OK]
* destroy cluster (select "Destroy" in clusters drop-down menu in dashboard) - [OK]

Comment 8 Michal Mazourek 2022-01-24 14:08:19 UTC
AFTER:
======

## in cli

[root@virt-011 ~]# rpm -q pcs
pcs-0.10.12-3.el8.x86_64


## Basic cluster setup with a few resources

[root@virt-011 ~]# pcs status
Cluster name: STSRHTS29225
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-012 (version 2.1.2-2.el8-ada5c3b36e2) - partition with quorum
  * Last updated: Thu Jan 20 17:17:39 2022
  * Last change:  Thu Jan 20 17:17:19 2022 by root via cibadmin on virt-011
  * 2 nodes configured
  * 5 resource instances configured

Node List:
  * Online: [ virt-011 virt-012 ]

Full List of Resources:
  * fence-virt-011	(stonith:fence_xvm):	 Started virt-011
  * fence-virt-012	(stonith:fence_xvm):	 Started virt-012
  * d1	(ocf::heartbeat:Dummy):	 Started virt-011
  * d2	(ocf::heartbeat:Dummy):	 Started virt-012
  * d3	(ocf::heartbeat:Dummy):	 Started virt-011

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
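Checks like this one can also be scripted by pulling the relevant lines out of `pcs status` with standard text tools. A minimal sketch, using an abridged copy of the output above as sample input; on a live node you would capture `pcs status` directly:

```shell
# Abridged sample of the `pcs status` output above;
# on a real node use: status=$(pcs status)
status='Node List:
  * Online: [ virt-011 virt-012 ]
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled'

# Count the "Online:" node-list lines and extract the pcsd enablement state
echo "$status" | grep -c 'Online:'
echo "$status" | sed -n 's/^  pcsd: active\/\(.*\)$/\1/p'
```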


## in web UI

- open new web UI (https://nodename:2224/ui)
- log in
- Click 'Add existing cluster' button and proceed with the wizard
- Click on the added cluster

1. Edit cluster properties
---------------------------

- Click on 'Properties' tab
- Click on 'Edit Properties' button
	- one or more properties can be changed at a time
- Click on 'Save properties'; the alternative is 'Cancel', which discards the changes
	- "Success alert:Succesfully done: update cluster properties"

> OK: The properties are changed (checked both in web UI and CLI)


2. Add and remove primitive resource to/from group 
---------------------------------------------------

- Click on 'Resources' tab
- Click on a primitive resource detail
	- Button 'Change group' is unclickable when no group exists in the cluster
- Click on 'Create group' and proceed with creating group g1 with resources d1 and d2
- Click on a d3 resource detail and click 'Change group' button 
"Change group of primitive resource "d3"?
Group *
Position in group *"
	- Selecting g1 group and position after resource d2
	- OK: All three resources are in group g1, in web UI and in CLI
in web UI:
"g1
Type Group
d1
Type Dummy (ocf:heartbeat)
d2
Type Dummy (ocf:heartbeat)
d3
Type Dummy (ocf:heartbeat) 
"

in CLI:
[root@virt-011 ~]# pcs resource
  * Resource Group: g1:
    * d1	(ocf::heartbeat:Dummy):	 Started virt-012
    * d2	(ocf::heartbeat:Dummy):	 Started virt-012
    * d3	(ocf::heartbeat:Dummy):	 Started virt-012

- Clicking the "Change group" button for a resource that is already in a group, when no other group exists in the cluster, brings up a dialog asking whether to remove the resource from its group.
"Change group of primitive resource "d1"?
Remove resource "d1" from group "g1""
- Removing d1 and d2 from the group

[root@virt-011 ~]# pcs resource
  * Resource Group: g1:
    * d3	(ocf::heartbeat:Dummy):	 Started virt-012
  * d1	(ocf::heartbeat:Dummy):	 Started virt-011
  * d2	(ocf::heartbeat:Dummy):	 Started virt-012

- Creating new group g2 with d1 and d2 
- Clicking on 'Change group' of d3 (which is already in g1), the dialog lets me choose whether to remove the resource from the group or move it to another one:
"Change group of primitive resource "d3"?
Remove resource "d3" from the group "g1"
Move resource "d3" to a group
"
- Selecting 'Move resource d3 to a group' and choosing group g2 with position before d1
- OK: As the resource was the last one in group g1, the group was deleted and all resources are now in group g2:
in web UI:
"g2
Type Group
d3
Type Dummy (ocf:heartbeat)
d1
Type Dummy (ocf:heartbeat)
d2
Type Dummy (ocf:heartbeat)"

in CLI:
[root@virt-011 ~]# pcs resource
  * Resource Group: g2:
    * d3	(ocf::heartbeat:Dummy):	 Started virt-012
    * d1	(ocf::heartbeat:Dummy):	 Started virt-012
    * d2	(ocf::heartbeat:Dummy):	 Started virt-012

Note: If the moved resource is not last in the group, the group is preserved
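The same regrouping can be expressed from the CLI (a sketch assuming pcs 0.10 `pcs resource group` syntax, with the group and resource names from the walkthrough above; listed rather than executed, since it needs the live test cluster):

```shell
# CLI form of the regrouping steps above (pcs 0.10 syntax, not executed here).
regroup='pcs resource group remove g1 d1 d2       # take d1 and d2 out of g1
pcs resource group add g2 d1 d2           # create g2 holding d1 and d2
pcs resource group add g2 d3 --before d1  # move d3; g1 becomes empty and is removed'
printf '%s\n' "$regroup"
```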

- Creating clone and promotable resources
[root@virt-011 ~]# pcs resource
  * Resource Group: g2:
    * d3	(ocf::heartbeat:Dummy):	 Started virt-012
    * d1	(ocf::heartbeat:Dummy):	 Started virt-012
    * d2	(ocf::heartbeat:Dummy):	 Started virt-012
  * Clone Set: clone-clone [clone]:
    * Started: [ virt-011 virt-012 ]
  * Clone Set: promo-clone [promo] (promotable):
    * Slaves: [ virt-011 virt-012 ]

- OK: Group/Clone/Promotable type of resource doesn't have the 'Change group' button present
- Adding primitive instance 'clone' to group g2
"Changing group of primitive resource "clone" failed
Unable to add resource 'clone' to group 'g2': Error: 'clone' cannot be put into a group because its parent 'clone-clone' is a clone resource Error: Errors have occurred, therefore pcs is unable to continue "
> OK: The same goes for promotable resources


3. Removing cluster from web UI
--------------------------------

- Go back to the dashboard ("HA Cluster Management" button at the top)
- Click on the three dots within the selected cluster
- Click on "Remove"
- Dialog window will show:
"Remove the cluster "STSRHTS29225"?
This only removes the cluster from the Web UI, it does not stop the cluster from running."
- Proceed by clicking 'Remove'
- OK: The cluster is removed

Comment 9 Michal Mazourek 2022-01-24 14:09:12 UTC
4. Destroying cluster
----------------------

- Click on the three dots within the selected cluster in the dashboard
- Click 'Destroy'
- Dialog window will show:
"Destroy the cluster "STSRHTS29225"?
The cluster will be stopped and all its configuration files will be deleted. This action cannot be undone."
- Proceed by clicking "Destroy"

> FAILED: The cluster is always destroyed only on the first node and remains running on the others; it does not matter which node is used to access the web UI or which node the cluster was added with. When the cluster is added again via one of the remaining nodes, destroying it no longer has any effect. The first (destroyed) node is shown as Offline in the CLI status.

[root@virt-542 ~]# pcs status | grep "Node List" -A 3
Node List:
  * Online: [ virt-536 virt-538 virt-542 ]
  * OFFLINE: [ virt-533 ]
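A node left behind by the partial destroy can be picked out of the status output mechanically. A small sketch parsing the node list captured above (sample text inlined; on a live node you would pipe `pcs status` instead):

```shell
# Node list as printed above;
# on a real node: nodelist=$(pcs status | grep -A 3 'Node List')
nodelist='Node List:
  * Online: [ virt-536 virt-538 virt-542 ]
  * OFFLINE: [ virt-533 ]'

# Extract the names inside the OFFLINE bracket, if any
offline=$(printf '%s\n' "$nodelist" | sed -n 's/.*OFFLINE: \[ \(.*\) \].*/\1/p')
echo "$offline"
```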

Comment 12 Miroslav Lisik 2022-01-26 12:48:53 UTC
DevTestResults:

[root@r8-node-01 ~]# rpm -q pcs
pcs-0.10.12-4.el8.x86_64

1. Create or add cluster to the web ui
2. Click on the cluster's kebab menu, click "Destroy", and acknowledge destroying the cluster by clicking the "Destroy" button.
3. Check that cluster has been destroyed on all nodes.

[root@r8-node-01 ~]# for node in r8-node-0{1..3}; do ssh root@$node 'pcs status; ls /etc/corosync/corosync.conf'; done
Error: error running crm_mon, is pacemaker running?
  crm_mon: Error: cluster is not available on this node
ls: cannot access '/etc/corosync/corosync.conf': No such file or directory
Error: error running crm_mon, is pacemaker running?
  crm_mon: Error: cluster is not available on this node
ls: cannot access '/etc/corosync/corosync.conf': No such file or directory
Error: error running crm_mon, is pacemaker running?
  crm_mon: Error: cluster is not available on this node
ls: cannot access '/etc/corosync/corosync.conf': No such file or directory
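The per-node check in the loop above can be factored into a small helper. A sketch, run here against a deliberately nonexistent local path, since the real check visits each cluster node over ssh:

```shell
# Returns success (0) when no corosync config remains at the given path,
# i.e. the cluster was destroyed on that node (mirrors the ls check above).
check_destroyed() {
  ! ls "$1" >/dev/null 2>&1
}

# Real usage would run per node: ssh root@$node 'ls /etc/corosync/corosync.conf'
check_destroyed /nonexistent/etc/corosync/corosync.conf && echo "config removed"
```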

Comment 15 Michal Mazourek 2022-01-27 14:55:47 UTC
4. Destroying cluster
----------------------

- Click on the three dots within the selected cluster in the dashboard
- Click 'Destroy'
- Dialog window will show:
"Destroy the cluster "STSRHTS17763"?
The cluster will be stopped and all its configuration files will be deleted. This action cannot be undone."
- Proceed by clicking "Destroy"
- Cluster is removed from the web UI

## In cli

[root@virt-533 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  crm_mon: Error: cluster is not available on this node
[root@virt-536 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  crm_mon: Error: cluster is not available on this node
[root@virt-538 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  crm_mon: Error: cluster is not available on this node
[root@virt-542 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  crm_mon: Error: cluster is not available on this node

> OK: The issue with destroying the cluster has been fixed.


The rest of the verification is in comment 8. The functionality is preserved with the new patch. Marking as VERIFIED for pcs-0.10.12-4.el8.

Comment 17 errata-xmlrpc 2022-05-10 14:50:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (pcs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:1978

