Bug 1845470 - New web UI - add more features for the cluster parts management
Summary: New web UI - add more features for the cluster parts management
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pcs
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: 8.4
Assignee: Ivan Devat
QA Contact: cluster-qe@redhat.com
Docs Contact: Steven J. Levine
URL:
Whiteboard:
Depends On:
Blocks: 1552470 1996067 1999014
 
Reported: 2020-06-09 10:29 UTC by Ivan Devat
Modified: 2021-08-30 09:10 UTC
CC List: 9 users

Fixed In Version: pcs-0.10.8-1.el8
Doc Type: Technology Preview
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 15:12:05 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Ivan Devat 2020-06-09 10:29:06 UTC
Add more features for the cluster management to the new web UI.

Preliminary list of tasks:
* basic information of fence device
* display all constraints of cluster
* filtering instance attributes of resource
* meta attributes of resource
* utilization attributes of resource
* node utilization view
* node attributes view

Comment 3 Ivan Devat 2021-02-01 09:24:32 UTC
Features
* improved cluster view
  * added view of all constraints
  * added view of cluster properties with filtering and help
  * added ability to fix broken cluster authentication
* improved resource detail
  * added filtering instance attributes
  * added view of utilization attributes
  * added view of meta attributes
  * added actions manage, unmanage, disable, enable, refresh, cleanup,
    clone, unclone, delete
  * added group create wizard
* improved node detail
  * added node attributes view
  * added view of utilization attributes
  * added actions start, stop, standby, unstandby, maintenance,
    unmaintenance, remove
  * added node add wizard
* improved fence device detail
  * added description of fence agent
  * added display arguments with filtering and help
  * added actions refresh, cleanup, delete

Follow-up bugs will be tracked in bz1922996

Comment 7 Michal Mazourek 2021-02-10 17:50:19 UTC
AFTER:
======

[root@virt-058 ~]# rpm -q pcs
pcs-0.10.8-1.el8.x86_64


[root@virt-058 ~]# pcs status
Cluster name: STSRHTS11962
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-059 (version 2.0.5-5.el8-ba59be7122) - partition with quorum
  * Last updated: Fri Feb  5 13:17:08 2021
  * Last change:  Fri Feb  5 12:21:32 2021 by root via cibadmin on virt-058
  * 3 nodes configured
  * 9 resource instances configured

Node List:
  * Online: [ virt-058 virt-059 virt-060 ]

Full List of Resources:
  * fence-virt-058	(stonith:fence_xvm):	 Started virt-058
  * fence-virt-059	(stonith:fence_xvm):	 Started virt-059
  * fence-virt-060	(stonith:fence_xvm):	 Started virt-060
  * Clone Set: locking-clone [locking]:
    * Started: [ virt-058 virt-059 virt-060 ]

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled


## New web UI steps

- Open the web ui, e.g.: https://virt-058.cluster-qe.lab.eng.brq.redhat.com:2224/ui/
- Log in
- Add existing cluster via specifying one of the cluster nodes
- Click on the cluster name in the dashboard
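
If the web UI is not reachable, a quick CLI sanity check that pcsd is running and listening on port 2224 (not part of the original run; standard systemd/iproute2 commands assumed):

[root@virt-058 ~]# systemctl is-active pcsd
[root@virt-058 ~]# ss -tlnp | grep 2224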


1. Improved cluster view
-------------------------

1.1. view of all constraints

- Click on 'Constraints' in cluster menu

"No constraint is configured.
You don't have any configured constraint here."

## Creating constraints

[root@virt-058 ~]# pcs resource create r1 ocf:heartbeat:Dummy
[root@virt-058 ~]# pcs resource create r2 ocf:pacemaker:Dummy

# Colocation

[root@virt-058 ~]# pcs constraint colocation add Started r1 with Started r2

In web UI: 
Type Colocation
Resource r1 in role Started together with resource r2 in role Started
Score INFINITY

> OK: Constraints can also be filtered; the constraint above is correctly shown only under the "Colocation" filter

[root@virt-058 ~]# pcs resource create r3 ocf:heartbeat:Dummy
[root@virt-058 ~]# pcs resource create r4 ocf:pacemaker:Dummy

# Location

[root@virt-058 ~]# pcs constraint location r3 prefers virt-058=-INFINITY

In web UI:
Type Location
Resource r3 in role Started on node virt-058
Score -INFINITY

> OK

# Location rule

[root@virt-058 ~]# pcs constraint location r4 rule id=rule1 resource-discovery=never score=-INFINITY defined node_attr and node_attr lt integer 5 and date gt 2020-01-01 and date in_range 2018-05-01 to 2019-05-01 and date-spec months=7-9

In web UI:
Type Location (rule)
Resource r4 in role Started according to the rule defined node_attr and node_attr lt integer 5 and date gt 2020-01-01 and date in_range 2018-05-01 to 2019-05-01 and date-spec months=7-9
Score -INFINITY

> OK

# Colocation set

[root@virt-058 ~]# pcs constraint colocation set r1 r2 r3 

In web UI:
Type Colocation (set)
Resources r1 r2 r3 in role Started together
Score INFINITY
Sequential

> OK

# Order

[root@virt-058 ~]# pcs constraint order start r1 then start r4
Adding r1 r4 (kind: Mandatory) (Options: first-action=start then-action=start)

In web UI:
Type Order
Resource r1 starts before resource r4 starts

> OK: So far all the constraints are shown in the table and can be filtered. Removing the current constraints so that an order set can be created.

[root@virt-058 ~]# pcs constraint remove location-r3-virt-058--INFINITY rule1 order-r1-r4-mandatory colocation-r1-r2-INFINITY colocation_set_r1r2r3

In web UI:
No constraint is configured.
You don't have any configured constraint here.

> OK

# Order set

[root@virt-058 ~]# pcs constraint order set r1 r2 r3

In web UI:
Type Order (set)
Resources r1 r2 r3 start in given order
Sequential true
Require all true

> OK

[root@virt-058 ~]# pcs constraint remove order_set_r1r2r3
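
For reference, the constraints shown in the web UI can be cross-checked from the CLI at any point in this subsection (not part of the original run; pcs 0.10 syntax assumed):

[root@virt-058 ~]# pcs constraint --full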


1.2. view of cluster properties

- Click on Properties in cluster menu

- There is a list of all set cluster properties and their values. If a value is the default, it is indicated there,
e.g.:
Symmetric
    true
    Default value

- Every property also has a help message,
e.g. for Stonith Action:
Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off")
Default value: reboot

- There is a search field that dynamically filters the properties as you type

- There is filtering by importance, showing either Basic or Advanced or both

## Changing property

concurrent-fencing
    true
    Default value

[root@virt-058 ~]# pcs property set concurrent-fencing=false

After refresh:
concurrent-fencing false

> OK
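
The same values can be cross-checked from the CLI; pcs prints explicitly set properties by default and all properties with --all (not part of the original run; pcs 0.10 assumed):

[root@virt-058 ~]# pcs property list
[root@virt-058 ~]# pcs property list --all | grep concurrent-fencing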


1.3. ability to fix broken cluster authentication

- Verified by another bz: bz1762816 comment 14
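
For completeness, re-authentication can also be done from the CLI with 'pcs host auth' (sketch only, not part of this verification; the password is a placeholder):

[root@virt-058 ~]# pcs host auth virt-058 -u hacluster -p <password>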


2. Improved resource detail
---------------------------

2.1. filtering instance attributes

- Click on 'Resources' in cluster menu
- Choose a resource - r1 in this case
- Click on 'Attributes'
- The page shows resource attributes, with search for a specific attribute and filtering by importance (Required, Optional and Advanced)
- Attributes are shown like this:
state
    /run/resource-agents/Dummy-Dummy.state
    Default value
fake
    dummy
    Default value
trace_ra
trace_file 

- Every attribute has a help message, e.g.:
State file
Location to store the resource state in.
Default value: /run/resource-agents/Dummy-Dummy.state
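
The parameter descriptions and default values shown in the web UI come from the resource agent metadata and can be cross-checked with (not part of the original run):

[root@virt-058 ~]# pcs resource describe ocf:heartbeat:Dummy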

## Changing resource attribute

[root@virt-058 ~]# pcs resource update r1 state=/tmp/Dummy.state

In web UI:
state /tmp/Dummy.state

> OK

[root@virt-058 ~]# pcs resource update r2 passwd=1234

In web UI (r2 is ocf:pacemaker:Dummy, which has more attributes): 
state
    /var/run/Dummy-Dummy.state
    Default value
passwd
    1234
fake
    dummy
    Default value
op_sleep
    0
    Default value
fail_start_on
envfile
trace_ra
trace_file 

> OK

Filtering only Advanced attributes:
trace_ra
trace_file 

> OK

## Adding new resource with required attributes

[root@virt-058 ~]# pcs resource create r5 ocf:heartbeat:IPaddr2 ip=192.168.1.58

In web UI filtering only Required attributes:
ip 192.168.1.58

> OK

In web UI listing all attributes:
ip
    192.168.1.58
nic
cidr_netmask
broadcast
iflabel
lvs_support
    false
    Default value
lvs_ipv6_addrlabel
    false
    Default value
lvs_ipv6_addrlabel_value
    99
    Default value
mac
clusterip_hash
    sourceip-sourceport
    Default value
unique_clone_address
    false
    Default value
arp_interval
    200
    Default value
arp_count
    5
    Default value
arp_count_refresh
    0
    Default value
arp_bg
    true
    Default value
arp_sender
send_arp_opts
flush_routes
    false
    Default value
run_arping
    false
    Default value
noprefixroute
    false
    Default value
preferred_lft
    forever
    Default value
monitor_retries
trace_ra
trace_file 

> OK

In web UI searching "lvs":
lvs_support
    false
    Default value
lvs_ipv6_addrlabel
    false
    Default value
lvs_ipv6_addrlabel_value
    99
    Default value

> OK

## Clicking on 'Edit Attributes' button

- All attributes can be changed in the new web UI


2.2. view of utilization attributes

- Click on 'Utilization' in the resource menu; a message appears:
Utilization attributes
To configure the capacity that a node provides or a resource requires, you can use utilization attributes in node and resource objects. A node is considered eligible for a resource if it has sufficient free capacity to satisfy the resource’s requirements
No attribute here.
No attribute has been added.

> OK

## Setting utilization attributes for a resource

[root@virt-058 ~]# pcs resource utilization r5 cpu=2 memory=2048

In web UI:
cpu
    2
memory
    2048

> OK
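
CLI cross-check of the utilization attributes shown above (not part of the original run):

[root@virt-058 ~]# pcs resource utilization r5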


2.3. view meta attributes

- Click on 'Meta' in resource menu:
No attribute here.
No attribute has been added.

> OK

## Adding meta attributes for a resource

[root@virt-058 ~]# pcs resource meta r5 failure-timeout=20s resource-stickiness=100

In web UI:
failure-timeout
    20s
resource-stickiness
    100

> OK

[root@virt-058 ~]# pcs resource meta r5 failure-timeout=15s 

In web UI:
failure-timeout
    15s
resource-stickiness
    100

> OK
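
CLI cross-check; 'pcs resource config' prints the configured meta attributes of the resource (not part of the original run; pcs 0.10 assumed):

[root@virt-058 ~]# pcs resource config r5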


2.4. actions manage, unmanage, disable, enable, refresh, cleanup, clone, unclone, delete

- For every resource, there are buttons at the top: "Unmanage", "Disable", and, after clicking the three dots to show more, "Refresh", "Cleanup", "Clone" and "Delete".

## Unmanaging resource

- Clicking on 'Unmanage':
Unmanage resource?
This disallows the cluster to start and stop the resource
- Confirming by clicking on "Unamange", alternative option is "Cancel"
- In resource list, there is orange warning alert instead of green running sign at the unmanaged resource
- In resource Detail, there is message:
Warning alert:Resource is unmanaged

[root@virt-058 ~]# pcs resource | grep r5
  * r5	(ocf::heartbeat:IPaddr2):	 Started virt-060 (unmanaged)

> OK

## Managing resource

- Clicking on 'Manage' at unmanaged resource
- Confirming by clicking on "Manage", alternative option is "Cancel"
- In resource list, green running sign is again at the managed resource
- In resource Detail, no message about unmanaged resource

[root@virt-058 ~]# pcs resource | grep r5
  * r5	(ocf::heartbeat:IPaddr2):	 Started virt-060

> OK

## Disabling resource

- Clicking on 'Disable'
- Confirming by clicking on "Disable", alternative option is "Cancel"
- In resource list, there are red and orange warning alerts
- In resource Detail:
Warning alert:Resource is disabled
Danger alert:Resource is blocked

[root@virt-058 ~]# pcs resource | grep r5
  * r5	(ocf::heartbeat:IPaddr2):	 Stopped (disabled)

> OK

## Enabling resource

[root@virt-058 ~]# pcs resource | grep r5
  * r5	(ocf::heartbeat:IPaddr2):	 Stopped (disabled)

- Clicking on 'Enable' at disabled resource
- Confirming by clicking on "Enable", alternative option is "Cancel"
- In resource list, green running sign is again at the enabled resource
- In resource Detail, no message about disabled resource

[root@virt-058 ~]# pcs resource | grep r5
  * r5	(ocf::heartbeat:IPaddr2):	 Started virt-059

> OK

## Refresh & Cleanup

# Failing a resource

[root@virt-058 ~]# pcs resource update r5 ip=1.1.1.1

[root@virt-058 ~]# pcs resource | grep r5
  * r5	(ocf::heartbeat:IPaddr2):	 Stopped

[root@virt-058 ~]# pcs status | grep 'Failed Resource Actions' -A 4
Failed Resource Actions:
  * r5_start_0 on virt-059 'error' (1): call=157, status='complete', exitreason='[findif] failed', last-rc-change='2021-02-08 15:10:59 +01:00', queued=0ms, exec=142ms
  * r5_start_0 on virt-058 'error' (1): call=136, status='complete', exitreason='[findif] failed', last-rc-change='2021-02-08 15:11:03 +01:00', queued=0ms, exec=454ms
  * r5_start_0 on virt-060 'error' (1): call=135, status='complete', exitreason='[findif] failed', last-rc-change='2021-02-08 15:11:01 +01:00', queued=0ms, exec=101ms


In web UI:
- In resource list, there is red warning alert at r5 resource
- In r5 resource Detail:
 Issues
Danger alert:Failed to start r5 on Mon Feb 8 15:09:55 2021 on node virt-059: [findif] failed
Danger alert:Failed to start r5 on Mon Feb 8 15:09:58 2021 on node virt-058: [findif] failed
Danger alert:Failed to start r5 on Mon Feb 8 15:09:56 2021 on node virt-060: [findif] failed
Danger alert:Resource failed 

- Clicking on 'Refresh' under the three dots menu at the top
- Confirming by clicking 'Refresh', alternative is 'Cancel'; the dialog shows the message:
This makes the cluster forget the complete operation history (including failures) of the resource and re-detects its current state.
- In resource Detail:
 Issues
Danger alert:Failed to start r5 on Mon Feb 8 15:17:42 2021 on node virt-059: [findif] failed
Danger alert:Failed to start r5 on Mon Feb 8 15:17:44 2021 on node virt-058: [findif] failed
Danger alert:Failed to start r5 on Mon Feb 8 15:17:43 2021 on node virt-060: [findif] failed
Danger alert:Resource failed

> The failed actions are still there because the refresh re-detected the current state, as hinted in the message above (the resource still fails with the wrong IP), but the original history (from ~15:09:55) was deleted and the failures shown are new ones (from ~15:17:42)

- Clicking on 'Cleanup' under the three dots menu at the top
- Confirming by clicking 'Cleanup', alternative is 'Cancel'.

> Test failed:
Error message:
Danger alert:Communication error while: cleanup resource "r5". Details in the browser console.
New bz was created for this case: bz1927394


# Restoring the failed resource

[root@virt-058 ~]# pcs resource update r5 ip=192.168.1.58

[root@virt-058 ~]# pcs resource | grep r5
  * r5	(ocf::heartbeat:IPaddr2):	 Started virt-059

- In web UI the failed messages have been erased, resource is running without any alerts

## Cloning

- Clicking on 'Clone' at specified resource (r1)
- Confirming by clicking on "Clone", alternative option is "Cancel"
- In resource list, the resource is now shown as r1-clone
- In resource Detail, there is the member status of all instances in the clone; the resource itself (r1) is also referenced here and has its own page with its status on each node

[root@virt-058 ~]# pcs resource | grep r1 -A 1
  * Clone Set: r1-clone [r1]:
    * Started: [ virt-058 virt-059 virt-060 ]

> OK

## Uncloning

- Clicking on 'Unclone' at specified resource (r1)
- Confirming by clicking on "Uncone", alternative option is "Cancel"
- In resource list, the resource is now shown again as r1
- In resource Detail, resource has again status only on one node

[root@virt-058 ~]# pcs resource | grep r1 
  * r1	(ocf::heartbeat:Dummy):	 Started virt-058

> OK

## Deleting

- Clicking on 'Delete' at specified resource (r1)
- Confirming by clicking on "Delete", alternative option is "Cancel"
- In resource list, the resource is no longer present

[root@virt-058 ~]# pcs resource | grep r1
[root@virt-058 ~]# echo $?
1

> OK
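
For reference, rough CLI equivalents of the actions exercised in this subsection (sketch, not part of the original run; pcs 0.10 syntax assumed):

[root@virt-058 ~]# pcs resource unmanage r5    # reverse: pcs resource manage r5
[root@virt-058 ~]# pcs resource disable r5     # reverse: pcs resource enable r5
[root@virt-058 ~]# pcs resource refresh r5
[root@virt-058 ~]# pcs resource cleanup r5
[root@virt-058 ~]# pcs resource clone r1       # reverse: pcs resource unclone r1
[root@virt-058 ~]# pcs resource delete r1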


2.5. group create wizard

- At 'Resources' page, click 'Create Group' button
- There are "Group name", "Available resources" (which is list with search filtering) and "Choosen resources" (with same search filtering as Available resources)

> There is a typo in word 'Choosen', which will be fixed in the following new web UI bz: bz1922996

- Creating group G1 that consists of r4 and r5 resources
- Resources can be moved between Available resources and Choosen resources; several (or all) resources can be moved at the same time
- Clicking 'Create group':
Group "G1" created successfully
- Group G1 is present at Resources list

> OK

- Creating a group without a name
- The "Create group" button is inactive

> OK

- Creating a group without Choosen resources
- The "Create group" button is inactive, even after moving resources to Choosen resources and then moving them back.

> OK
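
CLI equivalent of the group create wizard (sketch, not part of the original run):

[root@virt-058 ~]# pcs resource group add G1 r4 r5
[root@virt-058 ~]# pcs resource group list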


3. Improved node detail
-----------------------

3.1. node attributes view

- Click on 'Nodes' in cluster menu
- Click on one of the nodes
- Click on 'Attributes' in node menu:
No attribute here.
No attribute has been added.

[root@virt-058 corosync]# pcs node attribute 
Node Attributes:
[root@virt-058 corosync]# echo $?
0

> OK

## Adding node attributes

[root@virt-058 corosync]# pcs node attribute virt-058 attr1=val1 attr2=val2
[root@virt-058 corosync]# pcs node attribute
Node Attributes:
 virt-058: attr1=val1 attr2=val2

in web UI:
attr1
    val1
attr2
    val2

> OK

[root@virt-058 corosync]# pcs node attribute virt-058 attr3=new_value
[root@virt-058 corosync]# pcs node attribute
Node Attributes:
 virt-058: attr1=val1 attr2=val2 attr3=new_value

In web UI:
attr1
    val1
attr2
    val2
attr3
    new_value

> OK


3.2. view of utilization attributes

- Click on 'Utilization' in node menu:
Info alert:Utilization attributes
To configure the capacity that a node provides or a resource requires, you can use utilization attributes in node and resource objects. A node is considered eligible for a resource if it has sufficient free capacity to satisfy the resource’s requirements
No attribute here.
No attribute has been added.

[root@virt-058 corosync]# pcs node utilization
Node Utilization:
[root@virt-058 corosync]# echo $?
0

> OK

## Adding utilization attributes

[root@virt-058 corosync]# pcs node utilization virt-058 cpu=1 memory=2048
[root@virt-058 corosync]# pcs node utilization
Node Utilization:
 virt-058: cpu=1 memory=2048

In web UI:
cpu
    1
memory
    2048

> OK

[root@virt-058 corosync]# pcs node utilization virt-058 cpu=2
[root@virt-058 corosync]# pcs node utilization
Node Utilization:
 virt-058: cpu=2 memory=2048

In web UI:
cpu
    2
memory
    2048

> OK


3.3. actions start, stop, standby, unstandby, maintenance, unmaintenance, remove

## stopping a node

[root@virt-058 corosync]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-058 virt-059 virt-060
 Standby:
 Standby with resource(s) running:
 Maintenance:
 Offline:

In web UI:
- In node menu, click 'Stop' button at specific node
- Confirming by clicking on "Stop", alternative option is "Cancel"
- In list of nodes, the stopped node is marked as "Offline" and "Does not have quorum"
- In the stopped node Detail, there is still info about node daemons, e.g. corosync:
Daemon | Installed | Enabled | Running
corosync | Installed | Not enabled | Not running

[root@virt-058 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  Could not connect to the CIB: Transport endpoint is not connected
  crm_mon: Error: cluster is not available on this node

[root@virt-059 ~]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-059 virt-060
 Standby:
 Standby with resource(s) running:
 Maintenance:
 Offline: virt-058

> OK

# stopping a stopped node

- The action can be performed and is successful; functionally nothing changes and the node remains stopped.

## starting a node

- In node menu, click 'Start' button at stopped node
- Confirming by clicking on "Start", alternative option is "Cancel"
- After a minute, the node is again "Online" and "Has quorum" in list of nodes
- In the node Detail, there is again info about resource status, and node daemons are now running, e.g.:
Daemon | Installed | Enabled | Running
corosync | Installed | Not enabled | Running

[root@virt-058 ~]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-058 virt-059 virt-060
 Standby:
 Standby with resource(s) running:
 Maintenance:
 Offline:

> OK

# starting a started node

- The action can be performed and is successful; functionally nothing changes and the node remains started.

## node standby

- In node menu, click 'Standby' button under three dots at specific node
- Confirming by clicking on "Standby", alternative option is "Cancel"
- In list of nodes, the node is marked as "Standby" and "Has quorum"
- In node Detail, there is still info about Node Daemons

[root@virt-058 ~]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-059 virt-060
 Standby: virt-058
 Standby with resource(s) running:
 Maintenance:
 Offline:

> OK

## node unstandby

- In node menu, click 'Unstandby' button under three dots at specific node
- Confirming by clicking on "Unstandby", alternative option is "Cancel"
- In list of nodes, the node is marked again as "Online" and "Has quorum"

[root@virt-058 ~]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-058 virt-059 virt-060
 Standby:
 Standby with resource(s) running:
 Maintenance:
 Offline:

> OK

## node maintenance

- In node menu, click 'Maintenance' button under three dots at specific node
- Confirming by clicking on "Maintenance", alternative option is "Cancel"
- No changes in list of nodes

[root@virt-058 ~]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-059 virt-060
 Standby:
 Standby with resource(s) running:
 Maintenance: virt-058
 Offline:

> OK

## node unmaintenance

- In node menu, click 'Unmaintenance' button under three dots at specific node
- Confirming by clicking on "Unmaintenance", alternative option is "Cancel"
- No changes in list of nodes

[root@virt-058 ~]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-058 virt-059 virt-060
 Standby:
 Standby with resource(s) running:
 Maintenance:
 Offline:

> OK

## removing a node

- In node menu, click 'Remove' button under three dots at specific node
- Confirming by clicking on "Remove", alternative option is "Cancel"
- The node disappears from the node list

[root@virt-058 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
  Could not connect to the CIB: Transport endpoint is not connected
  crm_mon: Error: cluster is not available on this node

[root@virt-059 ~]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-059 virt-060
 Standby:
 Standby with resource(s) running:
 Maintenance:
 Offline:

> OK
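
For reference, rough CLI equivalents of the node actions exercised in this subsection (sketch, not part of the original run; pcs 0.10 syntax assumed, 'remove' run from a node that stays in the cluster):

[root@virt-058 ~]# pcs cluster stop virt-058       # reverse: pcs cluster start virt-058
[root@virt-058 ~]# pcs node standby virt-058       # reverse: pcs node unstandby virt-058
[root@virt-058 ~]# pcs node maintenance virt-058   # reverse: pcs node unmaintenance virt-058
[root@virt-059 ~]# pcs cluster node remove virt-058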


3.4. node add wizard

- Clicking 'Add node' at 'Nodes'
- Enter Node name (virt-058 in this case). Clicking 'Next'
- The authentication is checked: 'Success alert:The node is prepared for adding to the cluster.' (also checked with a non-authenticated node; the authentication form was shown). Clicking 'Next'
- The next step is for specifying node addresses, which is optional. Clicking 'Next'
- Configuring sbd in the next step; the cluster does not have sbd set up: 'Info alert:Sbd has not been detected on the cluster.'. Clicking 'Next'
- Review:
Review new resource configuration
Node name
    virt-058
Node addresses
    No address configured
- Clicking 'Finish':
Add node "virt-058" progress
Adding node
- Messages from adding the node to the cluster are shown
- Start cluster on the node

[root@virt-059 ~]# pcs status nodes | grep "Pacemaker Nodes" -A 5
Pacemaker Nodes:
 Online: virt-058 virt-059 virt-060
 Standby:
 Standby with resource(s) running:
 Maintenance:
 Offline:

> OK: node was added
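
CLI equivalent of the node add wizard (sketch, not part of the original run; run from an existing cluster node, pcs 0.10 syntax assumed):

[root@virt-059 ~]# pcs cluster node add virt-058 --start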


4. Improved fence device detail
-------------------------------

4.1. description of fence agent

- In cluster menu, click on 'Fence Devices'
- Choose one of the fence devices - fence-virt-058 in this case
- In the fence device Detail, there is a Description section present:
 Description
Type
    fence_xvm (stonith)
Description
    Fence agent for virtual machines
Full description 

> OK

- Clicking on 'Full description':
fence_xvm is an I/O Fencing agent which can be used withvirtual machines.

> There is typo 'withvirtual'. Will be resolved in fence-device component bz: bz1927171
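
The short and full descriptions come from the fence agent metadata and can be cross-checked with (not part of the original run):

[root@virt-058 ~]# pcs stonith describe fence_xvm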


4.2. display arguments with filtering and help

- In fence device menu, click on 'Arguments'
- There is a list of all set fence device arguments and their values. If a value is the default, it is indicated there,
e.g.:
ipport
    1229
    Default value

- Every argument also has a help message, e.g. for ipport:
TCP, Multicast, VMChannel, or VM socket port (default=1229)

- There is a search field that dynamically filters the arguments as you type
- There is filtering by importance, showing Required, Optional, Advanced, or any combination of them

## Changing argument

[root@virt-058 ~]# pcs stonith update fence-virt-058 timeout=25

In web UI:
timeout 25

> OK
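
CLI cross-check of the configured fence device arguments (not part of the original run; pcs 0.10 assumed):

[root@virt-058 ~]# pcs stonith config fence-virt-058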


4.3. actions refresh, cleanup, delete

## refresh and cleanup

# Generating failures

[root@virt-058 ~]# pcs stonith update fence-virt-058 ip_family=error
[root@virt-058 ~]# pcs status | grep fence-virt-058
  * fence-virt-058	(stonith:fence_xvm):	 Stopped
  * fence-virt-058_start_0 on virt-060 'error' (1): call=62, status='complete', exitreason='', last-rc-change='2021-02-09 16:44:54 +01:00', queued=0ms, exec=1078ms
  * fence-virt-058_start_0 on virt-058 'error' (1): call=70, status='complete', exitreason='', last-rc-change='2021-02-09 16:44:52 +01:00', queued=0ms, exec=1071ms
  * fence-virt-058_start_0 on virt-059 'error' (1): call=66, status='complete', exitreason='', last-rc-change='2021-02-09 16:44:53 +01:00', queued=0ms, exec=1091ms

# In web UI:
- Click on the fence device (fence-virt-058 in this case)
- There is red warning alert at the fence devices list
- There are Issues in the fence device Detail:
Danger alert:Failed to start fence-virt-058 on Tue Feb 9 16:44:54 2021 on node virt-060:
Danger alert:Failed to start fence-virt-058 on Tue Feb 9 16:44:52 2021 on node virt-058:
Danger alert:Failed to start fence-virt-058 on Tue Feb 9 16:44:53 2021 on node virt-059: 

- Clicking on "Refresh" at the fence device page
- Confirming by clicking on "Refresh", alternative option is "Cancel":
This makes the cluster forget the complete operation history (including failures) of the fence device and re-detects its current state.

- The operation history (failures) was erased and the current state was re-detected (as the fence device is still failing, new failures were generated):
Danger alert:Failed to start fence-virt-058 on Tue Feb 9 16:49:08 2021 on node virt-060:
Danger alert:Failed to start fence-virt-058 on Tue Feb 9 16:49:06 2021 on node virt-058:
Danger alert:Failed to start fence-virt-058 on Tue Feb 9 16:49:07 2021 on node virt-059: 

> OK: The original failures (from ~16:44:54) were erased; the failures shown above are new ones generated after re-detection (from ~16:49:08)

- Click on 'Cleanup' at the fence device page
- Confirming by clicking 'Cleanup', alternative is 'Cancel'.

> Test failed:
Error message:
Danger alert:Communication error while: cleanup fence device "fence-virt-058". Details in the browser console.
New bz was created for this case: bz1927394


## delete a fence agent

- Click on 'Delete' under the three dots at fence device page
- Confirming by clicking on "Delete", alternative option is "Cancel":
Fence devicce "fence-virt-058" does not exist.
Fence device "fence-virt-058" does not exists in cluster STSRHTS11962.
- The fence device is no longer present in the fence devices list

> There is a typo in "devicce", which will be fixed in the following new web UI bz: bz1922996

[root@virt-058 ~]# pcs stonith status
  * fence-virt-059	(stonith:fence_xvm):	 Started virt-058
  * fence-virt-060	(stonith:fence_xvm):	 Started virt-059

> OK: The fence device is deleted
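
For reference, rough CLI equivalents of these fence device actions (sketch, not part of the original run; assuming the resource refresh/cleanup commands also accept stonith ids):

[root@virt-058 ~]# pcs resource refresh fence-virt-058
[root@virt-058 ~]# pcs resource cleanup fence-virt-058
[root@virt-058 ~]# pcs stonith delete fence-virt-058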


Result:
=======
An issue was found.
It is not possible to clean up failed operations from the history of a resource or a fence device.
Typos will be fixed in a follow-up new web UI bz (bz1922996).


Marking as VERIFIED for pcs-0.10.8-1.el8, with a new bz created for the broken cleanup issue: bz1927394

Comment 10 errata-xmlrpc 2021-05-18 15:12:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (pcs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:1737

