Bug 1667058
Summary: provide commands for changing corosync links in an existing cluster
Product: Red Hat Enterprise Linux 8
Reporter: Tomas Jelinek <tojeline>
Component: pcs
Assignee: Tomas Jelinek <tojeline>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: unspecified
Priority: high
Version: 8.0
CC: cfeist, cluster-maint, idevat, mlisik, mmazoure, nhostako, omular, slevine, tojeline
Target Milestone: rc
Keywords: FutureFeature
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Unspecified
Fixed In Version: pcs-0.10.1-8.el8
Doc Type: Enhancement
Doc Text:
.Commands for adding, changing, and removing corosync links have been added to `pcs`
The Kronosnet (knet) protocol allows you to add and remove knet links in running clusters. To support this feature, `pcs` now provides commands to add, change, and remove knet links and to change a udp/udpu link in an existing cluster. For information on adding and modifying links in an existing cluster, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_and_managing_high_availability_clusters/index#proc_changing-links-in-multiple-ip-cluster-clusternode-management[Adding and modifying links in an existing cluster].
Story Points: ---
Last Closed: 2019-11-05 20:39:40 UTC
Type: Bug
Bug Depends On: 1667090, 1682129
Description
Tomas Jelinek
2019-01-17 10:43:11 UTC
*** Bug 1615780 has been marked as a duplicate of this bug. ***

Created attachment 1559809 [details]
proposed fix
New commands have been added to pcs:
* pcs cluster link add <node_name>=<node_address>... [options <link options>]
* pcs cluster link delete <linknumber> [<linknumber>]...
* pcs cluster link remove <linknumber> [<linknumber>]...
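The transcripts in this report verify the result by dumping the whole corosync.conf. As a quicker check, the link numbers currently defined can be read from the `ringX_addr` keys in the nodelist section. The following is a minimal sketch of such a check; `list_links` is a hypothetical helper written for this report, not a pcs command:

```shell
# List the corosync link numbers defined in a corosync.conf nodelist.
# Hypothetical helper (not part of pcs): it extracts the N from every
# "ringN_addr" key, which is how corosync.conf names per-link addresses.
list_links() {
    grep -o 'ring[0-9]\+_addr' "$1" \
        | sed 's/ring\([0-9]\+\)_addr/\1/' \
        | sort -un
}
```

For the configuration shown below (ring0_addr plus the added ring3_addr), `list_links /etc/corosync/corosync.conf` would print 0 and 3.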
After fix:

[root@rhel81-node1 ~]# rpm -q pcs
pcs-0.10.1-6.el8.x86_64

[root@rhel81-node1 ~]# pcs cluster link add rh81-1=192.168.121.1 rh81-2=192.168.121.2 options link_priority=10 linknumber=3
Sending updated corosync.conf to nodes...
rh81-2: Succeeded
rh81-1: Succeeded
Corosync configuration reloaded

[root@rhel81-node1 ~]# pcs cluster corosync
totem {
    version: 2
    cluster_name: rhel-8.1-cluster
    transport: knet
    crypto_cipher: aes256
    crypto_hash: sha256
    interface {
        knet_link_priority: 10
        linknumber: 3
    }
}
nodelist {
    node {
        ring0_addr: rh81-1
        name: rh81-1
        nodeid: 1
        ring3_addr: 192.168.121.1
    }
    node {
        ring0_addr: rh81-2
        name: rh81-2
        nodeid: 2
        ring3_addr: 192.168.121.2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    timestamp: on
}

[root@rhel81-node1 ~]# pcs cluster link remove 3
Sending updated corosync.conf to nodes...
rh81-2: Succeeded
rh81-1: Succeeded
Corosync configuration reloaded

[root@rhel81-node1 ~]# pcs cluster corosync
totem {
    version: 2
    cluster_name: rhel-8.1-cluster
    transport: knet
    crypto_cipher: aes256
    crypto_hash: sha256
}
nodelist {
    node {
        ring0_addr: rh81-1
        name: rh81-1
        nodeid: 1
    }
    node {
        ring0_addr: rh81-2
        name: rh81-2
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    timestamp: on
}

Created attachment 1569012 [details]
proposed fix 2
New command has been added to pcs:
* pcs cluster link update <linknumber> [<node_name>=<node_address>...] [options <link options>]
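To spot-check that a link update took effect without reading the whole configuration, the address a node uses on a given link can be pulled from the `ring<N>_addr:` key of that node's nodelist entry. The sketch below is a hypothetical verification helper written for this report, not a pcs command:

```shell
# Hypothetical helper (not part of pcs): print the address a given node
# uses for a given link number, by reading the "ring<N>_addr:" key from
# that node's block in a corosync.conf nodelist.
link_addr() {  # usage: link_addr <conf-file> <node-name> <link-number>
    awk -v node="$2" -v key="ring$3_addr:" '
        /node {/   { n = ""; a = "" }                  # start of a node block: reset state
        $1 == "name:" { n = $2 }                       # remember the node name
        $1 == key     { a = $2 }                       # remember the requested link address
        /}/ { if (n == node && a != "") { print a; a = "" } }  # end of block: report a match
    ' "$1"
}
```

After the `pcs cluster link update 3 ant8=192.168.121.3 ...` step shown below, `link_addr /etc/corosync/corosync.conf ant8 3` would be expected to print the new address.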
[ant8 ~] $ rpm -q pcs
pcs-0.10.1-8.el8.x86_64

[ant8 ~] $ pcs cluster corosync | grep nodelist -A12 && pcs cluster corosync | grep interface
nodelist {
    node {
        ring0_addr: ant8
        name: ant8
        nodeid: 1
    }
    node {
        ring0_addr: bee8
        name: bee8
        nodeid: 2
    }
}

> add link

[ant8 ~] $ pcs cluster link add ant8=192.168.121.1 bee8=192.168.121.2 options link_priority=10 linknumber=3
Sending updated corosync.conf to nodes...
bee8: Succeeded
ant8: Succeeded
ant8: Corosync configuration reloaded

[ant8 ~] $ pcs cluster corosync | grep nodelist -A14 && pcs cluster corosync | grep interface -A3
nodelist {
    node {
        ring0_addr: ant8
        name: ant8
        nodeid: 1
        ring3_addr: 192.168.121.1
    }
    node {
        ring0_addr: bee8
        name: bee8
        nodeid: 2
        ring3_addr: 192.168.121.2
    }
}
    interface {
        knet_link_priority: 10
        linknumber: 3
    }

> update link

[ant8 ~] $ pcs cluster stop --all
ant8: Stopping Cluster (pacemaker)...
bee8: Stopping Cluster (pacemaker)...
ant8: Stopping Cluster (corosync)...
bee8: Stopping Cluster (corosync)...

[ant8 ~] $ pcs cluster link update 3 ant8=192.168.121.3 options link_priority=11
Checking corosync is not running on nodes...
ant8: corosync is not running
bee8: corosync is not running
Sending updated corosync.conf to nodes...
ant8: Succeeded
bee8: Succeeded

[ant8 ~] $ pcs cluster corosync | grep nodelist -A14 && pcs cluster corosync | grep interface -A3
nodelist {
    node {
        ring0_addr: ant8
        name: ant8
        nodeid: 1
        ring3_addr: 192.168.121.3
    }
    node {
        ring0_addr: bee8
        name: bee8
        nodeid: 2
        ring3_addr: 192.168.121.2
    }
}
    interface {
        knet_link_priority: 11
        linknumber: 3
    }

[ant8 ~] $ pcs cluster start --all
ant8: Starting Cluster...
bee8: Starting Cluster...

> remove link

[ant8 ~] $ pcs cluster link remove 3
Sending updated corosync.conf to nodes...
ant8: Succeeded
bee8: Succeeded
ant8: Corosync configuration reloaded

[ant8 ~] $ pcs cluster corosync | grep nodelist -A12 && pcs cluster corosync | grep interface
nodelist {
    node {
        ring0_addr: ant8
        name: ant8
        nodeid: 1
    }
    node {
        ring0_addr: bee8
        name: bee8
        nodeid: 2
    }
}

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3311