Bug 1793860 - No method for storing passwords outside CIB
Summary: No method for storing passwords outside CIB
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: pre-dev-freeze
Target Release: 8.3
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1870873
Blocks: 1803995
 
Reported: 2020-01-22 06:00 UTC by Strahil Nikolov
Modified: 2021-03-02 16:42 UTC
CC: 7 users

Fixed In Version: pacemaker-2.0.4-2.el8
Doc Type: No Doc Update
Doc Text:
Any corresponding pcs functionality should be documented instead
Clone Of:
Environment:
Last Closed: 2020-11-04 04:00:53 UTC
Type: Feature Request
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 203243 0 None None None 2020-06-17 18:59:24 UTC
Red Hat Knowledge Base (Solution) 5570561 0 None None None 2020-11-11 15:40:30 UTC

Description Strahil Nikolov 2020-01-22 06:00:40 UTC
Description of problem:
There is no way to store passwords and other values outside the CIB.
In crmsh there is such functionality:

crm resource secret <rsc> set <param> <value>

Version-Release number of selected component (if applicable):
pacemaker-2.0.2-3.el8_1.2.x86_64.rpm

How reproducible:
Always

Steps to Reproduce:

There is no such functionality.

Actual results:
All passwords are in CIB.


Expected results:
To be able to stash any parameter into and out of the CIB.

Comment 1 Ken Gaillot 2020-02-14 22:57:37 UTC
QA: This feature allows users to set sensitive resource parameter values in a separate file outside the pacemaker CIB. The file must be kept in sync across all nodes. Under the hood, it looks like this:

* Sensitive values are replaced with 'lrm://' in the CIB
* The actual value is stored in a plain text file /var/lib/pacemaker/lrm/secrets/<resource-id>/<parameter-name>
* Each secrets file has a corresponding <filename>.sign file with an MD5 hash of the secret

The user does not manage these directly, but via the cibsecret tool. cibsecret requires that Pacemaker is running on the local node, and that all active nodes are reachable via pssh, pdsh, or ssh. If any nodes are not active when the command is run, "cibsecret sync" must be run later when they are active to keep the secrets in sync.

cibsecret is used like this:

cibsecret set <resource-id> <parameter-name> <value>
-> If you want a parameter to be secret from the beginning, this will create a local secret file and hash file for the given resource parameter, sync the files to all active nodes, then set the parameter value in the CIB to 'lrm://'.

cibsecret get <resource-id> <parameter-name>
-> This shows the local value of the given resource parameter if it is set as a secret.

cibsecret delete <resource-id> <parameter-name>
-> If the given resource parameter is a secret, this removes the parameter from the CIB entirely and removes the secret files.

cibsecret stash <resource-id> <parameter-name>
-> If you have an existing parameter directly in the CIB that you want to convert into a secret, this takes the existing value and does the equivalent of "cibsecret set".

cibsecret unstash <resource-id> <parameter-name>
-> If you have an existing secret parameter that you want to be directly in the CIB again, this puts the secret value directly in the CIB and gets rid of the secret files.

cibsecret sync
-> This synchronizes all secret files from the local node to all other active nodes.

cibsecret check <resource-id> <parameter-name>
-> This compares the value of a secret file with its hash. The hash is mainly intended to discourage manual editing of the secret file, though it could also detect file corruption.
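The comparison that "cibsecret check" performs can be illustrated with a self-contained sketch. This uses a temporary directory rather than /var/lib/pacemaker/lrm/secrets, and the exact on-disk format of the .sign file is an assumption here (plain md5sum output of the secret value):

```shell
# Mock up a secret file and its hash file, then re-verify, roughly as
# "cibsecret check" would. Paths and the .sign format are assumptions.
dir=$(mktemp -d)
printf 'SecreT_PASS' > "$dir/passwd"
md5sum < "$dir/passwd" > "$dir/passwd.sign"

# Recompute the hash and compare it with the stored one
if [ "$(md5sum < "$dir/passwd")" = "$(cat "$dir/passwd.sign")" ]; then
    status="OK"
else
    status="MD5 hash mismatch"
fi
echo "$status"
rm -rf "$dir"
```

Editing the secret file without regenerating the .sign file makes the comparison fail, which is exactly what "cibsecret check" reports as a hash mismatch.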

Comment 2 Patrik Hagara 2020-03-23 09:19:12 UTC
qa_ack+, how to test in comment#1

Comment 5 Ken Gaillot 2020-06-11 21:01:31 UTC
It is worth highlighting these points from the newly updated cibsecret help text:

Known limitations:

This command can only be run from full cluster nodes (not Pacemaker Remote nodes).

Changes are not atomic, so the cluster may use different values while a change is in progress. To avoid problems, it is recommended to put the cluster in maintenance mode when making changes with this command.

Changes in secret values do not trigger a reload or restart of the affected resource, since they do not change the CIB. If a response is desired before the next cluster recheck interval, any CIB change (such as setting a node attribute) will trigger it.

If any node is down when changes to secrets are made, or a new node is later added to the cluster, it may have different values when it joins the cluster, before "cibsecret sync" is run. To avoid this, it is recommended to run the sync command (from another node) before starting Pacemaker on the node.
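For the reload limitation above, one low-impact way to force a CIB change is to set and immediately remove a throwaway node attribute. This is a sketch only; the attribute name "secret-poke" is arbitrary and has no meaning to cibsecret or Pacemaker:

```shell
# Any CIB change triggers a new transition, which makes the cluster
# re-evaluate resource parameters (including freshly changed secrets).
# The attribute name here is a placeholder, not a real Pacemaker option.
node=$(crm_node -n)
crm_attribute --node "$node" --name secret-poke --update 1
crm_attribute --node "$node" --name secret-poke --delete
```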

###

Additionally, cibsecret will take advantage of fping, pssh, or pdsh if installed, but none of those are supported in RHEL. fping does greatly speed up synchronization of secrets, while parallel ssh has a smaller impact. In any case, ssh must be installed on all cluster nodes, and if it's not passwordless be prepared to enter the password a whole lot of times when running cibsecret.
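Setting up passwordless root ssh between the cluster nodes avoids those repeated prompts. A sketch, assuming root access and using the node names from this cluster as placeholders:

```shell
# Generate a key once (skipped if root already has one), then push the
# public key to the other cluster nodes. Node names are placeholders.
[ -f /root/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519
for node in kiff-02 kiff-03; do
    ssh-copy-id -i /root/.ssh/id_ed25519.pub "root@$node"
done
```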

Comment 9 Markéta Smazová 2020-09-23 11:26:17 UTC
During verification of this bz two bugs were found and filed: bug 1870873 (see case 2) and bug 1881537 (see case 3).

It is important to note that cibsecret should not be used with any resource that is allowed to run on a remote node.
Either it should only be used in clusters that don't have remote nodes, or location constraints should be used to 
ban affected resources from all remote nodes.
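A hedged sketch of the second option, using pcs with the resource and node names from case 3 below as placeholders:

```shell
# Ban a resource that uses cibsecret from a remote node, so it can only
# run on full cluster nodes where the secret files are synchronized.
pcs constraint location dummy2 avoids virt-041
```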



>   [root@kiff-01 ~]# rpm -q pacemaker
>   pacemaker-2.0.4-5.el8.x86_64


Check that the new command is documented in the cibsecret man page and help text.

>   [root@kiff-01 ~]# man cibsecret


>   PACEMAKER(8)                        System Administration Utilities                       PACEMAKER(8)

>   NAME
>          Pacemaker - Part of the Pacemaker cluster resource manager

>   DESCRIPTION
>          cibsecret - manage sensitive information in Pacemaker CIB

>      Usage:
>                 cibsecret [<options>] <command> [<parameters>]

>   OPTIONS
>          --help Show this message, then exit

>          --version
>                 Display version information, then exit

>          -C     Don't read or write the CIB

>      Commands and their parameters:
>                 set <resource-id> <resource-parameter> <value>

>                 Set the value of a sensitive resource parameter.

>                 get <resource-id> <resource-parameter>

>                 Display the locally stored value of a sensitive resource parameter.

>                 check <resource-id> <resource-parameter>

>                 Verify  that  the  locally  stored  value  of a sensitive resource parameter matches its
>                 locally stored MD5 hash.

>                 stash <resource-id> <resource-parameter>

>                 Make a non-sensitive resource parameter that is already in the CIB sensitive  (move  its
>                 value to a locally stored and protected file).  This may not be used with -C.

>                 unstash <resource-id> <resource-parameter>

>                 Make  a  sensitive resource parameter that is already in the CIB non-sensitive (move its
>                 value from the locally stored file to the CIB).  This may not be used with -C.

>                 delete <resource-id> <resource-parameter>

>                 Remove a sensitive resource parameter value.

>                 sync

>                 Copy all locally stored secrets to all other nodes.

>          This command manages sensitive resource parameter values that should not be stored directly  in
>          Pacemaker's Cluster Information Base (CIB). Such values are handled by storing a special string
>          directly in the CIB that tells Pacemaker to look in a separate, protected file for  the  actual
>          value.

>          The  secret  files  are  not encrypted, but protected by file system permissions such that only
>          root can read or modify them.

>          Since the secret files are stored locally, they must be synchronized across all cluster  nodes.
>          This  command handles the synchronization using (in order of preference) pssh, pdsh, or ssh, so
>          one of those must be installed. Before synchronizing, this command will ping the cluster  nodes
>          to  determine  which  are  alive,  using  fping if it is installed, otherwise the ping command.
>          Installing fping is strongly recommended for better performance.

>          Known limitations:

>                 This command can only be run from full cluster nodes (not Pacemaker Remote nodes).

>                 Changes are not atomic, so the cluster may use different values while  a  change  is  in
>                 progress.  To  avoid  problems, it is recommended to put the cluster in maintenance mode
>                 when making changes with this command.

>                 Changes in secret values do not trigger a reload or restart of  the  affected  resource,
>                 since  they  do  not  change  the  CIB. If a response is desired before the next cluster
>                 recheck interval, any CIB change (such as setting a node attribute) will trigger it.

>                 If any node is down when changes to secrets are made, or a new node is  later  added  to
>                 the  cluster,  it may have different values when it joins the cluster, before "cibsecret
>                 sync" is run. To avoid this, it is recommended to run the  sync  command  (from  another
>                 node) before starting Pacemaker on the node.

>   EXAMPLES
>                 cibsecret set ipmi_node1 passwd SecreT_PASS

>                 cibsecret get ipmi_node1 passwd

>                 cibsecret check ipmi_node1 passwd

>                 cibsecret stash ipmi_node2 passwd

>                 cibsecret sync

>   AUTHOR
>          Written by Andrew Beekhof

>   Pacemaker 2.0.4-6.el8                         August 2020                                 PACEMAKER(8)


>   [root@kiff-01 ~]# cibsecret --help
>   cibsecret - manage sensitive information in Pacemaker CIB

>   Usage:
>       cibsecret [<options>] <command> [<parameters>]

>   Options:
>       --help       Show this message, then exit
>       --version    Display version information, then exit
>       -C           Don't read or write the CIB

>   Commands and their parameters:
>       set <resource-id> <resource-parameter> <value>
>           Set the value of a sensitive resource parameter.

>       get <resource-id> <resource-parameter>
>           Display the locally stored value of a sensitive resource parameter.

>       check <resource-id> <resource-parameter>
>           Verify that the locally stored value of a sensitive resource parameter
>           matches its locally stored MD5 hash.

>       stash <resource-id> <resource-parameter>
>           Make a non-sensitive resource parameter that is already in the CIB
>           sensitive (move its value to a locally stored and protected file).
>           This may not be used with -C.

>       unstash <resource-id> <resource-parameter>
>           Make a sensitive resource parameter that is already in the CIB
>           non-sensitive (move its value from the locally stored file to the CIB).
>           This may not be used with -C.

>       delete <resource-id> <resource-parameter>
>           Remove a sensitive resource parameter value.

>       sync
>           Copy all locally stored secrets to all other nodes.

>   This command manages sensitive resource parameter values that should not be
>   stored directly in Pacemaker's Cluster Information Base (CIB). Such values
>   are handled by storing a special string directly in the CIB that tells
>   Pacemaker to look in a separate, protected file for the actual value.

>   The secret files are not encrypted, but protected by file system permissions
>   such that only root can read or modify them.

>   Since the secret files are stored locally, they must be synchronized across all
>   cluster nodes. This command handles the synchronization using (in order of
>   preference) pssh, pdsh, or ssh, so one of those must be installed. Before
>   synchronizing, this command will ping the cluster nodes to determine which are
>   alive, using fping if it is installed, otherwise the ping command. Installing
>   fping is strongly recommended for better performance.

>   Known limitations:

>       This command can only be run from full cluster nodes (not Pacemaker Remote
>       nodes).

>       Changes are not atomic, so the cluster may use different values while a
>       change is in progress. To avoid problems, it is recommended to put the
>       cluster in maintenance mode when making changes with this command.

>       Changes in secret values do not trigger a reload or restart of the affected
>       resource, since they do not change the CIB. If a response is desired before
>       the next cluster recheck interval, any CIB change (such as setting a node
>       attribute) will trigger it.

>       If any node is down when changes to secrets are made, or a new node is
>       later added to the cluster, it may have different values when it joins the
>       cluster, before "cibsecret sync" is run. To avoid this, it is recommended to
>       run the sync command (from another node) before starting Pacemaker on the
>       node.

>   Examples:

>       cibsecret set ipmi_node1 passwd SecreT_PASS

>       cibsecret get ipmi_node1 passwd

>       cibsecret check ipmi_node1 passwd

>       cibsecret stash ipmi_node2 passwd

>       cibsecret sync



case 1
-------

Put the cluster in maintenance mode, verify the configuration of the stonith resource, and make a resource parameter
secret. Try to manually edit the file where the secret parameter value is saved, then compare the new value in the
secret file with its hash. Set the secret parameter value back to the original and convert the secret parameter back
to non-secret. Turn off the cluster maintenance mode.

>   [root@kiff-01 ~]# rpm -q pacemaker
>   pacemaker-2.0.4-5.el8.x86_64

Put the cluster in maintenance mode.

>   [root@kiff-01 ~]# pcs property set maintenance-mode=true
>   [root@kiff-01 ~]# pcs status
>   Cluster name: kiff
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: kiff-02 (version 2.0.4-5.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Thu Aug 20 13:36:32 2020
>     * Last change:  Thu Aug 20 13:36:28 2020 by root via cibadmin on kiff-01
>     * 3 nodes configured
>     * 9 resource instances configured
>
>                 *** Resource management is DISABLED ***
>     The cluster will not attempt to start, stop or recover services
>
>   Node List:
>     * Online: [ kiff-01 kiff-02 kiff-03 ]
>
>   Full List of Resources:
>     * fencing-kiff01	(stonith:fence_ipmilan):	 Started kiff-01 (unmanaged)
>     * fencing-kiff02	(stonith:fence_ipmilan):	 Started kiff-02 (unmanaged)
>     * fencing-kiff03	(stonith:fence_ipmilan):	 Started kiff-03 (unmanaged)
>     * Clone Set: locking-clone [locking] (unmanaged):
>       * Resource Group: locking:0:
>         * dlm	(ocf::pacemaker:controld):	 Started kiff-02 (unmanaged)
>         * lvmlockd	(ocf::heartbeat:lvmlockd):	 Started kiff-02 (unmanaged)
>       * Resource Group: locking:1:
>         * dlm	(ocf::pacemaker:controld):	 Started kiff-01 (unmanaged)
>         * lvmlockd	(ocf::heartbeat:lvmlockd):	 Started kiff-01 (unmanaged)
>       * Resource Group: locking:2:
>         * dlm	(ocf::pacemaker:controld):	 Started kiff-03 (unmanaged)
>         * lvmlockd	(ocf::heartbeat:lvmlockd):	 Started kiff-03 (unmanaged)
>
>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled

Verify the configuration of the stonith resource and choose some sensitive parameters that should be set as secret.
In this case they are `login` and `passwd`.

>   [root@kiff-01 ~]# pcs stonith config fencing-kiff01
>    Resource: fencing-kiff01 (class=stonith type=fence_ipmilan)
>     Attributes: ipaddr=kiff-01-ilo login=Secret_Login passwd=Secret_Pass pcmk_host_list=kiff-01
>     Operations: monitor interval=60s (fencing-kiff01-monitor-interval-60s)

Use `cibsecret get` to display the `login` parameter. Since `login` hasn't been set as a secret yet, it won't work.

>   [root@kiff-01 ~]# cibsecret get fencing-kiff01 login
>   ERROR: resource fencing-kiff01 parameter login not set as secret, nothing to check

Set `login` and `passwd` for `fencing-kiff01` stonith resource as a secret using `cibsecret stash`.

>   [root@kiff-01 ~]# cibsecret stash fencing-kiff01 login
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/fencing-kiff01/login to  kiff-01 kiff-02 kiff-03  ...
>   Set 'fencing-kiff01' option: id=fencing-kiff01-instance_attributes-login name=login value=lrm://

>   [root@kiff-01 ~]# cibsecret stash fencing-kiff01 passwd
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/fencing-kiff01/passwd to  kiff-01 kiff-02 kiff-03  ...
>   Set 'fencing-kiff01' option: id=fencing-kiff01-instance_attributes-passwd name=passwd value=lrm://

Using `cibsecret stash` on a nonexistent parameter won't work.

>   [root@kiff-01 ~]# cibsecret stash fencing-kiff01 password
>   ERROR: nothing to stash for resource fencing-kiff01 parameter password

Verify whether the `login` and `passwd` values are visible in the Pacemaker CIB. The values are no longer visible; they were replaced by the "lrm://" string.

>   [root@kiff-01 ~]# pcs stonith config fencing-kiff01
>    Resource: fencing-kiff01 (class=stonith type=fence_ipmilan)
>     Attributes: ipaddr=kiff-01-ilo login=lrm:// passwd=lrm:// pcmk_host_list=kiff-01
>     Operations: monitor interval=60s (fencing-kiff01-monitor-interval-60s)

Use `cibsecret` to display locally stored values of `login` and `passwd` resource attributes.

>   [root@kiff-01 ~]# cibsecret get fencing-kiff01 login
>   Secret_Login
>   [root@kiff-01 ~]# cibsecret get fencing-kiff01 passwd
>   Secret_Pass

Use `cibsecret check` to verify that the locally stored secret values of the resource attributes `login` and `passwd`
match their locally stored MD5 hashes.

>   [root@kiff-01 ~]# cibsecret check fencing-kiff01 login
>   [root@kiff-01 ~]# echo $?
>   0
>   [root@kiff-01 ~]# cibsecret check fencing-kiff01 passwd
>   [root@kiff-01 ~]# echo $?
>   0

Check that secret files for both `login` and `passwd` were synchronized across all cluster nodes.

>   [root@kiff-01 ~]# cat /var/lib/pacemaker/lrm/secrets/fencing-kiff01/passwd
>   Secret_Pass
>   [root@kiff-01 ~]# cat /var/lib/pacemaker/lrm/secrets/fencing-kiff01/login
>   Secret_Login

>   [root@kiff-02 ~]# cat /var/lib/pacemaker/lrm/secrets/fencing-kiff01/passwd
>   Secret_Pass
>   [root@kiff-02 ~]# cat /var/lib/pacemaker/lrm/secrets/fencing-kiff01/login
>   Secret_Login

>   [root@kiff-03 ~]# cat /var/lib/pacemaker/lrm/secrets/fencing-kiff01/passwd
>   Secret_Pass
>   [root@kiff-03 ~]# cat /var/lib/pacemaker/lrm/secrets/fencing-kiff01/login
>   Secret_Login

Try to edit the secret file containing the `passwd` value.

>   [root@kiff-01 ~]# sed -i 's/Secret_Pass/Not_Secret_Password/' /var/lib/pacemaker/lrm/secrets/fencing-kiff01/passwd
>   [root@kiff-01 ~]# cat /var/lib/pacemaker/lrm/secrets/fencing-kiff01/passwd
>   Not_Secret_Password

Check the `passwd` value. In this case both the `cibsecret get` and `cibsecret check` commands result in an error,
because the locally stored secret file is corrupted.

>   [root@kiff-01 ~]# cibsecret get fencing-kiff01 passwd
>   ERROR: MD5 hash mismatch for resource fencing-kiff01 parameter passwd

>   [root@kiff-01 ~]# cibsecret check fencing-kiff01 passwd
>   ERROR: MD5 hash mismatch for resource fencing-kiff01 parameter passwd

In the Pacemaker CIB, the `passwd` value is still replaced by the "lrm://" string.

>   [root@kiff-01 ~]# pcs stonith config fencing-kiff01
>    Resource: fencing-kiff01 (class=stonith type=fence_ipmilan)
>     Attributes: ipaddr=kiff-01-ilo login=lrm:// passwd=lrm:// pcmk_host_list=kiff-01
>     Operations: monitor interval=60s (fencing-kiff01-monitor-interval-60s)

Use `cibsecret set` to restore the original `passwd` value and check that it is correct.

>   [root@kiff-01 ~]# cibsecret set fencing-kiff01 passwd Secret_Pass
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/fencing-kiff01/passwd to  kiff-01 kiff-02 kiff-03  ...
>   Set 'fencing-kiff01' option: id=fencing-kiff01-instance_attributes-passwd name=passwd value=lrm://

>   [root@kiff-01 ~]# cibsecret check fencing-kiff01 passwd
>   [root@kiff-01 ~]# echo $?
>   0

>   [root@kiff-01 ~]# cibsecret get fencing-kiff01 passwd
>   Secret_Pass

Use `cibsecret unstash` to make `login` and `passwd` attributes visible again.

>   [root@kiff-01 ~]# cibsecret unstash fencing-kiff01 login
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/fencing-kiff01/login to  kiff-01 kiff-02 kiff-03  ...
>   Set 'fencing-kiff01' option: id=fencing-kiff01-instance_attributes-login name=login value=Secret_Login

>   [root@kiff-01 ~]# cibsecret unstash fencing-kiff01 passwd
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/fencing-kiff01/passwd to  kiff-01 kiff-02 kiff-03  ...
>   Set 'fencing-kiff01' option: id=fencing-kiff01-instance_attributes-passwd name=passwd value=Secret_Pass

Verify that both values are visible in stonith resource configuration.

>   [root@kiff-01 ~]# pcs stonith config fencing-kiff01
>    Resource: fencing-kiff01 (class=stonith type=fence_ipmilan)
>     Attributes: ipaddr=kiff-01-ilo login=Secret_Login passwd=Secret_Pass pcmk_host_list=kiff-01
>     Operations: monitor interval=60s (fencing-kiff01-monitor-interval-60s)

Using `cibsecret unstash` on a nonexistent parameter won't work.

>   [root@kiff-01 ~]# cibsecret unstash fencing-kiff01 password
>   ERROR: nothing to unstash for resource fencing-kiff01 parameter password

Turn off maintenance mode for the cluster.

>   [root@kiff-01 ~]# pcs property set maintenance-mode=false
>   [root@kiff-01 ~]# pcs status
>   Cluster name: kiff
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: kiff-02 (version 2.0.4-5.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Thu Aug 20 13:46:39 2020
>     * Last change:  Thu Aug 20 13:46:30 2020 by root via cibadmin on kiff-01
>     * 3 nodes configured
>     * 9 resource instances configured
>
>   Node List:
>     * Online: [ kiff-01 kiff-02 kiff-03 ]
>
>   Full List of Resources:
>     * fencing-kiff01	(stonith:fence_ipmilan):	 Started kiff-01
>     * fencing-kiff02	(stonith:fence_ipmilan):	 Started kiff-02
>     * fencing-kiff03	(stonith:fence_ipmilan):	 Started kiff-03
>     * Clone Set: locking-clone [locking]:
>       * Started: [ kiff-01 kiff-02 kiff-03 ]
>
>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled


case 2
--------

Configure a 3-node cluster. Remove one node and put the cluster in maintenance mode. Set a resource parameter as a
secret on a live node, add the removed node back to the cluster, and start Corosync on it. Run `cibsecret sync` and
then start Pacemaker on the new node. Turn off the cluster maintenance mode and delete the secret resource parameter value.

>   [root@kiff-01 ~]# pcs status
>   Cluster name: kiff
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: kiff-03 (version 2.0.4-5.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Thu Sep 10 14:22:37 2020
>     * Last change:  Mon Sep  7 18:13:50 2020 by root via cibadmin on kiff-01
>     * 3 nodes configured
>     * 9 resource instances configured
>
>   Node List:
>     * Online: [ kiff-01 kiff-02 kiff-03 ]
>
>   Full List of Resources:
>     * fencing-kiff01	(stonith:fence_ipmilan):	 Started kiff-01
>     * fencing-kiff02	(stonith:fence_ipmilan):	 Started kiff-02
>     * fencing-kiff03	(stonith:fence_ipmilan):	 Started kiff-03
>     * Clone Set: locking-clone [locking]:
>       * Started: [ kiff-01 kiff-02 kiff-03 ]
>
>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled

Remove node `kiff-02` from the cluster.

>   [root@kiff-01 ~]# pcs cluster node remove kiff-02
>   Destroying cluster on hosts: 'kiff-02'...
>   kiff-02: Successfully destroyed cluster
>   Sending updated corosync.conf to nodes...
>   kiff-01: Succeeded
>   kiff-03: Succeeded
>   kiff-01: Corosync configuration reloaded

Put the cluster in maintenance mode.

>   [root@kiff-01 ~]# pcs property set maintenance-mode=true
>   [root@kiff-01 ~]# echo $?
>   0

Set the `delay` attribute for `fencing-kiff01` stonith resource as a secret and display its value.

>   [root@kiff-01 ~]# cibsecret set fencing-kiff01 delay 10
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/fencing-kiff01/delay to  kiff-01 kiff-03  ...
>   Set 'fencing-kiff01' option: id=fencing-kiff01-instance_attributes-delay set=fencing-kiff01-instance_attributes name=delay value=lrm://

>   [root@kiff-01 ~]# cibsecret get fencing-kiff01 delay
>   10

Add node `kiff-02` back to the cluster.

>   [root@kiff-01 ~]# pcs cluster node add kiff-02
>   No addresses specified for host 'kiff-02', using 'kiff-02'
>   Disabling sbd...
>   kiff-02: sbd disabled
>   Sending 'corosync authkey', 'pacemaker authkey' to 'kiff-02'
>   kiff-02: successful distribution of the file 'corosync authkey'
>   kiff-02: successful distribution of the file 'pacemaker authkey'
>   Sending updated corosync.conf to nodes...
>   kiff-01: Succeeded
>   kiff-03: Succeeded
>   kiff-02: Succeeded
>   kiff-01: Corosync configuration reloaded

On the new cluster node `kiff-02` start Corosync and verify that it started.

>   [root@kiff-02 ~]# systemctl start corosync.service
>   [root@kiff-02 ~]# systemctl is-active corosync.service
>   active

Check cluster nodes status.

>   [root@kiff-03 ~]# pcs status nodes 
>   Pacemaker Nodes:
>    Online: kiff-01 kiff-03
>    Standby: kiff-02
>    Standby with resource(s) running:
>    Maintenance:
>    Offline:
>   [...]

Run `cibsecret sync` to synchronize the secret file to the new node `kiff-02`. It results in "No such file or directory".

>   [root@kiff-03 ~]# cibsecret sync
>   INFO: syncing /var/lib/pacemaker/lrm/secrets to  kiff-01 kiff-02 kiff-03  ...
>   /var/lib/pacemaker/lrm/secrets: No such file or directory

After running the `cibsecret sync` command, the file containing the secret value was not synced across all cluster
nodes; instead, it was deleted.
This was corrected in pacemaker-2.0.4-6.el8. Please see bug 1870873, comment 5 for verification.

>   [root@kiff-01 ~]# ls -l /var/lib/pacemaker/lrm/secrets/fencing-kiff01
>   ls: cannot access '/var/lib/pacemaker/lrm/secrets/fencing-kiff01': No such file or directory

>   [root@kiff-02 ~]# ls -l /var/lib/pacemaker/lrm/secrets/fencing-kiff01
>   ls: cannot access '/var/lib/pacemaker/lrm/secrets/fencing-kiff01': No such file or directory

>   [root@kiff-03 ~]# ls -l /var/lib/pacemaker/lrm/secrets/fencing-kiff01
>   ls: cannot access '/var/lib/pacemaker/lrm/secrets/fencing-kiff01': No such file or directory

Start cluster services on the new node.

>   [root@kiff-01 ~]# pcs cluster start kiff-02
>   kiff-02: Starting Cluster...

Turn off cluster maintenance mode.

>   [root@kiff-01 ~]# pcs property set maintenance-mode=false
>   [root@kiff-01 ~]# pcs status
>   Cluster name: kiff
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: kiff-03 (version 2.0.4-5.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Thu Sep 10 14:28:44 2020
>     * Last change:  Thu Sep 10 14:27:13 2020 by hacluster via crmd on kiff-03
>     * 3 nodes configured
>     * 9 resource instances configured
>
>   Node List:
>     * Online: [ kiff-01 kiff-02 kiff-03 ]
>
>   Full List of Resources:
>     * fencing-kiff01	(stonith:fence_ipmilan):	 Started kiff-01
>     * fencing-kiff02	(stonith:fence_ipmilan):	 Started kiff-03
>     * fencing-kiff03	(stonith:fence_ipmilan):	 Started kiff-02
>     * Clone Set: locking-clone [locking]:
>       * Started: [ kiff-01 kiff-02 kiff-03 ]
>
>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled

Since all the secret files were deleted, displaying the secret value of the attribute `delay` now results in an error as well.

>   [root@kiff-01 ~]# cibsecret get fencing-kiff01 delay
>   ERROR: no MD5 hash for resource fencing-kiff01 parameter delay 

Verify whether the `delay` value is visible in the stonith resource configuration. The value is still replaced by the "lrm://" string.

>   [root@kiff-01 ~]# pcs stonith config fencing-kiff01
>    Resource: fencing-kiff01 (class=stonith type=fence_ipmilan)
>     Attributes: delay=lrm:// ipaddr=kiff-01-ilo login=Secret_Login passwd=Secret_Pass pcmk_host_list=kiff-01
>     Operations: monitor interval=60s (fencing-kiff01-monitor-interval-60s)

Delete the `delay` value and check that `fencing-kiff01` configuration has changed.

>   [root@kiff-01 ~]# cibsecret delete fencing-kiff01 delay
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/fencing-kiff01/delay to  kiff-02 kiff-03  ...
>   Deleted 'fencing-kiff01' option: id=fencing-kiff01-instance_attributes-delay name=delay

>   [root@kiff-01 ~]# pcs stonith config fencing-kiff01
>    Resource: fencing-kiff01 (class=stonith type=fence_ipmilan)
>     Attributes: ipaddr=kiff-01-ilo login=Secret_Login passwd=Secret_Pass pcmk_host_list=kiff-01
>     Operations: monitor interval=60s (fencing-kiff01-monitor-interval-60s)

Try to delete a nonexistent secret parameter. This prints an INFO message about syncing a secret file to the cluster
nodes, but no secret file is created or deleted.

>   [root@kiff-01 ~]# cibsecret delete fencing-kiff01 password
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/fencing-kiff01/password to  kiff-02 kiff-03  ...

>   [root@kiff-01 ~]#  ls -l /var/lib/pacemaker/lrm/secrets/fencing-kiff01/
>   total 0

case 3
-------

The `cibsecret` command can only be run from full cluster nodes (not Pacemaker Remote nodes).
This is tested on a 3-node cluster: two cluster nodes and one Pacemaker Remote node.

>   [root@virt-032 ~]# pcs status
>   Cluster name: STSRHTS7851
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-042 (version 2.0.4-5.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Fri Sep 11 16:57:08 2020
>     * Last change:  Fri Sep 11 16:53:55 2020 by root via cibadmin on virt-032
>     * 3 nodes configured
>     * 8 resource instances configured
>
>   Node List:
>     * Online: [ virt-032 virt-042 ]
>     * RemoteOnline: [ virt-041 ]
>
>   Full List of Resources:
>     * fence-virt-032	(stonith:fence_xvm):	 Started virt-042
>     * fence-virt-041	(stonith:fence_xvm):	 Started virt-042
>     * fence-virt-042	(stonith:fence_xvm):	 Started virt-032
>     * virt-041	(ocf::pacemaker:remote):	 Started virt-032
>     * dummy1	(ocf::pacemaker:Dummy):	 Started virt-041
>     * dummy2	(ocf::pacemaker:Dummy):	 Started virt-041
>     * dummy3	(ocf::pacemaker:Dummy):	 Started virt-032
>     * dummy4	(ocf::pacemaker:Dummy):	 Started virt-041
>
>   Daemon Status:
>     corosync: active/disabled
>     pacemaker: active/disabled
>     pcsd: active/enabled

Resource `dummy2` is running on the Pacemaker Remote node `virt-041`. Try to set its attribute `delay` as a secret
from the Pacemaker Remote node. It should not work.

>   [root@virt-041 ~]# cibsecret set dummy2 delay 10
>   ERROR: pacemaker not running? cibsecret needs pacemaker

On the cluster node `virt-032`, set the resource `dummy2` attribute `delay` as a secret.

>   [root@virt-032 ~]# cibsecret set dummy2 delay 10
>   INFO: syncing /var/lib/pacemaker/lrm/secrets/dummy2/delay to  virt-032 virt-042  ...
>   Set 'dummy2' option: id=dummy2-instance_attributes-delay set=dummy2-instance_attributes name=delay value=lrm://

Display the `delay` secret value and then verify that it was replaced by the "lrm://" string in the Pacemaker CIB.

>   [root@virt-032 ~]# cibsecret get dummy2 delay
>   10

>   [root@virt-032 ~]# pcs resource config dummy2
>    Resource: dummy2 (class=ocf provider=pacemaker type=Dummy)
>     Attributes: passwd=lrm://
>   [...]

Setting the `delay` attribute as a secret caused the `dummy2` resource to fail. This was reported as bug 1881537.

>   [root@virt-032 ~]# pcs status
>   Cluster name: STSRHTS7851
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-042 (version 2.0.4-5.el8-2deceaa3ae) - partition with quorum
>     * Last updated: Fri Sep 11 17:16:41 2020
>     * Last change:  Fri Sep 11 16:57:47 2020 by root via crm_resource on virt-032
>     * 3 nodes configured
>     * 8 resource instances configured
>
>   Node List:
>     * Online: [ virt-032 virt-042 ]
>     * RemoteOnline: [ virt-041 ]
>
>   Full List of Resources:
>     * fence-virt-032	(stonith:fence_xvm):	 Started virt-042
>     * fence-virt-041	(stonith:fence_xvm):	 Started virt-042
>     * fence-virt-042	(stonith:fence_xvm):	 Started virt-032
>     * virt-041	(ocf::pacemaker:remote):	 Started virt-032
>     * dummy1	(ocf::pacemaker:Dummy):	 Started virt-041
>     * dummy2	(ocf::pacemaker:Dummy):	 Stopped
>     * dummy3	(ocf::pacemaker:Dummy):	 Started virt-041
>     * dummy4	(ocf::pacemaker:Dummy):	 Started virt-041
>
>   Failed Resource Actions:
>     * dummy2_start_0 on virt-041 'not configured' (6): call=226, status='complete', exitreason='', last-rc-change='2020-09-11 16:57:47 +02:00', queued=0ms, exec=5ms
>
>   Daemon Status:
>     corosync: active/disabled
>     pacemaker: active/disabled
>     pcsd: active/enabled

The secret file exists only on cluster nodes `virt-032` and `virt-042`. It doesn't exist on Pacemaker remote node `virt-041`.

>   [root@virt-032 ~]# ls -l /var/lib/pacemaker/lrm/secrets/dummy2/
>   total 8
>   -rw-------. 1 root root  3 Sep 11 16:57 delay
>   -rw-------. 1 root root 33 Sep 11 16:57 delay.sign

>   [root@virt-042 ~]# ls -l /var/lib/pacemaker/lrm/secrets/dummy2/
>   total 8
>   -rw-------. 1 root root  3 Sep 11 16:57 delay
>   -rw-------. 1 root root 33 Sep 11 16:57 delay.sign

>   [root@virt-041 ~]# ls -l /var/lib/pacemaker/lrm/secrets/dummy2/
>   ls: cannot access '/var/lib/pacemaker/lrm/secrets/dummy2/': No such file or directory

Comment 12 errata-xmlrpc 2020-11-04 04:00:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:4804

