This request arises from several factors:

1. pcs fails at semantic versioning: in retrospect, not even the minor version has been bumped since at least Q1 2012, despite many big new features having landed there.
2. pcs has no clear roadmap declaring which features are planned for upcoming releases (provided that at least the minor version starts moving substantially faster, see 1.).
3. pcs currently has plenty of limitations, and there is hope some of them will be solved in the future, best if this were known ahead of time (see 2.).

As an illustrative case where the requested feature would help: clufter's "pcscmd" output (that is, a sequence of pcs commands, along with some opt-in/opt-out auxiliary decorations for convenience), specifically ccs2pcscmd-flatiron (-needle is yet to be implemented ATM) amongst others, checks that the subsequent commands are to be run from an upcoming member of the cluster under construction, because pcs currently does not allow running most of the subcommands non-locally (unlike, e.g., "pcs status pcsd", which expects node name/s): [bug 1210833]. The crazy (and a bit fragile) check [1] would be avoidable if one were able to know in advance that pcs is capable of fully distributed operation.

For this to work in a compatible way, the feature would have to be anchored either to a pcs release (see 2.) or to a "capabilities" list that one can query directly from pcs (either as part of --version or via a brand new option/subcommand, such as --capabilities).

[1] https://github.com/jnpkrn/clufter/commit/89f56a536ea2780b6929eb5386158296c4db50c2

Furthermore, distributed operation means several different versions of pcs/pcsd may be in play, so for some operations to succeed, a given capability (or version requirement) has to hold for all the nodes involved.
For this reason, there might be a global option, say --dry-run, that would just contact all the suitable nodes for a distributed operation and check whether the required capabilities are present for the operation to finish successfully.
FWIW, Pacemaker prefers "feature set" label for what I likely meant with "capabilities".
For demonstration:

# systemctl --version
> systemd 219
> +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN

# pacemakerd -F
> Pacemaker 1.1.12 (Build: a9c8177)
> Supporting v3.0.9: generated-manpages agent-manpages ncurses libqb-logging libqb-ipc upstart systemd nagios corosync-native atomic-attrd acls
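Output in this "feature set" style lends itself to programmatic consumption. A minimal Python sketch, assuming the "Supporting vX.Y.Z: feat1 feat2 ..." line format shown above (the parsing itself is illustrative, not part of any pcs/pacemaker API):

```python
def parse_feature_set(output):
    """Extract the feature-set version and feature names from output in
    the style of 'pacemakerd -F' (assumed format, see transcript above)."""
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Supporting v"):
            head, _, feats = line.partition(":")
            version = head[len("Supporting v"):]
            return version, set(feats.split())
    # No feature-set line found
    return None, set()
```

A caller could then gate behavior on, e.g., "systemd" being present in the returned set.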
Another example: http://www.open-zfs.org/wiki/Feature_Flags
As you've been unable to come up with anything to fill this gap for the last year, I resorted to tracking the features/extras I care about on my own in clufter:

https://pagure.io/clufter/blob/d46cb340257664289d9af30e9a0a39b5cbf988b9/f/facts.py#_256

As pcs already depends on clufter, I would suggest not duplicating the effort and keeping a single place tracking such knowledge. I'll happily accept patches extending such knowledge or adding convenient helpers to be called from pcs.
It is a good idea to put the list of capabilities into a separate plain-text file (or use JSON to store structures if needed) so that pcs and pcsd, as well as others (like clufter), can easily read it.
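If such a shared file were JSON, reading it from Python (as pcs or clufter could) might look like the sketch below; the file path and key layout are assumptions made up for illustration, not anything agreed upon in this report:

```python
import json

# Hypothetical path and layout of a shared capabilities file;
# neither was settled in this discussion.
CAPABILITIES_FILE = "/usr/lib/pcs/capabilities.json"

def load_capabilities(path=CAPABILITIES_FILE):
    """Return the set of capability identifiers listed in the file,
    assuming a {"capabilities": ["booth", "node.maintenance", ...]} layout."""
    with open(path) as f:
        data = json.load(f)
    return set(data.get("capabilities", []))

def supports(capability, available):
    """True if the given capability identifier is advertised."""
    return capability in available
```

A plain-text variant (one identifier per line) would be just as easy for pcsd's Ruby side to consume.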
We are not going to implement any "dry run" option in the scope of this bz. There is already a separate bz requesting this feature: bz1330774
Summary (from a parallel discussion) of my vision as far as CLI usability is concerned:

- For the purpose of suitably targeting the destination system and its standard version of pcs in clufter-generated sequences of pcs commands, I had no choice but to start tracking some relevant "capabilities" directly in clufter: https://pagure.io/clufter/blob/ddf7628385105e3bef0f4e44f85f8234fcf65195/f/facts.py#_323 That is a rather non-systemic blueprint of what I'd be interested in querying dynamically.

- My idea about the CLI interface for that:

  $ pcs system can node-maintenance
  > +node-maintenance
  $ echo $?
  > 0
  $ pcs system can utilization node-maintenance
  > -utilization
  > +node-maintenance
  $ echo $?
  > 1
  $ pcs system can rainbow unicorns
  > -rainbow
  > -unicorns
  $ echo $?
  > 2
  $ pcs system can
  > [list of all +capabilities]
  $ echo $?
  > 0

I.e., the return code is the count of capability identifiers passed on the command line that are not recognized/supported.
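The proposed "pcs system can" semantics (print +/- per queried capability, exit code equal to the number of unsupported ones) can be sketched in a few lines of Python; the SUPPORTED set here is a made-up stand-in for whatever the local pcs would actually advertise:

```python
# Hypothetical capability set; a real pcs would derive this from its build.
SUPPORTED = {"node-maintenance", "booth"}

def can(requested, supported=SUPPORTED):
    """Mimic the proposed 'pcs system can' behavior.

    Prints +cap for each supported and -cap for each unsupported
    capability, and returns the count of unsupported ones (the
    proposed exit code). With no arguments, lists everything supported.
    """
    if not requested:
        for cap in sorted(supported):
            print("+" + cap)
        return 0
    missing = 0
    for cap in requested:
        if cap in supported:
            print("+" + cap)
        else:
            print("-" + cap)
            missing += 1
    return missing
```

For example, can(["utilization", "node-maintenance"]) prints -utilization and +node-maintenance and returns 1, matching the transcript above.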
re [comment 11]: Thinking about that, and considering [bug 1210833], the "system" namespace is perhaps too good to be dedicated to local pcs knowledge, so it might be worth using the still naturally flowing: pcs itself can ...
Another example, this time from DSL world: https://sourceware.org/binutils/docs/ld/Miscellaneous-Commands.html > LD_FEATURE(string)
Created attachment 1340655 [details]
proposed fix

Test:

pcs:
* 'pcs --version --full' displays the list of capabilities

pcsd:
* A similar list of capabilities is present in the cluster status JSON provided by pcsd ("pcsd_capabilities").
* The pcsd WUI reacts to capabilities (not) present in the capabilities list. It is sufficient for a capability to be present in either "available_features" (the old deprecated list) or "pcsd_capabilities" (the new list).
See also how git deals with the extensions in its protocol (and how well it's documented, for that matter): https://github.com/git/git/blob/master/Documentation/technical/protocol-capabilities.txt
After Fix:

[ant ~] $ rpm -q pcs
pcs-0.9.161-1.el7.x86_64
[ant ~] $ pcs --version --full
0.9.161
booth cluster.config.backup-local cluster.config.export.to-pcs-commands cluster.config.import-cman...

A similar list of capabilities is present in the cluster status JSON provided by pcsd (the key is "pcsd_capabilities").
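With the fix in place, a caller like clufter could check for a capability by parsing this output. A minimal sketch, assuming the format shown above (version on the first line, whitespace-separated capability identifiers after it); the helper names are illustrative, not part of pcs:

```python
import subprocess

def parse_capabilities(output):
    """Split 'pcs --version --full' output into (version, capability set).

    Assumes the first line is the version and the remaining lines hold
    whitespace-separated capability identifiers.
    """
    lines = output.splitlines()
    version = lines[0].strip()
    caps = set(" ".join(lines[1:]).split())
    return version, caps

def pcs_capabilities():
    """Query the locally installed pcs (needs a pcs with this fix, 0.9.161+)."""
    out = subprocess.run(["pcs", "--version", "--full"],
                         capture_output=True, text=True, check=True).stdout
    return parse_capabilities(out)
```

E.g., "booth" in pcs_capabilities()[1] would tell a script whether booth-related subcommands are safe to emit.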
Owing to a shaky history of "pcs cluster setup" wrt. whether encryption is enabled or disabled by default, I'd also add something like "corosync.setup.default_encrypted". It should be expressly noted that it doesn't apply to CMAN.

Related to that:

1. can any issue arise from combining encrypting-by-default and non-encrypting-by-default pcs(d) peers in the cluster?

2. does "corosync" apply to both flatiron/CMAN and needle equally? otherwise, there should be more systemic branching, IMHO
re [comment 19]: I mean: s/corosync.setup.default_encrypted/corosync.setup.default_nonencrypted/
(In reply to Jan Pokorný from comment #19)
> Owing to a shaky history with "pcs cluster setup" wrt. whether
> encryption is enabled or disabled by default, I'd also add
> something like "corosync.setup.default_encrypted".

Good point. If this gets changed again, we will do it. For now I think it does not matter that much, because previous pcs versions did not have the capabilities list.

> It should be expressly noted that it doesn't apply to CMAN.
>
> Related to that:
>
> 1. cannot any issue arise from combining non/encrypting-by-default
>    pcs(d) peers in the cluster?

Only the pcs instance where the setup command is run matters.

> 2. does "corosync" apply to both flatiron/CMAN and needle equally?
>    otherwise, there should be more systemic branching, IMHO

Yes and no. RHEL6 qdisk is not supported. Configuring quorum options (lms, atb, etc.) is not supported by corosync 1.x. We are no longer focusing on CMAN/corosync 1.x.
Thanks for the answers. Regarding 1., I understand it so that some general file/content distribution mechanism, present for a long time already, is used; hence any reasonably expectable combination of pcs versions will work correctly when encryption is requested, as long as the triggering pcs version supports that. Correct?

Btw. while it's a bit too late for clufter to start emitting commands that would be universal based on run-time "capabilities" feedback (such convoluted conditionalizing in the "pcs commands output" would also be unnecessarily ugly), the whole facility is going to support the feature sketched in <https://pagure.io/clufter/issue/2>. (happy end :)
(In reply to Jan Pokorný from comment #22)
> Thanks for the answers. Regarding 1., so I understand it that some
> general file/content distribution mechanism present for long already
> is used, hence any reasonably expectable combination of pcs versions
> will work correctly when encryption requested as long as the
> triggering pcs version supports that. Correct?

Yes.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0866