Bug 1347335 - built-in documentation fixes
Summary: built-in documentation fixes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1375279
Depends On:
Blocks:
 
Reported: 2016-06-16 14:28 UTC by Ivan Devat
Modified: 2017-08-01 18:22 UTC
CC List: 8 users

Fixed In Version: pcs-0.9.156-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 18:22:57 UTC
Target Upstream Version:


Attachments (Terms of Use)
proposed fix (61.42 KB, patch)
2017-01-27 11:40 UTC, Tomas Jelinek


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1958 normal SHIPPED_LIVE pcs bug fix and enhancement update 2017-08-01 18:09:47 UTC
Red Hat Bugzilla 1318656 None CLOSED document supported ENV variables in man pages 2019-09-20 16:40:54 UTC
Red Hat Bugzilla 1352047 None NEW [cli] possibly dangerous treatment of environment variables when running a subprocess 2019-09-20 16:40:54 UTC

Internal Links: 1318656 1352047

Description Ivan Devat 2016-06-16 14:28:25 UTC
Problem originally described at https://bugzilla.redhat.com/show_bug.cgi?id=1225946#c17; we want to track it as a separate bug.

Another point (for which I can file a separate bug if you want to) is
that I can imagine if I were a novice user, I'd be pretty baffled by
the gory details in the documentation, such as:

>       cleanup [<resource id>] [--node <node>]
>              Cleans up the resource in the lrmd (useful to reset
>              the resource status and failcount).  [...]

(ehm, what's lrmd and why should I care if I just need to know the
external impact, not the Pacemaker runtime internals?)

Comment 3 Jan Pokorný [poki] 2016-06-22 18:32:34 UTC
2/ pcs doesn't bother to document type-only-specified resource resolution
   (in the context of "pcs resource create"), nor to warn about ambiguous
   requests (which might be even more suitable)

# pcs -f test.xml resource create r1 dummy
# pcs -f test.xml resource 
> r1     (ocf::heartbeat:Dummy): Stopped     

(heck, I wanted ocf:pacemaker:Dummy!)

# pcs resource list | grep [Dd]ummy
> ocf:heartbeat:Dummy - Example stateless resource agent
> ocf:pacemaker:Dummy - Example stateless resource agent
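The ambiguity-warning idea could be keyed off agent names offered by more than one provider. A minimal sketch (the printf lines stand in for real "pcs resource list" output so the pipeline is self-contained; they are not live pcs output):

```shell
# Print agent names offered by more than one provider, i.e. the names for
# which a type-only "pcs resource create" would be ambiguous.
# Field 3 of "provider:class:name" lines is the agent name; "uniq -d" keeps
# only names that occur more than once.
printf '%s\n' \
  'ocf:heartbeat:Dummy - Example stateless resource agent' \
  'ocf:pacemaker:Dummy - Example stateless resource agent' \
  'ocf:heartbeat:IPaddr2 - Manages virtual IPv4 and IPv6 addresses' |
  awk -F'[: ]' '{print $3}' | sort | uniq -d
# -> Dummy
```

In a real check the printf block would be replaced by `pcs resource list`.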

Comment 4 Tomas Jelinek 2016-06-23 08:18:13 UTC
from bz1305049 comment 10

CLI inconsistency, not sure what's correct:

# pcs constraint ticket help | head -3
> 
> Usage: pcs constraint [constraints]...
                        ^^^^^^^^^^^^^
>     ticket [show] [--full]

... while ...

# pcs cluster uidgid help | head -n3
> 
> Usage: pcs cluster uidgid...
                     ^^^^^^
>     uidgid

Comment 5 Tomas Jelinek 2016-06-29 07:23:05 UTC
We should also add "see also" section to manpage.

Comment 6 Jan Pokorný [poki] 2016-06-29 19:00:04 UTC
re [comment 5]:
When talking about the man page, there's related request to document
environment variables able to influence the run-time of pcsd:
[bug 1318656]

Situation with pcs may be even more straightforward: besides various
Python run-time variables (PYTHONDEBUG, ...), there may be just a single
one that pcs really cares about: EDITOR.

For inspiration, see the ENVIRONMENT and ENVIRONMENT VARIABLES sections
in the curl/wget and python man pages, respectively.

Comment 7 Jan Pokorný [poki] 2016-06-29 19:11:22 UTC
Another one:

While I can see that current pcs code already honors (at least some)
mutually exclusive parameters, that's not the case for the help texts.

For instance:

$ pcs constraint colocation -h | sed -n '/colocation set/{:s;p;n;/^$/d;bs}'
>     colocation set <resource1> [resourceN]... [options]
>                [set <resourceX> ... [options]]
>                [setoptions [constraint_options]]
>         Create a colocation constraint with a resource set.
>         Available options are sequential=true/false, require-all=true/false,
>         action=start/promote/demote/stop and role=Stopped/Started/Master/Slave.
>         Available constraint_options are id, score, score-attribute and
>         score-attribute-mangle.

Last two lines should read:

>         Available constraint_options are id, and either of: score,
>         score-attribute, score-attribute-mangle.

You get the point.

Comment 8 Jan Pokorný [poki] 2016-06-30 17:12:47 UTC
re [comment 7]:
In fact, both score-attribute and score-attribute-mangle are spurious
for rsc_colocation: https://github.com/ClusterLabs/pacemaker/pull/1091

Comment 9 Jan Pokorný [poki] 2016-07-01 18:48:55 UTC
Another one:

$ pcs constraint order -h | sed -n '/order set/{:s;p;n;/^$/d;bs}'
>    order set <resource1> [resourceN]... [options] [set
>              <resourceX> ... [options]]
>              [setoptions [constraint_options]]
>        Create an ordered set of resources.
>        Available options are sequential=true/false, require-all=true/false,
>        action=start/promote/demote/stop and role=Stopped/Started/Master/Slave.
>        Available constraint_options are id=<constraint-id>,
>        kind=Optional/Mandatory/Serialize and symmetrical=true/false.

Where do you get the information that "role" is meaningful in this
context?

See also:
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#s-resource-sets
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617324808160

Comment 10 Tomas Jelinek 2016-07-11 15:24:36 UTC
from bz1158805

Jan Pokorný 2016-07-11 10:53:13 EDT

# pcs quorum -h | sed -n '/device add/{:s;s/    //;p;n;/^$/d;bs}'
> device add [<generic options>] model <device model> [<model options>]
>     Add a quorum device to the cluster.  Quorum device needs to be created
>     first by "pcs qdevice setup" command.  It is not possible to use more
>     than one quorum device in a cluster simultaneously.  Generic options,
>     model and model options are all documented in corosync's
>     corosync-qdevice(8) man page.

I don't follow why it stresses "corosync" (already apparent from the
man page title).  It may also be a bit misleading, as on RHEL 7 the
man page is in the corosync-qdevice package.

Comment 11 Tomas Jelinek 2016-07-13 09:16:06 UTC
"pcs constraint location <resource> avoids" and "pcs constraint location <resource> prefers" may take more than one <node>[=<score>], e.g. "pcs constraint location dummy prefers nodeA=100 nodeB=200 nodeC".
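The multi-node form can be sketched as follows; parse_pref is a hypothetical helper, not pcs code, and the INFINITY fallback mirrors pcs's documented default score for "prefers":

```shell
# Hypothetical sketch of how "node[=score]" argument lists are split.
# A bare node name gets the default score INFINITY.
parse_pref() {
  for arg in "$@"; do
    node=${arg%%=*}          # part before the first "="
    score=${arg#*=}          # part after the first "="
    [ "$score" = "$arg" ] && score=INFINITY   # no "=" present
    printf '%s %s\n' "$node" "$score"
  done
}

parse_pref nodeA=100 nodeB=200 nodeC
# -> nodeA 100
# -> nodeB 200
# -> nodeC INFINITY
```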

Comment 12 Jan Pokorný [poki] 2016-07-29 12:34:34 UTC
Another one:

It seems nonsensical that help screen for "pcs cluster" lists
"remote-node {add,remove}" prior to "node {add,remove}".
It would also be nice to have them cross-referenced, or possibly
even merged into "[remote-]node {add,remove}" entries.

Comment 13 Jan Pokorný [poki] 2016-07-29 13:18:50 UTC
Another one:

"pcs acl {role,user,group,permission}" (no further arguments) is not
documented.

Comment 14 Jan Pokorný [poki] 2016-07-29 13:53:11 UTC
Another one:

with "pcs status -h": s/corosync|both|config/config|corosync|both/

(logically, "both" should come last; the rest should be in alphabetical order)

Comment 15 Jan Pokorný [poki] 2016-07-29 14:09:24 UTC
Another one:

Brief descriptions of top-level commands are inconsistent:

# pcs |  sed -n '/Commands:/{n;:s;s/    //;p;n;/^$/d;bs}'
> cluster     Configure cluster options and nodes.                          
> resource    Manage cluster resources.                                     
> stonith     Configure fence devices.                                      
> constraint  Set resource constraints.                                     
> property    Set pacemaker properties.                                     
> acl         Set pacemaker access control lists.                           
> qdevice     Manage quorum device provider.                                
> quorum      Manage cluster quorum settings.                               
> status      View cluster status.                                          
> config      View and manage cluster configuration.                        
> pcsd        Manage pcs daemon.                                            
> node        Manage cluster nodes.                                         
> alert       Set pacemaker alerts.                                         

The most viable verb is perhaps "manage", as both "configure" and "set"
wrongly imply an a priori limited scope of what can actually be achieved
with the respective subcommands.

Comment 16 Tomas Jelinek 2016-09-15 13:18:20 UTC
Miroslav Lisik 2016-09-12 11:26:32 EDT

Bad syntax description for 'pcs constraint remove':

[root@virt-264 ~]# pcs constraint remove --help | grep remove
    remove [constraint id]...

It should be:
    remove <constraint id>...

Comment 17 Tomas Jelinek 2016-09-15 13:18:34 UTC
*** Bug 1375279 has been marked as a duplicate of this bug. ***

Comment 18 Tomas Jelinek 2016-11-11 14:59:58 UTC
From Ken Gaillot:

report [--from "YYYY-M-D H:M:S" [--to "YYYY-M-D" H:M:S"]] dest
There is an extra quote after the second "YYYY-M-D".

Comment 19 Tomas Jelinek 2016-12-06 10:48:23 UTC
replace "ommited" with "omitted"
replace "sepcified" with "specified"
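The substitution itself is trivial; a sketch (the real fix edits the pcs sources, fix_typos here just filters text on stdin):

```shell
# Replace the two misspellings reported above in whatever text arrives
# on stdin.
fix_typos() {
  sed -e 's/ommited/omitted/g' -e 's/sepcified/specified/g'
}

echo 'value must be sepcified or ommited' | fix_typos
# -> value must be specified or omitted
```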

Comment 20 Tomas Jelinek 2016-12-07 11:23:52 UTC
* comment 3 is resolved in bz1262001
* set constraints (comment 7, comment 8, comment 9) will be resolved in bz1376491; for now we can at least delete the meaningless parts from the help

Comment 21 Tomas Jelinek 2016-12-09 08:18:23 UTC
"cluster node add" says it adds the new node to corosync.conf / cluster.conf and then copies the file to the new node. This is correct, but it is a detail we don't need to explain. On top of that, there is no mention of qdevice, booth, sbd etc. configuration being set up on the new node as well. Help for this command should be simplified; something along the lines of "adds the new node to the cluster and synchronizes all configuration to the new node" should do. Corosync / pacemaker should be replaced with cluster.

Comment 22 Tomas Jelinek 2016-12-09 08:20:17 UTC
(In reply to Tomas Jelinek from comment #21)
> "cluster node add" says it adds the new node to corosync.conf / cluster.conf
> and then copies the file to the new node. This is correct but it is a detail
> which we don't need to explain. On top of that there is no mention of
> qdevice, booth, sbd... configuration being set up on the new node as well.
> Help for this command should be simplified. Something along the lines "adds
> the new node to the cluster and synchronizes all configuration to the new
> node" should do. Corosync / pacemaker should be replaced with cluster.

Also, it may be worth mentioning that this command does not work when run on the new node, i.e. it must be run on a node already in the cluster.

Comment 24 Tomas Jelinek 2017-01-27 11:40:30 UTC
Created attachment 1245099 [details]
proposed fix

Fixed issues described in: comment 0, comment 5, comment 6, comment 10, comment 11, comment 15, comment 16, comment 18, comment 19, comment 21, comment 22

comment 3: Fixed in bz1262001.

comment 4: Postponed. This is caused by the mechanism that finds the relevant piece of text in the help and combines it with the top-level command. We would need to do quite a big change to fix this.

comment 7, comment 8, comment 9: Some fixes done, will be fixed completely in bz1376491.

comment 12: Help for remote-node command was put below node commands.

comment 13: "pcs acl {role,user,group,permission}" are not valid pcs commands.

comment 14: The current order makes sense, as "both" means both "pcs status nodes" and "pcs status nodes corosync".

Comment 27 errata-xmlrpc 2017-08-01 18:22:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1958

