pcs resource create glance-fs Filesystem \
    device="192.168.0.2:/srv/vms/clusters/nfs-storage/glance" \
    directory="/var/lib/glance/" fstype="nfs" options="v3" --group glance
chown glance:nobody /var/lib/glance
pcs resource create glance-registry lsb:openstack-glance-registry --group glance
pcs resource create glance-api lsb:openstack-glance-api --group glance

 Resource Group: glance
     glance-fs          (ocf::heartbeat:Filesystem):       Started rhos4-node6
     glance-registry    (lsb:openstack-glance-registry):   Started rhos4-node6
     glance-api         (lsb:openstack-glance-api):        Started rhos4-node6

pcs resource clone glance
pcs resource status

 Clone Set: glance-clone [glance]
     Started: [ rhos4-node5 rhos4-node6 ]

^^^^ After cloning, the resource group display is gone.
This is intentional by crm_mon to reduce the level of noise. We only expand an instance of a cloned group if it is partially up/down.
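The folding rule described above can be sketched as follows. This is a hypothetical illustration of the behavior, not crm_mon's actual code: an instance of a cloned group is collapsed into the one-line "Started: [...]" summary unless it is partially up/down.

```python
# Hypothetical sketch of the crm_mon display rule described above.
# An instance whose members are all "Started" is folded into the
# summary line; a partially up/down instance is expanded.

def render_clone(clone_id, instances):
    """instances maps an instance id (e.g. "dg:0") to a list of
    (resource, state, node) tuples for that instance's members."""
    lines = ["Clone Set: %s-clone [%s]" % (clone_id, clone_id)]
    started_nodes = []
    for inst_id in sorted(instances):
        members = instances[inst_id]
        states = {state for _res, state, _node in members}
        if states == {"Started"}:
            # Fully up: fold this instance into the summary line.
            started_nodes.append(members[0][2])
        else:
            # Partially up/down: expand the instance.
            lines.append("    Resource Group: %s" % inst_id)
            for res, state, node in members:
                where = " %s" % node if node else ""
                lines.append("        %s\t%s%s" % (res, state, where))
    if started_nodes:
        lines.append("    Started: [ %s ]" % " ".join(sorted(started_nodes)))
    return "\n".join(lines)


healthy = {
    "dg:0": [("D1", "Started", "bid-06"), ("D2", "Started", "bid-06")],
    "dg:1": [("D1", "Started", "bid-05"), ("D2", "Started", "bid-05")],
}
print(render_clone("dg", healthy))
```

With a fully healthy clone, as in the transcript above, only the condensed "Started: [ ... ]" line is printed; stop one member of one instance and that instance is expanded into its Resource Group listing.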
(In reply to Andrew Beekhof from comment #1)
> This is intentional by crm_mon to reduce the level of noise.
> We only expand an instance of a cloned group if it is partially up/down.

I am OK with not changing the default, but can we make it optional? pcs status --full or something? I want to know what's running in those clones from time to time, to make sure I didn't miss anything :)
IIRC, pcs is just displaying the crm_mon output here. Chris: is that correct? If so, we'd have to change Pacemaker.
Yes, that's correct. I could have pcs extract the information from the XML that crm_mon provides, but it's easier to just use what crm_mon gives. It sounds like we're going to add a new option to crm_mon; when that is ready, I'll update pcs status --full to use it.
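For reference, the XML-extraction approach mentioned above could look roughly like this. The XML layout in the sample is a simplified approximation for illustration only, not crm_mon's exact output schema.

```python
# Sketch of pulling clone-instance detail out of crm_mon's XML output
# instead of its plain-text display.  The element and attribute names
# below approximate crm_mon's XML for illustration; the real schema
# may differ.

import xml.etree.ElementTree as ET

SAMPLE = """\
<crm_mon>
  <resources>
    <clone id="dg-clone">
      <group id="dg:0">
        <resource id="D1" resource_agent="ocf::heartbeat:Dummy" role="Started">
          <node name="bid-06"/>
        </resource>
        <resource id="D2" resource_agent="ocf::heartbeat:Dummy" role="Started">
          <node name="bid-06"/>
        </resource>
      </group>
      <group id="dg:1">
        <resource id="D1" resource_agent="ocf::heartbeat:Dummy" role="Started">
          <node name="bid-05"/>
        </resource>
        <resource id="D2" resource_agent="ocf::heartbeat:Dummy" role="Started">
          <node name="bid-05"/>
        </resource>
      </group>
    </clone>
  </resources>
</crm_mon>
"""

def clone_members(xml_text):
    """Return {instance id: [(resource id, role, node name), ...]}."""
    members = {}
    for clone in ET.fromstring(xml_text).iter("clone"):
        for group in clone.findall("group"):
            rows = []
            for res in group.findall("resource"):
                node = res.find("node")
                rows.append((res.get("id"), res.get("role"),
                             node.get("name") if node is not None else None))
            members[group.get("id")] = rows
    return members

for inst, rows in sorted(clone_members(SAMPLE).items()):
    print(inst, rows)
```

This recovers the per-instance group membership even when the text display collapses it, which is the information the reporter was asking for.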
I've added a -R option to crm_mon that will display this. It's in upstream git.
We'll pick this up in the acl rebase
Fix is upstream here: https://github.com/feist/pcs/commit/8a40e89ef68a293de09352e97f89684d2f1d4d94
Before Update:

[root@bid-06 pcs]# rpm -q pcs
pcs-0.9.122-3.el6.x86_64
[root@bid-06 pcs]# pcs resource create D1 Dummy --group dg
[root@bid-06 pcs]# pcs resource create D2 Dummy --group dg
[root@bid-06 pcs]# pcs resource clone dg
[root@bid-06 pcs]# pcs status --full
Cluster name: test99
Last updated: Mon Jun 23 10:42:55 2014
Last change: Mon Jun 23 10:42:49 2014 via crmd on bid-05
Stack: cman
Current DC: bid-06 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured
4 Resources configured

Online: [ bid-05 bid-06 ]

Full list of resources:

 Clone Set: dg-clone [dg]
     Started: [ bid-05 bid-06 ]

After Update:

[root@bid-06 pcs]# rpm -q pcs
pcs-0.9.123-2.el6.x86_64
[root@bid-06 pcs]# pcs status --full
Cluster name: test99
Last updated: Mon Jun 23 10:44:10 2014
Last change: Mon Jun 23 10:42:49 2014 via crmd on bid-05
Stack: cman
Current DC: bid-06 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured
4 Resources configured

Online: [ bid-05 bid-06 ]

Full list of resources:

 Clone Set: dg-clone [dg]
     Resource Group: dg:0
         D1  (ocf::heartbeat:Dummy):  Started bid-06
         D2  (ocf::heartbeat:Dummy):  Started bid-06
     Resource Group: dg:1
         D1  (ocf::heartbeat:Dummy):  Started bid-05
         D2  (ocf::heartbeat:Dummy):  Started bid-05
     Started: [ bid-05 bid-06 ]
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2014-1526.html