Description of problem:

When there are multiple <vm> resources with the same failover domain but different name attributes, rgmanager reports the error "Error storing vm: Duplicate".

Version-Release number of selected component (if applicable):
rgmanager-2.0.31-1.el5

How reproducible:
Always

Steps to Reproduce:
1. Have two vm resources with the same domain but different names.
2. Update cluster.conf throughout the cluster with ccs_tool and cman_tool.
3. Examine /var/log/messages.

Actual results:
clurgmgrd reports an error:

Dec 28 11:46:20 dom0 ccsd[5920]: Update of cluster.conf complete (version 2007122127 -> 2007122128).
Dec 28 11:46:25 dom0 clurgmgrd[9762]: <notice> Reconfiguring
Dec 28 11:46:25 dom0 clurgmgrd[9762]: <err> Error storing vm: Duplicate
Dec 28 11:46:26 dom0 clurgmgrd[9762]: <err> Error storing vm: Duplicate

Expected results:
rgmanager should configure this resource without errors.

Additional info:
The VMs are started properly. See attached cluster.conf.
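For reference, a minimal <rm> fragment matching the steps above (node names and priorities here are illustrative; the actual configuration is in the attached cluster.conf) would be two vm resources sharing one restricted domain:

```xml
<rm>
  <failoverdomains>
    <failoverdomain name="xen_pru" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <vm autostart="0" domain="xen_pru" name="vm-xennfs1-pru"
      path="/guests/virtdata/scripts:/etc/xen" recovery="disable"/>
  <vm autostart="0" domain="xen_pru" name="vm-xennfs2-pru"
      path="/guests/virtdata/scripts:/etc/xen" recovery="disable"/>
</rm>
```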
Created attachment 292354 [details] cluster.conf example
Actually, you're looking in the wrong place (and it's my fault for telling you to look there). I'm sorry for pointing you in the wrong direction - the errors aren't actually from cluster.conf; they're from a dirty /usr/share/cluster.

Clean up /usr/share/cluster:

# cd /usr/share/cluster
# rm -f *~ *.rpm*

rg_test works on that conf file (note that both VMs are displayed below). I left some errors in, showing the duplicate resource agents:

[root@ayanami daemons]# ./rg_test ../resources test ~lhh/futon.conf
Using ../resources as resource agent path
Running in test mode.
Warning: Ignoring ../resources/ip.sh.rpmsave: Bad extension .rpmsave
Error storing fs: Duplicate
Error storing clusterfs: Duplicate
Error storing fs: Duplicate
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ duplicate resource agent errors
Loaded 21 resource rules

=== Resources List ===
Resource type: vm [INLINE]
Instances: 1/1
Agent: vm.sh
Attributes:
  name = vm-xennfs1-pru [ primary ]
  domain = xen_pru [ reconfig ]
  autostart = 0 [ reconfig ]
  hardrecovery = 0 [ reconfig ]
  exclusive = 0 [ reconfig ]
  recovery = disable [ reconfig ]
  path = /guests/virtdata/scripts:/etc/xen
  migrate = live
  depend_mode = hard
  max_restarts = 0 [ reconfig ]
  restart_expire_time = 0 [ reconfig ]

Resource type: vm [INLINE]
Instances: 1/1
Agent: vm.sh
Attributes:
  name = vm-xennfs2-pru [ primary ]
  domain = xen_pru [ reconfig ]
  autostart = 0 [ reconfig ]
  hardrecovery = 0 [ reconfig ]
  exclusive = 0 [ reconfig ]
  recovery = disable [ reconfig ]
  path = /guests/virtdata/scripts:/etc/xen
  migrate = live
  depend_mode = hard
  max_restarts = 0 [ reconfig ]
  restart_expire_time = 0 [ reconfig ]

=== Resource Tree ===
vm {
  name = "vm-xennfs1-pru";
  domain = "xen_pru";
  autostart = "0";
  hardrecovery = "0";
  exclusive = "0";
  recovery = "disable";
  path = "/guests/virtdata/scripts:/etc/xen";
  migrate = "live";
  depend_mode = "hard";
  max_restarts = "0";
  restart_expire_time = "0";
}
vm {
  name = "vm-xennfs2-pru";
  domain = "xen_pru";
  autostart = "0";
  hardrecovery = "0";
  exclusive = "0";
  recovery = "disable";
  path = "/guests/virtdata/scripts:/etc/xen";
  migrate = "live";
  depend_mode = "hard";
  max_restarts = "0";
  restart_expire_time = "0";
}

=== Failover Domains ===
Failover domain: xen_pru
Flags: Ordered Restricted
  Node node1 (id 1, priority 1)
  Node node2 (id 2, priority 2)

=== Event Triggers ===
Event Priority Level 100:
  Name: Default (Any event)
  File: /usr/share/cluster/default_event_script.sl

If it were a question of unique/non-unique, the error would look like this:

Error: Primary attribute collision. type=vm attr=name value=vm-xennfs1-pru
Error storing vm resource

You are correct, however - rgmanager does not allow configuration of the same VM name even if restricted failover domains do not overlap. For example, the following configuration would result in the above collision error:

<rm log_facility="local4" log_level="5">
  <resources/>
  <failoverdomains>
    <failoverdomain name="domain1" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
    <failoverdomain name="domain2" ordered="1" restricted="1">
      <failoverdomainnode name="node3" priority="1"/>
      <failoverdomainnode name="node4" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <vm autostart="0" domain="domain1" exclusive="0" name="vm-xennfs1-pru" path="/guests/virtdata/scripts:/etc/xen" recovery="disable"/>
  <vm autostart="0" domain="domain2" exclusive="0" name="vm-xennfs1-pru" path="/guests/virtdata/scripts:/etc/xen" recovery="disable"/>
</rm>

This is a known limitation. It is something of a design flaw: resource placement policies and individual resource configuration are not the same thing in rgmanager. Fortunately, it's fairly easy to make rgmanager allow multiple domains with the same name.
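The design flaw described above can be sketched as follows. This is a hypothetical Python model, not rgmanager's actual C code: the resource store is keyed only by (resource type, primary attribute), so the failover domain never enters the uniqueness check and same-named VMs in disjoint domains collide.

```python
def store_resources(resources):
    """Store resources; each is a dict with 'type', 'name' (the
    primary attribute for vm), and 'domain'.  Returns (store, errors)."""
    store = {}
    errors = []
    for res in resources:
        # The key deliberately ignores 'domain' - this is the flaw.
        key = (res["type"], res["name"])
        if key in store:
            errors.append(
                "Error: Primary attribute collision. type=%s attr=name value=%s"
                % (res["type"], res["name"]))
        else:
            store[key] = res
    return store, errors

# Two VMs with the same name in non-overlapping restricted domains
# still collide, as in the example configuration above:
vms = [
    {"type": "vm", "name": "vm-xennfs1-pru", "domain": "domain1"},
    {"type": "vm", "name": "vm-xennfs1-pru", "domain": "domain2"},
]
store, errors = store_resources(vms)
print(errors[0])
# Error: Primary attribute collision. type=vm attr=name value=vm-xennfs1-pru
```

If the key included the domain (or placement policy were separated from resource identity), both VMs would store cleanly.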
Created attachment 292381 [details] Example of how to allow multiple instances of the same domain name

Untested. The idea is that you can provide a non-unique name_override in cluster.conf, which is the -real- Xen domain name for the VM.
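Under that approach, the earlier failing configuration might look like the fragment below. This is a sketch assuming the patch adds a name_override attribute as described; the vm-a/vm-b name values are hypothetical placeholders chosen only to keep the primary attribute unique:

```xml
<vm autostart="0" domain="domain1" name="vm-a"
    name_override="vm-xennfs1-pru"
    path="/guests/virtdata/scripts:/etc/xen" recovery="disable"/>
<vm autostart="0" domain="domain2" name="vm-b"
    name_override="vm-xennfs1-pru"
    path="/guests/virtdata/scripts:/etc/xen" recovery="disable"/>
```

The name attribute stays unique to satisfy rgmanager's primary-attribute check, while both resources manage a Xen domain actually named vm-xennfs1-pru.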