Description of problem: I don't want anything to start automatically, so I have turned off automatic start of the cluster during boot and have not set autostart on any cluster service. Problem: when services are enabled and started, then after a restart of rgmanager (or after a reboot or power cycle followed by a manual start of the cluster software), the cluster services start again (unless they were disabled first).

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a 2-node cluster (probably doesn't matter, but I was testing on 2 nodes)
2. Create a failover domain with only one node
3. Create a service with recovery action "restart" and assign it the failover domain created in step 2
4. Enable the service
5. Stop/start rgmanager, or power-cycle the node running the service

Actual results:
1. cman, ccsd, fenced, rgmanager started
2. Service enabled and started (by clusvcadm -e)
3. rgmanager stopped/started (or node power-cycled/rebooted and cman, ccsd, fenced, rgmanager started)
4. Services start again

Expected results:
Steps 1-3 as above, but the services do not start (unless they have the autostart option set). After I check that the machine, data, and everything else are OK, I would like to start them myself using clusvcadm -e.

Additional info:
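The configuration in the reproduction steps above could be sketched as a cluster.conf fragment like the following. This is only a sketch: the node, domain, and service names (node1, onenode, testsvc) are placeholders, not names from the report.

```xml
<rm>
  <failoverdomains>
    <!-- restricted failover domain containing only one node (step 2) -->
    <failoverdomain name="onenode" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <!-- service bound to that domain, recovery action "restart" (step 3),
       autostart turned off as described in the problem report -->
  <service name="testsvc" domain="onenode" recovery="restart" autostart="0"/>
</rm>
```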
What rgmanager package do you have?
- rgmanager-1.9.54-1
OK, so in your test case you are never rebooting the other node or shutting its rgmanager down, right? If that is the case, then what you are seeing is correct/expected behavior. The other node has the state recorded as 'stopped'. Whenever a membership transition occurs, services in the 'stopped' state are evaluated to see whether a node (presumably the new node) is capable of running them. If so, the service is started.

'Autostart' applies to cluster quorum transitions, or total rgmanager group transitions. Since the surviving node maintains cluster quorum (and rgmanager is still running on it), the "enabled" vs. "disabled" state is not reset.

What we can do is provide an option to disable a service at the point where no node is capable of running it. For example, in your case only one node is capable of running the service. When that node is offline (or its rgmanager is down), the other node could then mark the service as 'disabled' automatically, because it knows that: (a) no node is capable of running the service, and (b) the administrator wants the service disabled if (a) occurs.
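The proposed option could surface as a per-service attribute in cluster.conf. To be clear, the attribute name below is purely hypothetical, invented here for illustration; rgmanager has no such attribute.

```xml
<!-- HYPOTHETICAL: "disable_on_no_nodes" is an illustrative name only, not a
     real rgmanager attribute. The idea: when no member of the service's
     failover domain is able to run it, mark the service 'disabled' instead
     of leaving it 'stopped' (which would restart it on the next membership
     transition). -->
<service name="testsvc" domain="onenode" recovery="restart"
         autostart="0" disable_on_no_nodes="1"/>
```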
Note that rgmanager does stop-before-start - so your stop scripts should be able to clean up any problems that exist prior to the service starting. If that's not the case (and you want manual intervention prior to service start), you can disable rgmanager on boot: chkconfig --del rgmanager
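The manual-intervention workflow above could look like the following command sequence on a node. This is a sketch assuming the SysV init scripts named in the report; "testsvc" is a placeholder service name.

```
# Keep rgmanager out of the boot sequence so no service starts unattended:
chkconfig --del rgmanager

# After boot, once the machine and data have been checked by hand,
# start the cluster stack (components as listed in the report):
service cman start
service ccsd start
service fenced start
service rgmanager start

# ...and then enable the service explicitly:
clusvcadm -e testsvc
```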
Need to reopen this momentarily for housekeeping