Description of problem:
I've been running with one (old) version of RHCS on one node and the latest version on the other two. Regardless of whether or not this actually causes a problem, something in RHCS should log a version mismatch to warn the user.

Version-Release number of selected component (if applicable):
ccs.x86_64 0:0.16.2-63.el6
cluster-glue-libs.x86_64 0:1.0.5-6.el6
clusterlib.x86_64 0:3.0.12.1-49.el6_4.1
cman.x86_64 0:3.0.12.1-49.el6_4.1
corosync.x86_64 0:1.4.1-15.el6_4.1
corosynclib.x86_64 0:1.4.1-15.el6_4.1
fence-agents.x86_64 0:3.1.5-25.el6_4.2
fence-virt.x86_64 0:0.2.3-13.el6
modcluster.x86_64 0:0.16.2-20.el6
resource-agents.x86_64 0:3.9.2-21.el6_4.3
rgmanager.x86_64 0:3.0.12.1-17.el6
ricci.x86_64 0:0.16.2-63.el6

How reproducible:
100%

Steps to Reproduce:
1. Start cluster with version mismatch

Actual results:
Nothing; the cluster proceeds normally.

Expected results:
Log message warning about mismatched versions.

Additional info:
Packages in question, from yum update:
---> Package ccs.x86_64 0:0.16.2-43.el6 will be updated
---> Package ccs.x86_64 0:0.16.2-63.el6 will be an update
---> Package cluster-glue-libs.x86_64 0:1.0.5-2.el6 will be updated
---> Package cluster-glue-libs.x86_64 0:1.0.5-6.el6 will be an update
---> Package clusterlib.x86_64 0:3.0.12.1-23.el6 will be updated
---> Package clusterlib.x86_64 0:3.0.12.1-49.el6_4.1 will be an update
---> Package cman.x86_64 0:3.0.12.1-23.el6 will be updated
---> Package cman.x86_64 0:3.0.12.1-49.el6_4.1 will be an update
---> Package corosync.x86_64 0:1.4.1-4.1.el6 will be updated
---> Package corosync.x86_64 0:1.4.1-15.el6_4.1 will be an update
---> Package corosynclib.x86_64 0:1.4.1-4.1.el6 will be updated
---> Package corosynclib.x86_64 0:1.4.1-15.el6_4.1 will be an update
---> Package fence-agents.x86_64 0:3.1.5-10.el6 will be updated
---> Package fence-agents.x86_64 0:3.1.5-25.el6_4.2 will be an update
---> Package fence-virt.x86_64 0:0.2.3-5.el6 will be updated
---> Package fence-virt.x86_64 0:0.2.3-13.el6 will be an update
---> Package modcluster.x86_64 0:0.16.2-14.el6 will be updated
---> Package modcluster.x86_64 0:0.16.2-20.el6 will be an update
---> Package resource-agents.x86_64 0:3.9.2-7.el6 will be updated
---> Package resource-agents.x86_64 0:3.9.2-21.el6_4.3 will be an update
---> Package rgmanager.x86_64 0:3.0.12.1-5.el6 will be updated
---> Package rgmanager.x86_64 0:3.0.12.1-17.el6 will be an update
---> Package ricci.x86_64 0:0.16.2-43.el6 will be updated
---> Package ricci.x86_64 0:0.16.2-63.el6 will be an update
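For what it's worth, the kind of check I have in mind can be approximated out of band today. Below is a minimal sketch (not RHCS code) that compares package version-release strings across nodes and logs a warning when they differ. The node names and the package list are assumptions for illustration, and it presumes passwordless ssh to each node.

#!/usr/bin/env python
# Sketch only: warn via syslog if cluster packages differ across nodes.
import subprocess
import syslog

NODES = ["node1", "node2", "node3"]           # assumed hostnames
PACKAGES = ["cman", "corosync", "rgmanager"]  # packages worth comparing

def remote_version(node, package):
    """Query a package's version-release on a node via ssh + rpm."""
    out = subprocess.check_output(
        ["ssh", node, "rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", package])
    return out.decode().strip()

def main():
    syslog.openlog("cluster-version-check")
    for pkg in PACKAGES:
        versions = dict((node, remote_version(node, pkg)) for node in NODES)
        if len(set(versions.values())) > 1:
            syslog.syslog(syslog.LOG_WARNING,
                          "version mismatch for %s: %s" % (pkg, versions))

if __name__ == "__main__":
    main()

Something along these lines run at cluster start (or periodically) would be enough to surface the situation I hit, without needing to hook into rpm itself.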
well, if "cluster proceeds normally", then I don't think this warrants a bug :)
Still a bug, just a low priority one. Surely you recognize that it is suboptimal to be using different versions of RHCS simultaneously. Call it a usability issue if you like.
(In reply to Chester Knapp from comment #3)
> Still a bug, just a low priority one. Surely you recognize that it is
> suboptimal to be using different versions of RHCS simultaneously. Call it a
> usability issue if you like.

We have clear documentation on how to update a cluster, with a process that ensures compatibility across upgrades. A situation like the one described should only be a temporary state while a user is upgrading cluster nodes and is sitting in front of a terminal. A warning would add basically no useful information for the end user, at the cost of adding very complex pieces of code to hook into rpm.