Description of problem:
Using SSM with multiple machines, selecting package A for update on machine 1 and selecting a different package B for update on machine 2 leads to both packages A and B being updated or installed on both machines 1 and 2.

Version-Release number of selected component (if applicable):
0.6, 0.8, untested on 0.7

How reproducible:
Easily, once you find a package that is installed with an available upgrade on one server and not installed at all on the other server.

Steps to Reproduce:
1. Add 2 or more systems to SSM.
2. In SSM select Upgrade packages.
3. Select a package that can be upgraded on machine1 but is not installed on machine2. Select a package that can be upgraded on machine2; this may also be upgradable on machine1, it doesn't matter.

   System     Package to Upgrade
   machine1   tzdata-2010e-1.el5-noarch
   machine2   screen-4.0.3-1.el5_4.1-x86_64
              tzdata-2010e-1.el5-noarch

4. You can see that the desired package deployments are destined for the appropriate machines. machine1 does not have screen installed, so the Spacewalk web interface correctly shows that this machine should not get that update.
5. When machine1 checks in, it will nevertheless install screen:

[root@machine1 ~]# rpm -e screen
[root@machine1 ~]# /usr/sbin/rhn_check -vv
D: check_action {'action': "<?xml version='1.0'?>\n<methodCall>\n<methodName>packages.update</methodName>\n<params>\n<param>\n<value><array><data>\n<value><array><data>\n<value><string>screen</string></value>\n<value><string>4.0.3</string></value>\n<value><string>1.el5_4.1</string></value>\n<value><string></string></value>\n<value><string>x86_64</string></value>\n</data></array></value>\n<value><array><data>\n<value><string>tzdata</string></value>\n<value><string>2010e</string></value>\n<value><string>1.el5</string></value>\n<value><string></string></value>\n<value><string>noarch</string></value>\n</data></array></value>\n</data></array></value>\n</param>\n</params>\n</methodCall>\n", 'version': 2, 'id': 33}
updateLoginInfo() login info
D: login(forceUpdate=True) invoked
logging into up2date server
D: writeCachedLogin() invoked
D: Wrote pickled loginInfo at 1271412707.31 with expiration of 1271416307.31 seconds.
successfully retrieved authentication token from up2date server
D: logininfo: {'X-RHN-Server-Id': 1000010000, 'X-RHN-Auth-Server-Time': '1271412707.28', 'X-RHN-Auth': 'vx9+go0zrhEymgZ29yR4sg==', 'X-RHN-Auth-Channels': [['centos-5-base-x86_64', '20100325085522', '1', '1'], ['centos-5-custom-x86_64', '20100416111625', '0', '1'], ['centos-5-epel-x86_64', '20100416110241', '0', '1'], ['centos-5-extras-x86_64', '20100325180449', '0', '1'], ['centos-5-pgdg-8.4-x86_64', '20100325181739', '0', '1'], ['centos-5-customapps-x86_64', '20100416115551', '0', '1'], ['centos-5-updates-x86_64', '20100416104833', '0', '1']], 'X-RHN-Auth-User-Id': '', 'X-RHN-Auth-Expire-Offset': '3600.0'}
D: handle_action {'action': "<?xml version='1.0'?>\n<methodCall>\n<methodName>packages.update</methodName>\n<params>\n<param>\n<value><array><data>\n<value><array><data>\n<value><string>screen</string></value>\n<value><string>4.0.3</string></value>\n<value><string>1.el5_4.1</string></value>\n<value><string></string></value>\n<value><string>x86_64</string></value>\n</data></array></value>\n<value><array><data>\n<value><string>tzdata</string></value>\n<value><string>2010e</string></value>\n<value><string>1.el5</string></value>\n<value><string></string></value>\n<value><string>noarch</string></value>\n</data></array></value>\n</data></array></value>\n</param>\n</params>\n</methodCall>\n", 'version': 2, 'id': 33}
D: handle_action actionid = 33, version = 2
D: do_call packages.update ([['screen', '4.0.3', '1.el5_4.1', '', 'x86_64'], ['tzdata', '2010e', '1.el5', '', 'noarch']],)
Loaded plugins: fastestmirror, rhnplugin
Loading mirror speeds from cached hostfile
Loading mirror speeds from cached hostfile
D: Called update [['screen', '4.0.3', '1.el5_4.1', '', 'x86_64'], ['tzdata', '2010e', '1.el5', '', 'noarch']]
D: Dependencies Resolved
D: Downloading Packages:
D: Running Transaction Test
warning: screen-4.0.3-1.el5_4.1: Header V3 DSA signature: NOKEY, key ID e8562897
D: Finished Transaction Test
D: Transaction Test Succeeded
D: Running Transaction
Updating package profile
D: Sending back response (0, 'Update Succeeded', {})
D: do_call packages.checkNeedUpdate ('rhnsd=1',)
D: Called refresh_rpmlist
Updating package profile
D: local action status: (0, 'rpmlist refreshed', {})
[root@machine1 ~]# uname -a
Linux machine1 2.6.18-128.el5 #1 SMP Wed Jan 21 10:41:14 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@machine1 ~]#

Actual results:
All machines get all of the packages installed or updated, including those that were scheduled to go separately to other machines in the group.

Expected results:
Each machine should only get the updates that are supposed to go to that specific machine, and it should definitely not have a new package installed when you selected "update" in the web interface.

Additional info:
This example is for 2 packages over 2 machines. What happens when you want to upgrade 50 packages on mysql servers machine1-50, 100 packages on postgresql servers machine51-100, and 10 packages on http servers machine101-200? Your mysql servers will get postgres and httpd installed, and so on. I know machines can be grouped by type, but I don't think that is the resolution or the point of this bug report.
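To make the client-side behaviour easier to see, here is a minimal sketch that decodes the 'action' payload from the rhn_check -vv output above using Python 3's standard xmlrpc.client module. This is only an illustration for this report, not part of the Spacewalk client code (the el5 client itself runs Python 2): the queued action is a flat list of (name, version, release, epoch, arch) entries with no per-system information, so machine1 is told to update screen even though it is not installed there.

# Minimal sketch, assuming Python 3 (xmlrpc.client); illustration only,
# not Spacewalk code. It decodes the queued packages.update action.
import xmlrpc.client

# The 'action' string from the rhn_check -vv output above, verbatim.
action_xml = """<?xml version='1.0'?>
<methodCall>
<methodName>packages.update</methodName>
<params>
<param>
<value><array><data>
<value><array><data>
<value><string>screen</string></value>
<value><string>4.0.3</string></value>
<value><string>1.el5_4.1</string></value>
<value><string></string></value>
<value><string>x86_64</string></value>
</data></array></value>
<value><array><data>
<value><string>tzdata</string></value>
<value><string>2010e</string></value>
<value><string>1.el5</string></value>
<value><string></string></value>
<value><string>noarch</string></value>
</data></array></value>
</data></array></value>
</param>
</params>
</methodCall>
"""

params, method = xmlrpc.client.loads(action_xml)
print(method)                          # packages.update
for name, version, release, epoch, arch in params[0]:
    print(name, version, release, arch)
# screen 4.0.3 1.el5_4.1 x86_64   <- scheduled for machine1 although not installed there
# tzdata 2010e 1.el5 noarch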
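For comparison, the expected behaviour described in the report can be expressed as a per-system filter. The sketch below is purely hypothetical (the function packages_for_system is invented for this comment and is not Spacewalk's actual scheduling code): each system should only receive the subset of the SSM selection that it already has installed.

# Hypothetical illustration only -- NOT Spacewalk's scheduling code.
# It expresses the expected behaviour: an SSM "upgrade" selection should be
# filtered per system so that a machine is only told to update packages it
# already has installed.
def packages_for_system(selected_packages, installed_names):
    """Keep only the packages this particular system already has installed."""
    return [pkg for pkg in selected_packages if pkg[0] in installed_names]

selection = [["screen", "4.0.3", "1.el5_4.1", "", "x86_64"],
             ["tzdata", "2010e", "1.el5", "", "noarch"]]

machine1_installed = {"tzdata"}            # screen is not installed on machine1
machine2_installed = {"screen", "tzdata"}

print(packages_for_system(selection, machine1_installed))  # tzdata only
print(packages_for_system(selection, machine2_installed))  # screen and tzdata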
Mass-aligning under space12 so that we don't lose track of this bugzilla. This, however, does not mean that we plan to (or will be able to) address this bug in Spacewalk 1.2.
Mass-moving to space13.
We did not have time for this one during the Spacewalk 1.4 time frame. Mass-moving to Spacewalk 1.5.
Aligning under space16.
I've just verified with Spacewalk nightly (spacewalk-backend-1.6.26-1.el5) and with a RHEL 6 client that has the following packages installed:

rhn-check-1.0.0-61.el6.noarch
rhn-client-tools-1.0.0-61.el6.noarch
rhnlib-2.5.22-10.el6.noarch
rhnpush-0.4.5-2.el6.noarch
rhnsd-4.9.3-2.el6.x86_64
rhn-setup-1.0.0-61.el6.noarch
yum-3.2.29-17.el6.noarch
yum-metadata-parser-1.1.2-16.el6.x86_64
yum-rhn-plugin-0.9.1-26.el6_1.1.noarch

that the upgrades are correctly directed to, and picked up by, only the client machines that already have the old version installed. No new installations took place. Closing as CURRENTRELEASE -- we've presumably fixed the problem along the way.