Created attachment 896223 [details]
Screenshot: activation key where the 'Consumed' column says '2 out of 1'

Description of problem:
I created an activation key with the system limit set to unlimited and registered two systems with that key. Later, when I reduced the limit to '1', which is less than the number of hosts already consumed, the update appeared to succeed and a notification said: "Activation Key updated." But when I navigated away and came back to the same page, the system limit was back to its original value, i.e. unlimited in this case. Ideally, the UI should raise a validation error if the limit is set below the number of already-consumed hosts.

Version-Release number of selected component (if applicable):
Satellite-6.0.3-RHEL-6-20140508.1
* apr-util-ldap-1.3.9-3.el6_0.1.x86_64
* candlepin-0.9.7-1.el6_5.noarch
* candlepin-scl-1-5.el6_4.noarch
* candlepin-scl-quartz-2.1.5-5.el6_4.noarch
* candlepin-scl-rhino-1.7R3-1.el6_4.noarch
* candlepin-scl-runtime-1-5.el6_4.noarch
* candlepin-selinux-0.9.7-1.el6_5.noarch
* candlepin-tomcat6-0.9.7-1.el6_5.noarch
* elasticsearch-0.90.10-4.el6sat.noarch
* foreman-1.6.0.7-1.el6sat.noarch
* foreman-compute-1.6.0.7-1.el6sat.noarch
* foreman-gce-1.6.0.7-1.el6sat.noarch
* foreman-libvirt-1.6.0.7-1.el6sat.noarch
* foreman-ovirt-1.6.0.7-1.el6sat.noarch
* foreman-postgresql-1.6.0.7-1.el6sat.noarch
* foreman-proxy-1.6.0.4-1.el6sat.noarch
* foreman-selinux-1.5.0-0.develop.el6sat.noarch
* foreman-vmware-1.6.0.7-1.el6sat.noarch
* katello-1.5.0-22.el6sat.noarch
* katello-ca-1.0-1.noarch
* katello-certs-tools-1.5.5-1.el6sat.noarch
* katello-installer-0.0.37-1.el6sat.noarch
* openldap-2.4.23-32.el6_4.1.x86_64
* pulp-katello-plugins-0.2-1.el6sat.noarch
* pulp-nodes-common-2.3.1-0.4.beta.el6sat.noarch
* pulp-nodes-parent-2.3.1-0.4.beta.el6sat.noarch
* pulp-puppet-plugins-2.3.1-0.4.beta.el6sat.noarch
* pulp-rpm-plugins-2.3.1-0.4.beta.el6sat.noarch
* pulp-selinux-2.3.1-0.4.beta.el6sat.noarch
* pulp-server-2.3.1-0.4.beta.el6sat.noarch
* python-ldap-2.3.10-1.el6.x86_64
* ruby193-rubygem-ldap_fluff-0.2.2-2.el6sat.noarch
* ruby193-rubygem-net-ldap-0.3.1-3.el6sat.noarch
* ruby193-rubygem-runcible-1.0.8-1.el6sat.noarch
* rubygem-hammer_cli-0.1.0-12.el6sat.noarch
* rubygem-hammer_cli_foreman-0.1.0-12.el6sat.noarch
* rubygem-hammer_cli_foreman_tasks-0.0.2-5.el6sat.noarch
* rubygem-hammer_cli_katello-0.0.3-22.el6sat.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create an activation key with an unlimited system limit.
2. Register two content hosts with it.
3. Update the system limit of the created key to "1".

Actual results:
The key was updated and the system limit was set to '1', but after navigating away from that page and coming back, the limit was set back to what it was originally.

Expected results:
The UI should raise a validation error.

Additional info:
Below is a similar bug related to the host collection limit: https://bugzilla.redhat.com/show_bug.cgi?id=1098418
Since this issue was entered in Red Hat Bugzilla, the release flag has been set to ? to ensure that it is properly evaluated for this release.
I'm not sure I agree with this. Adjusting the limit is one way to control usage of the key itself. For example, if I set the limit to 1, the key may only be used for one system at a time. The "at a time" is important: it means the key could be used repeatedly, as long as the first system unregisters first. One way the administrator could enforce the activation key would be to set the limit to one, then, once it had been used for its intended purpose, change the limit to zero. This would effectively prevent any further use of the key while still keeping it in place for record keeping or future use. I've switched this to an RFE to allow further discussion. Thank you for the report; it raises a very good question!
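The "at a time" semantics described above can be sketched as a minimal model in Ruby. This is purely illustrative: the class and method names are assumptions for the sketch, not Katello's actual API.

```ruby
# Hypothetical model of an activation key with an "at a time" usage limit.
# Names are illustrative only; this is not Katello's real implementation.
class ActivationKey
  LimitExceeded = Class.new(StandardError)

  attr_reader :limit, :consumers

  def initialize(limit:)
    @limit = limit        # nil would mean unlimited
    @consumers = []
  end

  # Registration succeeds only while current usage is below the limit.
  def register(host)
    if limit && consumers.size >= limit
      raise LimitExceeded, "usage limit (#{limit}) reached"
    end
    consumers << host
  end

  # Unregistering frees a slot, so the key can be reused later.
  def unregister(host)
    consumers.delete(host)
  end

  # Setting the limit to zero disables further use while keeping the key.
  attr_writer :limit
end
```

Under this model, a key with limit 1 can serve host A, then host B after A unregisters; setting the limit to 0 afterwards blocks all further registrations, matching the record-keeping workflow described above.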
Another case might be relevant to this RFE too: what happens if we set the activation key limit to '1' and then try to register more than one system via rhsm with the same key? With Sat6 beta snap4, rhsm allows registering more than one system with an activation key whose limit is set to '1', and the clients can consume content from the Sat6 server. I am not sure where this should be validated; perhaps while registering the system via rhsm?
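If the check were enforced server-side at registration time, it might look roughly like the guard below. This is a sketch under assumptions: the function name and its placement in the registration path are hypothetical, not the actual Katello/Candlepin code.

```ruby
# Hypothetical server-side guard a registration endpoint could call
# before attaching a new consumer to an activation key.
def enforce_key_limit!(current_usage, limit)
  return if limit.nil?  # nil limit means the key is unlimited
  if current_usage >= limit
    raise "Activation key usage limit (#{limit}) exhausted"
  end
end
```

With such a guard in the registration path, the second rhsm registration against a limit-1 key would be rejected instead of silently succeeding.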
@cfouant This may already be fixed; please check the code. Thanks!
Created redmine issue http://projects.theforeman.org/issues/6709 from this bug
Saw that the functionality worked in the UI, but hammer was not getting a validation error, so I fixed that controller.
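A controller-level validation of this kind might look roughly like the sketch below. The class and method names are hypothetical, not the actual Katello controller code; the error message follows the wording reported during verification.

```ruby
# Hypothetical validation: reject a new limit below the number of
# hosts already consuming the key. Not the actual Katello code.
class ActivationKeyUpdate
  attr_reader :errors

  def initialize(current_usage)
    @current_usage = current_usage
    @errors = []
  end

  # Returns true if the new limit is acceptable; nil means unlimited.
  def valid_limit?(new_limit)
    if new_limit && new_limit < @current_usage
      @errors << "content hosts cannot be lower than current usage count (#{@current_usage})"
      return false
    end
    true
  end
end
```

Performing this check in the controller means both the UI and hammer (which go through the same API) get the same validation error, rather than the UI silently discarding the update.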
Moving to POST since upstream bug http://projects.theforeman.org/issues/6709 has been closed.
-------------
Christine Fouant
Applied in changeset commit:katello|4a12b1a66ca99cba128e21099d7627e685e5d96f.
Created attachment 945575 [details]
Activation key

VERIFIED: *** This bug is verified upstream. The fix should eventually land in future downstream builds. ***

# rpm -qa | grep foreman
foreman-gce-1.7.0-0.develop.201410081938git1cf31c6.el7.noarch
ruby193-rubygem-foreman_discovery-1.4.0-0.1.rc4.el7.noarch
hp-bl420cgen8-01.rhts.eng.bos.redhat.com-foreman-proxy-1.0-1.noarch
foreman-compute-1.7.0-0.develop.201410081938git1cf31c6.el7.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el7.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3-2.201409091410git163c264.git.0.988ca80.el7.noarch
foreman-release-1.7.0-0.develop.201410071158git54141ab.el7.noarch
foreman-proxy-1.7.0-0.develop.201410081229git52f0bac.el7.noarch
hp-bl420cgen8-01.rhts.eng.bos.redhat.com-foreman-client-1.0-1.noarch
foreman-ovirt-1.7.0-0.develop.201410081938git1cf31c6.el7.noarch
ruby193-rubygem-foreman-tasks-0.6.9-1.el7.noarch
foreman-selinux-1.7.0-0.develop.201409301113git2f345de.el7.noarch
foreman-postgresql-1.7.0-0.develop.201410081938git1cf31c6.el7.noarch
foreman-vmware-1.7.0-0.develop.201410081938git1cf31c6.el7.noarch
ruby193-rubygem-foreman_bootdisk-4.0.0-1.el7.noarch
foreman-1.7.0-0.develop.201410081938git1cf31c6.el7.noarch
foreman-libvirt-1.7.0-0.develop.201410081938git1cf31c6.el7.noarch
rubygem-hammer_cli_foreman-0.1.3-1.201409191432gitc38f9c8.el7.noarch

Validation error shown: "content hosts cannot be lower than current usage count (2)". Screenshot attached.
This bug is slated to be released with Satellite 6.1.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2015:1592