Bug 963227 - Got 2 (instance based subscription) sub pool for virtual client
Summary: Got 2 (instance based subscription) sub pool for virtual client
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Subscription Management
Version: Nightly
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: Unspecified
Assignee: Devan Goodwin
QA Contact: Tazim Kolhar
URL:
Whiteboard:
Duplicates: 964121
Depends On:
Blocks:
 
Reported: 2013-05-15 12:47 UTC by spandey
Modified: 2019-09-26 18:20 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-11 12:19:20 UTC
Target Upstream Version:
Embargoed:



Description spandey 2013-05-15 12:47:37 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:
always

Prerequisites:
RHEL 6.4 host with the following subscription-manager RPMs:
subscription-manager-1.8.7-1.git.40.c4425a0.el6.x86_64
subscription-manager-migration-1.8.7-1.git.40.c4425a0.el6.x86_64
subscription-manager-firstboot-1.8.7-1.git.40.c4425a0.el6.x86_64
subscription-manager-gui-1.8.7-1.git.40.c4425a0.el6.x86_64
subscription-manager-migration-data-1.12.2.6-1.git.0.171d4c3.el6.noarch

RHEL 5.10 guest with the following RPMs:
subscription-manager-firstboot-1.8.7-1.git.40.c4425a0.el5
subscription-manager-1.8.7-1.git.40.c4425a0.el5
subscription-manager-migration-1.8.7-1.git.40.c4425a0.el5
subscription-manager-gui-1.8.7-1.git.40.c4425a0.el5
subscription-manager-migration-data-1.11.3.1-1.git.1.78afd75.el5

Steps to reproduce:
1) Configure a RHEL 5.10 guest on a RHEL 6.4 host.
2) Register the host to Candlepin and subscribe to both instance-based subscription pools, around 22 quantity each (see the command sketch below).
3) Register the RHEL 5.10 guest to Candlepin and list the available subscriptions.
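A minimal command sketch of these steps; the server URL, credentials, and pool IDs are placeholders, and exact options may vary by subscription-manager version:

# On the RHEL 6.4 host: register to Candlepin and consume from both
# instance-based pools (pool IDs are placeholders)
subscription-manager register --serverurl=candlepin.example.com --username=admin --password=admin
subscription-manager subscribe --pool=INSTANCE_POOL_1 --quantity=22
subscription-manager subscribe --pool=INSTANCE_POOL_2 --quantity=22

# On the RHEL 5.10 guest: register the same way, then list what is offered
subscription-manager register --serverurl=candlepin.example.com --username=admin --password=admin
subscription-manager list --available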
 
Expected Result:
The available list should not display two virtual pools for the instance-based subscription.

Actual Result:
[root@dhcp201-162 ~]# subscription-manager list --avail 
+-------------------------------------------+
    Available Subscriptions
+-------------------------------------------+
Subscription Name: Awesome OS Instance Based (Standard Support)
SKU:               awesomeos-instancebased
Pool ID:           000000003ea779c1013ea807755e1392
Quantity:          1
Service Level:     Standard
Service Type:      L1-L3
Multi-Entitlement: Yes
Ends:              05/15/2014
System Type:       Virtual

Subscription Name: Awesome OS Instance Based (Standard Support)
SKU:               awesomeos-instancebased
Pool ID:           000000003ea779c1013ea80774bd137b
Quantity:          1
Service Level:     Standard
Service Type:      L1-L3
Multi-Entitlement: Yes
Ends:              05/15/2014
System Type:       Virtual

Comment 1 John Sefler 2013-05-16 12:39:42 UTC
I suspect that the cause for the generation of the second sub-pool occurred when the host system made two separate bind transactions, either on the same multi-entitlement physical instance-based pool or as a second bind on a second physical instance-based pool.

Is this what happened?

Comment 2 spandey 2013-05-16 12:54:10 UTC
Configured facts for 22 sockets and auto-subscribed, so it consumed quantity from both instance-based subscription pools.

Comment 3 spandey 2013-05-16 12:55:48 UTC
(In reply to comment #2)
> Configured facts for 22 sockets and auto-subscribed, so it consumed
> quantity from both instance-based subscription pools.

For the above comment, I am using a physical RHEL 6.4 machine.
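For reference, a minimal sketch of how a socket count like that can be forced, assuming the standard subscription-manager custom-facts mechanism (the file name is arbitrary; the value is illustrative):

# Override the detected socket count with a custom facts file
cat > /etc/rhsm/facts/sockets.facts <<'EOF'
{"cpu.cpu_socket(s)": "22"}
EOF
# Push the updated facts to the server
subscription-manager facts --update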

Comment 4 Devan Goodwin 2013-05-17 12:02:56 UTC
There's a lot to consider here.

If this was two separate entitlements from one pool, you would probably incorrectly get two sub-pools. There should be only one, but if we skip creating the bonus pool on the second bind and the first entitlement is later revoked, there will be no bonus pool despite the second entitlement still existing. That might be a reasonable side effect, though; the workaround is to remove the remaining entitlement and re-attach to get your sub-pool back.

If, however, as Sachin states in comment #2, these came from *two separate pools*, then I'm not even sure what should happen. If you're crossing subscriptions, should there be two sub-pools? I will take this question to PM.

Comment 5 Jesus M. Rodriguez 2013-05-17 15:45:46 UTC
*** Bug 964121 has been marked as a duplicate of this bug. ***

Comment 6 Devan Goodwin 2013-05-17 16:11:27 UTC
We need a new sub-pool implementation: one sub-pool per stack, which disappears when the last entitlement in the stack is gone. Moved to backlog; this will be a pretty substantial change.

Comment 8 RHEL Program Management 2013-05-17 16:58:56 UTC
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux release.  Product Management has
requested further review of this request by Red Hat Engineering, for
potential inclusion in a Red Hat Enterprise Linux release for currently
deployed products.  This request is not yet committed for inclusion in
a release.

Comment 9 Devan Goodwin 2013-08-01 14:46:31 UTC
Not a client bug; re-aligning to the Sat 6 + Candlepin and SAM 1.3 tracker, as we need this for the 1.3 release.

Comment 10 Bryan Kearney 2013-08-20 18:19:41 UTC
Moving to NEW to get it looked at.

Comment 11 Michael Stead 2013-08-21 17:08:12 UTC
This has been addressed as part of the One Sub Pool Per Stack implementation.

It is available in:
  candlepin-0.8.21-1

GitHub Pull Request: https://github.com/candlepin/candlepin/pull/345

Commit: feca66fcecadd5248fc6ab06411b7927cec01f2d

Comment 12 Bryan Kearney 2013-08-28 17:12:01 UTC
Latest build (Snap4) should contain these fixes. Please test on Snap4.

Comment 13 Mike McCune 2013-10-17 20:59:00 UTC
Moving this to be tested during MDP3; not critical for the MDP2 success story.

Comment 16 Tazim Kolhar 2014-05-22 09:36:00 UTC
Please provide verification steps.

Comment 17 Devan Goodwin 2014-06-27 13:57:38 UTC
Quite tricky to describe this one but here we go.

Essentially you need to locate at least one pool with the product attributes virt_limit and stacking_id. Ideally you would find two separate pools with the same stacking_id, but one will do as long as it has some quantity available (more than 1).

Here is how I look up pools and their attributes:

Look up my production owner ID:

curl -k -u rhn-engineering-dgoodwin:password https://subscription.rhn.redhat.com/subscription/users/rhn-engineering-dgoodwin/owners

Now look up all pools for my owner ID:

curl -k -u rhn-engineering-dgoodwin:password https://subscription.rhn.redhat.com/subscription/owners/5894300/pools | python -mjson.tool | less

In this list, look for pools with the productAttributes virt_limit and stacking_id (a filtering sketch follows below). Once found, these are the pools you need to add to your distributor/subscription-management application and export a manifest for. Export a quantity of 10 or so to be safe.
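A hedged sketch of that filtering step, assuming Candlepin's usual pool JSON shape (a list of pools, each with a "productAttributes" array of name/value pairs); the credentials and owner ID are the same placeholders as above:

# Print id, stacking_id, and virt_limit for every matching pool
curl -sk -u rhn-engineering-dgoodwin:password https://subscription.rhn.redhat.com/subscription/owners/5894300/pools | python -c '
import json, sys
for pool in json.load(sys.stdin):
    attrs = dict((a["name"], a["value"]) for a in pool.get("productAttributes", []))
    if "virt_limit" in attrs and "stacking_id" in attrs:
        print("%s stacking_id=%s virt_limit=%s" % (pool["id"], attrs["stacking_id"], attrs["virt_limit"]))
'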

Import the manifest in Satellite. Now we need to register a virt host and assign it *two separate entitlements*. If you exported just one pool, give the host two separate entitlements from that pool, e.g. quantity 5 and quantity 5.
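A sketch of that host-side setup; the hammer invocation and all IDs, org names, and credentials are placeholder assumptions, not commands verified in this bug:

# Import the exported manifest into the Satellite organization
hammer subscription upload --organization-id 1 --file /root/manifest.zip

# On the virt host: register, then attach two separate entitlements
# from the same instance-based pool (pool ID is a placeholder)
subscription-manager register --org=Default_Organization --username=admin --password=changeme
subscription-manager subscribe --pool=INSTANCE_POOL_1 --quantity=5
subscription-manager subscribe --pool=INSTANCE_POOL_1 --quantity=5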

Now you need a guest running on that host. There are ways to fake this if needed; just let us know.
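One way to fake it, assuming the standard subscription-manager custom-facts mechanism (the UUID is illustrative; note the host's consumer must also report this UUID among its guest IDs, normally via virt-who, for the mapping to exist):

# On the machine acting as the guest: declare guest facts
cat > /etc/rhsm/facts/virt_override.facts <<'EOF'
{"virt.is_guest": "true", "virt.uuid": "11111111-2222-3333-4444-555555555555"}
EOF
subscription-manager facts --update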

Register the guest, make sure the virt host/guest mapping is established, and list available pools for the guest. You should see *just one* virtual sub-pool, even though we gave the host two separate entitlements.
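A minimal sketch of that final check (org and credentials are placeholders):

# Register the guest, then count virtual pool entries in the available list
subscription-manager register --org=Default_Organization --username=admin --password=changeme
subscription-manager list --available | grep -c 'System Type:.*Virtual'   # expect: 1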

Comment 18 Corey Welton 2014-06-27 14:05:28 UTC
Moving to 6.0.4 for verification

Comment 19 Tazim Kolhar 2014-09-02 06:12:40 UTC
VERIFIED

A single virtual sub-pool is seen even when two separate entitlements are assigned to the host.

Comment 20 Bryan Kearney 2014-09-11 12:19:20 UTC
This was delivered with Satellite 6.0, which was released on 10 September 2014.

