Bug 1121207

Summary: capsule-installer option dispute / Could not add the lifecycle environment
Product: Red Hat Satellite
Component: Foreman Proxy
Version: 6.0.3
Reporter: paul <paul.vanallsburg>
Assignee: Justin Sherrill <jsherril>
QA Contact: Corey Welton <cwelton>
CC: bbuckingham, cwelton, jmontleo, sthirugn, xdmoon
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Keywords: Triaged
Target Milestone: Unspecified
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
URL: http://projects.theforeman.org/issues/7175
Doc Type: Bug Fix
Build Name: 14370, Installation Guide-6.0 Beta-1
Build Date: 30-06-2014 17:42:29
Topic ID: 24318-674103 [Latest]
Type: Bug
Last Closed: 2014-09-11 12:22:54 UTC

Description paul 2014-07-18 16:02:48 UTC
Title: Adding Lifecycle Environments to a Red Hat Satellite Capsule Server

Describe the issue:
The capsule-certs-generate output says to use --register-in-foreman "true", but Installation Guide section 5.3.2.b.2 says --register-in-foreman "false"?

Suggestions for improvement:
 I don't know what's correct.
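
(For reference, one way to check what the installer itself defaults to is its own help text; just a sanity check, and it assumes the option is described in the help output:)

  capsule-installer --help | grep -A1 register-in-foreman    # show the --register-in-foreman option and, if documented, its default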

Additional information:
 In the next step I fail with:
 Could not add the lifecycle environment to the capsule:
  Task d2feb860-bc85-43de-9ef4-7d114d01809a: RuntimeError: Could not find node distributor for repository



Notes:
on my capsule server:

subscription-manager unregister
subscription-manager clean
subscription-manager register --org="priority_health" --environment="Library" --release=6.5  admin / changeme

subscription-manager subscribe --pool=8ab2829946f7a4e10146f83f150700f3   (rhel lic)

subscription-manager subscribe --pool=8ab2829946f7a4e10146f83875150052   (capsule6 beta)

subscription-manager repos --disable "*"
subscription-manager repos --enable rhel-6-server-rpms 
subscription-manager repos --enable rhel-server-rhscl-6-beta-rpms
subscription-manager repos --enable rhel-server-6-satellite-capsule-6-beta-rpms
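
(A quick sanity check that only the intended repos ended up enabled; --list-enabled assumes a subscription-manager version that supports it, otherwise plain --list works:)

subscription-manager repos --list-enabled    # should show only the three repos enabled above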


yum info  katello-installer
yum install  katello-installer
Complete!
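
(Quick confirmation that the package actually landed, using a standard rpm query:)

rpm -q katello-installer    # prints the installed package version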


Generate a Satellite Capsule Server certificate on the Satellite Server!!!

On the Satellite6 server:
[root@satellite6 ~]#

capsule-certs-generate --capsule-fqdn satellite6-cap2.internal.priority-health.com --certs-tar  ~/satellite6-cap2.internal.priority-health.com-certs.tar

Installing             Done                                               [100%] [............]
  Success!

  To finish the installation, follow these steps:

  1. Ensure that the capsule-installer is available on the system.
     The capsule-installer comes from the katello-installer package and
     should be acquired through the means that are appropriate to your deployment.
  2. Copy /root/satellite6-cap2.internal.priority-health.com-certs.tar to the system satellite6-cap2.internal.priority-health.com 
  3. Run the following commands on the capsule (possibly with the customized
     parameters, see capsule-installer --help and
     documentation for more info on setting up additional services):

  rpm -Uvh http://satellite6.internal.priority-health.com/pub/katello-ca-consumer-latest.noarch.rpm
  subscription-manager register --org "ACME_Corporation"

capsule-installer --parent-fqdn          "satellite6.internal.priority-health.com"\
                    --register-in-foreman  "true"\
                    --foreman-oauth-key    "ktGSVhLrafgzmFtYBmNQqeCc5tpAkJ9L"\
                    --foreman-oauth-secret "RGaECf24jsXYeGDVHnVZNFCJZW8C5LKq"\
                    --pulp-oauth-secret    "rMYr2gqpD5KEe5qJ9uT4gmMYYfEKTwvY"\
                    --certs-tar            "/root/satellite6-cap2.internal.priority-health.com-certs.tar"\
                    --puppet               "true"\
                    --puppetca             "true"\
                    --pulp                 "true"
  The full log is at /var/log/katello-installer/capsule-certs-generate.log
[root@satellite6 ~]# 

Copy tar file to capsule server:
scp satellite6* root@satellite6-cap2:/root
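
(To make sure the tar survived the copy intact, the same checksum can be run on both hosts and compared; plain coreutils, nothing Satellite-specific:)

sha256sum /root/satellite6-cap2.internal.priority-health.com-certs.tar    # on satellite6
sha256sum /root/satellite6-cap2.internal.priority-health.com-certs.tar    # on satellite6-cap2; output should match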


Now on Satellite6-cap2 do:

The Install manual says to do the following (note the error in the subscription-manager cmd):

# rpm -Uvh http://sat6host.example.redhat.com/pub/katello-ca-consumer-latest.noarch.rpm
# subscription-manager register --org register --org "ACME_Corporation" --env [environment]/[content_view_name]  
                                --------------<drop this>
HOWEVER, I have already completed this earlier in order to get repo access from the satellite server & katello-installer.
From history:
   19  yum -y --nogpgcheck install http://satellite6.internal.priority-health.com/pub/katello-ca-consumer-satellite6.internal.priority-health.com-1.0-1.noarch.rpm
   20  subscription-manager register --org="priority_health" --environment="Library" --release=6.5
So I'll just continue... I want to do option ii, listed below, from the Install Guide:

ii. Option 2 - Satellite Capsule Server as a Content Node:


capsule-installer   --parent-fqdn          "satellite6.internal.priority-health.com"\
                    --register-in-foreman  "false"\
                    --pulp-oauth-secret    "rMYr2gqpD5KEe5qJ9uT4gmMYYfEKTwvY"\
                    --certs-tar            "/root/satellite6-cap2.internal.priority-health.com-certs.tar"\
                    --puppet               "true"\
                    --puppetca             "true"\
                    --pulp                 "true"

Installing             Done                                               [100%] [...........................................]
  Success!
  * Capsule is running at https://satellite6-cap2.internal.priority-health.com:9090
  The full log is at /var/log/katello-installer/capsule-installer.log
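
(From the Satellite server, a quick way to confirm the capsule is actually answering on 9090; this assumes the standard smart proxy /features endpoint and may need the Satellite's client certificates depending on the proxy's SSL settings:)

curl -k https://satellite6-cap2.internal.priority-health.com:9090/features    # should return a JSON list of enabled proxy features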

Now, when I continue with 5.4. Adding Lifecycle Environments, 

in Satellite6 do:


[root@satellite6 ~]# hammer capsule list
---|-----------------------------------------|-----------------------------------------------------
ID | NAME                                    | URL                                                 
---|-----------------------------------------|-----------------------------------------------------
1  | satellite6.internal.priority-health.com | https://satellite6.internal.priority-health.com:9090
---|-----------------------------------------|-----------------------------------------------------
[root@satellite6 ~]#


My capsule is missing. 
So I looked back at the output from capsule-certs-generate, and it specified this form of the command, which
I reran on the capsule server:

  capsule-installer --parent-fqdn          "satellite6.internal.priority-health.com"\
                    --register-in-foreman  "true"\
                    --foreman-oauth-key    "ktGSVhLrafgzmFtYBmNQqeCc5tpAkJ9L"\
                    --foreman-oauth-secret "RGaECf24jsXYeGDVHnVZNFCJZW8C5LKq"\
                    --pulp-oauth-secret    "rMYr2gqpD5KEe5qJ9uT4gmMYYfEKTwvY"\
                    --certs-tar            "/root/satellite6-cap2.internal.priority-health.com-certs.tar"\
                    --puppet               "true"\
                    --puppetca             "true"\
                    --pulp                 "true"

Again, it completed successfully. 
Now I see

[root@satellite6 ~]# hammer capsule list
---|----------------------------------------------|----------------------------------------------------------
ID | NAME                                         | URL                                                      
---|----------------------------------------------|----------------------------------------------------------
2  | satellite6-cap2.internal.priority-health.com | https://satellite6-cap2.internal.priority-health.com:9090
1  | satellite6.internal.priority-health.com      | https://satellite6.internal.priority-health.com:9090     
---|----------------------------------------------|----------------------------------------------------------
[root@satellite6 ~]# hammer capsule content available-lifecycle-environments --id 1                                    
No data.
[root@satellite6 ~]# hammer capsule content available-lifecycle-environments --id 2
---|---------|-----------------
ID | NAME    | ORGANIZATION    
---|---------|-----------------
4  | dev     | priority health 
3  | Library | spectrum health 
5  | test    | priority health 
1  | Library | ACME_Corporation
6  | prod    | priority health 
2  | Library | priority health 
---|---------|-----------------
[root@satellite6 ~]#


I don't know why --id 1 has No data. That is the capsule running on the primary satellite6 server.
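
(For what it's worth, hammer capsule info shows which features each capsule registered with, which might explain the empty list; the exact output depends on the hammer version:)

hammer capsule info --id 1    # check the Features line for the built-in capsule
hammer capsule info --id 2    # a content capsule should list a Pulp/Pulp Node feature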

[root@satellite6 ~]# hammer capsule content add-lifecycle-environment --id 2 --environment-id 1
Lifecycle environment successfully added to the capsule
[root@satellite6 ~]# hammer capsule content add-lifecycle-environment --id 2 --environment-id 2
Could not add the lifecycle environment to the capsule:
  Task d2feb860-bc85-43de-9ef4-7d114d01809a: RuntimeError: Could not find node distributor for repository priority_health-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_ISOs_x86_64_6_5; RuntimeError: Could not find node distributor for repository priority_health-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_ISOs_x86_64_6Server; RuntimeError: Could not find node distributor for repository priority_health-Red_Hat_Satellite_6_Beta-Red_Hat_Satellite_6_0_Beta_for_RHEL_6_Server_ISOs_x86_64; RuntimeError: Could not find node distributor for repository priority_health-Red_Hat_Satellite_Capsule_6_Beta-Red_Hat_Satellite_Capsule_6_0_Beta_for_RHEL_6_Server_ISOs_x86_64
[root@satellite6 ~]# hammer capsule content add-lifecycle-environment --id 2 --environment-id 3
Lifecycle environment successfully added to the capsule
[root@satellite6 ~]# hammer capsule content add-lifecycle-environment --id 2 --environment-id 4
Lifecycle environment successfully added to the capsule
[root@satellite6 ~]# hammer capsule content add-lifecycle-environment --id 2 --environment-id 5
Lifecycle environment successfully added to the capsule
[root@satellite6 ~]# hammer capsule content add-lifecycle-environment --id 2 --environment-id 6
Lifecycle environment successfully added to the capsule
[root@satellite6 ~]# 
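
(Since the same add command gets run once per environment, a small shell loop saves the retyping; just a sketch, with the IDs taken from the listing above and environment 2 left out because it is the one that errors:)

for env_id in 1 3 4 5 6; do
    hammer capsule content add-lifecycle-environment --id 2 --environment-id "$env_id"
done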


Thanks, 
Paul

Comment 1 RHEL Program Management 2014-07-18 16:13:45 UTC
Since this issue was entered in Red Hat Bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

Comment 3 paul 2014-07-21 19:24:10 UTC
Just an update...

I came back to the servers today and got new results: I now see data reported for id=1, as follows:

[root@satellite6 ~]# hammer capsule content available-lifecycle-environments --id 1
---|---------|-----------------
ID | NAME    | ORGANIZATION    
---|---------|-----------------
4  | dev     | priority health 
3  | Library | spectrum health 
5  | test    | priority health 
1  | Library | ACME_Corporation
6  | prod    | priority health 
2  | Library | priority health 
---|---------|-----------------
[root@satellite6 ~]# 
[root@satellite6 ~]# hammer capsule content available-lifecycle-environments --id 2
---|---------|-----------------
ID | NAME    | ORGANIZATION    
---|---------|-----------------
4  | dev     | priority health 
3  | Library | spectrum health 
5  | test    | priority health 
1  | Library | ACME_Corporation
6  | prod    | priority health 
2  | Library | priority health 
---|---------|-----------------
[root@satellite6 ~]# 


So I figured I would rerun the failed "add the lifecycle environment" cmd:

[root@satellite6 ~]# hammer capsule content add-lifecycle-environment --id 2 --environment-id 2
Could not add the lifecycle environment to the capsule:
  Validation failed: Lifecycle environment is already attached to the capsule

Might as well see if I can sync - 

[root@satellite6 ~]# hammer capsule content synchronize --id 2
[......................................................                                                       ] [50%]


Sure enough, my cap2 server is furiously collecting data... 

/dev/mapper/vg_pulp-lv_pulp             99G   26G   69G  27% /var/lib/pulp
[root@satellite6-cap2 ~]#

compared to the source on the satellite6 server:

/dev/mapper/vg_pulp-lv_pulp            197G   57G  131G  31% /var/lib/pulp
[root@satellite6 ~]#

It should complete within the half-hour.  
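
(While the sync runs, disk usage under /var/lib/pulp on the capsule is a rough progress indicator; the refresh interval here is arbitrary:)

watch -n 60 df -h /var/lib/pulp    # run on satellite6-cap2; usage should creep toward the ~57G used on the satellite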



Maybe there is a timing issue? Did I try to display the content before it got registered?

Comment 4 Athene Chan 2014-07-22 01:09:40 UTC
Hi Paul!

Thanks for catching that in the docs; I will look into it and verify the issue. On the other hand, I am not familiar enough with the technical details to adequately answer your questions about Satellite itself.

So, what I have done is:
1. Separated the documentation issue into its own bug: https://bugzilla.redhat.com/show_bug.cgi?id=1121814

2. Changed the component of this bug so that the developers can look it over and assist with your questions.

Cheers,
Athene

Comment 6 Justin Sherrill 2014-08-20 13:53:00 UTC
Created redmine issue http://projects.theforeman.org/issues/7175 from this bug

Comment 7 Bryan Kearney 2014-08-22 16:03:13 UTC
Moving to POST since upstream bug http://projects.theforeman.org/issues/7175 has been closed
-------------
Justin Sherrill
Applied in changeset commit:katello|fbc828c7fe7a424d9d445bbb34e632861f2f4338.

Comment 10 Corey Welton 2014-09-02 15:55:26 UTC
This seems to be ok now in Satellite-6.0.4-RHEL-7-20140829.0

Comment 11 Bryan Kearney 2014-09-11 12:22:54 UTC
This was delivered with Satellite 6.0 which was released on 10 September 2014.