Bug 819924

Summary: RFE - support disabling RHUI repos during image customization (push)
Product: [Retired] CloudForms Cloud Engine
Component: imagefactory
Version: 1.0.0
Reporter: James Laska <jlaska>
Assignee: Ian McLeod <imcleod>
QA Contact: Rehana <aeolus-qa-list>
CC: athomas, hbrock, srevivo, whayutin
Status: CLOSED EOL
Severity: medium
Priority: medium
Target Milestone: rc
Keywords: FutureFeature, Triaged
Hardware: Unspecified
OS: Unspecified
Doc Type: Enhancement
Type: Bug
Last Closed: 2020-03-27 18:40:29 UTC

Description James Laska 2012-05-08 16:01:13 UTC
Description of problem:

Currently, when deploying to Amazon EC2, the RHUI repos are enabled during image customization.  For customers who want to use *only* System Engine hosted content, having the RHUI repos enabled during push means dependencies can be satisfied from a repository not under their control.

It's possible to disable the RHUI repos and install packages during image deployment instead, but that defeats the purpose of the image customization step (push).  Having RHUI enabled during push but disabled during deployment can produce broken package dependencies.

Version-Release number of selected component (if applicable):
 * imagefactory-1.0.0rc11-1.el6.noarch


Steps to Reproduce:
1. Build and push an image to EC2 that includes <package name='katello-all'/>
2. Deploy an instance and set up the magic needed so that the EC2 image consumes private Katello content
3. Log in to the deployment and disable the RHUI repos (see the sketch after these steps)
4. Run 'katello-configure'
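
A minimal sketch of step 3, assuming the RHUI definitions live under /etc/yum.repos.d/ in files with "rhui" in the name (the glob is an assumption; file names vary by RHEL release and AMI):

  # Disable every RHUI repo definition on the deployed instance
  for repo in /etc/yum.repos.d/*rhui*.repo; do
      sed -i 's/^enabled *= *1/enabled=0/' "$repo"
  done
  # Verify that only the private System Engine repos remain enabled
  yum repolist enabled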
  
Actual results:

katello-configure fails because some dependencies were resolved using the RHUI repos at build time.  Since we then disabled the RHUI repos, we've introduced a broken dependency set.

Expected results:

 1) Either never disable the RHUI repos, or
 2) Provide a facility for imagefactory to disable the RHUI repos during push

Additional info:

Comment 1 wes hayutin 2012-05-08 16:04:57 UTC
weighing in here..
IMHO customers are going to have different content sets and versions in their own Katello servers.  If the RPM versions conflict with what's in the EC2 RHUI, I believe we will hear customers screaming.

Comment 2 jrd 2012-05-08 17:07:07 UTC
This is making my brain hurt.  I think I understand the desired behaviour, but I don't off the top of my head see how to make this work in the general case.  I worry about what happens when you've got "conflicting" packages available at run time vs build time.

Presumably we will need to pull together some folks from the Katello team and the IF team and hash out what we want the end-to-end behaviour to be.  I'm hoping it's not as scary as it looks right now.

Comment 3 James Laska 2012-05-08 17:40:19 UTC
(In reply to comment #2)
> I worry about what happens when you've got "conflicting" packages available at
> run time vs build time.

You got it, that's exactly the problem I hit.  The repository sets at build time and deployment time are different, and that resulted in a yum dependency problem.

Granted, I'm the one adjusting the deploy-time repository set by disabling the RHUI repos.  I can easily not do that.  However, it raises the question of whether a customer will want to build and deploy EC2 images using *ONLY* private System Engine content.  As currently designed, RHUI will be used to satisfy dependencies during image customization.

Comment 4 Hugh Brock 2012-05-08 19:26:28 UTC
Talked to Wes about this for a while.

Basically the model we're stuck with imposes a hard one-or-the-other situation w/r/t RHUI and System Engine. Customers simply aren't going to be able to use both unless they really know what they're doing.

The right way to deal with this in Factory for snapshot builds is to disable the existing repos in the JEOS by default before installing any new software.  This should ensure that we never get into a conflicting-package situation, at least as long as the user is smart enough not to use a JEOS that is newer than the base repo they have set up in System Engine.  (This seems like a minimal requirement to me.)

If a user *really wants* to pull content from RHUI or CDN, then they should have to turn that on explicitly either in the template or in Factory config (or wherever it is most appropriate). They will then be reasonably prepared to deal with the consequences.

Note, I have no idea how difficult it will be to make this work in Factory. It seems like it ought to be a fairly minor patch that would be a candidate for z-stream -- somebody correct me if I'm wrong...

Comment 5 Ian McLeod 2012-06-04 19:36:02 UTC
Hugh,

I believe you are correct in thinking this is a relatively minor change.  Disabling RHUI amounts to moving, deleting, or editing a couple of repo files prior to customization.  We already have hooks in place to run some RHEL-specific shell commands before image customization.  We could simply add the required fixes there.
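
A minimal sketch of what such a hook could do, assuming the RHUI definitions match /etc/yum.repos.d/*rhui*.repo (the glob and the holding directory are illustrative assumptions, not the actual hook):

  # Before customization: move the RHUI repo files out of yum's view
  mkdir -p /root/rhui-repos.disabled
  mv /etc/yum.repos.d/*rhui*.repo /root/rhui-repos.disabled/ 2>/dev/null || true

  # ... package installation / image customization runs here ...

  # If RHUI access is explicitly requested, the files could be restored:
  # mv /root/rhui-repos.disabled/*.repo /etc/yum.repos.d/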

I also tend to agree that the default behaviour should be to disable the RHUI package access and only re-enable it via an explicit flag, either within the TDL or globally for the factory.  (Putting the flag in the TDL violates our stated goal of making TDL cloud-agnostic, so we should discuss further.)

Doing this will bring the post-JEOS customization environments for both EC2 snapshots and local upload builds into parity.  At the moment an upload build does not have access to the non-JEOS package set unless the user has created a repo containing these packages and included it explicitly in the TDL.
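
For reference, this is roughly what that explicit repo in a TDL looks like today; a hedged sketch in which the template name, repository name, and URLs are placeholders:

  <template>
    <name>katello-image</name>
    <os>
      <name>RHEL-6</name>
      <version>2</version>
      <arch>x86_64</arch>
      <install type='url'>
        <url>http://example.com/rhel6/os/</url>
      </install>
    </os>
    <repositories>
      <repository name='private-systemengine'>
        <url>http://systemengine.example.com/custom/repo/</url>
      </repository>
    </repositories>
    <packages>
      <package name='katello-all'/>
    </packages>
  </template>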

As long as we are discussing the differences between snapshot and upload builds on EC2, it would also be nice to make the starting-point AMIs themselves more closely match the JEOS images that are created by Oz when doing upload builds.  The AMIs we currently use, the official RHEL hourlies, have a somewhat larger package set than the bare-bones JEOS image created in Anaconda via Oz.

However, this is a separate issue.