Bug 1316750

Summary: Puppet 3.x environment limitations conflict with Content View Versioning feature Satellite 6.x
Product: Red Hat Satellite
Reporter: Ashfaqur Rahaman <arahaman>
Component: Provisioning
Assignee: satellite6-bugs <satellite6-bugs>
Status: CLOSED WONTFIX
QA Contact: Katello QA List <katello-qa-list>
Severity: medium
Priority: unspecified
Version: 6.1.6
CC: bkearney, rjerrido
Target Milestone: Unspecified
Keywords: Triaged
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2018-09-04 18:03:49 UTC
Type: Bug

Description Ashfaqur Rahaman 2016-03-11 02:08:21 UTC
Description of problem:
The wrong version of a Puppet module is selected in Satellite due to a limitation of Puppet environments.

Version-Release number of selected component (if applicable):
puppet 3.x
Satellite 6.x

How reproducible:
100%

Steps to Reproduce:
1. Create a puppet module "test_environment_lib" with
  a. a puppet master (parser) function in lib/puppet/parser/functions/get_environment_lib_test.rb that returns 2
  b. a manifest with a single class that defines a variable $test_var = get_environment_lib_test() and declares notify { "test result is ${test_var}": }

2. In Satellite, import this module into a repository and publish a Content View that contains the class. Promote the Content View to a LifeCycle Environment, use it to deploy a host, and confirm the notify message appears in the report.

3. Update the puppet module so that:
 a. the function takes an argument and returns a different value
 b. the manifest calls the function with the required argument

4. Upload the new module version to the repository, add it to the Content View, publish a new Content View version, and promote it to the very same LifeCycle Environment.

5. start a puppet run on the host
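Taken together, steps 1 and 3 amount to a manifest change along these lines (a hypothetical sketch: the class and function names merely follow the module name above, and the function body itself lives in a Ruby file under lib/puppet/parser/functions):

```puppet
# Content View version 1: the parser function takes no arguments
class test_environment_lib {
  $test_var = get_environment_lib_test()
  notify { "test result is ${test_var}": }
}

# Content View version 2: the same class, now passing the argument
# the updated function requires (the argument value is illustrative)
class test_environment_lib {
  $test_var = get_environment_lib_test(1)
  notify { "test result is ${test_var}": }
}
```

A catalog compiled against the version-2 manifest but served by a master that has the version-1 Ruby function loaded fails with an arity error, which is the failure seen in step 5.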

Actual results:
The puppet run fails with a "wrong number of arguments" error.

Expected results:
This is expected to work. Moreover, both versions of the module should work at the same time on multiple hosts in different LifeCycle Environments (even in the same LifeCycle Path) if Content View Versioning is to work as expected.

Additional info:

https://docs.puppetlabs.com/puppet/3.6/reference/environments_limitations.html
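The linked limitations page boils down to this: Ruby code such as parser functions is loaded once per puppet master process, not once per environment, so the first environment to compile wins. Below is a minimal plain-Ruby simulation of that first-load-wins caching (hypothetical; this is not Puppet code, and the function name simply mirrors the reproducer above):

```ruby
# v1 of the function, as shipped in Content View version 1 / environment "L":
V1 = lambda { 2 }               # takes no arguments, returns 2
# v2, as shipped in Content View version 2 / environment "M":
V2 = lambda { |arg| arg + 40 }  # now requires one argument

# A per-process cache keyed by function name; once a name is loaded,
# later definitions from other environments are silently ignored,
# mimicking Puppet 3's function autoloader behaviour.
CACHE = {}
def load_function(name, impl)
  CACHE[name] ||= impl  # first load wins
end

# Environment "L" is compiled first after the master starts:
load_function("get_environment_lib_test", V1)
# Environment "M" then tries to load its newer version -- it is ignored:
load_function("get_environment_lib_test", V2)

# A host in environment "L" works:
puts CACHE["get_environment_lib_test"].call      # => 2
# A host in environment "M" passes the argument its manifest expects and fails:
begin
  CACHE["get_environment_lib_test"].call(1)
rescue ArgumentError => e
  puts "compile fails: #{e.message}"  # wrong number of arguments
end
```

Restarting the process and compiling environment "M" first would invert the failure, which is why the reboot ordering described in comment 1 matters.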

Comment 1 Ashfaqur Rahaman 2016-03-11 02:10:46 UTC
Impact on Satellite : 

In Puppet this was a documented limitation of how environments work and not critical; for Satellite, however, it can be considered a bug.

Use case for Satellite : 

- Have one company, with a satellite setup with two Organisations say "O" and "N"
- Each organisation has its own LifeCycle Environment say "L" for organisation "O", and "M" for Organisation "N"
- Each organisation uses its own Location with a capsule, Organisation "O" is on capsule "C", and organisation "N" is on capsule "D"
- Each organisation has a repository configured as a puppet module source which contains two versions of the same module say versions "A" and "B" where "B" is a bugfix release of "A"
- Organisation "O" uses version "A" of the module in LifeCycle Environment "L" because they never hit the bug and did not try to update it, while Organisation "N" uses version "B" because, during the testing period of LifeCycle Environment "M", they hit the bug, which destroyed the configuration of their machines when triggered
- In the current setting, everything is working fine:
  ° Organisation "O" on capsule "C" and LifeCycle Environment "L" uses version "A" of the module
  ° Organisation "N" on capsule "D" and LifeCycle Environment "M" uses version "B" of the module

Now, for some reason, Organisation "O" needs to deploy a host using capsule "D" (in their LifeCycle Environment "L").
In that case something unexpected happens: due to the Puppet environment limitation, the puppet master will serve version "B" of the module. Organisation "O" on capsule "D" is therefore actually using version "B" while believing they use version "A" as they do on capsule "C". At this point everything still appears fine and they are unaware of this behaviour.
Everything works as expected for months, until capsule "D" needs to be rebooted (say for a kernel update). Then this happens:
 1. after capsule "D" reboots, the first machine to do a puppet run is one from Organisation "O" in LifeCycle Environment "L", which requests module version "A"; this works as expected, and version "A" is now PERMANENTLY loaded into capsule "D"
 2. next, a machine from Organisation "N" does a puppet run on capsule "D" in LifeCycle Environment "M" and requests module version "B"; however, as version "A" is already loaded, the bug is triggered and the configuration is destroyed

Customer wants to know : 

1. What is Red Hat's position on this?
2. Is there any workaround at the moment?

Comment 2 Bryan Kearney 2016-04-13 21:13:48 UTC
We have not triaged this yet. We will update the bug when it is triaged.

Comment 3 Bryan Kearney 2016-07-26 19:02:05 UTC
Moving 6.2 bugs out to sat-backlog.

Comment 6 Bryan Kearney 2018-09-04 18:03:49 UTC
Thank you for your interest in Satellite 6. We have evaluated this request, and we do not expect this to be implemented in the product in the foreseeable future. We are therefore closing this out as WONTFIX. If you have any concerns about this, please feel free to contact Rich Jerrido or Bryan Kearney. Thank you.