Bug 1316750 - Puppet 3.x environment limitations conflict with Content View Versioning feature Satellite 6.x
Status: NEW
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Provisioning
Hardware: Unspecified
OS: Unspecified
Version: unspecified
Severity: medium
Target Milestone: Unspecified
Target Release: --
Assigned To: satellite6-bugs
QA Contact: Katello QA List
Keywords: Triaged
Depends On:
Reported: 2016-03-10 21:08 EST by Ashfaqur Rahaman
Modified: 2018-03-16 15:58 EDT
CC List: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: Bug
Regression: ---

Attachments: None
Description Ashfaqur Rahaman 2016-03-10 21:08:21 EST
Description of problem:
The wrong Puppet module version is served by Satellite due to a limitation of Puppet 3.x environments.

Version-Release number of selected component (if applicable):
puppet 3.x
Satellite 6.x

How reproducible:

Steps to Reproduce:
1. Create a Puppet module "test_environment_lib" with:
  a. a Puppet master (parser) function in lib/puppet/parser/functions/get_environment_lib_test.rb that returns 2
  b. a manifest with a single class that sets $test_var = get_environment_lib_test() and declares notify { "test result is ${test_var}": }

2. In Satellite, import this module into a repository and publish a Content View that contains this class. Promote the Content View to a Lifecycle Environment, use it to deploy a host, and verify that the notify message appears in the report.

3. Update the Puppet module so that:
 a. the function takes an argument and returns a different value
 b. the manifest calls the function with the correct argument

4. Upload the updated module to the repository, update the Content View to use the new version, publish the Content View, and promote the new Content View version into the very same Lifecycle Environment.

5. start a puppet run on the host
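The failure in the steps above can be sketched with simplified plain-Ruby stand-ins for the two versions of the parser function (the names and bodies below are illustrative, not the actual module code; a real Puppet 3.x parser function would be registered with Puppet::Parser::Functions.newfunction in lib/puppet/parser/functions/):

```ruby
# Version "A" of the module: the function takes no arguments and returns 2.
def get_environment_lib_test_v1
  2
end

# Version "B" of the module: the function now requires an argument
# (the argument name is a made-up example).
def get_environment_lib_test_v2(multiplier)
  2 * multiplier
end

# A manifest written against version "A" effectively calls the function
# with no arguments. If the master process has already loaded version
# "B"'s Ruby code, that call raises ArgumentError -- the "wrong number
# of arguments" failure seen in step 5.
begin
  get_environment_lib_test_v2 # old manifest call, new function body
rescue ArgumentError => e
  puts "puppet run fails: #{e.message}"
end
```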

Actual results:
The Puppet run fails with a "wrong number of arguments" error.

Expected results:
This is expected to work. Moreover, both versions of the module should work at the same time on multiple hosts in different Lifecycle Environments (even in the same Lifecycle Path) if Content View versioning is to work as expected.

Additional info:

Comment 1 Ashfaqur Rahaman 2016-03-10 21:10:46 EST
Impact on Satellite : 

In Puppet this is a known limitation of how environments work and is not considered critical; for Satellite, however, it can be considered a bug.

Use case for Satellite : 

- One company, with a Satellite setup containing two Organisations, say "O" and "N"
- Each Organisation has its own Lifecycle Environment, say "L" for Organisation "O" and "M" for Organisation "N"
- Each Organisation uses its own Location with a capsule: Organisation "O" is on capsule "C" and Organisation "N" is on capsule "D"
- Each Organisation has a repository configured as a Puppet module source, which contains two versions of the same module, say "A" and "B", where "B" is a bugfix release of "A"
- Organisation "O" uses version "A" of the module in Lifecycle Environment "L" because they never hit the bug and did not try to update, while Organisation "N" uses version "B" because, during the testing period of Lifecycle Environment "M", they hit the bug, which destroyed the configuration of their machines when triggered
- In the current setting, everything is working fine:
  ° Organisation "O" on capsule "C" and LifeCycle Environment "L" uses version "A" of the module
  ° Organisation "N" on capsule "D" and LifeCycle Environment "M" uses version "B" of the module

Now, for some reason, Organisation "O" needs to deploy a host using capsule "D" (in their Lifecycle Environment "L").
In that case something odd happens: the Puppet environment will use version "B" of the module (due to the Puppet environment limitation). This means that Organisation "O" on capsule "D" is using version "B" of the module while believing they use version "A", as they do on capsule "C". At that point everything is still fine and they are unaware of this behaviour.
Everything works as expected for months, until for some reason capsule "D" needs to be rebooted (say, for a kernel update). Then this happens:
 1. After capsule "D" reboots, the first machine to do a Puppet run is one from Organisation "O" in Lifecycle Environment "L", which requests module version "A". This works as expected, and version "A" is now PERMANENTLY loaded into capsule "D".
 2. Next, a machine from Organisation "N" does a Puppet run on capsule "D" in Lifecycle Environment "M" and requests module version "B". However, as version "A" is already loaded, the bug is triggered and the configuration is destroyed.
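The first-load-wins behaviour at the core of this scenario can be modelled with a minimal Ruby sketch (a deliberate simplification, on the assumption that a Puppet 3.x master process loads plugin code from lib/ once and shares it across all environments; the names here are illustrative):

```ruby
# One cache per master process, shared by every Puppet environment it serves.
FUNCTION_CACHE = {}

def load_function(name, environment, body)
  # A function already defined in the master's Ruby process is not
  # reloaded when another environment requests it: first load wins.
  FUNCTION_CACHE[name] ||= { environment: environment, body: body }
end

# After capsule "D" reboots, Lifecycle Environment "L" runs first and
# loads module version "A" of the function...
load_function("get_environment_lib_test", "L", "version A")

# ...so when Lifecycle Environment "M" later asks for version "B",
# the cached version "A" is served instead.
load_function("get_environment_lib_test", "M", "version B")

puts FUNCTION_CACHE["get_environment_lib_test"][:body] # still "version A"
```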

Customer wants to know : 

1. What are Red Hat's thoughts on this?
2. Is there any workaround at the moment?
Comment 2 Bryan Kearney 2016-04-13 17:13:48 EDT
We have not triaged this yet. We will update the bug when it is triaged.
Comment 3 Bryan Kearney 2016-07-26 15:02:05 EDT
Moving 6.2 bugs out to sat-backlog.
