Bug 1399631

Summary: Get well Program for Satellite 6, Puppet scalability issues
Product: Red Hat Satellite
Component: Docs Puppet Guide
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Version: Unspecified
Target Milestone: Unspecified
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2017-03-14 09:51:36 UTC
Regression: ---
Documentation: ---
Verified Versions:
oVirt Team: ---
Cloudforms Team: ---
Reporter: Sam Nelson <snelsond>
Assignee: Stephen Wadeley <swadeley>
QA Contact: Charles Wood <chwood>
Docs Contact:
CC: adahms, bbuckingham, dlobatog, ehelms, ktordeur, stbenjam
Doc Type: If docs needed, set a value
Environment:
Type: Bug
Mount Type: ---
CRM:
Category: ---
RHEL 7.3 requirements from Atomic Host:
Target Upstream Version:

Description Sam Nelson 2016-11-29 13:16:15 UTC
Description of problem:

Many of the cases we receive from large customers relate to Puppet scalability and usage.


Questions on how to scale Puppet:
- How many clients can it handle per Capsule/Satellite?
- How many Puppet modules can be used, and how large can they be?
  (Here we had to configure splay to prevent all clients from connecting simultaneously; see the example configuration below.)
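For reference, splay is set in the [agent] section of puppet.conf on the clients. The snippet below is a minimal sketch of that kind of tuning; the exact values used in the customer case are not recorded in this bug, so the durations shown are illustrative assumptions only.

    [agent]
    # Add a random delay before each agent run so that clients on the
    # same runinterval do not all contact the Capsule/Satellite at once.
    splay = true
    # Upper bound for the random delay (defaults to runinterval).
    splaylimit = 30m
    # How often the agent applies the catalog (Puppet default is 30m).
    runinterval = 30m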

We do not have much documentation covering this.


Actual results: Scalability issues, and a lack of documentation for Support to proceed with Puppet cases.


Expected results: Documentation, and answers to the Puppet scalability questions above.


Additional info:
This bug was created as part of the Get Well program driven by GSS Management.

Comment 1 Andrew Dahms 2017-02-03 04:24:35 UTC
Assigning to Stephen for review.

Comment 12 Stephen Wadeley 2017-03-14 09:51:36 UTC
Hello


These changes are now live on the customer portal.


Puppet Performance and Scalability on Satellite 6


https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/puppet_guide/chap-red_hat_satellite-puppet_guide-overview#sect-Red_Hat_Satellite-Puppet_Guide-Overview-Puppet_Performance_and_Scalability_on_Satellite_6


Thank you