Bug 2087067

Summary: Re-enabling the Puppet plug-in fails when it was previously disabled with the -f option
Product: Red Hat Satellite
Reporter: Vladimír Sedmík <vsedmik>
Component: Puppet
Assignee: Sayan Das <saydas>
Status: NEW
QA Contact: Satellite QE Team <sat-qe-bz-list>
Severity: medium
Docs Contact: Zuzana Lena Ansorgova <zuansorg>
Priority: medium
Version: 6.11.0
CC: ahumbe, bangelic, dhjoshi, ehelms, gpathan, gsigrisi, ldelouw, lstejska, nalfassi, rlavi, saydas, zuansorg
Target Milestone: Unspecified
Keywords: Reopened, Triaged, WorkAround
Target Release: Unused
Hardware: x86_64
OS: Linux
Fixed In Version:
Doc Type: Known Issue
Doc Text:
.Disabled Puppet with all data removed cannot be re-enabled
If the Puppet plug-in was disabled with the `-f, --remove-all-data` argument and you attempt to enable it again, the `satellite-maintain` command fails.
Last Closed: 2023-11-21 13:07:15 UTC
Type: Bug

Description Vladimír Sedmík 2022-05-17 08:55:52 UTC
Description of problem:
Re-enabling the Puppet plug-in fails when it was previously disabled with the -f option. When the plug-in is disabled without this option, re-enabling works without any issues.

Version-Release number of selected component (if applicable):
6.11.0 snap 20

How reproducible:

Steps to Reproduce:

1. Start with a fresh 6.11 Satellite.

2. Enable the Puppet plug-in:
# satellite-installer --enable-foreman-plugin-puppet \
--enable-foreman-cli-puppet \
--foreman-proxy-puppet true \
--foreman-proxy-puppetca true \
--foreman-proxy-content-puppet true \
--enable-puppet \
--puppet-server true \
--puppet-server-foreman-ssl-ca /etc/pki/katello/puppet/puppet_client_ca.crt \
--puppet-server-foreman-ssl-cert /etc/pki/katello/puppet/puppet_client.crt \
--puppet-server-foreman-ssl-key /etc/pki/katello/puppet/puppet_client.key

3. Disable the Puppet plug-in with the -f (--remove-all-data) option:
# foreman-maintain plugin purge-puppet -f

4. Try to enable the Puppet plug-in again, for example by re-running the satellite-installer command from step 2.

Actual results:
2022-05-17 04:45:40 [NOTICE] [configure] 1000 configuration steps out of 2093 steps complete.
2022-05-17 04:45:47 [NOTICE] [configure] 1250 configuration steps out of 2097 steps complete.
2022-05-17 04:46:39 [ERROR ] [configure] '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of [0]
2022-05-17 04:46:39 [ERROR ] [configure] /Stage[main]/Foreman::Database/Foreman::Rake[db:migrate]/Exec[foreman-rake-db:migrate]/returns: change from 'notrun' to ['0'] failed: '/usr/sbin/foreman-rake db:migrate' returned 1 instead of one of [0]
2022-05-17 04:47:04 [NOTICE] [configure] 1500 configuration steps out of 2097 steps complete.
2022-05-17 04:47:08 [NOTICE] [configure] 1750 configuration steps out of 2901 steps complete.

#  /usr/sbin/foreman-rake db:migrate
== 20121018152459 CreateHostgroupClasses: migrating ===========================
-- rename_table(:hostgroups_puppetclasses, :hostgroup_classes)
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

PG::UndefinedTable: ERROR:  relation "hostgroups_puppetclasses" does not exist
/opt/theforeman/tfm/root/usr/share/gems/gems/activerecord- `async_exec'
/opt/theforeman/tfm/root/usr/share/gems/gems/activerecord- `block (2 levels) in execute'
/opt/theforeman/tfm/root/usr/share/gems/gems/activesupport- `block in permit_concurrent_loads'
/opt/theforeman/tfm/root/usr/share/gems/gems/activesupport- `yield_shares'
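The traceback shows the old `CreateHostgroupClasses` migration being replayed against a database where the purge has already dropped the `hostgroups_puppetclasses` table, so the `rename_table` step hits `PG::UndefinedTable`. A minimal sketch of this failure class (illustration only, not Satellite code; `sqlite3` stands in for PostgreSQL):

```python
import sqlite3

# Illustration of the failure mode: a migration renames a table that an
# earlier purge already dropped, mirroring PG::UndefinedTable above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Initial state: the plug-in's join table exists.
cur.execute(
    "CREATE TABLE hostgroups_puppetclasses "
    "(hostgroup_id INTEGER, puppetclass_id INTEGER)"
)

# 'purge -f' analogue: the table and its data are removed.
cur.execute("DROP TABLE hostgroups_puppetclasses")

# Re-enabling replays the old migration, which assumes the table exists.
migration_ok = True
error = ""
try:
    cur.execute(
        "ALTER TABLE hostgroups_puppetclasses RENAME TO hostgroup_classes"
    )
except sqlite3.OperationalError as exc:
    migration_ok = False
    error = str(exc)  # e.g. "no such table: hostgroups_puppetclasses"

print(migration_ok, error)
```

The same rename succeeds on a database where the purge never ran, which is why disabling without -f does not break re-enabling.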

Expected results:
no errors

Comment 2 Brad Buckingham 2022-09-07 15:48:43 UTC
*** Bug 2123439 has been marked as a duplicate of this bug. ***

Comment 6 Zuzana Lena Ansorgova 2023-07-12 17:36:11 UTC
Added the fully reviewed release note (RN).

Comment 36 Sayan Das 2023-11-23 18:36:29 UTC
Connecting Redmine issue:

Bug #36942: Unable to enable back puppet plugin again after completely purging the puppet plugin and related stuff - Foreman

Fixes #36942 - Improve the puppet plugin cleanup by sayan3296 · Pull Request #9918 · theforeman/foreman
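The linked pull request is the authoritative fix. As a general illustration of how this class of bug can be avoided (not the PR's actual code), a migration step can be made safe to replay by checking whether the table it manipulates still exists before touching it:

```python
import sqlite3

def table_exists(cur, name):
    # Query sqlite's catalog; PostgreSQL would consult
    # information_schema.tables or to_regclass() instead.
    cur.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?", (name,)
    )
    return cur.fetchone() is not None

def migrate_rename(cur, old, new):
    # Guarded rename: skip the step if a purge already removed the table,
    # so replaying the migration cannot raise "relation does not exist".
    if table_exists(cur, old):
        cur.execute(f"ALTER TABLE {old} RENAME TO {new}")
        return "renamed"
    return "skipped"

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Post-purge state: the table is gone, so the guarded migration is a no-op.
result = migrate_rename(cur, "hostgroups_puppetclasses", "hostgroup_classes")
print(result)  # → skipped
```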

Comment 38 Leos Stejskal 2024-01-17 13:35:21 UTC
Hi, I suggest removing this issue from 6.15.
No work has been done on it from the engineering side, and since we have only two snaps left before the freeze, there is little to no chance of fixing it in time.

If we want to fix it in 6.16, we should align it with that release's scope and prioritize it accordingly.