Bug 1669498 - Upgrade from 6.3 to 6.4, PostgreSQL is removed during the upgrade process.
Summary: Upgrade from 6.3 to 6.4, PostgreSQL is removed during the upgrade process.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Satellite Maintain
Version: 6.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: 6.9.0
Assignee: Kavita
QA Contact: Gaurav Talreja
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-25 13:53 UTC by Rudnei Bertol Jr.
Modified: 2024-03-25 15:12 UTC
CC: 9 users

Fixed In Version: rubygem-foreman_maintain-0.7.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-21 14:48:22 UTC
Target Upstream Version:
Embargoed:




Links
Foreman Issue Tracker 30234 (Normal, Closed): Upgrade from 6.3 to 6.4, PostgreSQL is removed during the upgrade process. (last updated 2021-02-20 16:51:54 UTC)
Red Hat Knowledge Base (Solution) 3707201 (last updated 2019-01-30 16:41:35 UTC)
Red Hat Product Errata RHBA-2021:1312 (last updated 2021-04-21 14:48:34 UTC)

Description Rudnei Bertol Jr. 2019-01-25 13:53:15 UTC
Description of problem:

When upgrading from 6.3 to 6.4, PostgreSQL is removed by the yum update that foreman-maintain runs if 'clean_requirements_on_remove' is enabled in /etc/yum.conf.
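
For reference, the option in question lives in the [main] section of /etc/yum.conf; a minimal excerpt (host-agnostic sketch, only the last line matters here):

~~~
# /etc/yum.conf (excerpt)
[main]
# When set to 1, removing a package also removes dependencies that are
# no longer required by anything else installed. In this upgrade that
# is what pulls postgresql-server out (see "Removing for dependencies"
# in the transaction summary below).
clean_requirements_on_remove=1
~~~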

Version-Release number of selected component (if applicable):

6.3.5

How reproducible:

Upgrade Satellite 6.3.5 to 6.4.x

Steps to Reproduce:

1. Add the option 'clean_requirements_on_remove=1' to /etc/yum.conf
2. Upgrade the Satellite using foreman-maintain (see the command sketch below)
3. The upgrade fails because PostgreSQL is removed
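
A condensed version of the reproducer as shell commands (the append line is just one way to set the option; the foreman-maintain invocation is the one from the output below):

~~~
# 1. Enable the risky yum option
echo 'clean_requirements_on_remove=1' >> /etc/yum.conf

# 2. Start the upgrade with foreman-maintain
foreman-maintain upgrade run --target-version 6.4 --whitelist="disk-performance"

# 3. In the "Update package(s)" step, postgresql-server appears under
#    "Removing for dependencies" and the upgrade fails.
~~~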

Actual results:

PostgreSQL is removed and the upgrade fails.

Expected results:

PostgreSQL is not removed during the upgrade.

Additional info:

Below is output from a reproducer.
Note: PostgreSQL is removed during the yum update step.
~~~
[root@sat632 ~]# foreman-maintain upgrade run --target-version 6.4 --whitelist="disk-performance"
Running Checks before upgrading to Satellite 6.4
================================================================================
Check for verifying syntax for ISP DHCP configurations:               [SKIPPED]
DHCP feature is not enabled
--------------------------------------------------------------------------------
Check whether all services are running:                               [OK]
--------------------------------------------------------------------------------
Check whether all services are running using hammer ping:             [OK]
--------------------------------------------------------------------------------
Check for paused tasks:                                               [OK]
--------------------------------------------------------------------------------
Check to validate candlepin database:                                 [OK]
--------------------------------------------------------------------------------
Check if EPEL repository enabled on system: 
| Checking for presence of EPEL repository                            [OK]      
--------------------------------------------------------------------------------
Check for running tasks:                                              [OK]
--------------------------------------------------------------------------------
Check for old tasks in paused/stopped state:                          [OK]
--------------------------------------------------------------------------------
Check for pending tasks which are safe to delete:                     [OK]
--------------------------------------------------------------------------------
Check for tasks in planning state:                                    [OK]
--------------------------------------------------------------------------------
Check for recommended disk speed of pulp, mongodb, pgsql dir.:        [SKIPPED]
--------------------------------------------------------------------------------
Check whether reports have correct associations:                      [OK]
--------------------------------------------------------------------------------
Verify puppet and provide upgrade guide for it: 
- current puppet version:                                             [OK]      
--------------------------------------------------------------------------------
Validate availability of repositories: 
- Validating availability of repositories for 6.4                     [OK]      
--------------------------------------------------------------------------------


The pre-upgrade checks indicate that the system is ready for upgrade.
It's recommended to perform a backup at this stage.
Confirm to continue with the modification part of the upgrade, [y(yes), n(no), q(quit)] y
Running Procedures before migrating to Satellite 6.4                            
================================================================================
Turn on maintenance mode:                                             [OK]
--------------------------------------------------------------------------------
disable active sync plans: 
| Total 0 sync plans are now disabled.                                [OK]      
--------------------------------------------------------------------------------
Stop applicable services: 
Stopping the following service(s):

mongod, postgresql, qdrouterd, qpidd, squid, pulp_celerybeat, pulp_resource_manager, pulp_streamer, pulp_workers, smart_proxy_dynflow_core, tomcat, foreman-tasks, httpd, puppetserver, foreman-proxy
| stopping foreman-proxy                                                        
Redirecting to /bin/systemctl stop foreman-proxy.service
/ stopping puppetserver                                                         
Redirecting to /bin/systemctl stop puppetserver.service
- stopping httpd                                                                
Redirecting to /bin/systemctl stop httpd.service
/ stopping foreman-tasks                                                        
Redirecting to /bin/systemctl stop foreman-tasks.service
\ stopping tomcat                                                               
Redirecting to /bin/systemctl stop tomcat.service
| stopping smart_proxy_dynflow_core                                             
Redirecting to /bin/systemctl stop smart_proxy_dynflow_core.service
/ stopping pulp_workers                                                         
Redirecting to /bin/systemctl stop pulp_workers.service
- stopping pulp_streamer                                                        
Redirecting to /bin/systemctl stop pulp_streamer.service
| stopping pulp_resource_manager                                                
Redirecting to /bin/systemctl stop pulp_resource_manager.service
| stopping pulp_celerybeat                                                      
Redirecting to /bin/systemctl stop pulp_celerybeat.service
- stopping squid                                                                
Redirecting to /bin/systemctl stop squid.service
| stopping qpidd                                                                
Redirecting to /bin/systemctl stop qpidd.service
/ stopping qdrouterd                                                            
Redirecting to /bin/systemctl stop qdrouterd.service
| stopping postgresql                                                           
Redirecting to /bin/systemctl stop postgresql.service
| stopping mongod                                                               
Redirecting to /bin/systemctl stop mongod.service
/ All services stopped                                                [OK]      
--------------------------------------------------------------------------------


Running Migration scripts to Satellite 6.4
================================================================================
Setup repositories: 
| Configuring repositories for 6.4                                    [OK]      
--------------------------------------------------------------------------------
Update package(s) : 
Loaded plugins: product-id, search-disabled-repos, subscription-manager
https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/ansible/2.6/os/repodata/repomd.xml: [Errno 12] Timeout on https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/ansible/2.6/os/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.
https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/ansible/2.6/os/repodata/repomd.xml: [Errno 12] Timeout on https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/ansible/2.6/os/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.
https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/ansible/2.6/os/repodata/repomd.xml: [Errno 12] Timeout on https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/ansible/2.6/os/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror.
rhel-7-server-ansible-2.6-rpms                                                                                                                                         | 4.1 kB  00:00:00     
rhel-7-server-rpms                                                                                                                                                     | 3.5 kB  00:00:00     
rhel-7-server-satellite-6.4-rpms                                                                                                                                       | 4.0 kB  00:00:00     
rhel-7-server-satellite-maintenance-6-rpms                                                                                                                             | 3.8 kB  00:00:00     
rhel-7-server-satellite-tools-6.4-rpms                                                                                                                                 | 3.8 kB  00:00:00     
rhel-server-rhscl-7-rpms                                                                                                                                               | 3.5 kB  00:00:00     
(1/18): rhel-7-server-ansible-2.6-rpms/x86_64/updateinfo                                                                                                               |  12 kB  00:00:00     
(2/18): rhel-7-server-rpms/7Server/x86_64/group                                                                                                                        | 856 kB  00:00:00     
(3/18): rhel-7-server-ansible-2.6-rpms/x86_64/primary_db                                                                                                               |  15 kB  00:00:01     
(4/18): rhel-7-server-satellite-6.4-rpms/x86_64/group                                                                                                                  | 5.5 kB  00:00:00     
(5/18): rhel-7-server-rpms/7Server/x86_64/updateinfo                                                                                                                   | 3.1 MB  00:00:02     
(6/18): rhel-7-server-satellite-6.4-rpms/x86_64/updateinfo                                                                                                             |  36 kB  00:00:01     
(7/18): rhel-7-server-satellite-6.4-rpms/x86_64/primary_db                                                                                                             | 202 kB  00:00:00     
(8/18): rhel-7-server-satellite-maintenance-6-rpms/x86_64/group                                                                                                        |  104 B  00:00:00     
(9/18): rhel-7-server-satellite-maintenance-6-rpms/x86_64/updateinfo                                                                                                   | 7.9 kB  00:00:00 

=== omitted output ===

 tfm-rubygem-polyglot                                            noarch               0.3.5-2.el7sat                           rhel-7-server-satellite-6.4-rpms                         6.9 k
 tfm-rubygem-record_tag_helper                                   noarch               1.0.0-1.el7sat                           rhel-7-server-satellite-6.4-rpms                         7.3 k
 tfm-rubygem-unicode                                             x86_64               0.4.4.1-5.el7sat                         rhel-7-server-satellite-6.4-rpms                          85 k
Updating for dependencies:
 python-pulp-rpm-common                                          noarch               2.16.4.1-5.el7sat                        rhel-7-server-satellite-6.4-rpms                          74 k
 tfm-rubygem-net-scp                                             noarch               1.2.1-2.el7sat                           rhel-7-server-satellite-6.4-rpms                          18 k
Removing for dependencies:
 postgresql-server                                               x86_64               9.2.24-1.el7_5                           @rhel-7-server-rpms                                       16 M

Transaction Summary
==============================================================================================================================================================================================
Install   12 Packages (+114 Dependent packages)
Upgrade  243 Packages (+  2 Dependent packages)
Remove                (   1 Dependent package)

Total download size: 411 M
Downloading packages:
No Presto metadata available for rhel-7-server-rpms
No Presto metadata available for rhel-7-server-satellite-maintenance-6-rpms
No Presto metadata available for rhel-7-server-satellite-6.4-rpms
^C

Exiting on user cancel
                                                                      [FAIL]
Failed executing yum -y update , exit status 1
--------------------------------------------------------------------------------
Scenario [Migration scripts to Satellite 6.4] failed.

The following steps ended up in failing state:

  [packages-update]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="packages-update"
~~~

Comment 5 Ivan Necas 2019-01-29 11:34:46 UTC
I think foreman-maintain should prevent the upgrade when the `clean_requirements_on_remove` option is set in yum.conf: this is not a tested configuration, and there is a risk that more problems like this one would occur.
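
A quick manual way to spot this ahead of the upgrade (a sketch only; the shipped fix adds an equivalent validate-yum-config check to foreman-maintain, see comment 10):

~~~
# Flag risky settings in yum's main configuration
grep -E '^(clean_requirements_on_remove|exclude)[[:space:]]*=' /etc/yum.conf
~~~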

Comment 8 Kavita 2020-06-29 07:01:30 UTC
Created Redmine issue https://projects.theforeman.org/issues/30234 from this bug.

Comment 9 Bryan Kearney 2020-09-11 12:05:23 UTC
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/30234 has been resolved.

Comment 10 Gaurav Talreja 2020-12-23 08:50:37 UTC
Verified.

Version Tested:  Satellite-6.9.0 Snap 6 and rubygem-foreman_maintain-0.7.0-1.el7sat.noarch

Steps followed:
**** set exclude option in /etc/yum.conf  ****

# foreman-maintain health check --label validate-yum-config
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check to validate yum configuration before upgrade:                   [FAIL]
exclude is set in /etc/yum.conf as below:
exclude='cat* bear*'
Unset this configuration as it is risky while yum update or upgrade!
--------------------------------------------------------------------------------
Scenario [ForemanMaintain::Scenario::FilteredScenario] failed.

The following steps ended up in failing state:

  [validate-yum-config]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="validate-yum-config"

**** set exclude and clean_requirements_on_remove=1 option in /etc/yum.conf  ****

# foreman-maintain health check --label validate-yum-config
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check to validate yum configuration before upgrade:                   [FAIL]
exclude,clean_requirements_on_remove are set in /etc/yum.conf as below:
exclude='cat* bear*'
clean_requirements_on_remove=1
Unset this configuration as it is risky while yum update or upgrade!
--------------------------------------------------------------------------------
Scenario [ForemanMaintain::Scenario::FilteredScenario] failed.

The following steps ended up in failing state:

  [validate-yum-config]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="validate-yum-config"


**** set clean_requirements_on_remove=1 option in /etc/yum.conf  ****

# foreman-maintain health check --label validate-yum-config
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check to validate yum configuration before upgrade:                   [FAIL]
clean_requirements_on_remove is set in /etc/yum.conf as below:
clean_requirements_on_remove=1
Unset this configuration as it is risky while yum update or upgrade!
--------------------------------------------------------------------------------
Scenario [ForemanMaintain::Scenario::FilteredScenario] failed.

The following steps ended up in failing state:

  [validate-yum-config]

Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="validate-yum-config"

**** set clean_requirements_on_remove=0 option in /etc/yum.conf  ****

# foreman-maintain health check --label validate-yum-config
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check to validate yum configuration before upgrade:                   [OK]
--------------------------------------------------------------------------------


Observation:
The validate-yum-config check detected the exclude option and clean_requirements_on_remove=1 when present in /etc/yum.conf, and passed once clean_requirements_on_remove was set to 0.
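
For completeness, clearing the check before rerunning the upgrade amounts to dropping the flagged lines (or setting clean_requirements_on_remove=0, which the last run above shows the check accepts); a hedged cleanup sketch:

~~~
# Back up yum.conf, then remove the flagged options
cp -p /etc/yum.conf /etc/yum.conf.bak
sed -i -e '/^clean_requirements_on_remove/d' -e '/^exclude/d' /etc/yum.conf
~~~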

Comment 13 errata-xmlrpc 2021-04-21 14:48:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Satellite 6.9 Satellite Maintenance Release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1312

