Description of problem:
foreman-maintain fails to restore a backup that was created before the MongoDB storage engine was upgraded to wiredTiger:
# foreman-maintain restore /tmp/backup-6.5/satellite-backup-2019-01-08-15-19-56/
Running Restore backup
...
--------------------------------------------------------------------------------
Restore configs from backup:
\ Restoring configs [OK]
...
Run installer reset:
\ Installer reset [FAIL]
Failed executing yes | satellite-installer -v --reset --disable-system-checks , exit status 6:
[ INFO 2019-01-08T16:42:08 verbose] Dropping Pulp database!
rm -f /var/lib/pulp/init.flag finished successfully!
systemctl stop httpd pulp_workers finished successfully!
Job for rh-mongodb34-mongod.service failed because the control process exited with error code. See "systemctl status rh-mongodb34-mongod.service" and "journalctl -xe" for details.
systemctl start rh-mongodb34-mongod failed! Check the output for error!
[ERROR 2019-01-08T16:42:08 verbose] systemctl start rh-mongodb34-mongod failed! Check the output for error!
MongoDB shell version v3.4.9
connecting to: mongodb://127.0.0.1:27017/pulp_database
2019-01-08T16:42:08.847+0100 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2019-01-08T16:42:08.939+0100 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:237:13
@(connect):1:6
exception: connect failed
mongo pulp_database --eval 'db.dropDatabase();' failed! Check the output for error!
[ERROR 2019-01-08T16:42:08 verbose] mongo pulp_database --eval 'db.dropDatabase();' failed! Check the output for error!
...
[ERROR 2019-01-08T16:42:33 verbose] Jan 08 16:42:08 vm-198-147.lab.eng.pek2.redhat.com mongod.27017[11541]: [initandlisten] exception in initAndListen: 28662 Cannot start server. Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating
...
Something went wrong! Check the log for ERROR-level output
The full log is at /var/log/foreman-installer/satellite.log
[ INFO 2019-01-08T16:54:31 verbose] All hooks in group post finished
[ INFO 2019-01-08T16:54:31 verbose] Installer finished in 804.191110789 seconds
--------------------------------------------------------------------------------
Scenario [Restore backup] failed.
...
The problem is that the /etc/* files are restored first (during "Restore configs from backup"), and the restored /etc/opt/rh/rh-mongodb34/mongod.conf contains
storage.engine: mmapv1
so when the installer later works with MongoDB, mongod fails to start because it detects wiredTiger data files in /var/lib/mongodb.
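One possible manual workaround (an untested sketch, not an official procedure): since the installer reset drops the Pulp database anyway, the stale wiredTiger data files could be cleared so that mongod can re-initialize under the mmapv1 engine named in the restored config. Shown here on a temporary directory standing in for /var/lib/mongodb:

```shell
# Untested workaround sketch. On a real Satellite box you would first run:
#   systemctl stop rh-mongodb34-mongod
# The temp dir below stands in for /var/lib/mongodb.
datadir=$(mktemp -d)
touch "$datadir/WiredTiger" "$datadir/WiredTiger.wt"  # marker files wiredTiger leaves behind
rm -rf "${datadir:?}"/*                               # clear the engine-mismatched data files
ls -A "$datadir"                                      # prints nothing: mongod can init fresh
rmdir "$datadir"
```

Whether this is safe depends on the backup actually containing a consistent mmapv1 MongoDB dump to restore afterwards; treat it as a sketch only.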
Version-Release number of selected component (if applicable):
Satellite 6.5 snap 10
rubygem-foreman_maintain-0.3.0-1.el7sat.noarch
How reproducible:
always
Steps to Reproduce:
1. install Satellite 6.3 (last version with the old MongoDB)
2. upgrade to 6.4
3. upgrade to 6.5
4. foreman-maintain backup ...
5. satellite-installer --upgrade-mongo-storage-engine
6. foreman-maintain restore ...
At a minimum, there needs to be documentation or messaging so that the user knows they must take a new backup after running satellite-installer --upgrade-mongo-storage-engine.
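The suggested messaging could even be automated. A hypothetical pre-restore check (a sketch only; foreman-maintain does not ship this function, and the file locations are assumptions) could compare the storage engine named in the backed-up mongod.conf against the engine that produced the live data directory, and warn before the installer reset fails:

```shell
# Hypothetical guard (sketch): warn when the backup's mongod.conf names
# mmapv1 but the live data directory already holds wiredTiger data files.
check_engine_mismatch() {
  conf=$1; datadir=$2
  backup_engine=$(sed -n 's/^storage\.engine: *//p' "$conf")
  if [ -e "$datadir/WiredTiger" ] && [ "$backup_engine" != "wiredTiger" ]; then
    echo "mismatch"  # backup predates satellite-installer --upgrade-mongo-storage-engine
  else
    echo "ok"
  fi
}

# Demo on temp files standing in for the real paths:
tmp=$(mktemp -d)
printf 'storage.engine: mmapv1\n' > "$tmp/mongod.conf"
touch "$tmp/WiredTiger"                           # file wiredTiger creates in its data dir
check_engine_mismatch "$tmp/mongod.conf" "$tmp"   # prints: mismatch
rm -rf "$tmp"
```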
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2019:1222