Description of problem:
If pulp-manage-db is executed (due to some user or documentation error) while pulp workers are running, various collections in MongoDB can end up violating their unique indices. Removing the duplicates is quite challenging, especially because there is no relational model of the pulp_database, so it is not obvious where references to the duplicates might be hidden.
This situation is not rare; it has happened to at least three customers.
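For illustration, here is a minimal sketch of how one might locate documents that collide on a unique key in an affected collection. The collection name (repo_content_units) and the key fields (repo_id, unit_id) are illustrative assumptions only, not taken from this report:

#!/usr/bin/env python
# Hedged sketch: find documents that collide on a (formerly) unique key.
# Collection and field names below are illustrative assumptions.
from pymongo import MongoClient

db = MongoClient("localhost", 27017)["pulp_database"]

# Group documents by the fields of the unique index; any group with more
# than one member violates that index.
pipeline = [
    {"$group": {
        "_id": {"repo_id": "$repo_id", "unit_id": "$unit_id"},
        "ids": {"$push": "$_id"},
        "count": {"$sum": 1},
    }},
    {"$match": {"count": {"$gt": 1}}},
]

for group in db["repo_content_units"].aggregate(pipeline):
    print("duplicate key %s -> documents %s" % (group["_id"], group["ids"]))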
Please add a check to pulp-manage-db that detects whether any worker is running and, if so, aborts the tool (or requires human confirmation to continue); a sketch of such a guard follows the upstream reference below.
See the upstream issue: https://pulp.plan.io/issues/2186.
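As a rough illustration of the requested guard (not the actual upstream fix), here is a minimal sketch. It assumes worker heartbeats live in a "workers" collection keyed by worker name in _id, with a last_heartbeat timestamp and a 30-second liveness window; all of these are assumptions for illustration.

# Hedged sketch of the requested pre-flight check, not the upstream fix.
# Assumes each worker document has its name in _id and a recent
# "last_heartbeat" datetime; both are assumptions for illustration.
import sys
from datetime import datetime, timedelta
from pymongo import MongoClient

HEARTBEAT_WINDOW = timedelta(seconds=30)  # assumed liveness window

def running_workers(db):
    # A worker whose heartbeat is newer than the cutoff is treated as
    # probably still running.
    cutoff = datetime.utcnow() - HEARTBEAT_WINDOW
    return [str(w["_id"])
            for w in db["workers"].find({"last_heartbeat": {"$gte": cutoff}})]

def main():
    db = MongoClient("localhost", 27017)["pulp_database"]
    workers = running_workers(db)
    if workers:
        print("The following processes are still running, please stop the "
              "running workers before retrying the pulp-manage-db command.")
        for name in workers:
            print("  %s" % name)
        sys.exit(1)
    # ...the actual database migrations would proceed here...

if __name__ == "__main__":
    main()

This mirrors the behavior verified below, where pulp-manage-db lists the still-running scheduler, resource_manager, and reserved_resource_worker processes and refuses to proceed.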
Version-Release number of selected component (if applicable):
pulp-server-2.8.3.4-1.el7sat.noarch
How reproducible:
100%
Steps to Reproduce:
1. for i in pulp_resource_manager pulp_workers pulp_celerybeat; do service $i start; done
2. pulp-manage-db
Actual results:
Step 2 executes without any warning or check.
Expected results:
Step 2 should abort with an error stating that pulp workers are running, or the tool should require user confirmation to proceed.
Additional info:
Comment 2 by pulp-infra@redhat.com, 2016-10-05 15:01:31 UTC
The Pulp upstream bug status is at ASSIGNED. Updating the external tracker on this bug.
Comment 3 by pulp-infra@redhat.com, 2016-10-05 15:01:35 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.
Comment 4 by pulp-infra@redhat.com, 2016-10-07 19:31:13 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.
Comment 5 by pulp-infra@redhat.com, 2016-10-11 17:01:20 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.
Comment 6 by pulp-infra@redhat.com, 2016-10-26 05:01:41 UTC
The Pulp upstream bug status is at ON_QA. Updating the external tracker on this bug.
Verified in:
satellite-6.2.8-1.0.el7sat.noarch
Results:
[root ~]# for i in pulp_resource_manager pulp_workers pulp_celerybeat; do service $i start; done
Redirecting to /bin/systemctl start pulp_resource_manager.service
Redirecting to /bin/systemctl start pulp_workers.service
Redirecting to /bin/systemctl start pulp_celerybeat.service
[root ~]# sudo -u apache pulp-manage-db
Attempting to connect to localhost:27017
Attempting to connect to localhost:27017
Write concern for Mongo connection: {}
The following processes might still be running:
scheduler
resource_manager
reserved_resource_worker-0
reserved_resource_worker-1
reserved_resource_worker-3
reserved_resource_worker-2
Please wait 1 seconds while Pulp confirms this.
The following processes are still running, please stop the running workers before retrying the pulp-manage-db command.
scheduler
resource_manager
reserved_resource_worker-0
reserved_resource_worker-1
reserved_resource_worker-3
reserved_resource_worker-2
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2017:0447