Bug 1916485 - pulp3: content migration may fail if pulp isn't up yet
Summary: pulp3: content migration may fail if pulp isn't up yet
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Infrastructure
Version: 6.8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: 6.9.0
Assignee: satellite6-bugs
QA Contact: Lukas Pramuk
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-14 21:28 UTC by Justin Sherrill
Modified: 2021-04-21 13:25 UTC
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-21 13:25:03 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 31644 0 Normal Ready For Testing content migration may fail if pulp isn't up yet 2021-02-18 20:07:59 UTC
Red Hat Product Errata RHSA-2021:1313 0 None None None 2021-04-21 13:25:19 UTC

Description Justin Sherrill 2021-01-14 21:28:21 UTC
Downstream, foreman-maintain starts the pulp services prior to migration, but it seems the pulp3 services aren't always up by the time the migration runs:

{:services=>
  {:candlepin=>{:status=>"ok", :duration_ms=>"26"},
   :candlepin_auth=>{:status=>"ok", :duration_ms=>"36"},
   :foreman_tasks=>{:status=>"ok", :duration_ms=>"3"},
   :katello_events=>
    {:status=>"ok", :message=>"0 Processed, 0 Failed", :duration_ms=>"0"},
   :candlepin_events=>
    {:status=>"ok", :message=>"0 Processed, 0 Failed", :duration_ms=>"0"},
   :pulp3=>{:status=>"FAIL", :message=>"503 Service Unavailable"},
   :pulp=>{:status=>"ok", :duration_ms=>"98"},
   :pulp_auth=>{:status=>"ok", :duration_ms=>"57"}},
 :status=>"FAIL"}

(The katello/candlepin events messages show 0 processed only because nothing has hit the webserver yet.)  We need some sort of retry, e.g. sleeping up to 30 seconds, while the pulp3 server is still returning a 503.
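The retry described above could be sketched roughly as follows. This is a minimal illustration, not the shipped fix: `wait_for_pulp3` and its `probe` callable (which returns an HTTP status code, so the loop can be exercised without a live server) are hypothetical names; in practice the probe would be a Net::HTTP request against the pulp3 status endpoint.

```ruby
# Hedged sketch of the suggested retry: keep probing pulp3 and treat a
# 503 as "not up yet". `probe` is a hypothetical callable returning the
# HTTP status code of the pulp3 status endpoint.
def wait_for_pulp3(probe, attempts: 10, delay: 30)
  attempts.times do
    return true if probe.call != 503  # anything but 503 means pulp3 answered
    sleep delay                       # give the services time to finish starting
  end
  false                               # still unavailable after all attempts
end
```

A real implementation would run this before kicking off the content migration and abort with a clear error if it returns false.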

Comment 1 Justin Sherrill 2021-01-14 21:28:25 UTC
Created from redmine issue https://projects.theforeman.org/issues/31644

Comment 2 Justin Sherrill 2021-01-14 21:28:27 UTC
Upstream bug assigned to None

Comment 3 Justin Sherrill 2021-01-20 20:27:34 UTC
may be fixed by https://bugzilla.redhat.com/show_bug.cgi?id=1918464

Comment 4 Eric Helms 2021-02-08 15:15:46 UTC
We believe this has been resolved by the fix for a bug in the pulpcore service definitions, which were setting the systemd service type incorrectly.
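For context on why the systemd type matters: with Type=simple, systemd considers the unit started as soon as the process is exec'd, so a health check issued right after start-up can still hit a 503; with Type=notify, systemd waits for the daemon to signal readiness via sd_notify(3). An illustrative fragment (assumptions only, not the shipped pulpcore-api.service):

```ini
# Illustrative unit fragment; the shipped pulpcore-api.service differs.
[Service]
# Type=simple would mark the unit "started" the moment the process runs,
# even though the API may not yet accept requests.
# Type=notify makes systemd wait for the daemon's READY=1 notification,
# so "started" actually means "ready to serve".
Type=notify
```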

Comment 5 Ondrej Gajdusek 2021-03-11 10:30:49 UTC
Verified
python3-pulpcore-3.7.3-1.el7pc.noarch

Checked on an upgraded instance (6.8 -> 6.9).

$ grep -n Type= `find /etc/systemd/system -name 'pulpcore-*.service'`
/etc/systemd/system/pulpcore-api.service:7:Type=notify
/etc/systemd/system/pulpcore-content.service:7:Type=notify
/etc/systemd/system/pulpcore-resource-manager.service:7:Type=simple
/etc/systemd/system/pulpcore-worker@.service:7:Type=simple

$ satellite-maintain service status
.
.
.
- All services are running                                            [OK]

Comment 8 errata-xmlrpc 2021-04-21 13:25:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Satellite 6.9 Release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1313

