Created attachment 1155611 [details]
Description of problem:
Manifest fails to upload
Steps to Reproduce:
1. Generate a new manifest on the Customer Portal
2. Ensure that pulp is not running (this was not intentional; it is how the bug was found)
3. Try to upload the manifest just generated
4. It fails with the error:
Katello::Resources::Candlepin::Owner: Request Timeout (POST /candlepin/owners/MyOrg/imports)
5. Then start Katello so that all services are up again
   # hammer ping   (to confirm everything reports ok)
6. Try to upload the manifest again; it fails with the error message:
   Import is the same as existing data
However, the left frame shows no subscriptions.
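For reference, steps 5 and 6 can be done from the CLI as well. A minimal sketch, assuming hammer is configured against the Satellite; "MyOrg" and the manifest path are placeholders:

```shell
# Check whether the CLI is available before calling it, so the sketch is
# safe to run on machines without hammer installed.
if command -v hammer >/dev/null 2>&1; then
  # Confirm pulp, candlepin and the task system all report ok.
  hammer ping
  # Retry the upload once services are healthy.
  # "MyOrg" and ./manifest.zip are placeholders for your org and file.
  hammer subscription upload --organization "MyOrg" --file ./manifest.zip
else
  echo "hammer not installed on this host; run these on the Satellite server"
fi
```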
Issue resolved by doing:
Created attachment 1155613 [details]
I also encountered this on my test install; however, my pulp was running fine while the initial import of the manifest errored out with "RestClient::RequestTimeout: Katello::Resources::Candlepin::Owner: Request Timeout (POST /candlepin/owners/ACME/imports)"
For me, just clicking "Refresh Manifest" was enough to fix the issue, though.
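The same workaround is available from the CLI. A sketch, assuming hammer is configured; "ACME" is a placeholder organization name:

```shell
# Equivalent of clicking "Refresh Manifest" in the UI; guarded so the
# sketch is harmless where hammer is not installed.
if command -v hammer >/dev/null 2>&1; then
  # "ACME" is a placeholder organization name.
  hammer subscription refresh-manifest --organization "ACME"
fi
```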
This will require changing the manifest import from a synchronous to an asynchronous call. See the upstream bug for the details of that call.
If Pulp isn't running, many other things will fail besides manifest import. We currently require all services to be running for Satellite to be functional, and anything less is an error state that must be resolved before continuing.
This isn't new in 6.2; 6.1 and older behave the same way.
That said, Satellite should tell the user in a much more obvious way that required services are not functioning. If Pulp were down, we should show a big banner or error indicating that they should stop using the Satellite until the issue is resolved.
Essentially a heartbeat or monitor to notify the user that things aren't working properly. I searched for an RFE in this area, couldn't find one, and filed a new one:
feel free to comment there.
As for Evgeni's comment here:
that is a valid issue worth investigating, but it is a different bug, which I filed here:
This is a genuine issue: our default timeout needs to be increased. I'm glad this was spotted, as it often only occurs with large manifests or on slower VMs.
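As a stopgap until the default is raised, the Candlepin client timeout can be bumped in Katello's configuration. A sketch only: the file path and the `rest_client_timeout` key are assumptions that should be verified against your Katello version before relying on them:

```yaml
# /etc/foreman/plugins/katello.yaml (path and key name are assumptions;
# check your Katello version's settings before applying)
:katello:
  # Timeout in seconds for REST calls such as the Candlepin manifest
  # import; large manifests or slow VMs may need more headroom.
  :rest_client_timeout: 300
```

Restart the Foreman/Katello services after changing this for it to take effect.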
I'm going to close the original bug because it is a bit vague, and I'd prefer to have two separate bugs to track this work.