Description of problem:
Currently in the sync Dynflow action, after the repo sync in Pulp finishes, we initiate node metadata generation and a node sync as part of the same sync task. This means that if the capsule is not running, the sync task will hang until the capsule comes back online. It should behave like content view publish/promote and initiate the node sync as a second, entirely separate action that does not depend on the first.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Install a capsule and associate it with the Library environment
2. Shut down the capsule
3. Attempt to sync a repo

Actual results:
The sync hangs at around 69%.

Expected results:
The sync completes successfully.

Additional info:
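The before/after task-planning pattern described above can be sketched in plain Ruby. This is a minimal illustration with hypothetical class, method, and step names (TaskPlanner, plan_sync_with_nested_node_sync, etc. are not Katello code); it only shows the structural difference between nesting the capsule sync inside the repo sync task and queueing it as an independent task.

```ruby
# Hypothetical sketch of the two planning strategies. In the broken flow
# the capsule (node) sync is a nested step of the repo sync task, so the
# parent task cannot finish while the capsule is unreachable. The fixed
# flow plans it as a second, standalone task, mirroring how content view
# publish/promote already behaves.

class TaskPlanner
  attr_reader :tasks

  def initialize
    @tasks = []
  end

  # Broken flow: node metadata generation and node sync are sub-steps of
  # the repo sync task itself, so the task hangs if the capsule is down.
  def plan_sync_with_nested_node_sync(repo)
    @tasks << { name: "Sync #{repo}",
                steps: [:pulp_sync, :node_metadata_generate, :node_sync] }
  end

  # Fixed flow: the repo sync task contains only the Pulp sync; the
  # capsule sync is queued as a separate task that can wait or fail on
  # its own without blocking the first.
  def plan_sync_with_independent_node_sync(repo)
    @tasks << { name: "Sync #{repo}", steps: [:pulp_sync] }
    @tasks << { name: "Capsule sync for #{repo}",
                steps: [:node_metadata_generate, :node_sync] }
  end
end
```

With the fixed flow, the repo sync task completes as soon as the Pulp sync is done, and a stuck "sync capsule" task is visible separately, which matches the verification notes later in this report.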
Created redmine issue http://projects.theforeman.org/issues/8770 from this bug
*** Bug 1173133 has been marked as a duplicate of this bug. ***
Moving to POST since upstream bug http://projects.theforeman.org/issues/8770 has been closed.
-------------
Justin Sherrill: Applied in changeset commit:katello|82cd0a41d1374fedd00550debb5b05da671663ef.
Is there a workaround available? That is, will cancelling the task and syncing the same repo again work, or will that fail in either case?
Created attachment 990952 [details] required lock is already taken by other running task
Created attachment 990953 [details] status of pending task from dynflow
Created attachment 990954 [details] pulp task error in production.log of sat6
But I keep getting these errors; I think the Satellite 6 server is polling for the capsule node:

Polling failed, attempt no. 1, retrying in 8
Pulp task error. Refer to task for more details. (StandardError)
/opt/rh/ruby193/root/usr/share/gems/gems/katello-1.5.0/app/lib/actions/pulp/consumer/abstract_sync_node_task.rb:26:in `block in external_task='
/opt/rh/ruby193/root/usr/share/gems/gems/katello-1.5.0/app/lib/actions/pulp/consumer/abstract_sync_node_task.rb:24:in `each'
/
OK, I enabled another Red Hat repo (Red Hat Enterprise Linux 7 Server - Supplementary RPMs x86_64 7Server) and synced it. It synced properly on the Satellite 6 server, so I think the fix is working as expected. It would be great if Justin could review whether the steps taken to stop the capsule are correct. Thanks
Created attachment 990959 [details] synced supplementary repo, its start time says: 5 minutes ago
Sachin, yes, those steps seem fine. (Or you could simply have shut down katello-agent on the capsule.) You should still see a 'sync capsule' task that is probably stuck and not moving.
Thanks, Justin. I stopped goferd on the capsule and tried to enable and sync a Red Hat repo, but now when I enable the repo from 'Red Hat Repositories', the spinner keeps spinning. There is no error in production.log, but Firebug raises this error:

TypeError: data is undefined
...a(this,str_data);data.w=w!==undefined?w:elem.width(),data.h=h!==undefined?h:elem...
Created attachment 991213 [details] rotating spinner on enabling redhat repo with type error in firebug
Created attachment 991214 [details] I tried enabling repo multiple times..but each time I got rotating spinner and here is dynflow tasks status
Verified against a RHEL 6.6 system running Satellite 6.0.8, compose 3. At the risk of being redundant, here is the test procedure I used:

1. Go to Content > Red Hat Subscriptions and upload a manifest file.
2. Go to Content > Red Hat Repositories and enable a repository.
3. Go to Content > Sync Status and sync the just-enabled repository.
4. If not already done, attach a capsule, then make the capsule inoperable. You can do this in several ways: shut down the capsule system, or log in to the capsule system and stop the goferd service.
5. Go to Content > Sync Status and sync a repository that has been synced at least once before.
6. Go to Monitor > Tasks and verify that the sync task completes. A second task for generating node metadata should be in the running/pending state.

Verifying this bug is slightly complicated by the presence of BZ 1192500: if an attached capsule is inoperable, you cannot enable a new repository or sync an already-enabled repository for the first time. That is a separate issue.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2015:0247