Description of problem:
After adding a Source RPM repo (with SRPMs and errata) to a Content View and publishing the C.V., the resulting C.V. repository is empty. Moreover, due to https://bugzilla.redhat.com/show_bug.cgi?id=1491646, the resulting repo is never published (and Capsule sync fails).

Checking the dynflow steps for the C.V. publish:
- SRPMs are not copied at all
- errata are copied, but purged at the end (since the underlying units - the missing SRPMs - are not in the repo, the just-added errata are purged)

Version-Release number of selected component (if applicable):
Sat 6.2.11 (but imho any)

How reproducible:
100%

Steps to Reproduce:
1. Sync a Source RPM repo
2. Add it to a Content View and publish
3. Set a Content Host to this C.V.
4. Run "subscription-manager refresh; yum install some-SRPM" on the Content Host

Actual results:
yum fails on parsing metadata with error:
<some-path>/source/SRPMS/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found

Expected results:
yum succeeds

Additional info:
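The 404 can be confirmed without involving yum, by probing the published repodata directly. A minimal sketch; the `check_repomd` helper name is mine, `-k` skips TLS verification for a quick check (real clients authenticate with entitlement certs), and the URL in the comment is a placeholder, not a real repo path:

```shell
# Hedged sketch: print the HTTP status code of a repo's repomd.xml.
# On an affected C.V. repo this prints 404; a healthy repo prints 200.
check_repomd() {
  # $1 = base URL of the repo as seen by the client
  curl -sk -o /dev/null -w '%{http_code}' "$1/repodata/repomd.xml"
}

# Example invocation (placeholder URL):
#   check_repomd "https://satellite.example.com/pulp/repos/<some-path>/source/SRPMS"
```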
Manual workaround for C.V.s without filters - regrettably it fails on publishing the repos :(

1) Identify the source_pulp_id and target_pulp_id, e.g. from the dynflow console of the C.V. publish, from the Actions::Pulp::Repository::CopyErrata steps. There should be _twice_ as many of these steps as there are repos in the C.V. For _each_ such step, store the repo names in the source_repo_id / target_repo_id bash variables.

2) Associate the erratum, SRPM, metadata and distribution units between the source and target repos - URI and POST params taken from katello debug logs:

pulpAdminPassword=$(grep ^default_password /etc/pulp/server.conf | cut -d' ' -f2)
for unit in erratum srpm yum_repo_metadata_file distribution; do
  curl -i -H "Content-Type: application/json" -X POST -d "{\"source_repo_id\":\"$source_repo_id\",\"criteria\":{\"type_ids\":[\"$unit\"],\"filters\":{}}}" -u admin:$pulpAdminPassword https://$(hostname -f)/pulp/api/v2/repositories/$target_repo_id/actions/associate/
done

3) Wait until the associate tasks are completed - the command below should no longer list the tasks / UUIDs printed in the output of 2):

pulp-admin -u admin -p $pulpAdminPassword tasks list

4) Once the unit association is completed / the tasks are gone (usually within a second, but for large repos it can take longer), publish the repo:

curl -i -H "Content-Type: application/json" -X POST -d "{\"id\":\"$repo\",\"override_config\":{\"force_full\":true}}" -u admin:$pulpAdminPassword https://$(hostname -f)/pulp/api/v2/repositories/$target_repo_id/actions/publish/

5) Again check via pulp-admin when the task finishes. From that point on, a client should be able to access that repo and its content.
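The "wait until the associate tasks are completed" step can be scripted instead of re-running pulp-admin by hand. A hedged sketch; the function name is mine, and the "No tasks found" string is an assumption about what `pulp-admin tasks list` prints when the task list is empty:

```shell
# Poll pulp-admin until no tasks remain, with a timeout (seconds).
# Returns 0 when the task list drains, 1 on timeout.
wait_for_pulp_tasks() {
  local timeout=${1:-300} waited=0
  until pulp-admin -u admin -p "$pulpAdminPassword" tasks list | grep -q "No tasks found"; do
    sleep 5
    waited=$((waited + 5))
    [ "$waited" -ge "$timeout" ] && return 1
  done
  return 0
}
```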
As written, it fails to publish the repo with an exception (syslog prefixes trimmed after the first line):

2017-09-16T16:23:39.356378+02:00 pmoravec-sat62-rhel7 pulp: celery.worker.job:ERROR: (21540-90144) Task pulp.server.managers.repo.publish.publish[189b404a-f701-4edb-ae4e-b176924b3897] raised unexpected: MissingResource({'resource_id': {'repo_id': u'Default_Organization-Library-MRG_src-Red_Hat_Enterprise_MRG_Messaging_3_for_RHEL_7-Red_Hat_Enterprise_MRG_Messaging_3_for_RHEL_7_Source_RPMs_x86_64_7Server', 'distributor_id': u''}},)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pulp/server/async/tasks.py", line 473, in __call__
    return super(Task, self).__call__(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pulp/server/async/tasks.py", line 103, in __call__
    return super(PulpTask, self).__call__(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/celery/app/trace.py", line 437, in __protected_call__
    return self.run(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/pulp/server/controllers/repository.py", line 961, in publish
    dist = model.Distributor.objects.get_or_404(repo_id=repo_id, distributor_id=dist_id)
  File "/usr/lib/python2.7/site-packages/pulp/server/db/querysets.py", line 116, in get_or_404
    raise pulp_exceptions.MissingResource(**kwargs)
MissingResource: Missing resource(s): repo_id=Default_Organization-Library-MRG_src-Red_Hat_Enterprise_MRG_Messaging_3_for_RHEL_7-Red_Hat_Enterprise_MRG_Messaging_3_for_RHEL_7_Source_RPMs_x86_64_7Server, distributor_id=

(why distributor_id is empty???)
.. and here is a working version of the workaround, as a bash script:

pulpAdminPassword=$(grep ^default_password /etc/pulp/server.conf | cut -d' ' -f2)
source_repo_id=$1
target_repo_id=$2

for unit in erratum srpm yum_repo_metadata_file distribution; do
  curl -i -H "Content-Type: application/json" -X POST -d "{\"source_repo_id\":\"$source_repo_id\",\"criteria\":{\"type_ids\":[\"$unit\"],\"filters\":{}}}" -u admin:$pulpAdminPassword https://$(hostname -f)/pulp/api/v2/repositories/$target_repo_id/actions/associate/
done

pulp-admin -u admin -p $pulpAdminPassword tasks list

curl -i -H "Content-Type: application/json" -X POST -d "{\"id\":\"${target_repo_id}_clone\",\"override_config\":{\"force_full\":true, \"source_repo_id\":\"$source_repo_id\", \"source_distributor_id\":\"$source_repo_id\"}}" -u admin:$pulpAdminPassword https://$(hostname -f)/pulp/api/v2/repositories/$target_repo_id/actions/publish/

Run it with the two arguments source_repo_id and target_repo_id for each "CopyErrata" step of a source repository in the dynflow task, in the same order as in the task. In case the pulp-admin call in the middle still prints a running "associate units" task, re-run the last command manually. After each execution of the script, ensure the publish task has finished before running the next round for the next pair of repos.
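The per-pair bookkeeping at the end (run once per CopyErrata pair, in dynflow order, waiting between rounds) can itself be wrapped in a loop. A hedged sketch, assuming the script above is saved as associate_and_publish.sh next to it; the function name and the pairs file are mine, and the "No tasks found" grep is an assumption about pulp-admin's output when its task list is empty:

```shell
# Read "source_repo_id target_repo_id" pairs from stdin, in the same order
# as the CopyErrata steps in the dynflow task, and process them one by one.
process_pairs() {
  while read -r source_repo_id target_repo_id; do
    sh associate_and_publish.sh "$source_repo_id" "$target_repo_id"
    # Block until pulp-admin no longer lists running tasks, so the publish
    # of this pair finishes before the next round starts.
    until pulp-admin -u admin -p "$pulpAdminPassword" tasks list | grep -q "No tasks found"; do
      sleep 5
    done
  done
}

# Usage (hypothetical pairs file, one "source target" pair per line):
#   process_pairs < repo_pairs.txt
```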
Created redmine issue http://projects.theforeman.org/issues/21154 from this bug
Upstream bug assigned to jsherril
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/21154 has been resolved.
Verified with sat6.3 snap28. I can see the synced SRPMs in the content view; the UI shows the count after publish/promote. Please see the attached screenshot.
Created attachment 1367302 [details] srpms count on UI under published cv
To test the fix, I synced the Sat 6 SRPMs and published them in two CVs: a) cv_srpms b) cv_rhel73

Here is the list of source rpms published in a CV:
/var/lib/pulp/published/yum/master/yum_distributor/1-cv_srpms-v1_0-db74d53d-be62-451a-9dae-62c644009aa2/1513154587.51/Packages/t/tfm-rubygem-hammer_cli-0.5.1.13-2.el7sat.src.rpm
/var/lib/pulp/published/yum/master/yum_distributor/1-cv_srpms-v1_0-db74d53d-be62-451a-9dae-62c644009aa2/1513154587.51/Packages/t/tfm-rubygem-activerecord-session_store-0.1.2-1.el7sat.src.rpm
/var/lib/pulp/published/yum/master/yum_distributor/1-cv_srpms-v1_0-db74d53d-be62-451a-9dae-62c644009aa2/1513154587.51/Packages/t/tfm-rubygem-angular-rails-templates-0.1.2-4.el7sat.src.rpm
/var/lib/pulp/published/yum/master/yum_distributor/1-cv_srpms-v1_0-db74d53d-be62-451a-9dae-62c644009aa2/1513154587.51/Packages/t/tfm-rubygem-sprockets-rails-2.3.3-1.el7sat.src.rpm
/var/lib/pulp/published/yum/master/yum_distributor/1-cv_srpms-v1_0-db74d53d-be62-451a-9dae-62c644009aa2/1513154587.51/Packages/t/tfm-rubygem-rbovirt-0.0.37-1.el7sat.src.rpm

A few more in another CV:
/var/lib/pulp/published/yum/master/yum_distributor/1-cv_rhel73-v3_0-db74d53d-be62-451a-9dae-62c644009aa2/1513156318.1/Packages/r/rubygem-clamp-0.6.2-2.el7sat.src.rpm
/var/lib/pulp/published/yum/master/yum_distributor/1-cv_rhel73-v3_0-db74d53d-be62-451a-9dae-62c644009aa2/1513156318.1/Packages/r/rubygem-rack-1.4.1-13.el7sat.src.rpm
/var/lib/pulp/published/yum/master/yum_distributor/1-cv_rhel73-v3_0-db74d53d-be62-451a-9dae-62c644009aa2/1513156318.1/Packages/r/ruby-augeas-0.5.0-1.el7.src.rpm

[root@cloud-qe-14 661a8abb8a17787c68fa65f383ffda96f52adbc814fa082934b8ba208ccdbe]# ll
total 1876
-rw-r--r--. 1 apache apache 1918580 Dec 13 03:47 katello-installer-base-3.0.0.88-1.el7sat.src.rpm
[root@cloud-qe-14 661a8abb8a17787c68fa65f383ffda96f52adbc814fa082934b8ba208ccdbe]# pwd
/var/lib/pulp/content/units/srpm/d7/661a8abb8a17787c68fa65f383ffda96f52adbc814fa082934b8ba208ccdbe
Later, I synced the contents to a capsule to check whether the SRPMs get synced to an external capsule: the capsule sync completed successfully. Here are the SRPMs from the capsule:

[root@cloud-qe-06 ~]# cd /var/lib/pulp/content/units/srpm/2d/b449d3d12f90605ed66e8f9e25f258338acd9fc6972cac762086b124b7668f/
[root@cloud-qe-06 b449d3d12f90605ed66e8f9e25f258338acd9fc6972cac762086b124b7668f]# ll
total 1884
-rw-r--r--. 1 apache apache 1926473 Dec 13 03:48 katello-installer-base-3.0.0.95-1.el7sat.src.rpm
[root@cloud-qe-06 b449d3d12f90605ed66e8f9e25f258338acd9fc6972cac762086b124b7668f]# cd /var/lib/pulp/published/yum/master/yum_distributor/1-cv_rhel73-Dev-db74d53d-be62-451a-9dae-62c644009aa2/1513165243.1/Packages/p/
[root@cloud-qe-06 p]# ls
pulp-2.8.3.3-1.el7sat.src.rpm  pulp-ostree-1.1.3.3-1.el7sat.src.rpm  pyserial-2.6-5.el7.src.rpm  python-mongoengine-0.10.5-2.el7sat.src.rpm
I provisioned a content host and registered it with the same CV that includes the SRPMs. Here is the output of repolist:

[root@sghaisrpmtest ~]# yum repolist
Loaded plugins: product-id, search-disabled-repos, subscription-manager
repo id                                          repo name                                                 status
!Default_Organization_63capsule_63capsule_rhel7  63capsule_rhel7                                              176
!rhel-7-server-rpms/7Server/x86_64               Red Hat Enterprise Linux 7 Server (RPMs)                  17,802
!rhel-7-server-satellite-6.2-source-rpms/x86_64  Red Hat Satellite 6.2 (for RHEL 7 Server) (Source RPMs)      441
repolist: 18,419

[root@sghaisrpmtest ~]# yumdownloader --source foreman
Loaded plugins: product-id, subscription-manager
Default_Organization_63capsule_63capsule_rhel7           | 2.5 kB  00:00:00
rhel-7-server-rpms                                       | 2.0 kB  00:00:00
rhel-7-server-satellite-6.2-source-rpms                  | 2.1 kB  00:00:00
foreman-1.11.0.85-1.el7sat.src.rpm                       | 3.6 MB  00:00:00

[root@sghaisrpmtest ~]# ls *src.rpm
foreman-1.11.0.85-1.el7sat.src.rpm
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0336