Since this issue was entered in Red Hat Bugzilla, the release flag has been set to ? to ensure that it is properly evaluated for this release.
Looks like this is caused by SELinux:

# service pulp_resource_manager restart
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
celery multi v3.1.11 (Cipater)
Traceback (most recent call last):
  File "/usr/bin/celery", line 9, in <module>
    load_entry_point('celery==3.1.11', 'console_scripts', 'celery')()
  File "/usr/lib/python2.6/site-packages/celery/__main__.py", line 30, in main
    main()
  File "/usr/lib/python2.6/site-packages/celery/bin/celery.py", line 81, in main
    cmd.execute_from_commandline(argv)
  File "/usr/lib/python2.6/site-packages/celery/bin/celery.py", line 769, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/usr/lib/python2.6/site-packages/celery/bin/base.py", line 306, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/usr/lib/python2.6/site-packages/celery/bin/celery.py", line 761, in handle_argv
    return self.execute(command, argv)
  File "/usr/lib/python2.6/site-packages/celery/bin/celery.py", line 693, in execute
    ).run_from_argv(self.prog_name, argv[1:], command=argv[0])
  File "/usr/lib/python2.6/site-packages/celery/bin/celery.py", line 97, in run_from_argv
    [command] + argv, prog_name,
  File "/usr/lib/python2.6/site-packages/celery/bin/multi.py", line 206, in execute_from_commandline
    self.commands[argv[0]](argv[1:], cmd)
  File "/usr/lib/python2.6/site-packages/celery/bin/multi.py", line 380, in restart
    self._stop_nodes(p, cmd, retry=2, callback=on_node_shutdown)
  File "/usr/lib/python2.6/site-packages/celery/bin/multi.py", line 362, in _stop_nodes
    self.shutdown_nodes(self.getpids(p, cmd, callback=callback),
  File "/usr/lib/python2.6/site-packages/celery/bin/multi.py", line 336, in getpids
    pid = Pidfile(pidfile).read_pid()
  File "/usr/lib/python2.6/site-packages/celery/platforms.py", line 179, in read_pid
    'pidfile {0.path} contents invalid.'.format(self))
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib/python2.6/site-packages/celery/platforms.py", line 732, in ignore_errno
    yield
  File "/usr/lib/python2.6/site-packages/celery/platforms.py", line 169, in read_pid
    with open(self.path, 'r') as fh:
IOError: [Errno 13] Permission denied: u'/var/run/pulp/resource_manager.pid'

# getenforce
Enforcing
# setenforce 0
# service pulp_resource_manager restart
celery init v10.0.
Using config script: /etc/default/pulp_resource_manager
celery multi v3.1.11 (Cipater)
> Stopping nodes...
    > resource_manager@<fqdn>: QUIT -> 32472
> Waiting for 1 node -> 32472.....
    > resource_manager@<fqdn>: OK
> Restarting node resource_manager@<fqdn>: OK

# cat /var/log/audit/audit.log | audit2allow

#============= celery_t ==============
allow celery_t initrc_t:process { signal signull };
allow celery_t initrc_var_run_t:file { read getattr open };

#============= passenger_t ==============
allow passenger_t apmd_var_run_t:sock_file getattr;
allow passenger_t binfmt_misc_fs_t:dir getattr;
allow passenger_t boot_t:dir getattr;
allow passenger_t fixed_disk_device_t:blk_file getattr;
allow passenger_t httpd_config_t:dir search;
allow passenger_t httpd_config_t:lnk_file read;
allow passenger_t httpd_var_run_t:dir search;
allow passenger_t httpd_var_run_t:sock_file getattr;
allow passenger_t mongod_tmp_t:sock_file getattr;
allow passenger_t rpcbind_var_run_t:sock_file getattr;
allow passenger_t self:process sigstop;
allow passenger_t sysctl_fs_t:dir search;
allow passenger_t system_dbusd_var_run_t:dir search;
allow passenger_t system_dbusd_var_run_t:sock_file getattr;
allow passenger_t tmpfs_t:dir getattr;
allow passenger_t usbfs_t:dir getattr;
allow passenger_t var_t:sock_file getattr;

#============= qpidd_t ==============
allow qpidd_t qpidd_initrc_exec_t:file read;

#============= setfiles_t ==============
allow setfiles_t qpidd_initrc_exec_t:file read;
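For local testing only, the denials above could be folded into a loadable policy module with audit2allow. This is a hypothetical workaround sketch, not the supported fix (the supported fix is stopping the services before the upgrade, described below); the module name "pulplocal" is illustrative:

# build pulplocal.te / pulplocal.pp from the logged denials
cat /var/log/audit/audit.log | audit2allow -M pulplocal
# install the module, then go back to enforcing mode
semodule -i pulplocal.pp
setenforce 1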
CCing
Pulp 2.4 had a fairly lax SELinux policy. The new SELinux policy introduced in 2.5 significantly restricts what the celery processes running as pulp_workers, pulp_celerybeat, and pulp_resource_manager can do. A process started before the upgrade therefore runs in a different context than a process started after the new policy is applied, and the new process cannot read the old process's PID file in order to stop it. This affects upgrades from any version before 2.5. To avoid the problem, pulp_workers, pulp_celerybeat, and pulp_resource_manager need to be stopped before the upgrade.
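To confirm the context mismatch on an affected box, something like the following could be run (a diagnostic sketch, not part of the original report; the paths match the IOError above):

# domain of the running workers (initrc_t if started pre-upgrade)
ps -eZ | grep celery
# actual context of the PID files
ls -Z /var/run/pulp/
# context the currently loaded policy expects for that path
matchpathcon /var/run/pulp/resource_manager.pid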
OK, I finished testing this, and here is the workflow that worked:

# Stop the pulp services
for app in pulp_workers pulp_celerybeat pulp_resource_manager; do
    service $app stop
done

# Update the system
yum update -y

# Upgrade
katello-installer --upgrade
The fix for this will require modifying the pulp.spec file to stop pulp_workers, pulp_celerybeat, and pulp_resource_manager right before uninstall. The patch will land in Pulp 2.6.1, but the change will be available for cherry-picking before the 2.6.1 release, most likely within the next 7 days.
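A minimal sketch of what such a scriptlet could look like, assuming it lives in the old package's %preun (illustrative only, not the actual patch):

%preun
# run on both erase ($1 == 0) and upgrade ($1 >= 1), so no worker
# keeps running under the pre-2.5 SELinux context
for s in pulp_workers pulp_celerybeat pulp_resource_manager ; do
    /sbin/service $s stop >/dev/null 2>&1 || :
done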
The Pulp upstream bug status is at ON_QA. Updating the external tracker on this bug.
The Pulp upstream bug status is at VERIFIED. Updating the external tracker on this bug.
The upstream bug was fixed and verified in 2.6.1. Wouldn't downstream want to cherry-pick this?
Adding mhrivnak to cc list
FAILEDQA:

# rpm -qa | grep foreman
rubygem-hammer_cli_foreman-0.1.4.10-1.el6_6sat.noarch
foreman-libvirt-1.7.2.18-1.el6_6sat.noarch
ruby193-rubygem-foreman_bootdisk-4.0.2.12-1.el6_6sat.noarch
dell-pe1950-01.rhts.eng.bos.redhat.com-foreman-proxy-1.0-1.noarch
foreman-1.7.2.18-1.el6_6sat.noarch
foreman-debug-1.7.2.18-1.el6_6sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3.4-1.el6_6sat.noarch
foreman-selinux-1.7.2.13-1.el6_6sat.noarch
foreman-compute-1.7.2.18-1.el6_6sat.noarch
foreman-vmware-1.7.2.18-1.el6_6sat.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el6_6sat.noarch
rubygem-hammer_cli_foreman_bootdisk-0.1.2.6-1.el6_6sat.noarch
foreman-ovirt-1.7.2.18-1.el6_6sat.noarch
foreman-gce-1.7.2.18-1.el6_6sat.noarch
ruby193-rubygem-foreman_discovery-2.0.0.12-1.el6_6sat.noarch
foreman-postgresql-1.7.2.18-1.el6_6sat.noarch
dell-pe1950-01.rhts.eng.bos.redhat.com-foreman-client-1.0-1.noarch
ruby193-rubygem-foreman-redhat_access-0.1.0-1.el6_6sat.noarch
ruby193-rubygem-foreman_gutterball-0.0.1.9-1.el6_6sat.noarch
rubygem-hammer_cli_foreman_discovery-0.0.1.8-1.el6_6sat.noarch
ruby193-rubygem-foreman_docker-1.2.0.10-1.el6_6sat.noarch
ruby193-rubygem-foreman-tasks-0.6.12.4-1.el6_6sat.noarch
foreman-proxy-1.7.2.4-1.el6_6sat.noarch

Steps:
# katello-installer --upgrade
File not found /usr/share/katello-installer/modules/katello_plugin_gutterball/manifests/init.pp, check your answer file

foreman-debug attached
Created attachment 1022114 [details] foreman-debug attached
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.
It looks like this error is caused by bug 1205668, so I'm adding a depends relationship on that bug before this can be retested.
VERIFIED:

# rpm -qa | grep foreman
ruby193-rubygem-foreman_discovery-2.0.0.13-1.el6_6sat.noarch
ruby193-rubygem-foreman_docker-1.2.0.11-1.el6_6sat.noarch
rubygem-hammer_cli_foreman-0.1.4.11-1.el6_6sat.noarch
foreman-vmware-1.7.2.19-1.el6_6sat.noarch
ruby193-rubygem-foreman-tasks-0.6.12.5-1.el6_6sat.noarch
intel-sugarbay-dh-03.lab.bos.redhat.com-foreman-proxy-1.0-1.noarch
foreman-proxy-1.7.2.4-1.el6_6sat.noarch
intel-sugarbay-dh-03.lab.bos.redhat.com-foreman-proxy-client-1.0-1.noarch
rubygem-hammer_cli_foreman_discovery-0.0.1.9-1.el6_6sat.noarch
foreman-1.7.2.19-1.el6_6sat.noarch
foreman-ovirt-1.7.2.19-1.el6_6sat.noarch
ruby193-rubygem-foreman_bootdisk-4.0.2.13-1.el6_6sat.noarch
ruby193-rubygem-foreman_gutterball-0.0.1.9-1.el6_6sat.noarch
rubygem-hammer_cli_foreman_tasks-0.0.3.4-1.el6_6sat.noarch
foreman-selinux-1.7.2.13-1.el6_6sat.noarch
foreman-compute-1.7.2.19-1.el6_6sat.noarch
foreman-libvirt-1.7.2.19-1.el6_6sat.noarch
intel-sugarbay-dh-03.lab.bos.redhat.com-foreman-client-1.0-1.noarch
rubygem-hammer_cli_foreman_bootdisk-0.1.2.7-1.el6_6sat.noarch
foreman-gce-1.7.2.19-1.el6_6sat.noarch
ruby193-rubygem-foreman_hooks-0.3.7-2.el6_6sat.noarch
ruby193-rubygem-foreman-redhat_access-0.1.0-1.el6_6sat.noarch
foreman-postgresql-1.7.2.19-1.el6_6sat.noarch
foreman-debug-1.7.2.19-1.el6_6sat.noarch

Steps:
1. Install sat6.0.8
2. disable sat6.0.8 repo
3. enable sat6.1 repo
4. katello-service stop
5. service-wait mongod start
6. yum update -y
7. katello-installer --upgrade

Upgrading...
Upgrade Step: stop_services...
Upgrade Step: start_mongo...
Upgrade Step: migrate_pulp...
Upgrade Step: migrate_candlepin...
Upgrade Step: migrate_foreman...
Upgrade Step: Running installer...
Installing    Done    [100%]
Upgrade Step: Restarting services...
Upgrade Step: db:seed...
Upgrade Step: Running errata import task (this may take a while)...
Katello upgrade completed!

# hammer ping
[Foreman] Username: admin
[Foreman] Password for admin:
candlepin:
    Status:          ok
    Server Response: Duration: 2078ms
candlepin_auth:
    Status:          ok
    Server Response: Duration: 309ms
pulp:
    Status:          ok
    Server Response: Duration: 269ms
pulp_auth:
    Status:          ok
    Server Response: Duration: 232ms
elasticsearch:
    Status:          ok
    Server Response: Duration: 2804ms
foreman_tasks:
    Status:          ok
    Server Response: Duration: 1ms
This bug is slated to be released with Satellite 6.1.
This bug was fixed in version 6.1.1 of Satellite, which was released on 12 August 2015.