Description of problem:
hammer ping shows FAIL status for foreman-tasks.

[root@qe-sat6-rhel7 ~]# hammer --username admin --password changeme ping
candlepin:
    Status:          ok
    Server Response: Duration: 31ms
candlepin_auth:
    Status:          ok
    Server Response: Duration: 24ms
pulp:
    Status:          ok
    Server Response: Duration: 18ms
pulp_auth:
    Status:          ok
    Server Response: Duration: 21ms
elasticsearch:
    Status:          ok
    Server Response: Duration: 21ms
foreman_tasks:
    Status:          FAIL
    Server Response: Message: foreman-tasks service not running

But it looks like the service is running:

[root@qe-sat6-rhel7 ~]# systemctl status foreman-tasks
foreman-tasks.service - Foreman jobs daemon
   Loaded: loaded (/usr/lib/systemd/system/foreman-tasks.service; enabled)
   Active: active (running) since Tue 2015-02-17 18:12:57 EST; 9h ago
     Docs: https://github.com/theforeman/foreman-tasks
   CGroup: /system.slice/foreman-tasks.service
           └─28567 dynflow_executor_monitor

Feb 17 18:07:49 qe-sat6-rhel7.usersys.redhat.com systemd[1]: Starting Foreman jobs daemon...
Feb 17 18:07:56 qe-sat6-rhel7.usersys.redhat.com foreman-tasks[28541]: Dynflow Executor: start in progress
Feb 17 18:07:57 qe-sat6-rhel7.usersys.redhat.com foreman-tasks[28541]: dynflow_executor: process with pid 28564 started.
Feb 17 18:07:57 qe-sat6-rhel7.usersys.redhat.com foreman-tasks[28541]: Waiting for the executor to be ready...
Feb 17 18:12:57 qe-sat6-rhel7.usersys.redhat.com foreman-tasks[28541]: ...............................................................................
Feb 17 18:12:57 qe-sat6-rhel7.usersys.redhat.com systemd[1]: Started Foreman jobs daemon.
Hint: Some lines were ellipsized, use -l to show in full.

Version-Release number of selected component (if applicable):
sat6.1 beta snap3 (Satellite-6.1.0-RHEL-7-20150217.0)

How reproducible:
always

Steps to Reproduce:
1. Install snap3 and, once katello-installer has finished, check all services with the hammer ping command.
Actual results:
foreman-tasks service has FAIL status.

Expected results:
Status should be 'ok'.

Additional info:
After restarting foreman-tasks, hammer ping shows 'ok':

[root@qe-sat6-rhel7 ~]# hammer --username admin --password changeme ping
candlepin:
    Status:          ok
    Server Response: Duration: 30ms
candlepin_auth:
    Status:          ok
    Server Response: Duration: 24ms
pulp:
    Status:          ok
    Server Response: Duration: 17ms
pulp_auth:
    Status:          ok
    Server Response: Duration: 22ms
elasticsearch:
    Status:          ok
    Server Response: Duration: 21ms
foreman_tasks:
    Status:          ok
    Server Response: Duration: 1ms

Please note that foreman-tasks was in the failed state immediately after the install.
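As a side note, a check like the one above is easy to script. Below is a minimal Ruby sketch (a hypothetical helper, not part of hammer) that parses saved `hammer ping` output and lists the services whose Status is not "ok":

```ruby
# Hypothetical helper: given the text of a `hammer ping` run, return the
# names of services that did not report Status "ok".
def failed_services(ping_output)
  failed = []
  current = nil
  ping_output.each_line do |line|
    case line
    when /^(\w+):\s*$/
      current = Regexp.last_match(1)          # service header, e.g. "foreman_tasks:"
    when /Status:\s*(\S+)/
      failed << current if Regexp.last_match(1).downcase != 'ok'
    end
  end
  failed
end
```

Against the transcript above, this would return `["foreman_tasks"]`.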
When creating a product while foreman-tasks was failing, production.log filled with the following exception:

Connecting to database specified by database.yml
Processing by Katello::Api::V2::ProductsController#create as JSON
  Parameters: {"organization_id"=>"1", "name"=>"p2", "label"=>"p2", "api_version"=>"v2", "product"=>{"name"=>"p2", "label"=>"p2", "organization_id"=>"1"}}
No such file or directory - "/usr/share/foreman/tmp/sockets/dynflow_socket" (Errno::ENOENT)
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/remote_via_socket/core.rb:142:in `initialize'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/remote_via_socket/core.rb:142:in `new'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/remote_via_socket/core.rb:142:in `connect'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/remote_via_socket/core.rb:91:in `block in on_message'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:859:in `block in assigns'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:858:in `tap'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:858:in `assigns'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:138:in `match_value'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:116:in `block in match'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:115:in `each'
/opt/rh/ruby193/root/usr/share/gems/gems/algebrick-0.4.0/lib/algebrick.rb:115:in `match'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/executors/remote_via_socket/core.rb:79:in `on_message'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:80:in `block in on_envelope'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/future.rb:75:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/future.rb:75:in `evaluate_to'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:80:in `on_envelope'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:72:in `receive'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `block (2 levels) in run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `loop'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `block in run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `catch'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:99:in `run'
/opt/rh/ruby193/root/usr/share/gems/gems/dynflow-0.7.3/lib/dynflow/micro_actor.rb:13:in `block in initialize'
/opt/rh/ruby193/root/usr/share/gems/gems/logging-1.8.1/lib/logging/diagnostic_context.rb:323:in `call'
/opt/rh/ruby193/root/usr/share/gems/gems/logging-1.8.1/lib/logging/diagnostic_context.rb:323:in `block in create_with_logging_context'
Dynflow::Error: Cannot do Dynflow::Executors::Abstract::Execution[execution_plan_id: 7ff4421a-3a46-43e2-baed-04e7fd2b1664, finished: #<Dynflow::Future:0x0000000eca6490>], no connection to a Listener
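For context: the Errno::ENOENT in this trace is exactly what Ruby raises when connecting to a UNIX socket path that does not exist. A small illustrative sketch (the helper name and default path are assumptions for illustration):

```ruby
require 'socket'

# Hypothetical probe for the dynflow executor socket. A missing socket
# file raises Errno::ENOENT (as in the trace above); a stale socket file
# with no process listening raises Errno::ECONNREFUSED instead.
def executor_reachable?(path = '/usr/share/foreman/tmp/sockets/dynflow_socket')
  UNIXSocket.open(path) { true }
rescue Errno::ENOENT, Errno::ECONNREFUSED
  false
end
```

So the web process (Passenger) was trying to reach an executor socket that the foreman-tasks service had never created.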
Created attachment 993026 [details] foreman-debug logs attached
I guess this is an SELinux issue: could you confirm that it doesn't happen in enforcing mode?
Hey, assuming this is RHEL 7.0: with a recent change in RHEL 7.1, the pulp policy won't load anymore. You need to either switch to permissive mode (and start over) or test on RHEL 7.1.
I saw this particular issue on RHEL 7.1 beta as well.
Ivan, I don't see any denials with regard to dynflow. I do see some Pulp issues, but those are related to the RHEL 7.0 vs 7.1 problem, though.
Not a SELinux issue:

sesearch -A -s passenger_t -t foreman_var_run_t
Found 5 semantic av rules:
   allow passenger_t foreman_var_run_t : file { ioctl read write create getattr setattr lock relabelfrom relabelto append unlink link rename open } ;
   allow passenger_t foreman_var_run_t : dir { ioctl read write create getattr setattr lock relabelfrom relabelto unlink link rename add_name remove_name reparent search rmdir open } ;
   allow passenger_t foreman_var_run_t : lnk_file { ioctl read write create getattr setattr lock relabelfrom relabelto append unlink link rename } ;
   allow passenger_t foreman_var_run_t : sock_file { ioctl read write create getattr setattr lock relabelfrom relabelto append unlink link rename open } ;
   allow passenger_t foreman_var_run_t : fifo_file { ioctl read write create getattr setattr lock relabelfrom relabelto append unlink link rename open } ;
I've found the cause of the issue: the gutterball configuration happens only after foreman-tasks is started:

[DEBUG 2015-02-20 01:24:15 main] Class[Foreman::Plugin::Tasks]: The container Stage[main] will propagate my refresh event
[ WARN 2015-02-20 01:24:15 main] /Stage[main]/Katello::Plugin::Gutterball::Config/File[/etc/foreman/plugins/gutterball.yaml]/ensure: defined content as '{md5}298b5ec1aadba3f072ee3c4c96bdf8ff'
[DEBUG 2015-02-20 01:24:15 main] /Stage[main]/Katello::Plugin::Gutterball::Config/File[/etc/foreman/plugins/gutterball.yaml]: The container Class[Katello::Plugin::Gutterball::Config] will propagate my refresh event

Because /etc/foreman/plugins/gutterball.yaml is still missing when foreman-tasks starts, the executor startup always fails with `undefined method `[]' for nil:NilClass`.
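That failure mode can be reproduced in plain Ruby: loading a settings file that is absent yields nil instead of a Hash, and the first `[]` access on it raises exactly that NoMethodError. A hypothetical sketch (helper name and keys are illustrative, not the actual plugin code):

```ruby
require 'yaml'

# Hypothetical reproduction: mimic a plugin loading its settings file.
# If the file does not exist we get nil, and indexing into nil blows up
# with "undefined method `[]' for nil:NilClass".
def load_plugin_settings(path)
  return nil unless File.exist?(path)
  YAML.load_file(path)
end

settings = load_plugin_settings('/etc/foreman/plugins/gutterball.yaml')
begin
  settings[:gutterball]  # raises if the file was missing at startup
rescue NoMethodError => e
  warn "executor startup would fail here: #{e.class}"
end
```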
Created redmine issue http://projects.theforeman.org/issues/9483 from this bug
Proposed fix here https://github.com/Katello/puppet-katello/pull/62
https://github.com/Katello/katello-installer/pull/185 was just merged
Verified in Satellite-6.1.0-RHEL-7-20150303.0
This bug is slated to be released with Satellite 6.1.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2015:1592