Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1098639

Summary: cartridges don't show up in console after installation
Product: OpenShift Container Platform
Component: Installer
Version: 2.1.0
Hardware: x86_64
OS: Linux
Status: CLOSED DUPLICATE
Severity: high
Priority: unspecified
Reporter: Jeff McCormick <jeff.mccormick>
Assignee: Luke Meyer <lmeyer>
CC: bleanhar, jokerman, libra-bugs, libra-onpremise-devel, mmccomas
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-05-22 13:34:21 UTC
Attachments: installer log ("this is the log from the installer, it shows some errors at the end that don't appear normal")

Description Jeff McCormick 2014-05-16 19:13:20 UTC
Created attachment 896503 [details]
this is the log from the installer, it shows some errors at the end that don't appear normal

Description of problem:

I installed OSE 2.1 today. After installation, no application cartridges
show up, even though I can see them in /usr/libexec/openshift/cartridges.

After rebooting the system and adding a user, the console lists no
cartridges to create an application with.

Version-Release number of selected component (if applicable):
OpenShift Enterprise 2.1.0

How reproducible:


Steps to Reproduce:
1. perform install
2. reboot, add first user
3. browse the console to create first app

Actual results:
The console lists no cartridges when creating an application.

Expected results:
The installed cartridges are listed in the console when creating an application.

Additional info:

Here is the output of the oo-admin-cartridge command; it shows that
the cartridges are installed:

[root@ose21 init.d]# oo-admin-cartridge --list
(redhat, jenkins, 1, 0.0.11)
(redhat, jenkins-client, 1, 0.0.8)
(redhat, jbosseap, 6, 0.0.14)
(redhat, ruby, 1.8, 0.0.17)
(redhat, ruby, 1.9, 0.0.17)
(redhat, mysql, 5.1, 0.2.12)
(redhat, mysql, 5.5, 0.2.12)
(redhat, jbossews, 1.0, 0.0.15)
(redhat, jbossews, 2.0, 0.0.15)
(redhat, nodejs, 0.10, 0.0.16)
(redhat, python, 2.6, 0.0.16)
(redhat, python, 2.7, 0.0.16)
(redhat, postgresql, 8.4, 0.3.13)
(redhat, postgresql, 9.2, 0.3.13)
(redhat, haproxy, 1.4, 0.0.16)
(redhat, cron, 1.4, 0.0.13)
(redhat, php, 5.3, 0.0.15)
(redhat, php, 5.4, 0.0.15)
(redhat, perl, 5.10, 0.0.15)
(redhat, diy, 0.1, 0.0.11)

Comment 1 Brenton Leanhardt 2014-05-16 19:25:51 UTC
Can you check if this is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1098544?

You may simply need to run:

oo-admin-ctl-cartridge -c import-node --activate --obsolete

This is new in 2.1 since cartridge data is now stored in the database.

Comment 2 Brenton Leanhardt 2014-05-16 19:27:14 UTC
Installations from install.openshift.com should handle this step, as well as adding a default district, since districts are now required.
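[Editorial note] The workaround from comment 1 can be sketched as a short script. The DRY_RUN guard is an illustrative addition, not part of the product: by default the script only prints the command, so it is safe to read and run anywhere; set DRY_RUN= (empty) on an actual broker host to execute it.

```shell
#!/bin/sh
# Sketch of the manual cartridge import from comment 1. In OSE 2.1 the
# cartridge data is stored in the database, so after installation the
# broker must import the cartridge manifests from a node.
# DRY_RUN defaulting to echo is an illustrative addition: by default
# this prints the command instead of running it.
DRY_RUN=${DRY_RUN-echo}

# Import cartridges from the node, activate the new versions, and mark
# obsolete ones.
$DRY_RUN oo-admin-ctl-cartridge -c import-node --activate --obsolete
```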

Comment 4 Jeff McCormick 2014-05-16 19:45:38 UTC
The oo-admin-ctl-cartridge command has the cartridges showing up, but when I try to create my first application, I get this error:

Unable to complete the requested operation due to: No district nodes available. Please try again and contact support if the issue persists. Reference ID: 7f417acdded23da263485a3baa3e8a2c

Shouldn't the installation have performed this step?

Comment 5 Jeff McCormick 2014-05-16 19:48:26 UTC
[root@ose21 init.d]# oo-admin-ctl-district 
No districts created yet.  Use 'oo-admin-ctl-district -c create' to create one.

It appears no districts were created by the installer.

Comment 6 Miciah Dashiel Butler Masters 2014-05-16 19:50:20 UTC
The installation log attached to this bug report indicates that the oo-admin-ctl-cartridge import-node command was run:

    + oo-admin-ctl-cartridge -c import-node --activate --obsolete
    /usr/sbin/oo-admin-ctl-cartridge:29:in `rescue in import_node': uninitialized constant Command::OpenShift (NameError)
            from /usr/sbin/oo-admin-ctl-cartridge:10:in `import_node'
            from /usr/sbin/oo-admin-ctl-cartridge:559:in `<main>'

It is not clear why the command failed because the error handler is apparently buggy. I'm looking into that.

Meanwhile, my best guess is that something is wrong with the ruby193-mcollective server.  We should check the output of `service ruby193-mcollective status` (although it looks like it started OK, earlier in the log), `oo-mco ping`, and assuming those work, the output of `oo-diagnostics -v`.

Comment 7 Jeff McCormick 2014-05-16 19:55:56 UTC
Here is the output on my system:


[root@ose21 init.d]# service ruby193-mcollective status
mcollectived (pid  15372) is running...
[root@ose21 init.d]# oo-mco ping
ose21.example.com                        time=147.00 ms


---- ping statistics ----
1 replies max: 147.00 min: 147.00 avg: 147.00 
[root@ose21 init.d]# oo-diagnostics -v
INFO: loading list of installed packages
INFO: OpenShift broker installed.
INFO: OpenShift node installed.
INFO: Loading the broker rails environment.
INFO: running: prereq_dns_server_available
INFO: checking that the first server in /etc/resolv.conf responds
INFO: running: prereq_domain_resolves
INFO: checking that we can resolve our application domain
INFO: running: test_enterprise_rpms
INFO: Checking that all OpenShift RPMs are actually from OpenShift Enterprise
INFO: running: test_selinux_policy_rpm
INFO: rpm selinux-policy installed with at least version 3.7.19-195.el6_4.4
INFO: running: test_selinux_enabled
INFO: No recent SELinux AVCs logged. However, SELinux violations are not always logged.
INFO: running: test_broker_cache_permissions
INFO: broker application cache permissions appear fine
INFO: running: test_node_profiles_districts_from_broker
INFO: checking node profiles via MCollective
INFO: profile for ose21.example.com: small
FAIL: test_node_profiles_districts_from_broker
          No districts are defined. Districts are required before creating applications.
          Please consult the Administration Guide.

INFO: skipping test_node_profiles_districts_from_broker
INFO: running: test_broker_accept_scripts
INFO: running oo-accept-broker
INFO: oo-accept-broker ran without error:
--BEGIN OUTPUT--
PASS

--END oo-accept-broker OUTPUT--
INFO: running oo-accept-systems -w 2
INFO: oo-accept-systems -w 2 ran without error:
--BEGIN OUTPUT--
PASS

--END oo-accept-systems -w 2 OUTPUT--
INFO: running: test_node_accept_scripts
INFO: running oo-accept-node
FAIL: run_script
oo-accept-node had errors:
--BEGIN OUTPUT--
FAIL: This Node is currently undistricted. Districts should be used in any production installation. Please consult the Administration Guide.  
1 ERRORS

--END oo-accept-node OUTPUT--
INFO: running: test_broker_httpd_error_log
INFO: running: test_broker_passenger_ps
INFO: checking the broker application process tree
INFO: running: test_for_nonrpm_rubygems
INFO: checking for presence of gem-installed rubygems
INFO: looking in /opt/rh/ruby193/root/usr/local/share/gems/specifications/*.gemspec /opt/rh/ruby193/root/usr/share/gems/specifications/*.gemspec
INFO: running: test_for_multiple_gem_versions
INFO: checking for presence of gem-installed rubygems
INFO: running: test_node_httpd_error_log
INFO: running: test_node_containerization_plugin
INFO: running: test_node_mco_log
INFO: running: test_pam_openshift
INFO: running: test_services_enabled
INFO: checking that required services are running now
INFO: checking that required services are enabled at boot
INFO: running: test_missing_iptables_config
INFO: running: test_system_config_firewall
WARN: test_system_config_firewall
         Using system-config-firewall and lokkit with OpenShift is not recommended.
         To continue using lokkit please ensure the following custom rules are 
         installed in /etc/sysconfig/system-config-firewall:

         --custom-rules=ipv4:filter:/etc/openshift/system-config-firewall-compat
         --custom-rules=ipv4:filter:/etc/openshift/iptables.filter.rules
         --custom-rules=ipv4:nat:/etc/openshift/iptables.nat.rules

INFO: running: test_node_quota_bug
INFO: testing for quota creation failure bug
INFO: running: test_vhost_servernames
INFO: checking for vhost interference problems
INFO: running: test_altered_package_owned_configs
INFO: running: test_broken_httpd_version
INFO: running: test_usergroups_enabled
INFO: running: test_mcollective_context
INFO: running: test_mcollective_bad_facts
INFO: running: test_auth_conf_files
INFO: running: test_broker_certificate
WARN: test_broker_certificate
Using a self-signed certificate for the broker
INFO: running: test_abrt_addon_python
INFO: running: test_node_frontend_clash
INFO: running: test_yum_configuration
WARN: test_yum_configuration
        oo-admin-yum-validator reported some possible problems
        with your package source configuration:
--------------------------------------------------------------
      No roles have been specified. Attempting to guess the roles for this system...
If the roles listed below are incorrect or incomplete, please re-run this script with the appropriate --role arguments
    node
    broker
    client
    node-eap
Detected OpenShift Enterprise repository subscription managed by Red Hat Subscription Manager.
Could not determine product version. Please re-run this script with the --oo-version argument.
Please re-run this tool after making any recommended repairs to this system

--------------------------------------------------------------
        Incorrect package source configuration could lead to
        failure to install the correct RPMs.

INFO: running: test_node_env_vars_match
INFO: running: test_apache_can_read_conf_files
3 WARNINGS
2 ERRORS

Comment 8 Brenton Leanhardt 2014-05-16 20:02:49 UTC
Are you able to create a district and add a node to it?  That is likely the reason you are seeing the "No district nodes available." error.  To be honest, I believe even after adding a Node to a district I had to clear the broker cache to get things working:

oo-admin-broker-cache -c
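[Editorial note] The recovery sequence described in comments 5, 8, and 10 can be sketched as one script. The district name and node hostname are the example values from this bug, not required names, and the DRY_RUN guard defaulting to echo is an illustrative addition so the script prints the commands rather than executing them.

```shell
#!/bin/sh
# District recovery sketch assembled from comments 5, 8, and 10:
# create a district, add the node to it, then clear the broker cache
# so the change becomes visible to the console.
# DISTRICT and NODE are example values from this bug report; set
# DRY_RUN= (empty) to run the commands for real on the broker host.
DRY_RUN=${DRY_RUN-echo}
DISTRICT=testdistrict
NODE=ose21.example.com

$DRY_RUN oo-admin-ctl-district -c create --name "$DISTRICT"
$DRY_RUN oo-admin-ctl-district -c add-node -n "$DISTRICT" -i "$NODE"
$DRY_RUN oo-admin-broker-cache -c
```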

Comment 9 Miciah Dashiel Butler Masters 2014-05-16 20:07:07 UTC
Looking through the attached log some more, I see the following errors:

    + oo-register-dns -d example.com -h ose21 -k /var/named/example.com.key -n 192.168.2.43
    /opt/rh/ruby193/root/usr/share/rubygems/rubygems/custom_require.rb:36:in `require': cannot load such file -- parseconfig (LoadError)
            from /opt/rh/ruby193/root/usr/share/rubygems/rubygems/custom_require.rb:36:in `require'
            from /usr/sbin/oo-register-dns:19:in `<main>'

from oo-register-dns,

    OpenShift: oo-diagnostics output - /opt/rh/ruby193/root/usr/share/rubygems/rubygems/custom_require.rb:36:in `require': cannot load such file -- openshift-origin-common (LoadError)
    OpenShift: oo-diagnostics output -      from /opt/rh/ruby193/root/usr/share/rubygems/rubygems/custom_require.rb:36:in `require'
    OpenShift: oo-diagnostics output -      from /usr/sbin/oo-diagnostics:37:in `<class:OODiag>'
    OpenShift: oo-diagnostics output -      from /usr/sbin/oo-diagnostics:34:in `<main>'

from oo-diagnostics, and

   https://cdn.redhat.com/content/dist/rhel/server/6/6Server/x86_64/ose-node/2.1/os/Packages/gdal-1.9.2-8.el6op.x86_64.rpm: [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'cdn.redhat.com'"

from Yum.  It looks like several Yum commands failed because of errors connecting to the package mirror.

It also looks like the installation was retried several times.  Did you run oo-install multiple times?

We could certainly improve the error handling in various places. Beyond that, though, it's possible that the errors all trace back to the problem with the package mirror, which left several key libraries missing.

Comment 10 Jeff McCormick 2014-05-16 20:36:07 UTC
After running the following commands, things seem to be working; I was then able to create my first application on the new OSE 2.1.

I had to run the installer three times to get it to fully download all 600+ dependencies. I routinely get various connection errors to the cdn.redhat.com site, but restarting the installer has typically worked with OSE 2.0.

At this point, I'm not sure whether I have all the required packages, and I don't know how to validate that I have a complete install. Is there a utility that performs this validation? It would be very useful if one exists.

Anyway, thanks for helping me sort this out!


[root@ose21 init.d]# oo-admin-ctl-district -c create --name testdistrict
node_profile not specified.  Using default: small
Successfully created district: 537673a2361631d95a000001

{"_id"=>"537673a2361631d95a000001",
 "uuid"=>"537673a2361631d95a000001",
 "available_uids"=>"<6000 uids hidden>",
 "name"=>"testdistrict",
 "gear_size"=>"small",
 "available_capacity"=>6000,
 "max_uid"=>6999,
 "max_capacity"=>6000,
 "active_servers_size"=>0,
 "updated_at"=>2014-05-16 20:22:58 UTC,
 "created_at"=>2014-05-16 20:22:58 UTC}

[root@ose21 init.d]# oo-admin-ctl-
oo-admin-ctl-app                  oo-admin-ctl-iptables-port-proxy
oo-admin-ctl-authorization        oo-admin-ctl-region
oo-admin-ctl-cartridge            oo-admin-ctl-tc
oo-admin-ctl-district             oo-admin-ctl-team
oo-admin-ctl-domain               oo-admin-ctl-usage
oo-admin-ctl-gears                oo-admin-ctl-user
[root@ose21 init.d]# oo-admin-ctl-district -c add-node -n testdistrict -i ose21.example.com
Success for node 'ose21.example.com'!


{"_id"=>"537673a2361631d95a000001",
 "active_servers_size"=>1,
 "available_capacity"=>6000,
 "available_uids"=>"<6000 uids hidden>",
 "created_at"=>2014-05-16 20:22:58 UTC,
 "gear_size"=>"small",
 "max_capacity"=>6000,
 "max_uid"=>6999,
 "name"=>"testdistrict",
 "servers"=>
  [{"_id"=>"5376741436163179fe000001",
    "active"=>true,
    "name"=>"ose21.example.com",
    "unresponsive"=>false}],
 "updated_at"=>2014-05-16 20:22:58 UTC,
 "uuid"=>"537673a2361631d95a000001"}

[root@ose21 init.d]#
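[Editorial note] On the validation question in comment 10: the tools that already appear in this bug can be combined into a rough post-install check. The --role and --oo-version arguments are the ones oo-admin-yum-validator itself asked for in the comment 7 output; whether this amounts to a complete validation is an assumption, and the DRY_RUN guard defaulting to echo keeps this a printed sketch.

```shell
#!/bin/sh
# Rough post-install sanity pass using only tools seen earlier in this
# bug: oo-admin-yum-validator (with the --role/--oo-version arguments
# it requested in the comment 7 output) to check package sources, and
# oo-diagnostics to run the broker/node health checks.
# Set DRY_RUN= (empty) to execute on the actual host.
DRY_RUN=${DRY_RUN-echo}

$DRY_RUN oo-admin-yum-validator --oo-version 2.1 \
    --role broker --role node --role client --role node-eap
$DRY_RUN oo-diagnostics -v
```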

Comment 11 Jeff McCormick 2014-05-22 13:34:21 UTC

*** This bug has been marked as a duplicate of bug 1098544 ***