Bug 1393503 - CFME postgres database does not start in container running OpenShift 3.3
Summary: CFME postgres database does not start in container running OpenShift 3.3
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: cfme-container
Version: 5.6.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: cfme-future
Assignee: Franco Bladilo
QA Contact: Einat Pacifici
Docs Contact: Red Hat CloudForms Documentation
URL:
Whiteboard: container:appliance
Depends On:
Blocks:
Reported: 2016-11-09 17:14 UTC by Akram Ben Aissi
Modified: 2017-12-05 15:08 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-06 19:44:21 UTC
Category: ---
Cloudforms Team: Container Management
Target Upstream Version:



Description Akram Ben Aissi 2016-11-09 17:14:19 UTC
Description of problem:
CFME postgres database does not start in container running OpenShift 3.3

Version-Release number of selected component (if applicable):


How reproducible:
Always


Steps to Reproduce:
1. create a project
2. oc new-app cfme4:5.6.2.2
3. oc create serviceaccount cfsa -n cloudforms
4. oadm policy add-scc-to-user privileged system:serviceaccount:cloudforms:cfsa
5. oc new-app registry.access.redhat.com/cloudforms/cfme4
6. oc patch dc cfme4 -p '{"spec":{"template":{"spec":{"containers":[{"name":"cfme4","securityContext":{ "privileged":true } }],"serviceAccountName":"cfsa"}}}}'
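The inline JSON in step 6 is dense; a quick sanity check (Python, parsing the same patch string as above) confirms it sets the two fields the privileged deployment needs:

```python
import json

# The exact patch body passed to oc patch in step 6 above.
patch = ('{"spec":{"template":{"spec":{"containers":[{"name":"cfme4",'
         '"securityContext":{ "privileged":true } }],'
         '"serviceAccountName":"cfsa"}}}}')

obj = json.loads(patch)
pod_spec = obj["spec"]["template"]["spec"]

# The patch must (a) run the pod under the cfsa service account and
# (b) mark the cfme4 container as privileged.
print(pod_spec["serviceAccountName"])                              # cfsa
print(pod_spec["containers"][0]["securityContext"]["privileged"])  # True
```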


Actual results:
postgres database never starts
Nov 09 12:13:02 manageiq-3-9gty0 sh[278]: rake aborted!
Nov 09 12:13:02 manageiq-3-9gty0 sh[278]: Errno::ENOENT: No such file or directory @ rb_sysopen - /var/www/miq/vmdb/config/database.yml
Nov 09 12:13:02 manageiq-3-9gty0 sh[278]: /var/www/miq/vmdb/lib/tasks/evm_dba.rake:12:in `read'
Nov 09 12:13:02 manageiq-3-9gty0 sh[278]: /var/www/miq/vmdb/lib/tasks/evm_dba.rake:12:in `load_config'
Nov 09 12:13:02 manageiq-3-9gty0 sh[278]: /var/www/miq/vmdb/lib/tasks/evm_dba.rake:16:in `local?'
Nov 09 12:13:02 manageiq-3-9gty0 sh[278]: /var/www/miq/vmdb/lib/tasks/evm_dba.rake:68:in `block (3 levels) in <top (required)>'
Nov 09 12:13:02 manageiq-3-9gty0 sh[278]: Tasks: TOP => evm:start => evm:db:verify_local
Nov 09 12:13:02 manageiq-3-9gty0 sh[278]: (See full trace by running task with --trace)
Nov 09 12:13:03 manageiq-3-9gty0 systemd[1]: evmserverd.service: control process exited, code=exited status=1
Nov 09 12:13:03 manageiq-3-9gty0 systemd[1]: Failed to start EVM server daemon.
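For reference, the rake failure above is a plain missed file read: evm_dba.rake opens /var/www/miq/vmdb/config/database.yml before anything has written it. A minimal sketch of that failure mode (Python stand-in with a hypothetical temp path, not the appliance code):

```python
import os
import tempfile

def load_config(path):
    """Stand-in for evm_dba.rake's load_config: read the DB config from disk."""
    with open(path) as f:  # raises FileNotFoundError (errno ENOENT) if missing
        return f.read()

with tempfile.TemporaryDirectory() as d:
    missing = os.path.join(d, "database.yml")  # never written, as in the bug
    try:
        load_config(missing)
    except FileNotFoundError as e:
        # Same errno that Ruby reports as Errno::ENOENT in the journal above.
        assert e.errno == 2
        print("No such file or directory:", os.path.basename(e.filename))
```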







Expected results:

The CFME appliance should start and run normally.

Additional info:

Comment 2 Franco Bladilo 2016-11-10 03:12:50 UTC
Akram,

I was not able to reproduce your issue on OCP 3.3 using the cfme4 (monolithic) container. These were my steps:

----

[root@franco-ocp-eval-master ~]# oc version
oc v3.3.0.34
kubernetes v1.3.0+52492b4
features: Basic-Auth GSSAPI Kerberos SPNEGO


[root@franco-ocp-eval-master /]# oc new-app registry.access.redhat.com/cloudforms/cfme4
--> Found Docker image 40cbd95 (3 weeks old) from registry.access.redhat.com for "registry.access.redhat.com/cloudforms/cfme4"
...

[root@franco-ocp-eval-master ~]# oc rsh cfme4-3-p1zrn bash -l

[root@cfme4-3-p1zrn vmdb]# pidof httpd
2083 2082 2081 2080 2079 2078
[root@cfme4-3-p1zrn vmdb]# systemctl status evmserverd
● evmserverd.service - EVM server daemon
   Loaded: loaded (/usr/lib/systemd/system/evmserverd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-11-09 21:58:42 EST; 9min ago
  Process: 577 ExecStart=/bin/sh -c /bin/evmserver.sh start (code=exited, status=0/SUCCESS)
 Main PID: 601 (ruby)

----

Version cfme4:5.6.2.2 is actually the latest; see here:

[root@franco-ocp-eval-master ~]# docker inspect 9b5baf2780c9 | grep version
                "version": "5.6.2.2"

I already had a service account allowed to run privileged containers, so I skipped its creation and went on to patch the dc that OCP created.

The database.yml file gets created by the appliance-initialize systemd unit, which also performs the DB preparation and startup. I would like to see some journal logs from that unit on your deployment; a journalctl -u appliance-initialize could help diagnose the issue.

Comment 3 Akram Ben Aissi 2016-11-10 13:48:30 UTC
Hi,

Could you please try with RHEL 7.2 and docker 1.10?

Comment 4 Franco Bladilo 2016-11-10 13:56:20 UTC
Hi Akram,

The env I tested is actually RHEL 7.2 with docker 1.10; see below:

[root@franco-ocp-eval-master /]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.2 (Maipo)

[root@franco-ocp-eval-master /]# rpm -q docker
docker-1.10.3-46.el7.14.x86_64

Comment 6 Dave Johnson 2016-12-06 19:44:21 UTC
This works as expected with 5.6.3.3.

