Bug 1700857

Summary: The EVM service does not start, "Failed to detect region_number", "uninitialized constant False"
Product: Red Hat CloudForms Management Engine
Reporter: Gellert Kis <gekis>
Component: Appliance
Assignee: Keenan Brock <kbrock>
Status: CLOSED NOTABUG
QA Contact: John Dupuy <jdupuy>
Severity: high
Docs Contact: Red Hat CloudForms Documentation <cloudforms-docs>
Priority: high
Version: 5.10.1
CC: abellott, dmetzger, jdeubel, jprause, mshriver, obarenbo
Target Milestone: GA
Target Release: 5.10.5
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment: 5.10.1.2
Last Closed: 2019-05-21 13:58:05 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: Bug
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1704905

Comment 3 Keenan Brock 2019-04-17 14:46:44 UTC
I have access to the machine, and the rails console seems to be behaving well (it detects the region and accesses the database without issue).

I'm digging into the errors

Comment 4 Keenan Brock 2019-04-17 15:56:51 UTC
Unfortunately, that appliance is working for me.

I was able to run the status check, open a rails console, and start up the server.
I was also able to detect the region correctly.
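For reference, the checks above can be run on the appliance roughly like this. This is a sketch assuming the standard 5.10 appliance defaults (the evmserverd service name, the /var/www/miq/vmdb install path, and the REGION file); verify against your version before relying on it.

```shell
# Check whether the EVM service is running:
systemctl status evmserverd

# Check which region the appliance believes it belongs to
# (the REGION file under the vmdb directory on standard appliances):
cat /var/www/miq/vmdb/REGION

# Open a rails console to confirm database connectivity:
cd /var/www/miq/vmdb && bin/rails console
```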

One error I did find in the logs is:

[----] E, [2019-04-17T15:09:20.503744 #1024:e08f58] ERROR -- : [MiqPassword::MiqPasswordError]: can not decrypt v2_key encrypted string  Method:[block (2 levels) in <class:LogProxy>]

If that log line is present at the customer's site, it means the v2_key on disk is corrupted.
Copying that file from a working appliance to the non-working appliance should get it running again.
I would first back up the current config/v2_key to config/v2_key.old, so you can diff the two files and confirm that the key was indeed the cause.
Full path for the file: /var/www/miq/vmdb/config/v2_key
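The replacement procedure described above might look like the following. This is only a sketch: "working-appliance" is a hypothetical hostname standing in for a known-good appliance, and the paths assume the standard appliance layout.

```shell
# On the broken appliance: back up the possibly corrupted key first,
# so the old and new keys can be diffed afterwards.
cp /var/www/miq/vmdb/config/v2_key /var/www/miq/vmdb/config/v2_key.old

# Copy the key from a known-good appliance ("working-appliance" is
# a placeholder hostname; run this from the broken appliance):
scp working-appliance:/var/www/miq/vmdb/config/v2_key /var/www/miq/vmdb/config/v2_key

# Confirm the files actually differ, i.e. that the key was the cause:
diff /var/www/miq/vmdb/config/v2_key.old /var/www/miq/vmdb/config/v2_key

# Restart the EVM service and recheck its status:
systemctl restart evmserverd
systemctl status evmserverd
```

If the diff shows no difference, the key was not the problem and the backup can simply be restored.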

A corrupted v2_key would also explain the region-detection failure, since the server could not decrypt the database credentials and therefore could not connect to the database.


But to be honest, if the filesystem is corrupted, then scrapping and rebuilding the broken worker appliance seems like a better approach.
The v2_key file is not changed after initial configuration, so the odds of it getting corrupted are low.


I'll look back at the logs for other issues, including the missing-user error, but I'm somewhat stuck since this appliance doesn't look like a reproducer.

Comment 10 Keenan Brock 2019-05-21 13:58:05 UTC
The case was closed.
Please let me know if we need to look into this further.