Bug 1544854 - Setup fails for HA standby node using appliance_console_cli
Summary: Setup fails for HA standby node using appliance_console_cli
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Appliance
Version: 5.9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: GA
Target Release: 5.10.0
Assignee: Gregg Tanzillo
QA Contact: Jaroslav Henner
URL:
Whiteboard: black
Depends On:
Blocks:
 
Reported: 2018-02-13 15:56 UTC by luke couzens
Modified: 2019-02-07 23:01 UTC
CC List: 6 users

Fixed In Version: 5.10.0.11
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-07 23:01:05 UTC
Category: ---
Cloudforms Team: ---
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2019:0212 None None None 2019-02-07 23:01:10 UTC

Description luke couzens 2018-02-13 15:56:54 UTC
Description of problem: Cannot set up an HA configuration using appliance_console_cli


Version-Release number of selected component (if applicable): 5.9.0.20


How reproducible: 100%


Steps to Reproduce:
1. Provision appliances
2. Configure the primary node
3. Create a region with an external appliance
4. Configure the standby node

Command used for the standby node:

appliance_console_cli --internal -U <user> -p <pass> --replication standby --primary-host <primary node ip> --standby-host <standby node ip> --cluster-node-number 2 --auto-failover --dbname vmdb_production -v --dbdisk /dev/vdb --standalone


Actual results:
create encryption key
configuring internal database
Initialize postgresql disk starting
Initialize postgresql disk complete
Initialize postgresql starting
Initialize postgresql complete
Configuring Server as Standby
Configuring Replication Standby Server...
Initialize postgresql disk starting
Initialize postgresql disk failed with error - /sbin/parted exit code: 1.
See /var/www/miq/vmdb/log/appliance_console.log for details.
/opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/logging.rb:41:in `say_error': ManageIQ::ApplianceConsole::MiqSignalError (ManageIQ::ApplianceConsole::MiqSignalError)
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/logging.rb:82:in `log_and_feedback_exception'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/logging.rb:57:in `rescue in log_and_feedback'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/logging.rb:54:in `log_and_feedback'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/database_replication_standby.rb:155:in `initialize_postgresql_disk'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/database_replication_standby.rb:64:in `activate'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:276:in `set_replication'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:177:in `run'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/cli.rb:425:in `parse'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/bin/appliance_console_cli:7:in `<top (required)>'
	from /opt/rh/cfme-gemset/bin/appliance_console_cli:23:in `load'
	from /opt/rh/cfme-gemset/bin/appliance_console_cli:23:in `<main>'
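The trace above bottoms out in a shell-out to /sbin/parted that returned exit code 1 (via create_partition_to_fill_disk). A minimal sketch of the shell-out-and-raise pattern that produces this kind of "exit code: 1" message, using stdlib Open3 rather than the gem's own command helper (an assumption, for illustration only):

```ruby
require "open3"

# Sketch only: run an external command and raise with its exit code and
# stderr if it fails, mirroring the error format seen in the log below.
# The actual gem uses its own command helper, not Open3.
def run_command!(cmd, *args)
  out, err, status = Open3.capture3(cmd, *args)
  unless status.success?
    raise "Command failed: #{cmd} exit code: #{status.exitstatus}. " \
          "Error: #{err}. Output: #{out}"
  end
  out
end
```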

Expected results:
The standby node is configured correctly.

Additional info:

cat /var/www/miq/vmdb/log/appliance_console.log
# Logfile created on 2018-02-13 10:48:31 -0500 by logger.rb/54362
I, [2018-02-13T10:48:31.820213 #13196]  INFO -- : MIQ(ManageIQ::ApplianceConsole::InternalDatabaseConfiguration#initialize_postgresql_disk) : starting
I, [2018-02-13T10:48:32.269729 #13196]  INFO -- : MIQ(ManageIQ::ApplianceConsole::InternalDatabaseConfiguration#initialize_postgresql_disk) : complete
I, [2018-02-13T10:48:32.269885 #13196]  INFO -- : MIQ(ManageIQ::ApplianceConsole::InternalDatabaseConfiguration#initialize_postgresql) : starting
I, [2018-02-13T10:48:38.157530 #13196]  INFO -- : MIQ(ManageIQ::ApplianceConsole::InternalDatabaseConfiguration#initialize_postgresql) : complete
I, [2018-02-13T10:48:39.201530 #13196]  INFO -- : MIQ(ManageIQ::ApplianceConsole::DatabaseReplicationStandby#initialize_postgresql_disk) : starting
E, [2018-02-13T10:48:40.289773 #13196] ERROR -- : MIQ(ManageIQ::ApplianceConsole::DatabaseReplicationStandby#initialize_postgresql_disk)  Command failed: /sbin/parted exit code: 1. Error: Error: Partition(s) 1 on /dev/vdb have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
. Output: . At: /opt/rh/cfme-gemset/gems/manageiq-appliance_console-1.2.4/lib/manageiq/appliance_console/logical_volume_management.rb:50:in `create_partition_to_fill_disk'


Configuration from the primary node and EVM server:
Configuring the primary database:

appliance_console_cli --internal -U <user> -p <pass> --dbname vmdb_production --verbose --dbdisk /dev/vdb --standalone
create encryption key
configuring internal database
Initialize postgresql disk starting
Initialize postgresql disk complete
Initialize postgresql starting
Initialize postgresql complete

Creating a region in the primary database from an external appliance:
appliance_console_cli -r 1 -h <ip> -U <user> -p <pass> --dbname vmdb_production -v -K <ip> --sshlogin <user> -a <pass>
fetch encryption key
configuring external database
Checking for connections to the database...

Configuring the primary HA node on the primary database:
appliance_console_cli -U <user> -p <pass> --dbname vmdb_production --replication primary --primary-host <ip> --cluster-node-number 1 --auto-failover -v
Configuring Server as Primary
Configuring Primary Replication Server...

Comment 2 Dave Johnson 2018-02-13 16:04:51 UTC
Please assess the impact of this issue and update the severity accordingly.  Please refer to https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity for a reminder on each severity's definition.

If it's something like a tracker bug where it doesn't matter, please set the severity to Low.

Comment 5 CFME Bot 2018-03-13 21:06:38 UTC
New commit detected on ManageIQ/manageiq/master:

https://github.com/ManageIQ/manageiq/commit/9bd0f5db65c1c02b1e9eea2e5a11b7a3bb6caf0b
commit 9bd0f5db65c1c02b1e9eea2e5a11b7a3bb6caf0b
Author:     Nick Carboni <ncarboni@redhat.com>
AuthorDate: Mon Mar 12 16:51:06 2018 -0400
Commit:     Nick Carboni <ncarboni@redhat.com>
CommitDate: Mon Mar 12 16:51:06 2018 -0400

    Add encryption key validation rake task

    This task will use the currently configured encryption key to
    attempt to decrypt the seeded values from the miq_databases entry.

    If the database is not yet migrated and seeded, it will return true.

    https://bugzilla.redhat.com/show_bug.cgi?id=1544854

 lib/tasks/evm.rake | 6 +
 lib/tasks/evm_application.rb | 14 +
 spec/lib/tasks/evm_application_spec.rb | 6 +
 3 files changed, 26 insertions(+)
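The idea behind the validation task can be sketched as follows. This is an illustration only, using AES-256-CBC from stdlib OpenSSL; the actual rake task uses ManageIQ's configured key and the seeded miq_databases row, not this code:

```ruby
require "openssl"
require "base64"

# Hypothetical sketch of encryption-key validation: encrypt a seeded value
# under one key, then check whether the currently configured key can
# recover it. Not ManageIQ's actual crypto code.
def encrypt(plaintext, key)
  cipher = OpenSSL::Cipher.new("aes-256-cbc").encrypt
  cipher.key = key
  iv = cipher.random_iv
  Base64.strict_encode64(iv + cipher.update(plaintext) + cipher.final)
end

def key_valid?(ciphertext, key, expected)
  raw    = Base64.strict_decode64(ciphertext)
  cipher = OpenSSL::Cipher.new("aes-256-cbc").decrypt
  cipher.key = key
  cipher.iv  = raw[0, 16]
  (cipher.update(raw[16..]) + cipher.final) == expected
rescue OpenSSL::Cipher::CipherError
  false
end
```

A mismatched key either fails the padding check or decrypts to garbage, so validation fails before any database work proceeds with the wrong key.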

Comment 7 CFME Bot 2018-05-07 19:33:58 UTC
New commit detected on ManageIQ/manageiq-appliance_console/master:

https://github.com/ManageIQ/manageiq-appliance_console/commit/89a8a0de526886b2624d6cd5b4204aff10a7cd08
commit 89a8a0de526886b2624d6cd5b4204aff10a7cd08
Author:     Nick Carboni <ncarboni@redhat.com>
AuthorDate: Mon Mar 12 16:51:33 2018 -0400
Commit:     Nick Carboni <ncarboni@redhat.com>
CommitDate: Mon Mar 12 16:51:33 2018 -0400

    Validate the encryption key when configuring the database

    https://bugzilla.redhat.com/show_bug.cgi?id=1544854

 lib/manageiq/appliance_console/database_configuration.rb | 5 +
 1 file changed, 5 insertions(+)

Comment 9 CFME Bot 2018-08-06 19:56:38 UTC
New commit detected on ManageIQ/manageiq-appliance/master:

https://github.com/ManageIQ/manageiq-appliance/commit/8aba2f893cfbe878eea55c1da0de0b98f60024ab
commit 8aba2f893cfbe878eea55c1da0de0b98f60024ab
Author:     Nick Carboni <ncarboni@redhat.com>
AuthorDate: Wed Aug  1 16:53:52 2018 -0400
Commit:     Nick Carboni <ncarboni@redhat.com>
CommitDate: Wed Aug  1 16:53:52 2018 -0400

    Bump versions of the console and HA admin gem

    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1544854
    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1418080
    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1535345
    Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1586186

    https://www.pivotaltracker.com/story/show/135779733
    https://www.pivotaltracker.com/story/show/141523501
    https://www.pivotaltracker.com/story/show/121849185

 manageiq-appliance-dependencies.rb | 4 +-
 1 file changed, 2 insertions(+), 2 deletions(-)

Comment 10 Jaroslav Henner 2018-10-04 12:12:16 UTC
When using a loopback device for the partition, I get an error when issuing the command from the description.

Initialize postgresql disk failed with error - /sbin/pvcreate exit code: 5.
See /var/www/miq/vmdb/log/appliance_console.log for details.
/opt/rh/cfme-gemset/gems/manageiq-appliance_console-3.2.0/lib/manageiq/appliance_console/logging.rb:41:in `say_error': ManageIQ::ApplianceConsole::MiqSignalError (ManageIQ::ApplianceConsole::MiqSignalError)
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-3.2.0/lib/manageiq/appliance_console/logging.rb:82:in `log_and_feedback_exception'
	from /opt/rh/cfme-gemset/gems/manageiq-appliance_console-3.2.0/lib/manageiq/appliance_console/logging.rb:57:in `rescue in log_and_feedback'

The problem is that appliance_console expects the partition device /dev/loop01, while the correct partition device is /dev/loop0p1.
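The naming difference follows the kernel convention that device names ending in a digit (loop0, nvme0n1) take a "p" separator before the partition number, while plain names (vdb, sda) do not. A sketch of that logic, using a hypothetical helper rather than the gem's actual code:

```ruby
# Hypothetical helper illustrating the kernel partition-naming convention:
# disks whose names end in a digit get a "p" separator before the
# partition number; others are simply concatenated.
def partition_path(disk, number)
  separator = disk.match?(/\d\z/) ? "p" : ""
  "#{disk}#{separator}#{number}"
end
```

Omitting the separator for digit-terminated names yields exactly the /dev/loop01 path seen here instead of /dev/loop0p1.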

Comment 11 Jaroslav Henner 2018-10-15 08:16:40 UTC
With a real drive (/dev/vdb instead of a loopback) it works.

Comment 12 errata-xmlrpc 2019-02-07 23:01:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0212

