Bug 1660595 - Hosted Engine Deploy fails with SSO authentication errors
Summary: Hosted Engine Deploy fails with SSO authentication errors
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-setup
Version: 4.2.5
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ovirt-4.3.3
Target Release: 4.3.0
Assignee: Simone Tiraboschi
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Duplicates: 1664123 1674540 (view as bug list)
Depends On:
Blocks:
 
Reported: 2018-12-18 18:54 UTC by Anitha Udgiri
Modified: 2020-02-25 13:37 UTC (History)
CC List: 13 users

Fixed In Version: ovirt-ansible-hosted-engine-setup-1.0.14
Doc Type: Bug Fix
Doc Text:
During a self-hosted engine deployment, SSO authentication errors may occur, stating that a valid profile cannot be found in the credentials and advising to check the logs for more details. The interim workaround is to retry the authentication attempt more than once. See BZ#1695523 for a specific example involving Kerberos SSO and engine-backup.
Clone Of:
Environment:
Last Closed: 2019-05-08 12:32:03 UTC
oVirt Team: Integration
Target Upstream Version:


Attachments: none


Links
GitHub: oVirt ovirt-ansible-hosted-engine-setup pull 149 (closed): Retry engine access on failures (last updated 2021-02-18 09:27:42 UTC)

Internal Links: 1674540

Description Anitha Udgiri 2018-12-18 18:54:10 UTC
Description of problem:


When the customer tries to deploy a hosted engine, the following error occurs:

# hosted-engine --deploy 
...
[ INFO  ] TASK [Reconfigure OVN central address]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Check for the local bootstrap VM]
[ ERROR ] AuthError: Error during SSO authentication access_denied : Cannot authenticate user 'None@N/A': No valid profile found in credentials..
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations": [{"msg": "The 'ovirt_vms_facts' module is being renamed 'ovirt_vm_facts'", "version": 2.8}], "msg": "Error during SSO authentication access_denied : Cannot authenticate user 'None@N/A': No valid profile found in credentials.."}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Fetch logs from the engine VM]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Set destination directory path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Create destination directory]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Find the local appliance image]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Set local_vm_disk_path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Give the vm time to flush dirty buffers]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Copy engine logs]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Remove temporary entry in /etc/hosts for the local VM]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20181214132433.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20181214125408-p7x3u1.log
[root@rhv-2 ovirt-hosted-engine]#

Comment 6 Sandro Bonazzola 2018-12-19 07:37:35 UTC
This has been opened on 4.2.5. Does this reproduce with 4.2.7 too?

Comment 7 Anitha Udgiri 2019-01-03 19:57:59 UTC
(In reply to Sandro Bonazzola from comment #6)
> This has been opened on 4.2.5. Does this reproduce with 4.2.7 too?

Sandro,
   Here is the customer's response:

Translated:

I was able to continue with the deployment:

1. Correct DNS entry (the second octet was incorrect on DNS)
2. Clean previous hosted-engine installation
3. Clean /var/tmp on host where the rhv-m image was
4. Retry the installation; everything went OK, all under RHV version 4.2.7

Comment 9 Sandro Bonazzola 2019-02-18 07:54:57 UTC
Moving to 4.3.2, as this has not been identified as a blocker for 4.3.1.

Comment 10 Sandro Bonazzola 2019-02-20 08:59:57 UTC
*** Bug 1664123 has been marked as a duplicate of this bug. ***

Comment 11 Umashankar 2019-03-18 11:44:22 UTC
Hi, I'm using oVirt 4.3.1 and I'm facing the same issue when trying to deploy the self-hosted engine on my server. It blocks me from hosting the engine at the same step and with the same error initially reported.

Comment 12 Simone Tiraboschi 2019-03-25 12:40:38 UTC
(In reply to Umashankar from comment #11)
> Hi, I'm using ovirt 4.3.1 and i'm facing same issue, when tried to deploy
> self-hosted-engine on my server, It is blocking me from hosting the engine,
> at the same step and error initially reported.

For now I can only suggest simply trying again: the issue is not systematic at all.
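The eventual fix (the PR linked above, "Retry engine access on failures") takes this approach and retries automatically. A minimal sketch of that retry pattern in Ansible, where the task name matches the failing step from the original report but the VM pattern and retry counts are illustrative assumptions rather than the exact values from the PR, could look like:

```yaml
# Sketch: retry the facts lookup that intermittently hits the SSO error.
# VM name pattern and retry/delay values are assumptions for illustration.
- name: Check for the local bootstrap VM
  ovirt_vm_facts:
    auth: "{{ ovirt_auth }}"
    pattern: name=external-HostedEngineLocal
  register: local_vm_facts
  until: local_vm_facts is succeeded
  retries: 5
  delay: 10
```

With `until`/`retries`, a failed attempt is retried after `delay` seconds instead of immediately failing the play, which papers over the non-systematic SSO error.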

Comment 13 Simone Tiraboschi 2019-03-25 13:36:28 UTC
*** Bug 1674540 has been marked as a duplicate of this bug. ***

Comment 14 André Liebe 2019-03-26 08:38:28 UTC
Okay, I tried again and failed again.

- ovirt-hosted-engine-cleanup
- rm -rf /var/tmp*
- re-run: hosted-engine --deploy --restore-from-file=/mnt/backups/engine/ovirt-engine-backup-full.tar.gz

fails at same step.

host is up to date to current 4.3.2
ovirt-hosted-engine-ha-2.3.1-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.13-1.el7.noarch

Comment 15 Simone Tiraboschi 2019-03-26 08:44:05 UTC
André, can you please try locally applying https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/pull/149/files on your /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/03_hosted_engine_final_tasks.yml ?

Honestly I never managed to reproduce this in a systematic way.

Comment 16 André Liebe 2019-03-26 10:03:09 UTC
patch -u -p1 < /root/093f02a.patch
patching file tasks/create_target_vm/03_hosted_engine_final_tasks.yml
Hunk #1 succeeded at 321 (offset -3 lines).

But it fails again.

I tried to authenticate against the temporarily reachable web GUI at https://lvh3:6900/hosted-engine, but failed (like the Ansible script).

While looking through engine.log I found a major problem, which may be causing the trouble:
2019-03-26 10:21:05,136+01 ERROR [org.ovirt.engine.core.sso.utils.SsoExtensionsManager] (ServerService Thread Pool -- 49) [] Could not load extension based on configuration file '/etc/ovirt-engine/extensions.d/kerberos-http-authn.properties'. Please check the configuration file is valid. Exception message is: Error loading extension 'kerberos-http-authn': The module 'org.ovirt.engine-extensions.aaa.misc' cannot be loaded: org.ovirt.engine-extensions.aaa.misc
2019-03-26 10:21:05,136+01 ERROR [org.ovirt.engine.core.sso.utils.SsoExtensionsManager] (ServerService Thread Pool -- 49) [] Could not load extension based on configuration file '/etc/ovirt-engine/extensions.d/kerberos-http-mapping.properties'. Please check the configuration file is valid. Exception message is: Error loading extension 'kerberos-http-mapping': The module 'org.ovirt.engine-extensions.aaa.misc' cannot be loaded: org.ovirt.engine-extensions.aaa.misc
...
2019-03-26 10:21:05,575+01 WARN  [org.ovirt.engineextensions.aaa.ldap.Framework] (ServerService Thread Pool -- 49) [] Error while connecting to 'ucs1.lab.gematik.de': LDAPException(resultCode=82 (local error), errorMessage='The connection reader was unable to successfully complete TLS negotiation:  SSLHandshakeException(sun.security.validator.ValidatorException: No trusted certificate found), ldapSDKVersion=4.0.7, revision=b28fb50058dfe2864171df2448ad2ad2b4c2ad58')
2019-03-26 10:21:05,575+01 WARN  [org.ovirt.engineextensions.aaa.ldap.AuthnExtension] (ServerService Thread Pool -- 49) [] [ovirt-engine-extension-aaa-ldap.authn::lab.gematik.de-authn] Cannot initialize LDAP framework, deferring initialization. Error: The connection reader was unable to successfully complete TLS negotiation:  SSLHandshakeException(sun.security.validator.ValidatorException: No trusted certificate found), ldapSDKVersion=4.0.7, revision=b28fb50058dfe2864171df2448ad2ad2b4c2ad58

Side note: the engine was configured with aaa to authenticate through Kerberos and LDAPS against a Domain Controller before
- internal CA certificate was deployed manually to /etc/pki/ca-trust/source/anchors/internal-ca.pem and installed globally by update-ca-trust extract
- Kerberos was configured (krb5.keytab was deployed to /etc/httpd/http.keytab, httpd configuration extensions etc.) according to https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/configuring_ldap_and_kerberos_for_single_sign-on

Comment 17 Simone Tiraboschi 2019-03-26 11:27:28 UTC
(In reply to André Liebe from comment #16)
> Side note: engine was configured with aaa to authenticate throgh kerberos
> and LDAPs against a Domain Controller before
> - internal CA certificate was deployed manually to
> /etc/pki/ca-trust/source/anchors/internal-ca.pem and installed globally by
> update-ca-trust extract
> - Kerberos was konfigured (krb5.keytab was deployed to
> /etc/httpd/http.keytab, httpd configuration extensions etc) according to
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/
> html/administration_guide/configuring_ldap_and_kerberos_for_single_sign-on

Didi,
are we confident that we are correctly covering also such cases in engine-backup?

Comment 18 Yedidyah Bar David 2019-03-26 11:35:41 UTC
(In reply to Simone Tiraboschi from comment #17)
> (In reply to André Liebe from comment #16)
> > Side note: engine was configured with aaa to authenticate throgh kerberos
> > and LDAPs against a Domain Controller before
> > - internal CA certificate was deployed manually to
> > /etc/pki/ca-trust/source/anchors/internal-ca.pem and installed globally by
> > update-ca-trust extract
> > - Kerberos was konfigured (krb5.keytab was deployed to
> > /etc/httpd/http.keytab, httpd configuration extensions etc) according to
> > https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/
> > html/administration_guide/configuring_ldap_and_kerberos_for_single_sign-on
> 
> Didi,
> are we confident that we are correctly covering also such cases in
> engine-backup?

We do not. Please open a bug, thanks. That said, not sure where the border is
between "engine backup" and "engine machine backup". User can have all kinds
of local modifications (backup agents, monitoring, whatever) that we do not
backup/restore.

Comment 19 Simone Tiraboschi 2019-03-26 11:44:40 UTC
(In reply to Yedidyah Bar David from comment #18)
> > Didi,
> > are we confident that we are correctly covering also such cases in
> > engine-backup?
> 
> We do not. Please open a bug, thanks. That said, not sure where the border is
> between "engine backup" and "engine machine backup". User can have all kinds
> of local modifications (backup agents, monitoring, whatever) that we do not
> backup/restore.

Yes, of course we cannot cover every possible user change without taking a VM snapshot or something like that.
I think that we should instead probably focus more on a kind of safe mode for the engine, where we are sure that the engine can always start with bare minimal functionality, then letting the user fix what's still missing.

Comment 20 André Liebe 2019-03-26 12:29:01 UTC
Normally I would have set up/prepared the virtual machine myself, as before the Ansible setup was the one and only option. From my point of view, engine-backup should at least contain all necessary files it was configured with, if the file/folder path was suggested by documentation [1],[2]. Or, at least, a strong warning should go into every customization part of the documentation that will break the restore procedure.

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/configuring_ldap_and_kerberos_for_single_sign-on
[2] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/appe-red_hat_enterprise_virtualization_and_ssl

Comment 21 André Liebe 2019-03-28 08:24:40 UTC
So, what's the best way to include a customization step (show SSH connection details and wait for user interaction to continue) with Ansible, so I can customize the new engine VM after everything is installed but before starting engine services within the new VM?

Comment 22 Yedidyah Bar David 2019-03-28 08:50:35 UTC
(In reply to André Liebe from comment #21)
> So, whats the best way to include a customization step (show ssh connection
> details and wait for user interaction to continue) with ansible, so I can
> customize the new engine VM after everything is installed but before
> starting engine services within the new vm?

Simone and I discussed this recently, but I do not remember the conclusion. IMO you can already do that in principle, because we do ask questions after the engine is already up, e.g. storage. So when prompted, you can find the local IP address of the engine vm (it will be in libvirt's default network), ssh there and/or connect to the web admin ui, customize stuff, then reply to the question prompt.

I agree that we should make this more user-friendly, and also discussed allowing doing this seamlessly using a 'ssh -w' tunnel, so that you can connect to the engine web ui right from your laptop. Simone - any more details? Do we have a bug for this?

Comment 23 André Liebe 2019-03-28 09:00:58 UTC
Hmm, isn't it already too late when the web UI is available? The CA certificate needs to be deployed before the engine/WildFly is started, so it is able to connect to LDAPS (or remote PostgreSQL with TLS).

Simone could you help me out with an ansible patch that waits for user interaction after setup?

Comment 24 Simone Tiraboschi 2019-03-28 09:02:52 UTC
(In reply to Yedidyah Bar David from comment #22)
> Simone and I discussed this recently, but I do not remember the conclusion.
> IMO you can already do that in principle, because we do ask questions after
> the engine is already up, e.g. storage. So when prompted, you can find the
> local IP address of the engine vm (it will be in libvirt's default network),
> ssh there and/or connect to the web admin ui, customize stuff, then reply to
> the question prompt.
> 
> I agree that we should make this more user-friendly, and also discussed
> allowing doing this seamlessly using a 'ssh -w' tunnel, so that you can
> connect to the engine web ui right from your laptop. Simone - any more
> details? Do we have a bug for this?

Yes, and the SSH tunnel to reach the engine over the bootstrap VM is already there now.
But this is a different case: here the user has to customise the engine VM after engine-backup but before engine-setup, and we already have a hook mechanism for that.

Creating a custom ansible tasks file with all the missing steps and saving it under /usr/share/ansible/roles/ovirt.hosted_engine_setup/hooks/enginevm_before_engine_setup will be enough here.
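As a sketch of such a hook for André's case (the file name is arbitrary, and the certificate source path on the host is an assumption; the destination path matches the one from comment 16), the tasks file could look like:

```yaml
# Hypothetical hook tasks file, saved as e.g.
# /usr/share/ansible/roles/ovirt.hosted_engine_setup/hooks/enginevm_before_engine_setup/custom_ca.yml
# The source path /root/internal-ca.pem is an assumption for illustration.
- name: Deploy the internal CA certificate on the engine VM
  copy:
    src: /root/internal-ca.pem
    dest: /etc/pki/ca-trust/source/anchors/internal-ca.pem
    mode: '0644'
- name: Rebuild the system CA trust store
  command: update-ca-trust extract
```

The hook runs on the engine VM after engine-backup restore but before engine-setup, which is exactly the window where the CA certificate must land for the LDAPS connection to work.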

Comment 25 Simone Tiraboschi 2019-03-28 09:03:26 UTC
(In reply to André Liebe from comment #23)
> Hmm, isn't it already too late when web ui available? 

yes, exactly.

Comment 27 Simone Tiraboschi 2019-03-28 17:11:13 UTC
(In reply to Yedidyah Bar David from comment #18)
> We do not. Please open a bug, thanks. That said, not sure where the border is
> between "engine backup" and "engine machine backup". User can have all kinds
> of local modifications (backup agents, monitoring, whatever) that we do not
> backup/restore.

Done: https://bugzilla.redhat.com/1693816

Comment 28 Yedidyah Bar David 2019-03-31 05:51:07 UTC
(In reply to Simone Tiraboschi from comment #24)
> (In reply to Yedidyah Bar David from comment #22)
> > Simone and I discussed this recently, but I do not remember the conclusion.
> > IMO you can already do that in principle, because we do ask questions after
> > the engine is already up, e.g. storage. So when prompted, you can find the
> > local IP address of the engine vm (it will be in libvirt's default network),
> > ssh there and/or connect to the web admin ui, customize stuff, then reply to
> > the question prompt.
> > 
> > I agree that we should make this more user-friendly, and also discussed
> > allowing doing this seamlessly using a 'ssh -w' tunnel, so that you can
> > connect to the engine web ui right from your laptop. Simone - any more
> > details? Do we have a bug for this?
> 
> Yes, and the ssh tunnel to reach the engine over the bootstrap VM is already
> there now.
> But this is a different case: here the user has to customise the engine VM
> after engine-backup but before engine-setup and we already have an hook
> mechanism for that.

In theory this is enough, in practice it requires lots of testing (also
routinely, on new versions) to make sure such a playbook keeps working
as expected.

IMO we should also (perhaps optionally) prompt between restore and setup,
saying "Restore finished. Press Enter when ready to continue and run Setup".

Comment 29 Yedidyah Bar David 2019-03-31 05:55:12 UTC
(In reply to André Liebe from comment #23)
> Hmm, isn't it already too late when web ui available? The ca certificate
> needs to be deployed before engine/wildfly is started, so it is able to
> connect to LDAPs (or remote postgre with tls).

OK, I agree. But in this specific case, if the version used to take the
backup and the version used during restore are identical, engine-setup
should not need to do very much, and it's probably safe to simply try
manually fixing what's needed and then run it again manually. That is,
if we indeed prompt at that step (instead of abort).

Comment 30 Simone Tiraboschi 2019-04-01 08:20:59 UTC
(In reply to Yedidyah Bar David from comment #28)
> IMO we should also (perhaps optionally) prompt between restore and setup,
> saying "Restore finished. Press Enter when ready to continue and run Setup".

Unfortunately we cannot easily pause Ansible execution in the middle.

In theory we have two ways to freeze Ansible execution:
https://docs.ansible.com/ansible/latest/modules/pause_module.html
https://docs.ansible.com/ansible/latest/modules/wait_for_module.html

In practice, pause is not really going to work if executed via ansible-tower or ansible-runner:
see "Note: Playbooks should not use the pause feature of Ansible without a timeout, as Tower does not allow for interactively cancelling a pause. If you must use pause, ensure that you set a timeout." from
https://docs.ansible.com/ansible-tower/latest/html/userguide/best_practices.html

ovirt-hosted-engine-setup is currently just wrapping ansible-playbook via subprocess.Popen:
https://github.com/oVirt/ovirt-hosted-engine-setup/blob/master/src/ovirt_hosted_engine_setup/ansible_utils.py#L198

but even in that case a pause task is going to be skipped with a:
[WARNING]: Not waiting for response to prompt as stdin is not interactive

The second option is wait_for:
in that case we could, for instance, wait until a specific lock file is removed, or something like that.
But exiting the paused status is not as simple as just pressing a key, and we should eventually think about an "unpause" utility command (something like 'hosted-engine --unpause-deploy') to be executed in a second shell.
Not really sure about that.

Comment 31 Yedidyah Bar David 2019-04-01 09:11:32 UTC
(In reply to Simone Tiraboschi from comment #30)
> (In reply to Yedidyah Bar David from comment #28)
> > IMO we should also (perhaps optionally) prompt between restore and setup,
> > saying "Restore finished. Press Enter when ready to continue and run Setup".
> 
> Unfortunately we cannot easily pause Ansible execution in the middle.
> 
> In theory we have two ways to freeze Ansible execution:
> https://docs.ansible.com/ansible/latest/modules/pause_module.html
> https://docs.ansible.com/ansible/latest/modules/wait_for_module.html
> 
> In practice, pause is not really going to work if executed via ansible-tower
> or ansible-runner:
> see "Note: Playbooks should not use the pause feature of Ansible without a
> timeout, as Tower does not allow for interactively cancelling a pause. If
> you must use pause, ensure that you set a timeout." from
> https://docs.ansible.com/ansible-tower/latest/html/userguide/best_practices.
> html
> 
> ovirt-hosted-engine-setup is currently just wrapping ansible-playbook via
> subprocess.Popen:
> https://github.com/oVirt/ovirt-hosted-engine-setup/blob/master/src/
> ovirt_hosted_engine_setup/ansible_utils.py#L198
> 
> but even in that case a pause task is going to be skipped with a:
> [WARNING]: Not waiting for response to prompt as stdin is not interactive
> 
> The second option is wait_for:
> in that case we could for instance wait until a specific lock file is
> removed or something like that.
> But exiting the paused status is not just as simple as pressing a key and we
> should eventually think about an "unpause" utility command (something like
> 'hosted-engine --unpause-deploy' ) to be executed over a second shell.
> Not really sure about that.

Two other options:

1. Create some temp file, tell the user to remove it when ready, wait until it's gone (or until some timeout, if we want).

2. Split the playbook to two and prompt in between.

Comment 32 Simone Tiraboschi 2019-04-01 09:25:36 UTC
(In reply to Yedidyah Bar David from comment #31)
> Two other options:
> 
> 1. Create some temp file, tell the user to remove it when ready, wait until
> it's gone (or until some timeout, if we want).

- name: Wait until the lock file is removed
  wait_for:
    path: /var/lock/file.lock
    state: absent

will do exactly this

> 2. Split the playbook to two and prompt in between.

this is more complex, since the whole logic is now packaged in a role and the playbook is just a two-line wrapper over the role

Comment 33 Yedidyah Bar David 2019-04-01 09:38:18 UTC
André, is the example in comment 32 good enough for your current needs?

We might want to include it in the docs. I think it will serve 95% of the cases.

Comment 34 André Liebe 2019-04-01 10:01:06 UTC
I already worked around this issue by adding a customization file here: /usr/share/ansible/roles/ovirt.hosted_engine_setup/hooks/enginevm_before_engine_setup (which copies the keytab and CA cert and runs a trust extract), only to run into another problem: bug 1694116

I'd definitely favour the wait-for-lock step being included (in a sane way, like /root/DELETE-TO-CONTINUE) in `hosted-engine --deploy`, where it could be a toggle (e.g. --manual-customization)

So, yeah, the suggestion from comment 32 will definitely work for me

Comment 35 André Liebe 2019-04-01 11:47:41 UTC
And of course one needs to be quick to use the workaround from comment 32, as it will time out after 300 seconds

[ INFO  ] TASK [ovirt.hosted_engine_setup : Wait until the lock file is removed]
[ ERROR ] fatal: [localhost -> engine.lab.gematik.de]: FAILED! => {"changed": false, "elapsed": 300, "msg": "Timeout when waiting for /root/DELETE_TO_CONTINUE to be absent."}

Comment 36 Simone Tiraboschi 2019-04-01 12:10:52 UTC
Yes, sorry, of course we can simply set a longer timeout value or remove it altogether.
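For example (a sketch: the lock-file path follows the one from comment 35, and the timeout value is an arbitrary assumption):

```yaml
# wait_for defaults to a 300-second timeout; raise it for manual customization.
- name: Wait until the lock file is removed
  wait_for:
    path: /root/DELETE_TO_CONTINUE
    state: absent
    timeout: 7200   # two hours instead of the 300-second default
```

The file can then be removed from a second shell whenever customization is done: `rm -f /root/DELETE_TO_CONTINUE`.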

Comment 37 Nikolai Sednev 2019-04-02 15:06:47 UTC
Deployment over NFS on a clean environment succeeded using these components on the hosts:
ovirt-hosted-engine-setup-2.3.7-1.el7ev.noarch
ovirt-hosted-engine-ha-2.3.1-1.el7ev.noarch
rhvm-appliance-4.3-20190328.1.el7.x86_64
Linux 3.10.0-957.10.1.el7.x86_64 #1 SMP Thu Feb 7 07:12:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.6 (Maipo)

Tested on RHEL hosts.

Moving to verified.

Comment 38 Yedidyah Bar David 2019-04-03 09:35:07 UTC
(In reply to André Liebe from comment #34)
> I already worked a round this issue by adding a customization file in here
> /usr/share/ansible/roles/ovirt.hosted_engine_setup/hooks/
> enginevm_before_engine_setup (which copies keytab, ca-cert and runs a
> trust-extract) to run just into another problem: bug 1694116 
> 
> I'd definitley favour the wait for lock step to be included (in a sane way
> like /root/DELETE-TO-CONTINUE) in `hosted-engine --deplopy` which could be a
> toggle (e.g. --manual-custimization)
> 
> So, yeah this suggestion from comment 32 will definitley work for me

Filed for this bug 1695523.

Comment 39 Steve Goodman 2019-04-07 08:48:25 UTC
Is there a clear Action Item for docs here? Looking through this, it's not clear to me.

Comment 32 has what appears to me to be a workaround, and it's not clear if there is consensus on docs addressing something specific.

Comment 40 Simone Tiraboschi 2019-04-08 08:57:02 UTC
(In reply to Steve Goodman from comment #39)
> Comment 32 has what appears to me to be a workaround, and it's not clear if
> there is consensus on docs addressing something specific.

Since comment 14 we have been talking with André about a specific subcase: Kerberos SSO was configured on the original environment, but engine-backup is not correctly handling it.
We filed https://bugzilla.redhat.com/show_bug.cgi?id=1695523 on that specific case; something on the doc side will probably be required there.

Comment 42 errata-xmlrpc 2019-05-08 12:32:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1050

Comment 43 Daniel Gur 2019-08-28 13:12:01 UTC
sync2jira

Comment 44 Daniel Gur 2019-08-28 13:16:14 UTC
sync2jira

