Bug 1316135 - [Docs][VMM] Add additional step to allow virtual machine SSO to work with RHEL 7.2
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: Documentation
Version: 3.6.0
Hardware: Unspecified
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ovirt-3.6.8
Target Release: ---
Assignee: Byron Gravenorst
QA Contact: Tahlia Richardson
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-09 13:53 UTC by Jiri Belka
Modified: 2019-04-28 13:21 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
RHEL 7.2 introduced a new version of SSSD with support for two-factor authentication. This feature changed the default PAM configuration in a way that is incompatible with the RHEV-M guest agent SSO implementation. After joining the virtual machine to the domain with 'ipa-client-install', customers are encouraged to run the following command to make single sign-on work again: # authconfig --enablenis --update
Clone Of:
Environment:
Last Closed: 2016-08-01 03:27:16 UTC
oVirt Team: Docs
Target Upstream Version:
Embargoed:




Links
Red Hat Bugzilla 1327085 (priority unspecified, CLOSED): Don't prompt for password if there is already one on the stack. Last updated: 2021-02-22 00:41:40 UTC

Internal Links: 1327085

Description Jiri Belka 2016-03-09 13:53:02 UTC
Description of problem:

Guest agent SSO does not work with recent RHEL 7.2. It seems to be an interaction between the guest agent (GA) and PAM/SSSD.

...
Dummy-2::INFO::2016-03-09 14:44:31,246::OVirtAgentLogic::270::root::Received an external command: login...
Dummy-2::DEBUG::2016-03-09 14:44:31,246::OVirtAgentLogic::304::root::User log-in (credentials = '\x00\x00\x00)admin.com********\x00')
Dummy-2::INFO::2016-03-09 14:44:31,247::CredServer::207::root::The following users are allowed to connect: [0]
Dummy-2::DEBUG::2016-03-09 14:44:31,247::CredServer::272::root::Token: 749289
Dummy-2::INFO::2016-03-09 14:44:31,247::CredServer::273::root::Opening credentials channel...
Dummy-2::INFO::2016-03-09 14:44:31,248::CredServer::132::root::Emitting user authenticated signal (749289).
CredChannel::DEBUG::2016-03-09 14:44:31,600::CredServer::166::root::Receiving user's credential ret = 2 errno = 0
CredChannel::DEBUG::2016-03-09 14:44:31,601::CredServer::177::root::cmsgp: len=28 level=1 type=2
CredChannel::INFO::2016-03-09 14:44:31,601::CredServer::225::root::Incomming connection from user: 0 process: 3179
CredChannel::INFO::2016-03-09 14:44:31,601::CredServer::232::root::Sending user's credential (token: 749289)
Dummy-2::INFO::2016-03-09 14:44:31,602::CredServer::277::root::Credentials channel was closed.
Dummy-2::DEBUG::2016-03-09 14:44:31,602::OVirtAgentLogic::256::root::AgentLogicBase::doListen() - in loop before vio.read
...

and sssd log for domain

...
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [sdap_fill_memberships] (0x1000):     member #0 (uid=admin,cn=users,cn=accounts,dc=brq-ipa,dc=example,dc=com): [name=admin,cn=users,cn=brq-ipa.example.com,cn=sysdb]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [acctinfo_callback] (0x0100): Request processed. Returned 0,0,Success (Success)
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_pam_handler] (0x0100): Got request with the following data
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): command: SSS_PAM_PREAUTH
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): domain: brq-ipa.example.com
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): user: admin
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): service: gdm-ovirtcred
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): tty:
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): ruser:
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): rhost:
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): authtok type: 0
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): newauthtok type: 0
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): priv: 1
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): cli_pid: 3179
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): logon name: not set
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [krb5_auth_queue_send] (0x1000): Wait queue of user [admin] is empty, running request [0x7f1977418280] immediately.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'IPA'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [get_server_status] (0x1000): Status of server 'brq-ipa.example.com' is 'working'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [get_port_status] (0x1000): Port status of port 389 for server 'brq-ipa.example.com' is 'working'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [resolve_srv_send] (0x0200): The status of SRV lookup is resolved
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [get_server_status] (0x1000): Status of server 'brq-ipa.example.com' is 'working'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_resolve_server_process] (0x1000): Saving the first resolved server
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_resolve_server_process] (0x0200): Found address for server brq-ipa.example.com: [10.34.63.130] TTL 3600
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [child_sig_handler] (0x1000): Waiting for child [3183].
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [child_sig_handler] (0x0100): child [3183] finished successfully.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [fo_set_port_status] (0x0100): Marking port 389 of server 'brq-ipa.example.com' as 'working'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [set_server_common_status] (0x0100): Marking server 'brq-ipa.example.com' as 'working'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [krb5_auth_store_creds] (0x0010): unsupported PAM command [249].
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [krb5_auth_store_creds] (0x0010): password not available, offline auth may not work.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [check_wait_queue] (0x1000): Wait queue for user [admin] is empty.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [krb5_auth_queue_done] (0x1000): krb5_auth_queue request [0x7f1977418280] done.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_pam_handler_callback] (0x0100): Backend returned: (0, 0, <NULL>) [Success (Success)]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_pam_handler_callback] (0x0100): Sending result [0][brq-ipa.example.com]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_pam_handler_callback] (0x0100): Sent result [0][brq-ipa.example.com]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_pam_handler] (0x0100): Got request with the following data
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): command: PAM_AUTHENTICATE
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): domain: brq-ipa.example.com
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): user: admin
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): service: gdm-ovirtcred
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): tty:
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): ruser:
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): rhost:
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): authtok type: 1
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): newauthtok type: 0
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): priv: 1
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): cli_pid: 3179
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [pam_print_data] (0x0100): logon name: not set
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [krb5_auth_queue_send] (0x1000): Wait queue of user [admin] is empty, running request [0x7f1977418280] immediately.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [fo_resolve_service_send] (0x0100): Trying to resolve service 'IPA'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [get_server_status] (0x1000): Status of server 'brq-ipa.example.com' is 'working'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [get_port_status] (0x1000): Port status of port 389 for server 'brq-ipa.example.com' is 'working'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [resolve_srv_send] (0x0200): The status of SRV lookup is resolved
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [get_server_status] (0x1000): Status of server 'brq-ipa.example.com' is 'working'
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_resolve_server_process] (0x1000): Saving the first resolved server
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_resolve_server_process] (0x0200): Found address for server brq-ipa.example.com: [10.34.63.130] TTL 3600
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [child_sig_handler] (0x1000): Waiting for child [3184].
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [child_sig_handler] (0x0100): child [3184] finished successfully.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [check_wait_queue] (0x1000): Wait queue for user [admin] is empty.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [krb5_auth_queue_done] (0x1000): krb5_auth_queue request [0x7f1977418280] done.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [ipaMigrationEnabled]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [ipaSELinuxUserMapDefault]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [ipaSELinuxUserMapOrder]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [sdap_parse_entry] (0x1000): OriginalDN: [cn=ipaConfig,cn=etc,dc=brq-ipa,dc=example,dc=com].
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [ipa_get_migration_flag_done] (0x0100): Password migration is not enabled.
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_pam_handler_callback] (0x0100): Backend returned: (0, 17, <NULL>) [Success (Failure setting user credentials)]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_pam_handler_callback] (0x0100): Sending result [17][brq-ipa.example.com]
(Wed Mar  9 14:44:31 2016) [sssd[be[brq-ipa.example.com]]] [be_pam_handler_callback] (0x0100): Sent result [17][brq-ipa.example.com]

...

Version-Release number of selected component (if applicable):
rhevm-guest-agent-gdm-plugin-1.0.10-2.el7.noarch (RHEV 3.5.8)
rhevm-guest-agent-gdm-plugin-1.0.11-3.el7ev.noarch (RHEV 3.6.x)

How reproducible:
100%

Steps to Reproduce:
1. Install an up-to-date RHEL 7.2 x86_64 guest.
2. Join the OS to the IPA domain (check with 'getent passwd $domainuser' that domain
   services work).
3. Assign a user with UserRole in the Admin Portal to this VM, then open a console in the User Portal as this domain user (the IPA domain should be known to the engine environment too). A shell sketch of steps 1 and 2 follows below.
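
For illustration only, steps 1 and 2 might look like this (the domain and user names are examples taken from the logs above; adjust for your environment):

# yum update -y                                   # step 1: bring the guest to current RHEL 7.2
# ipa-client-install --mkhomedir --domain brq-ipa.example.com   # step 2: join the IPA domain
# getent passwd admin                             # verify domain user lookups work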

Actual results:
RHEV-M SSO does not work.

Expected results:
SSO should work.

Additional info:

Comment 2 Jiri Belka 2016-03-10 14:59:02 UTC
It works fine with the latest RHEL 6.7 and the same GA version. It used to work with RHEL 7 in the past; I'll test with an older RHEL 7.

Comment 3 Jakub Hrozek 2016-03-11 12:59:15 UTC
This is not an issue in SSSD itself, but rather in the way the PAM stack is set up on 7.2 or newer in order for 2FA to be supported. The gdm-ovirtcred service reads the password, puts it onto the stack, and then proceeds to password-auth:
# cat /etc/pam.d/gdm-ovirtcred 
#%PAM-1.0
auth        required    pam_ovirt_cred.so
auth        include     password-auth
[snip]

Now, the password-auth file looks like this with 7.2:
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        [default=1 success=ok] pam_localuser.so
auth        [success=done ignore=ignore default=die] pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        sufficient    pam_sss.so forward_pass
auth        required      pam_deny.so

Note that the pam_sss.so module has only the forward_pass option, which means it won't read the previous password on the stack, unlike the pam_unix.so module, which has try_first_pass. This is to ensure that IPA users are queried for a password or password+token depending on their IPA configuration.

When I added the use_first_pass option instead of forward_pass, auth started working.
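
In other words, the only line that changes in /etc/pam.d/password-auth is the pam_sss.so entry. A sketch of the manually edited, working variant (this is the hand edit described above, not authconfig output):

auth        sufficient    pam_sss.so use_first_pass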

If you want to generate the old-style PAM stack, you can use the --enablenis option of authconfig (or you can generate the PAM stack yourself, but that might not be future-proof).
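
For example (note that authconfig --update regenerates system-auth and password-auth, so review the resulting stack afterwards):

# authconfig --enablenis --update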

Comment 4 Jakub Hrozek 2016-03-14 08:44:23 UTC
It would be nice to open a bug against SSSD to handle this configuration more gracefully. I talked to vfeenstra on IRC about that, but I also understand that it was Friday afternoon and these things can slip through :-)

Comment 5 Jiri Belka 2016-03-14 12:35:45 UTC
FYI I used `ipa-client-install --mkhomedir', thus '--enablenis' is not available here.

Comment 6 Vinzenz Feenstra [evilissimo] 2016-04-06 10:42:31 UTC
Well, as Jakub suggested, running

`authconfig --enablenis --update`

after ipa-client-install has been run works.

Comment 7 Michal Skrivanek 2016-04-07 07:27:35 UTC
so, what's the outcome?

Comment 8 Jakub Hrozek 2016-04-07 07:59:24 UTC
(In reply to Michal Skrivanek from comment #7)
> so, what's the outcome?

For a quick fix, you can use the modified authconfig invocation which Vinzenz verified to be working.

For the proper long-term fix, please file a bug against SSSD to handle these situations more gracefully without resorting to workarounds.

Thank you!

Comment 9 Jakub Hrozek 2016-04-07 10:22:17 UTC
Upstream ticket: https://fedorahosted.org/sssd/ticket/2984

We'll clone it to RHBZ later.

Comment 10 Jakub Hrozek 2016-04-14 15:28:47 UTC
Here is the SSSD bug: https://bugzilla.redhat.com/show_bug.cgi?id=1327085

I would like to know from you how urgent the SSSD issue is and how fast you'd like to have it fixed.

Comment 11 Vinzenz Feenstra [evilissimo] 2016-04-14 17:41:35 UTC
Well, since there is a feasible workaround (running the command above) and we hope to have a knowledge base article for it, this is not something that needs to be fixed by tomorrow, but as soon as you can get it in would be good.

Comment 12 Jakub Hrozek 2016-04-15 06:31:22 UTC
(In reply to Vinzenz Feenstra [evilissimo] from comment #11)
> Well, since there is a feasible workaround (running the command above) and
> we hope to have a knowledge base article for it, this is not something that
> needs to be fixed by tomorrow, but as soon as you can get it in would be good.

OK, I'll talk to our QE colleagues to see if we can schedule this for 7.2.z.

