Bug 1314266
Summary: | Unable to log in with valid credentials | | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Storage Console | Reporter: | Daniel Horák <dahorak> |
Component: | core | Assignee: | gowtham <gshanmug> |
core sub component: | authentication | QA Contact: | Daniel Horák <dahorak> |
Status: | CLOSED ERRATA | Docs Contact: | |
Severity: | unspecified | | |
Priority: | unspecified | CC: | mbukatov, nthomas, sankarshan |
Version: | 2 | | |
Target Milestone: | --- | | |
Target Release: | 2 | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | rhscon-ceph-0.0.23-1.el7scon.x86_64, rhscon-core-0.0.24-1.el7scon.x86_64, rhscon-ui-0.0.39-1.el7scon.noarch | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2016-08-23 19:47:49 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Daniel Horák
2016-03-03 10:12:47 UTC
Additional info
===============
I just hit this issue right now. All I did with the USM was:
* accept all nodes
* create cluster #1, but this task failed
* remove cluster #1
* create cluster #2, which finished, but the task is still reported as running
* create cluster #3 and #4 at the same time

Everything seemed to be OK, but after a while I refreshed the task page and it redirected me back to the login page, which refused to let me in even though I used the correct credentials.

rhscon-core-0.0.8-10.el7.x86_64
rhscon-ui-0.0.19-1.el7.noarch
rhscon-ceph-0.0.6-10.el7.x86_64
rhscon-agent-0.0.3-3.el7.noarch

We haven't noticed this issue during the last weeks of either manual or automated testing, so closing this bug as VERIFIED.
Last time tested on:
USM Server (RHEL 7.2):
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
mongodb-2.6.5-4.1.el7.x86_64
mongodb-server-2.6.5-4.1.el7.x86_64
rhscon-ceph-0.0.33-1.el7scon.x86_64
rhscon-core-0.0.34-1.el7scon.x86_64
rhscon-core-selinux-0.0.34-1.el7scon.noarch
rhscon-ui-0.0.48-1.el7scon.noarch
Ceph MON (RHEL 7.2):
calamari-server-1.4.7-1.el7cp.x86_64
ceph-base-10.2.2-24.el7cp.x86_64
ceph-common-10.2.2-24.el7cp.x86_64
ceph-mon-10.2.2-24.el7cp.x86_64
ceph-selinux-10.2.2-24.el7cp.x86_64
libcephfs1-10.2.2-24.el7cp.x86_64
python-cephfs-10.2.2-24.el7cp.x86_64
rhscon-agent-0.0.15-1.el7scon.noarch
rhscon-core-selinux-0.0.34-1.el7scon.noarch
Ceph OSD (RHEL 7.2):
ceph-base-10.2.2-24.el7cp.x86_64
ceph-common-10.2.2-24.el7cp.x86_64
ceph-osd-10.2.2-24.el7cp.x86_64
ceph-selinux-10.2.2-24.el7cp.x86_64
libcephfs1-10.2.2-24.el7cp.x86_64
python-cephfs-10.2.2-24.el7cp.x86_64
rhscon-agent-0.0.15-1.el7scon.noarch
rhscon-core-selinux-0.0.34-1.el7scon.noarch
>> VERIFIED
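If the symptom ever reappears, the failing login can be exercised directly against the console's REST API instead of through the UI. The sketch below is only illustrative: the host name, port, endpoint path (/api/v1/auth/login) and JSON field names are assumptions and may not match this rhscon release.

```python
import json
import urllib.error
import urllib.request

# Hypothetical re-check of the reported symptom: POST known-good credentials
# to the console's login endpoint and inspect the HTTP status code.
CONSOLE = "http://usm-server.example.com:8181"       # assumed host/port
LOGIN_URL = CONSOLE + "/api/v1/auth/login"           # assumed endpoint path
payload = json.dumps({"username": "admin", "password": "admin"}).encode()

req = urllib.request.Request(
    LOGIN_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
try:
    with urllib.request.urlopen(req) as resp:
        # A 2xx response means the credentials were accepted.
        print("login HTTP status:", resp.status)
except urllib.error.HTTPError as err:
    # A 401 with valid credentials would reproduce this bug.
    print("login rejected, HTTP status:", err.code)
```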
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754