Summary: wrong number of events about gained nodes
Product: Red Hat Storage Console
Reporter: Martin Kudlej <mkudlej>
Component: core
Assignee: Nishanth Thomas <nthomas>
Core sub component: events
QA Contact: sds-qe-bugs
Status: CLOSED CURRENTRELEASE
Docs Contact:
Fixed In Version: rhscon-ceph-0.0.23-1.el7scon.x86_64, rhscon-core-0.0.24-1.el7scon.x86_64, rhscon-ui-0.0.39-1.el7scon.noarch
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Last Closed: 2018-11-19 05:30:27 UTC
Type: Bug
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Description Martin Kudlej 2016-03-02 12:45:25 UTC
Created attachment 1132296 [details] logs from server

Description of problem:
I have 1 monitor and 4 OSDs in the cluster, so I expect 5 events about gained machines. There is only one. I see this single event directly in the DB, so there is only one event in the API and UI as well. Sometimes the same event also appears twice.

Version-Release number of selected component (if applicable):
rhscon-core-0.0.8-10.el7.x86_64
rhscon-ui-0.0.19-1.el7.noarch
rhscon-ceph-0.0.6-10.el7.x86_64

How reproducible:
80%

Steps to Reproduce:
1. Install machines
2. Accept nodes

Actual results:
The number of events about a gained node differs between the DB, UI, and API.

Expected results:
The number of events about a gained machine in the DB, UI, and API equals the number of accepted nodes.
Comment 2 Darshan 2016-04-28 08:42:24 UTC
After a node is accepted successfully, an event saying "Node <node name> accepted successfully" is raised. The number of these events will be the same as the number of nodes accepted successfully.
Comment 3 Darshan 2016-05-02 10:57:24 UTC
The event saying "node gained contact" appears only after the node loses contact with skyring and subsequently regains it, because "node gained contact" is a recovery event and is raised only when there is an active "node lost contact" alert. To know that a node has been accepted and initialized, the user should instead look for the events "node accepted successfully" and "node initialized successfully".
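The recovery-event rule above can be illustrated with a minimal Python sketch. This is not skyring's actual implementation; the function name, alert representation, and event strings are hypothetical, invented only to show why a freshly accepted node that never lost contact produces no "gained contact" event.

```python
def raise_contact_events(node, has_contact, active_alerts, events):
    """Hypothetical sketch of the recovery-event rule described above.

    'gained contact' is a recovery event: it is raised only while an
    active 'node lost contact' alert exists for that node.
    """
    alert = ("node lost contact", node)
    if not has_contact:
        # Losing contact raises an alert (once) and a 'lost contact' event.
        if alert not in active_alerts:
            active_alerts.add(alert)
            events.append(f"Node {node} lost contact")
    else:
        # Contact while an alert is active clears it and raises the
        # recovery event; contact with no active alert raises nothing.
        if alert in active_alerts:
            active_alerts.discard(alert)
            events.append(f"Node {node} gained contact")
    return events

# A freshly accepted node that is in contact produces no event at all,
# matching the behaviour reported in the bug:
alerts, events = set(), []
raise_contact_events("osd1", True, alerts, events)   # no alert active -> nothing
raise_contact_events("osd1", False, alerts, events)  # raises 'lost contact'
raise_contact_events("osd1", True, alerts, events)   # now raises 'gained contact'
```

Under this rule the events a user should watch for after accepting a node are "node accepted successfully" and "node initialized successfully", not "node gained contact".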
Comment 4 Martin Kudlej 2016-07-22 10:31:41 UTC
Tested with
ceph-ansible-1.0.5-27.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.33-1.el7scon.x86_64
rhscon-core-0.0.34-1.el7scon.x86_64
rhscon-core-selinux-0.0.34-1.el7scon.noarch
rhscon-ui-0.0.48-1.el7scon.noarch
and it works. --> VERIFIED