Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1108168

Summary: storage node name in cluster alerts is null
Product: [JBoss] JBoss Operations Network
Reporter: Armine Hovsepyan <ahovsepy>
Component: UI
Assignee: Jay Shaughnessy <jshaughn>
Status: CLOSED CURRENTRELEASE
QA Contact: Armine Hovsepyan <ahovsepy>
Severity: low
Priority: unspecified
Version: JON 3.2
CC: ahovsepy, jshaughn, mfoley, snegrea
Target Milestone: ER04
Target Release: JON 3.3.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2014-12-11 14:01:37 UTC
Type: Bug
Attachments:
storage_alert_name.png
server.log
storage_alert_name_notNull
storage_alert_name_null
full_server.log
agent1.log
agent2.log
cluster_alert_names

Description Armine Hovsepyan 2014-06-11 13:52:10 UTC
Created attachment 907679 [details]
storage_alert_name.png

Description of problem:
storage node name in cluster alerts is null

Version-Release number of selected component (if applicable):
rhq master

How reproducible:
once

Steps to Reproduce:
1. run rhqctl install --start on IP1
2. run rhqctl install --storage on IP2 and connect to server on IP1
3. in the middle of synchronization stop storage on IP1

Actual results:
storage alert is created with storage node name "null"

Expected results:
storage node name is displayed correctly


Additional info:
screen-shot attached

taking into account the unusual reproduction steps - marking the bug as low severity

Comment 2 Jay Shaughnessy 2014-07-02 01:43:00 UTC
I fired that alert def in a different way and could not reproduce the issue.  Perhaps the reproduction steps need to be followed explicitly, or perhaps this is working now.

Comment 3 Stefan Negrea 2014-09-05 21:59:40 UTC
Was the new storage node in the inventory at all? Can you please provide additional details, such as the number of nodes in the Nodes tab and any errors in the agent logs?

Can you please retest this and update the reproduction steps?

Comment 4 Armine Hovsepyan 2014-09-10 11:36:48 UTC
The issue is not visible anymore.
version checked: JON 3.3 ER02

screen-shot and a fragment from server.log attached

Comment 5 Armine Hovsepyan 2014-09-10 11:38:16 UTC
Created attachment 936110 [details]
server.log

Comment 6 Armine Hovsepyan 2014-09-10 11:38:46 UTC
Created attachment 936111 [details]
storage_alert_name_notNull

Comment 7 Armine Hovsepyan 2014-09-10 11:42:25 UTC
re-assigning
just reproduced again

screen-shot, server.log and agent.log for both agents attached

Comment 8 Armine Hovsepyan 2014-09-10 11:42:53 UTC
Created attachment 936113 [details]
storage_alert_name_null

Comment 9 Armine Hovsepyan 2014-09-10 11:46:53 UTC
Created attachment 936116 [details]
full_server.log

Comment 10 Armine Hovsepyan 2014-09-10 11:48:17 UTC
Created attachment 936117 [details]
agent1.log

Comment 11 Armine Hovsepyan 2014-09-10 11:49:46 UTC
Created attachment 936118 [details]
agent2.log

Comment 12 Armine Hovsepyan 2014-09-10 11:58:30 UTC
reproduction steps:
1. install server with agent and storage on IP1
2. install storage and agent on IP2 and connect to IP1
3. As soon as add-to-maintenance starts, stop storage on IP1

after step 3, the storage cluster goes down

4. restart storage node in IP1
5. navigate to storage nodes administration page
6. select storage node in IP2 and click Deploy Selected
7. During the add-to-maintenance, stop storage on IP1

After step 7, the storage cluster goes down and immediately comes back ON, as visible in server.log attached in comment #5

8. Navigate to storage nodes administration page 
9. Select storage node in IP2 (the only one currently running) and click Deploy Selected

After step 9, the storage cluster goes down
and alerts in the cluster alerts list lose their names - see screen-shot under comment #8

Comment 17 Jay Shaughnessy 2014-09-19 15:29:21 UTC
I saw this happen once; it's intermittent and, I think, timing related.  I don't think it is tied to any particular sequence of steps.  I think I see why it can happen. Looking into it...
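
For illustration only, a minimal Java sketch of this kind of timing race, with hypothetical names (this is not the actual RHQ coregui code): the view renders before an asynchronous lookup of the storage node name has finished, so the rendered alert shows "null".

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AlertViewRaceDemo {
    // Populated asynchronously; rendering may run before it is set.
    static volatile String storageNodeName;

    // Hypothetical stand-in for the async lookup of the node name.
    static CompletableFuture<Void> initStorageNodeName() {
        return CompletableFuture.runAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(100); // simulated lookup latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            storageNodeName = "storage-node-on-IP1";
        });
    }

    public static void main(String[] args) {
        initStorageNodeName();
        // Rendering runs immediately instead of waiting for the async
        // init, so this line almost always prints "null" for the name.
        System.out.println("Alert for storage node: " + storageNodeName);
    }
}

Whether the lookup wins the race depends on scheduling, which matches the intermittent behavior described above.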

Comment 18 Jay Shaughnessy 2014-09-19 18:50:32 UTC
master commit a4322b44c72b1f19633574bad685706e0af0886f
Author: Jay Shaughnessy <jshaughn>
Date:   Fri Sep 19 14:44:56 2014 -0400

    Ensure async initialization complete before rendering


commit 7ff1cedfbe3d5de02e19038ccb13ede546f9deb6
Author: Jay Shaughnessy <jshaughn>
Date:   Fri Sep 19 14:44:56 2014 -0400

    (cherry picked from commit a4322b44c72b1f19633574bad685706e0af0886f)
    Signed-off-by: Jay Shaughnessy <jshaughn>
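
A hedged sketch of the pattern the commit message names ("ensure async initialization complete before rendering"), using the same hypothetical names as the sketch under comment 17: rendering is chained onto completion of the async initialization, so the name can no longer be observed as null.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AlertViewFixedDemo {
    static volatile String storageNodeName;

    // Same hypothetical async lookup as in the race sketch above.
    static CompletableFuture<Void> initStorageNodeName() {
        return CompletableFuture.runAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(100); // simulated lookup latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            storageNodeName = "storage-node-on-IP1";
        });
    }

    public static void main(String[] args) {
        // Fix pattern: render only after the async initialization is done.
        initStorageNodeName()
            .thenRun(() -> System.out.println(
                "Alert for storage node: " + storageNodeName))
            .join(); // block only so this demo JVM does not exit early
    }
}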

Comment 19 Simeon Pinder 2014-10-01 21:33:29 UTC
Moving to ON_QA as available for test with build:
https://brewweb.devel.redhat.com/buildinfo?buildID=388959

Comment 20 Armine Hovsepyan 2014-10-07 09:44:59 UTC
Created attachment 944490 [details]
cluster_alert_names

Comment 21 Armine Hovsepyan 2014-10-07 09:45:16 UTC
verified in JON 3.3 ER04