Bug 1108168
| Summary: | storage node name in cluster alerts is null | | |
|---|---|---|---|
| Product: | [JBoss] JBoss Operations Network | Reporter: | Armine Hovsepyan <ahovsepy> |
| Component: | UI | Assignee: | Jay Shaughnessy <jshaughn> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Armine Hovsepyan <ahovsepy> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | JON 3.2 | CC: | ahovsepy, jshaughn, mfoley, snegrea |
| Target Milestone: | ER04 | | |
| Target Release: | JON 3.3.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-12-11 14:01:37 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| Attachments: | | | |
I fired that alert def in a different way and could not reproduce the issue. Perhaps the reproduction steps need to be followed explicitly, or perhaps this is working now. Was the new storage node in the inventory at all? Can you please provide additional details, such as the number of nodes in the Nodes tab and any errors in the agent logs? Can you please retest this and update the reproduction steps?

The issue is not visible anymore. Version checked: JON 3.3 ER02. Screen-shot and fragment from server.log attached.

Created attachment 936110 [details]
server.log
Created attachment 936111 [details]
storage_alert_name_notNull
Re-assigning: just reproduced again. Screen-shot, server.log and agent.log for both agents attached.

Created attachment 936113 [details]
storage_alert_name_null
Created attachment 936116 [details]
full_server.log
Created attachment 936117 [details]
agent1.log
Created attachment 936118 [details]
agent2.log
Reproduction steps:
1. Install server with agent and storage on IP1.
2. Install storage and agent on IP2 and connect to IP1.
3. As soon as add-to-maintenance starts, stop storage on IP1. (After step 3, the storage cluster becomes down.)
4. Restart the storage node on IP1.
5. Navigate to the storage nodes administration page.
6. Select the storage node on IP2 and click Deploy Selected.
7. During the add-to-maintenance, stop storage on IP1. (After step 7, the storage cluster becomes down and immediately comes back ON, as visible in the server.log attached in comment #5.)
8. Navigate to the storage nodes administration page.
9. Select the storage node on IP2 (the only one currently running) and click Deploy Selected. (After step 9, the storage cluster goes down and the alerts in cluster alerts lose their names - see the screen-shot under comment #8.)

I saw this happen once; it's intermittent and, I think, timing related. I don't think it is based on any particular sequence of steps. I think I see why it can happen - looking into it...
master commit a4322b44c72b1f19633574bad685706e0af0886f
Author: Jay Shaughnessy <jshaughn>
Date: Fri Sep 19 14:44:56 2014 -0400
Ensure async initialization complete before rendering
commit 7ff1cedfbe3d5de02e19038ccb13ede546f9deb6
Author: Jay Shaughnessy <jshaughn>
Date: Fri Sep 19 14:44:56 2014 -0400
(cherry picked from commit a4322b44c72b1f19633574bad685706e0af0886f)
Signed-off-by: Jay Shaughnessy <jshaughn>
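The commit message describes the fix as ensuring async initialization completes before rendering: the view was reading the storage node name while its asynchronous lookup was still in flight, so the literal string "null" ended up in the alert row. A minimal, hypothetical Java sketch of that pattern (not the actual RHQ/GWT code; class and method names are invented for illustration):

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical model of the bug and the fix: an alert row needs a storage
// node name that is resolved asynchronously during view initialization.
public class AlertNameRenderer {
    private final CompletableFuture<String> nodeNameLookup;

    public AlertNameRenderer(CompletableFuture<String> nodeNameLookup) {
        this.nodeNameLookup = nodeNameLookup;
    }

    // Buggy variant: reads the name whether or not the lookup has finished.
    // An in-flight lookup yields null, which string concatenation renders
    // as the literal "null" seen in the cluster alerts UI.
    public String renderEagerly() {
        String name = nodeNameLookup.getNow(null); // null if not yet done
        return "Storage node: " + name;
    }

    // Fixed variant: defer building the row until the async init completes,
    // so the name is always present when rendering happens.
    public CompletableFuture<String> renderWhenReady() {
        return nodeNameLookup.thenApply(name -> "Storage node: " + name);
    }

    public static void main(String[] args) {
        CompletableFuture<String> lookup = new CompletableFuture<>();
        AlertNameRenderer renderer = new AlertNameRenderer(lookup);

        // Rendering before the lookup resolves reproduces the bug.
        System.out.println(renderer.renderEagerly()); // Storage node: null

        // The fixed path only produces output once the name is known.
        CompletableFuture<String> fixed = renderer.renderWhenReady();
        lookup.complete("storagenode-ip2");
        System.out.println(fixed.join()); // Storage node: storagenode-ip2
    }
}
```

This also matches the intermittent, timing-related behavior noted above: the bug only appears when the render races ahead of the lookup.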
Moving to ON_QA as available for test with build: https://brewweb.devel.redhat.com/buildinfo?buildID=388959

Created attachment 944490 [details]
cluster_alert_names
Verified in JON 3.3 ER04.

Created attachment 907679 [details]
storage_alert_name.png

Description of problem:
storage node name in cluster alerts is null

Version-Release number of selected component (if applicable):
rhq master

How reproducible:
once

Steps to Reproduce:
1. Run rhqctl install --start on IP1.
2. Run rhqctl install --storage on IP2 and connect to the server on IP1.
3. In the middle of synchronization, stop storage on IP1.

Actual results:
Storage alert is created with storage node name "null".

Expected results:
Storage node name is visible correctly.

Additional info:
Screen-shot attached. Taking into account the unusual reproduction steps, marking the bug as low severity.