Bug 1053330
| Summary: | [RHEVM-RHS] RHSS Node doesn't come up after reinstalling it using RHEVM UI | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | SATHEESARAN <sasundar> |
| Component: | vdsm | Assignee: | Timothy Asir <tjeyasin> |
| Status: | CLOSED WONTFIX | QA Contact: | Sudhir D <sdharane> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2.1 | CC: | acathrow, ecohen, gklein, grajaiya, iheim, nlevinki, Rhev-m-bugs, sabose, tjeyasin, yeylon |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | gluster | | |
| Fixed In Version: | 4.13.0-24 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-01-16 10:46:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
SATHEESARAN
2014-01-15 03:27:05 UTC
The VDSM host deploy logs show no errors, and I see that the iptables rules are up, though the state of the RHSS Node was shown as down. This means bootstrapping has again taken place, adding those rules.

Created attachment 850329 [details]
ovirt host deploy log from RHEVM
Created attachment 850331 [details]
RHEVM Screenshot showing "re-install" option
This bug is a manifestation of bug https://bugzilla.redhat.com/show_bug.cgi?id=1038038. Since the gateway was not configured correctly, DNS resolution for the bricks did not succeed when glusterd was restarted. This is evident from the glusterd logs:

<snip>
[2014-01-15 09:52:43.444950] I [glusterd.c:140:glusterd_uuid_init] 0-management: retrieved UUID: 1650fc10-5365-40e3-8fea-1e87908a9f55
[2014-01-15 09:52:43.445290] E [glusterd-store.c:2600:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2014-01-15 09:52:43.445318] E [xlator.c:423:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2014-01-15 09:52:43.445335] E [graph.c:292:glusterfs_graph_init] 0-management: initializing translator failed
[2014-01-15 09:52:43.445345] E [graph.c:479:glusterfs_graph_activate] 0-graph: init failed
[2014-01-15 09:52:43.445768] W [glusterfsd.c:1099:cleanup_and_exit] (-->/usr/sbin/glusterd(main+0x6b1) [0x4069c1] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb7) [0x405177] (-->/usr/sbin/glusterd(glusterfs_process_volfp+0x106) [0x405086]))) 0-: received signum (0), shutting down
[2014-01-15 10:12:06.572837] I [glusterfsd.c:2026:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.4.0.57rhs (/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
</snip>

I changed DEFROUTE to YES in '/etc/sysconfig/network-scripts/ifcfg-rhevm' and restarted the network, which solved the problem. With that in place, re-installation of the RHSS Node using the RHEVM UI brings it back online.

Patch sent to downstream: https://code.engineering.redhat.com/gerrit/#/c/18372/

Is this not a dupe of bug 1038038? I do not see a fix specifically for this bug; did I miss anything obvious?

Yes, the fix for bug 1038038 fixes this as well. Though the fix is the same, the test scenarios of the two bugs are different.

Thanks for confirming, Sahina.
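The manual workaround above (setting DEFROUTE=yes in the ifcfg file) can be sketched as a small shell snippet. This is a minimal illustration, not the downstream patch: the interface name `rhevm` and the file path come from this report, and the snippet deliberately operates on a copy of the config file rather than editing /etc/sysconfig directly, so applying it for real would mean copying the result back and restarting the network.

```shell
#!/bin/sh
# Path from the report; we work on a temporary copy for illustration.
IFCFG=/etc/sysconfig/network-scripts/ifcfg-rhevm
TMP=$(mktemp)

# Seed the copy from the real file if readable; otherwise use a stand-in
# that reproduces the broken state described in the report (DEFROUTE=no).
cp "$IFCFG" "$TMP" 2>/dev/null || printf 'DEVICE=rhevm\nDEFROUTE=no\n' > "$TMP"

# Force DEFROUTE=yes so the default gateway (and therefore DNS) is usable
# when glusterd restarts and tries to resolve its brick hostnames.
if grep -q '^DEFROUTE=' "$TMP"; then
    sed -i 's/^DEFROUTE=.*/DEFROUTE=yes/' "$TMP"
else
    echo 'DEFROUTE=yes' >> "$TMP"
fi

grep '^DEFROUTE=' "$TMP"
# To apply for real: copy "$TMP" back over "$IFCFG", then restart the
# network service, as was done manually in this report.
```

Running the snippet prints the resulting `DEFROUTE=yes` line from the edited copy; the actual fix in the report was the same edit made by hand followed by a network restart.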
This test scenario is covered at https://tcms.engineering.redhat.com/run/107332/#caserun_4176587, which will be executed as part of the regression cycle. Giving qa_ack- since there is no separate fix for this case. Quality Engineering Management has reviewed and declined this request. You may appeal this decision by reopening this request.