Bug 971458
| Field | Value |
|---|---|
| Summary | [RHSC] Adding Anshi node to a cluster via the Console fails for the first time, and then on re-install, the node comes UP |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter | Shruti Sampat <ssampat> |
| Component | rhsc |
| Assignee | Timothy Asir <tjeyasin> |
| Status | CLOSED WORKSFORME |
| QA Contact | Shruti Sampat <ssampat> |
| Severity | medium |
| Docs Contact | |
| Priority | medium |
| Version | 2.1 |
| CC | dtsang, knarra, mmahoney, pprakash, rhs-bugs, sabose, sdharane |
| Target Milestone | --- |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | Bug Fix |
| Doc Text | |
| Story Points | --- |
| Clone Of | |
| Environment | |
| Last Closed | 2013-06-19 07:24:31 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Attachments | |
Description (Shruti Sampat, 2013-06-06 14:59:17 UTC)
- Created attachment 757719 [details]: engine logs
- Created attachment 757720 [details]: host deploy logs 1
- Created attachment 757722 [details]: host deploy logs 2
- Created attachment 757725 [details]: vdsm logs
Could you provide the following details/files from the node, before and after re-install:

i) /etc/vdsm/vdsm.conf
ii) log files
iii) the output of `rpm -qa | grep vdsm`
iv) the output of `service vdsmd status`

Before reinstall:

```
[root@rhs-new-anshi2 ~]# cat /etc/vdsm/vdsm.conf
[addresses]
management_port = 54321

[vars]
ssl = true

[root@rhs-new-anshi2 ~]# rpm -qa | grep vdsm
vdsm-python-4.9.6-23.el6rhs.x86_64
vdsm-debug-plugin-4.9.6-23.el6rhs.noarch
vdsm-debuginfo-4.9.6-23.el6rhs.x86_64
vdsm-cli-4.9.6-23.el6rhs.noarch
vdsm-gluster-4.9.6-23.el6rhs.noarch
vdsm-reg-4.9.6-23.el6rhs.noarch
vdsm-4.9.6-23.el6rhs.x86_64
vdsm-hook-faqemu-4.9.6-23.el6rhs.noarch

[root@rhs-new-anshi2 ~]# service vdsmd status
VDS daemon server is running
```

After reinstall:

```
[root@rhs-new-anshi2 ~]# cat /etc/vdsm/vdsm.conf
[addresses]
management_port = 54321

[vars]
ssl = true

[root@rhs-new-anshi2 ~]# rpm -qa | grep vdsm
vdsm-python-4.9.6-23.el6rhs.x86_64
vdsm-debug-plugin-4.9.6-23.el6rhs.noarch
vdsm-debuginfo-4.9.6-23.el6rhs.x86_64
vdsm-cli-4.9.6-23.el6rhs.noarch
vdsm-gluster-4.9.6-23.el6rhs.noarch
vdsm-reg-4.9.6-23.el6rhs.noarch
vdsm-4.9.6-23.el6rhs.x86_64
vdsm-hook-faqemu-4.9.6-23.el6rhs.noarch

[root@rhs-new-anshi2 ~]# service vdsmd status
VDS daemon server is running
```

- Created attachment 760083 [details]: vdsm logs before reinstall
- Created attachment 760085 [details]: vdsm logs after reinstall
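The diagnostics requested above (vdsm.conf, log files, the installed vdsm packages, and the vdsmd service status) can be gathered in one pass. The following is a minimal sketch of such a helper, not an official tool; the script name and archive layout are assumptions, while the paths and commands are the ones named in the comment above.

```shell
#!/bin/sh
# Hypothetical helper: collect the vdsm diagnostics requested in this bug
# into a single tarball that can be attached to the report.
OUT="vdsm-diag-$(hostname)-$(date +%Y%m%d%H%M%S)"
mkdir -p "$OUT"

# i) the vdsm configuration file, if present
[ -f /etc/vdsm/vdsm.conf ] && cp /etc/vdsm/vdsm.conf "$OUT/"

# ii) the vdsm log directory, if present
[ -d /var/log/vdsm ] && cp -r /var/log/vdsm "$OUT/logs"

# iii) the installed vdsm packages (rpm may be absent on non-RPM hosts)
rpm -qa 2>/dev/null | grep vdsm > "$OUT/vdsm-packages.txt"

# iv) the vdsmd service status (captures the error text if the service
# or the `service` command itself is missing)
service vdsmd status > "$OUT/vdsmd-status.txt" 2>&1

tar czf "$OUT.tar.gz" "$OUT"
echo "Diagnostics written to $OUT.tar.gz"
```

Run it once before the re-install and once after, then attach both archives so the two states can be compared.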
Unable to reproduce this issue, so closing as WORKSFORME. Please reopen if the issue arises again.