Description of problem:
---------------------------------------
When adding an Anshi node to a 3.1 cluster via the Console, the following is seen in the engine logs:

2013-06-18 14:42:17,123 WARN  [org.ovirt.engine.core.bll.AddVdsCommand] (ajp-/127.0.0.1:8702-10) [7dc28c7b] Failed to initiate vdsm-id request on host: java.io.IOException: Command returned failure code 1 during SSH session 'root.35.107'

Seen for both Anshi and Big Bend nodes.

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.0-0.bb4.el6rhs
vdsm on Big Bend nodes - vdsm-4.10.2-22.4.el6rhs.x86_64
vdsm on Anshi nodes - vdsm-4.9.6-24.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Add a RHS Big Bend node to a 3.2 cluster or an Anshi node to a 3.1 cluster via the Console.

Actual results:
The warning described above is seen in the engine logs.

Expected results:

Additional info:
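For context, the engine warning corresponds to a remote `vdsm-tool vdsm-id` invocation exiting nonzero over SSH. A minimal sketch of the same check (an assumption for illustration only, not the engine's actual code; the engine runs the command over SSH rather than as a local subprocess, and `run_remote_style` is a hypothetical name):

```python
import subprocess

def run_remote_style(cmd):
    # Run a command and surface a nonzero exit status as an error,
    # analogous to the engine's "Command returned failure code 1
    # during SSH session" IOException.
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise IOError('Command returned failure code %d' % result.returncode)
    return result.stdout.strip()
```

With a command that exits nonzero (as `vdsm-tool vdsm-id` does here), the caller sees the failure code rather than a host UUID.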
Created attachment 763745 [details] engine logs
Created attachment 763760 [details] vdsm logs from Anshi node
Created attachment 763761 [details] vdsm logs from Big Bend node
This is not a severe bug. We need to reduce the bug severity and take it out of Corbett. The only fix needed is to log this message at Info level instead of Warning.
Hi,

This issue also causes a mail to be sent to the root user on the RHSS node as follows -

------------------------------------------------------------------------------
Message 1:
From user.eng.blr.redhat.com Fri Jan 16 11:48:18 2015
Return-Path: <user.eng.blr.redhat.com>
X-Original-To: root@localhost
Delivered-To: root.eng.blr.redhat.com
Date: Fri, 16 Jan 2015 11:48:18 +0530
From: user.eng.blr.redhat.com
To: root.eng.blr.redhat.com
Subject: [abrt] full crash report
User-Agent: Heirloom mailx 12.4 7/29/08
Content-Type: text/plain; charset=us-ascii
Status: RO

abrt_version: 2.0.8
cmdline: /usr/bin/python /usr/bin/vdsm-tool vdsm-id
executable: /usr/bin/vdsm-tool
kernel: 2.6.32-358.49.1.el6.x86_64
time: Fri 16 Jan 2015 11:47:57 AM IST
uid: 0
username: root

sosreport.tar.xz: Binary file, 389308 bytes

backtrace:
:vdsm-id.py:32:getUUID:RuntimeError: Cannot retrieve host UUID
:
:Traceback (most recent call last):
:  File "/usr/bin/vdsm-tool", line 143, in <module>
:    sys.exit(main())
:  File "/usr/bin/vdsm-tool", line 140, in main
:    return tool_command[cmd]["command"](*args[1:])
:  File "/usr/lib64/python2.6/site-packages/vdsm/tool/vdsm-id.py", line 32, in getUUID
:    raise RuntimeError('Cannot retrieve host UUID')
:RuntimeError: Cannot retrieve host UUID
:
:Local variables in innermost frame:
:hostUUID: None

Proposing for 3.1 release.

Reducing severity to low as this does not seem to have any obvious impact on functionality.
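The failure mode in the backtrace above (hostUUID is None, so getUUID raises) corresponds to a guard like the following. This is a minimal sketch for illustration, not the actual vdsm source; `get_host_uuid` is a hypothetical stand-in for vdsm's UUID lookup:

```python
def get_host_uuid():
    # Hypothetical stand-in for vdsm's host UUID lookup (DMI data,
    # /etc/vdsm/vdsm.id, ...); returns None when no source yields a UUID,
    # as in the crash report's "hostUUID: None".
    return None

def getUUID():
    # Mirrors the guard seen in the abrt backtrace: a None UUID is
    # surfaced as RuntimeError('Cannot retrieve host UUID').
    hostUUID = get_host_uuid()
    if hostUUID is None:
        raise RuntimeError('Cannot retrieve host UUID')
    return hostUUID
```

When the lookup returns None, the RuntimeError propagates out of `vdsm-tool vdsm-id`, which is what abrt catches and reports by mail.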
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.