Bug 683905 - VDSM fails to start in newly installed RHEV-H in /usr/lib64/python2.6/site-packages/M2Crypto/SSL/Context.py
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Assigned To: Yotam Oron
QA Contact: Rami Vaknin
Keywords: Regression, TestBlocker
Reported: 2011-03-10 10:57 EST by Rami Vaknin
Modified: 2016-04-26 09:28 EDT
CC: 17 users

Fixed In Version: vdsm-4.9-58.el6
Doc Type: Bug Fix
Environment: rhev-hypervisor-6.1-8.el6
Last Closed: 2011-08-19 11:16:15 EDT

Attachments
VDSM and VDSM-REG Logs (27.95 KB, application/x-compressed-tar)
2011-03-10 10:57 EST, Rami Vaknin
Full vdsm.log (218.56 KB, text/plain)
2011-04-14 23:35 EDT, Guohua Ouyang

Description Rami Vaknin 2011-03-10 10:57:11 EST
Created attachment 483495 [details]
VDSM and VDSM-REG Logs

Here is the failure, taken from /var/log/vdsm/vdsm.log:

MainThread::INFO::2011-03-10 15:35:59,924::vdsm::71::vds::(run) <_MainThread(MainThread, started 140632379709184)>
MainThread::INFO::2011-03-10 15:36:00,001::vdsm::63::vds::(run) I am the actual vdsm 4.9-49
MainThread::DEBUG::2011-03-10 15:36:00,158::multipath::290::Storage.Misc.excCmd::(getMPDevNamesIter) '/usr/bin/sudo -n /sbin/dmsetup ls --target multipath' (cwd None)
MainThread::INFO::2011-03-10 15:36:00,167::multipath::305::Storage.Multipath::(getMPDevNamesIter) Multipath device /dev/mapper/1ATA doesn't exist
MainThread::DEBUG::2011-03-10 15:36:00,169::lvm::347::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 filter = [ \\"r%.*%\\" ] }  global {  locking_type=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_count,pe_alloc_count,mda_count' (cwd None)
MainThread::DEBUG::2011-03-10 15:36:00,287::lvm::347::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2011-03-10 15:36:00,288::lvm::347::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 filter = [ \\"r%.*%\\" ] }  global {  locking_type=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags' (cwd None)
MainThread::DEBUG::2011-03-10 15:36:00,402::lvm::347::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  No volume groups found\n'; <rc> = 0
MainThread::DEBUG::2011-03-10 15:36:00,403::lvm::347::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 filter = [ \\"r%.*%\\" ] }  global {  locking_type=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
MainThread::DEBUG::2011-03-10 15:36:00,514::lvm::347::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  No volume groups found\n'; <rc> = 0
MainThread::ERROR::2011-03-10 15:36:00,535::vdsm::66::vds::(run) Traceback (most recent call last):
  File "/usr/share/vdsm//vdsm", line 64, in run
    serve_clients(log)
  File "/usr/share/vdsm//vdsm", line 44, in serve_clients
    cif = clientIF.clientIF(log)
  File "/usr/share/vdsm/clientIF.py", line 81, in __init__
  File "/usr/share/vdsm/clientIF.py", line 216, in _createXMLRPCServer
  File "/usr/share/vdsm/SecureXMLRPCServer.py", line 111, in __init__
  File "/usr/lib64/python2.6/site-packages/M2Crypto/SSL/Context.py", line 100, in load_cert_chain
SSLError: No such file or directory

MainThread::INFO::2011-03-10 15:36:00,535::vdsm::68::vds::(run) VDSM main thread ended. Waiting for 0 other threads...
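
For reference, a minimal reproduction of the final frame (the certificate path below is illustrative, not the one vdsm uses): M2Crypto hands the path straight to OpenSSL, so a missing certificate file surfaces as SSLError rather than the usual IOError.

from M2Crypto import SSL

ctx = SSL.Context()
# If the certificate file does not exist, OpenSSL's errno bubbles up as
# SSL.SSLError: No such file or directory -- the same failure seen above.
ctx.load_cert_chain('/nonexistent/vdsmcert.pem')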
Comment 2 Mike Burns 2011-03-10 11:11:11 EST
The vdsm included in that RHEV-H build is vdsm-4.9-49.el6.src.rpm.
Comment 3 Dan Kenigsberg 2011-03-10 17:22:08 EST
The clue is in vds_bootstrap_gen.20110310_145802: we try to put the keys in the old 2.2 location instead of under /etc/pki/vdsm (a sketch of the failure mode follows the traceback).

Thu, 10 Mar 2011 14:58:02 ERROR    Traceback (most recent call last):
  File "/usr/share/vdsm-reg/vdsm-gen-cert", line 80, in main
    CSR(subject, random_num, orgName).runTest()
  File "/usr/share/vdsm-reg/vdsm-gen-cert", line 62, in runTest
    deployUtil.createCSR(self.orgName, self.subject, self.random_num, self.tsDir, self.VDSMKEY, self.DHKEY)
  File "/usr/share/vdsm-reg/deployUtil.py", line 1252, in createCSR
OSError: [Errno 2] No such file or directory: '/var/vdsm/ts/keys'

Thu, 10 Mar 2011 14:58:02 ERROR    Usage: vdsm-gen-cert [-O organizationName] <subject> <random_num>
Thu, 10 Mar 2011 14:58:02 DEBUG    <BSTRAP component='RHEV_INSTALL' status='FAIL'/>
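
A minimal sketch of that failure mode (helper and file names here are illustrative, not vdsm's actual createCSR): writing the key under a trust store directory that was never created raises OSError with errno 2, exactly as in the traceback above.

import os

def write_key(trust_store_path, key_bytes):
    # os.open() fails with OSError: [Errno 2] No such file or directory
    # when the keys subdirectory (here /var/vdsm/ts/keys) does not exist.
    key_path = os.path.join(trust_store_path, 'keys', 'vdsmkey.pem')
    fd = os.open(key_path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.write(fd, key_bytes)
    finally:
        os.close(fd)

write_key('/var/vdsm/ts', b'...')  # old 2.2 location -> OSError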
Comment 5 Guohua Ouyang 2011-04-14 23:34:09 EDT
Checked on RHEV Hypervisor 6.1-13.1, vdsm version vdsm-4.9-58.el6.
After registering to RHEV-M IC108, the RHEV-H host cannot be set to Up.
Checking /var/log/vdsm/vdsm.log shows the same error as in the description:
................................................
  File "/usr/share/vdsm//vdsm", line 64, in run
    serve_clients(log)
  File "/usr/share/vdsm//vdsm", line 44, in serve_clients
    cif = clientIF.clientIF(log)
  File "/usr/share/vdsm/clientIF.py", line 81, in __init__
  File "/usr/share/vdsm/clientIF.py", line 216, in _createXMLRPCServer
  File "/usr/share/vdsm/SecureXMLRPCServer.py", line 111, in __init__
  File "/usr/lib64/python2.6/site-packages/M2Crypto/SSL/Context.py", line 100,
in load_cert_chain
SSLError: No such file or director


# rhevm.log
2011-04-15 18:30:32,987 INFO  [com.redhat.rhevm.bll.VdsInstaller] (pool-15-thread-359) Installation of 10.66.10.185. Successfully connected to server ssh. (Stage: Connecting to Host)
2011-04-15 18:30:32,988 INFO  [com.redhat.rhevm.bll.CBCInstaller] (pool-15-thread-359) Installation of 10.66.10.185. Executing oVirt installation stage. (Stage: Running first installation script on Host)
2011-04-15 18:30:32,988 INFO  [com.redhat.rhevm.utils.hostinstall.MinaInstallWrapper] (pool-15-thread-359) Invoking /usr/share/vdsm-reg/vdsm-gen-cert -O "redhat" 10.66.10.185 36f24600-502e-4933-815f-91c931e5407c on 10.66.10.185
2011-04-15 18:30:33,066 ERROR [com.redhat.rhevm.bll.VdsInstaller] (pool-15-thread-359) Installation of 10.66.10.185. Received message: <BSTRAP component='RHEV_INSTALL' status='FAIL'/>. Error occurred. (Stage: Running first installation script on Host)



It seems this is not fixed in vdsm-4.9-58.el6; please check.
Comment 6 Guohua Ouyang 2011-04-14 23:35:02 EDT
Created attachment 492265 [details]
Full vdsm.log
Comment 7 Yotam Oron 2011-04-17 01:39:01 EDT
Indeed, this doesn't work on 6.1 (I had tested it on 6.0, naively thinking that would be enough).
In rhevm.py (invoked on RHEV-M setup), vdsm.conf is written with the wrong trust store path, which is then used by vdsm-reg and vdsm; a sketch of how the path is consumed follows the patch.
Patch:

diff --git a/scripts/rhevm.py b/scripts/rhevm.py
index 0b3b174..de3f345 100755
--- a/scripts/rhevm.py
+++ b/scripts/rhevm.py
@@ -31,7 +31,7 @@ fWriteConfig = 0
 def set_defaults():
     vdsm_config_file = open(VDSM_CONFIG, "w")
     vdsm_config = """[vars]
-trust_store_path = /var/vdsm/ts
+trust_store_path = /etc/pki/vdsm/
 ssl = true
 
 [netconsole]
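
For context, a sketch of how that setting propagates (the file layout is assumed from vdsm's conventional trust store structure, not quoted from this bug): vdsm joins trust_store_path from vdsm.conf with fixed subdirectories, so the stale /var/vdsm/ts value makes the server look for files that were never created there.

import os

trust_store_path = '/etc/pki/vdsm/'  # value written by the patched rhevm.py
key_file = os.path.join(trust_store_path, 'keys', 'vdsmkey.pem')
cert_file = os.path.join(trust_store_path, 'certs', 'vdsmcert.pem')
# SecureXMLRPCServer then passes cert_file/key_file to M2Crypto's
# load_cert_chain(); a wrong trust_store_path makes that call fail
# with SSLError: No such file or directory.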
Comment 8 Andrew Cathrow 2011-04-19 11:47:43 EDT
If this fails QA (and we have a patch), why is this still flagged as ON_QA? Shouldn't this go back to ASSIGNED?
