Bug 1204535 - Incorrect file permissions prevent VM from starting after RHEV-H TUI side registration (auto-install or manually)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node
Version: 3.5.1
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Assignee: Douglas Schilling Landgraf
QA Contact: Chaofeng Wu
URL:
Whiteboard:
Depends On:
Blocks: 1206537
 
Reported: 2015-03-22 23:30 UTC by Douglas Schilling Landgraf
Modified: 2016-03-09 14:19 UTC
CC: 15 users

Fixed In Version: ovirt-node-3.3.0-0.4.20150906git14a6024.el7ev
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned To: 1206537
Environment:
Last Closed: 2016-03-09 14:19:38 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
logs (402.05 KB, application/x-gzip)
2015-03-22 23:34 UTC, Douglas Schilling Landgraf
pki file permissions after vdsm-reg but before approval (6.61 KB, text/plain)
2015-03-23 13:06 UTC, Fabian Deutsch
pki file permissions after approval and node up (7.61 KB, text/plain)
2015-03-23 13:06 UTC, Fabian Deutsch
/etc and /var/logs from a run with incorrect permissions (5.09 MB, application/x-xz)
2015-03-23 13:10 UTC, Fabian Deutsch
host-deploy logs from the run with the wrong permissions (261.11 KB, text/plain)
2015-03-23 13:12 UTC, Fabian Deutsch
Logs from a failed run, including permissions before and after (5.96 MB, application/x-xz)
2015-03-23 14:56 UTC, Fabian Deutsch
host-deploy logs from a failed run (261.15 KB, text/plain)
2015-03-23 14:57 UTC, Fabian Deutsch


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1003488 0 unspecified CLOSED VM with SPICE console fails to start due to : Spice-Warning **: reds.c:3247:reds_init_ssl: Could not use private key fil... 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2016:0378 0 normal SHIPPED_LIVE ovirt-node bug fix and enhancement update for RHEV 3.6 2016-03-09 19:06:36 UTC
oVirt gerrit 39009 0 master MERGED semodule: Add svirt_t ovirt_t:unix_stream_socket connectto 2020-09-16 14:41:10 UTC
oVirt gerrit 39068 0 master MERGED persist: fix owner/group copy to /config 2020-09-16 14:41:10 UTC
oVirt gerrit 39212 0 master MERGED logger: replace .debug statement for .warning 2020-09-16 14:41:10 UTC
oVirt gerrit 39262 0 ovirt-3.5 MERGED semodule: Add svirt_t ovirt_t:unix_stream_socket connectto 2020-09-16 14:41:10 UTC
oVirt gerrit 39265 0 ovirt-3.5 MERGED persist: fix owner/group copy to /config 2020-09-16 14:41:10 UTC

Internal Links: 1003488

Description Douglas Schilling Landgraf 2015-03-22 23:30:56 UTC
Description of problem:

Cannot start virtual machine.

# cat /etc/redhat-release 
Red Hat Enterprise Virtualization Hypervisor 6.6 (20150319.42.el6ev)

# rpm -qa | grep -i libvirt
libvirt-client-0.10.2-46.el6_6.3.x86_64
libvirt-python-0.10.2-46.el6_6.3.x86_64
libvirt-lock-sanlock-0.10.2-46.el6_6.3.x86_64
libvirt-0.10.2-46.el6_6.3.x86_64
libvirt-cim-0.6.1-12.el6.x86_64

# rpm -qa | grep -i vdsm
vdsm-reg-4.16.12.1-3.el6ev.noarch
ovirt-node-plugin-vdsm-0.2.0-20.el6ev.noarch
vdsm-python-zombiereaper-4.16.12.1-3.el6ev.noarch
vdsm-jsonrpc-4.16.12.1-3.el6ev.noarch
vdsm-python-4.16.12.1-3.el6ev.noarch
vdsm-cli-4.16.12.1-3.el6ev.noarch
vdsm-hook-vhostmd-4.16.12.1-3.el6ev.noarch
vdsm-4.16.12.1-3.el6ev.x86_64
vdsm-yajsonrpc-4.16.12.1-3.el6ev.noarch
vdsm-hook-ethtool-options-4.16.12.1-3.el6ev.noarch
vdsm-xmlrpc-4.16.12.1-3.el6ev.noarch

Red Hat Enterprise Virtualization Manager Version: 3.5.1-0.2.el6ev

# vdsClient -s 0 getVdsCaps
	HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:b37466b21b8e'}]}
	ISCSIInitiatorName = 'iqn.1994-05.com.redhat:b37466b21b8e'
	autoNumaBalancing = 2
	bondings = {'bond0': {'addr': '',
	                      'cfg': {},
	                      'hwaddr': '00:00:00:00:00:00',
	                      'mtu': '1500',
	                      'netmask': '',
	                      'slaves': []},
	            'bond1': {'addr': '',
	                      'cfg': {},
	                      'hwaddr': '00:00:00:00:00:00',
	                      'mtu': '1500',
	                      'netmask': '',
	                      'slaves': []},
	            'bond2': {'addr': '',
	                      'cfg': {},
	                      'hwaddr': '00:00:00:00:00:00',
	                      'mtu': '1500',
	                      'netmask': '',
	                      'slaves': []},
	            'bond3': {'addr': '',
	                      'cfg': {},
	                      'hwaddr': '00:00:00:00:00:00',
	                      'mtu': '1500',
	                      'netmask': '',
	                      'slaves': []},
	            'bond4': {'addr': '',
<snip>

# service libvirtd status
libvirtd (pid  16380) is running...

# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
Please enter your authentication name: vdsm@rhevh
Please enter your password: 
 Id    Name                           State
----------------------------------------------------

virsh # 


How reproducible:

#1) Execute auto-install on RHEV-H 6.6 (20150319.42.el6ev)

Use the params: 
firstboot storage_init=/dev/sda adminpw=RHhwCLrQXB8zE management_server=192.168.122.70 BOOTIF=link

#2) Add the data storage via RHEV-M admin page (in my case, iSCSI)
#3) Add the ISO storage via RHEV-M admin page 
#4) Copy a .iso file to ISO storage
#5) Create a virtual machine and try to run it.

Actual results:
VM vm is down with error. Exit message: Child quit during startup handshake: Input/output error.

Expected results:
Virtual machine run without issue.

Additional info:
This error is only noticed when using autoinstall on RHEV-H.

Comment 1 Douglas Schilling Landgraf 2015-03-22 23:34:03 UTC
Created attachment 1005094 [details]
logs

Comment 4 Douglas Schilling Landgraf 2015-03-23 00:13:57 UTC
This initial error seems related to SELinux; after setting setenforce to 0 we get a different error. For now, I am moving this bug to rhev-hypervisor.


audit.log
=============
<snip>
type=AVC msg=audit(1427068608.315:2153): avc:  denied  { connectto } for  pid=32549 comm="libvirtd" path="/var/run/sanlock/sanlock.sock" scontext=unconfined_u:system_r:svirt_t:s0:c326,c682 tcontext=system_u:system_r:ovirt_t:s0 tclass=unix_stream_socket
type=SYSCALL msg=audit(1427068608.315:2153): arch=c000003e syscall=42 success=no exit=-13 a0=3 a1=7f4c3d3e9890 a2=6e a3=0 items=0 ppid=1 pid=32549 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="libvirtd" exe="/usr/sbin/libvirtd" subj=unconfined_u:system_r:virtd_t:s0-s0:c0.c1023 key=(null)
type=ANOM_ABEND msg=audit(1427068608.315:2154): auid=0 uid=0 gid=0 ses=2 subj=unconfined_u:system_r:virtd_t:s0-s0:c0.c1023 pid=32549 comm="libvirtd" sig=11

#audit2allow -a
#============= svirt_t ==============
allow svirt_t ovirt_t:unix_stream_socket connectto; 
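
For reference, a denial like this can be worked around with a local policy module built from the audit2allow rule. A minimal sketch, assuming a scratch module name svirt_ovirt (the actual fix landed in ovirt-node's shipped policy, gerrit 39009):

# cat > svirt_ovirt.te <<'EOF'
module svirt_ovirt 1.0;

require {
        type svirt_t;
        type ovirt_t;
        class unix_stream_socket connectto;
}

# let qemu processes (svirt_t) connect to the ovirt_t-labeled sanlock socket
allow svirt_t ovirt_t:unix_stream_socket connectto;
EOF
# checkmodule -M -m -o svirt_ovirt.mod svirt_ovirt.te
# semodule_package -o svirt_ovirt.pp -m svirt_ovirt.mod
# semodule -i svirt_ovirt.pp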


However, now I am getting the error below when trying to start the virtual machine:
2015-03-22 23:59:50.069+0000: 16382: error : qemuProcessWaitForMonitor:1858 : internal error process exited while connecting to monitor: ((null):728): Spice-Warning **: reds.c:3269:reds_init_ssl: Could not use private key file
failed to initialize spice server
2015-03-23 00:00:21.244+0000: 16383: error : qemuMonitorOpenUnix:294 : failed to connect to monitor socket: No such process
2015-03-23 00:00:21.244+0000: 16383: error : qemuProcessWaitForMonitor:1858 : internal error process exited while connecting to monitor: ((null):1122): Spice-Warning **: reds.c:3269:reds_init_ssl: Could not use private key file
failed to initialize spice server

# pwd
/etc/pki/libvirt
[root@localhost libvirt]# ls -la -R
.:
total 3
drwxr-xr-x.  3 vdsm kvm    80 2015-03-22 22:23 .
drwxr-xr-x. 12 root root  240 2015-03-22 22:23 ..
-rw-r--r--.  1 root root 1554 2015-03-22 22:23 clientcert.pem
drwxr-xr-x.  2 vdsm kvm    60 2015-03-22 22:23 private

./private:
total 3
drwxr-xr-x. 2 vdsm kvm    60 2015-03-22 22:23 .
drwxr-xr-x. 3 vdsm kvm    80 2015-03-22 22:23 ..
-r--r-----. 1 root root 1679 2015-03-22 22:23 clientkey.pem

@Jiri, Could you please review the above error from libvirt/spice? Ideas? Should we open a different bug?

Thanks!

Comment 5 Jiri Denemark 2015-03-23 08:27:30 UTC
Well, unless qemu is a member of the root group, it can't read clientkey.pem. I don't see any bug in libvirt here.
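
A quick way to see this failure directly, assuming the qemu user created by the qemu-kvm package (output illustrative):

# sudo -u qemu cat /etc/pki/libvirt/private/clientkey.pem
cat: /etc/pki/libvirt/private/clientkey.pem: Permission denied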

Comment 7 Fabian Deutsch 2015-03-23 10:05:40 UTC
This looks similar to bug 1188255

Comment 8 Fabian Deutsch 2015-03-23 11:01:36 UTC
Running chown -R vdsm:kvm /etc/pki/vdsm fixes the issue for me.

Dan, can you tell what the correct owner of the files in /etc/pki/vdsm should be? And who is taking care of setting the correct permissions?

Comment 9 Dan Kenigsberg 2015-03-23 11:14:41 UTC
/etc/pki/vdsm is installed as vdsm:kvm by vdsm.rpm. Nothing should have changed that, if I recall correctly.
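
One way to check this against the rpm database; note that rpm -V only covers paths packaged by vdsm.rpm (the directories here), since the certificates and keys themselves are generated at deploy time:

# rpm -V vdsm | grep /etc/pki/vdsm   # a 'U' or 'G' column flags a user/group deviation
# rpm --setugids vdsm                # reset owner/group of packaged vdsm paths to recorded values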

Comment 10 Fabian Deutsch 2015-03-23 11:20:15 UTC
(In reply to Fabian Deutsch from comment #8)
> Running chown -R vdsm:kvm /etc/pki/vdsm fixes the issue for me.

Before that call, the permissions were as follows:

/etc/pki/vdsm/keys
root:root vdsmkey.pem

/etc/pki/vdsm/libvirt-spice
root:root ca-cert.pem
root:root server-cert.pem
root:root server-key.pem

And for the other PKI-related files it was the same (root:root).
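
A quick sweep for affected files, assuming vdsm:kvm is the expected ownership throughout /etc/pki/vdsm (per comments 8 and 9):

# find /etc/pki/vdsm \( ! -user vdsm -o ! -group kvm \) -exec ls -ld {} +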

Comment 11 Fabian Deutsch 2015-03-23 13:06:24 UTC
Created attachment 1005374 [details]
pki file permissions after vdsm-reg but before approval

Comment 12 Fabian Deutsch 2015-03-23 13:06:57 UTC
Created attachment 1005376 [details]
pki file permissions after approval and node up

Comment 13 Fabian Deutsch 2015-03-23 13:10:07 UTC
Created attachment 1005377 [details]
/etc and /var/logs from a run with incorrect permissions

Comment 14 Fabian Deutsch 2015-03-23 13:12:31 UTC
Created attachment 1005388 [details]
host-deploy logs from the run with the wrong permissions

The last few attachments include node- and engine-side logs from a run where the final permissions of files in /etc/pki/vdsm (and maybe /etc/pki/libvirt as well) were incorrect; a likely mechanism is sketched after the steps below.

The steps to reproduce were:
1. Install latest vt engine
2. Install Latest 6.6 based RHEV-H in TUI mode
3. Register RHEV-H to Engine using the TUI (vdsm-reg)
4. Approve node in Engine
5. Configure node with local storage domain
6. Create a PXE booting VM and launch it
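
The pattern in the two permission attachments (sane ownership after vdsm-reg, root:root after approval and node up) is consistent with the ovirt-node persistence layer: persisted files are copied into /config and bind-mounted back over the original path, and a copy that preserves mode but not owner/group would leave exactly this root:root result. That is what gerrit 39068 ("persist: fix owner/group copy to /config") addresses. A rough shell equivalent of the corrected copy step, with $f as a placeholder path:

cp --preserve=mode,timestamps "$f" "/config$f"   # contents and mode survive, but owner/group do not
chown --reference="$f" "/config$f"               # carry owner and group over explicitly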

Comment 15 Fabian Deutsch 2015-03-23 14:56:59 UTC
Created attachment 1005437 [details]
Logs from a failed run, including permissions before and after

Comment 16 Fabian Deutsch 2015-03-23 14:57:28 UTC
Created attachment 1005438 [details]
host-deploy logs from a failed run

Comment 17 haiyang,dong 2015-03-24 06:46:22 UTC
I could reproduce this bug with the following versions:
rhev-hypervisor6-6.6-20150319.42.iso
ovirt-node-3.2.1-11.el6.noarch
vdsm-reg-4.16.12.1-3.el6ev.noarch
ovirt-node-plugin-vdsm-0.2.0-20.el6ev.noarch
vdsm-4.16.12.1-3.el6ev.x86_64
Red Hat Enterprise Virtualization Manager Version: 3.5.1-0.2.el6ev

Test steps:
1. Clean install latest vt14.1 engine
2. Install rhev-hypervisor6-6.6-20150319.42.iso in TUI mode
3. Register RHEV-H to Engine using the TUI (vdsm-reg)
4. Approve node in Engine
5. Configure node with local storage domain
6. Create a VM and launch it.

Test result:
After step 6, VM vm1 goes down with the following error info:

2015-Mar-24, 14:13  Failed to run VM vm1 (User: admin@internal).
2015-Mar-24, 14:13  Failed to run VM vm1 on Host dhcp-8-165.nay.redhat.com.
2015-Mar-24, 14:13  VM vm1 is down with error. Exit message: Child quit during startup handshake: Input/output error.

Comment 18 haiyang,dong 2015-03-25 11:51:14 UTC
This issue does not occur on the RHEV-H 6.6 build for RHEV 3.5 GA:
rhev-hypervisor6-6.6-20150128.0.el6ev.noarch.rpm 
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.34.el6ev

so it should be a regression.

Comment 21 Ying Cui 2015-03-30 07:41:18 UTC
See comment 20: this bug has already been cloned to 3.5.z as bug #1206537, so removing the 3.5.1 tracker bug 1193058 from this bug.

Comment 25 Chaofeng Wu 2015-11-13 08:56:10 UTC
According to comment 17, verified this bug with the following steps on build rhev-hypervisor7-7.2-20151104.0:

Version:
rhev-hypervisor7-7.2-20151104.0.iso
ovirt-node-3.6.0-0.20.20151103git3d3779a.el7ev.noarch

Steps:
1. Install rhev-hypervisor7-7.2-20151104.0.iso in TUI mode and configure the network successfully.
2. Register the RHEV-H host to rhevm3.6.0.3 via the RHEV-M page.
3. Approve the RHEV-H host on RHEV-M.
4. Configure the host with NFS storage.
5. Create a VM and launch it.

Result:
After step 5, the VM installs and launches successfully.

This bug has been fixed, so changing the status to VERIFIED.
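
For completeness, a spot-check along these lines (paths from comment 10; output illustrative) confirms the expected ownership on the fixed build:

# stat -c '%U:%G %n' /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/libvirt-spice/server-key.pem
vdsm:kvm /etc/pki/vdsm/keys/vdsmkey.pem
vdsm:kvm /etc/pki/vdsm/libvirt-spice/server-key.pem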

Comment 27 errata-xmlrpc 2016-03-09 14:19:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0378.html

