Bug 1204535
| Summary: | Incorrect file permissions prevent VM from starting after RHEV-H TUI-side registration (auto-install or manual) |
|---|---|
| Product: | Red Hat Enterprise Virtualization Manager |
| Component: | ovirt-node |
| Reporter: | Douglas Schilling Landgraf <dougsland> |
| Assignee: | Douglas Schilling Landgraf <dougsland> |
| Status: | CLOSED ERRATA |
| QA Contact: | Chaofeng Wu <cwu> |
| Severity: | urgent |
| Priority: | urgent |
| Version: | 3.5.1 |
| CC: | cshao, danken, fdeutsch, gklein, huiwa, jdenemar, leiwang, lsurette, pstehlik, rbalakri, vkaigoro, yaniwang, ycui, yeylon, ykaul |
| Target Milestone: | ovirt-3.6.0-rc |
| Target Release: | 3.6.0 |
| Keywords: | Regression, Security, ZStream |
| Hardware: | All |
| OS: | Linux |
| Fixed In Version: | ovirt-node-3.3.0-0.4.20150906git14a6024.el7ev |
| Doc Type: | Bug Fix |
| Cloned To: | 1206537 (view as bug list) |
| Bug Blocks: | 1206537 |
| oVirt Team: | Node |
| Type: | Bug |
| Last Closed: | 2016-03-09 14:19:38 UTC |
Description — Douglas Schilling Landgraf, 2015-03-22 23:30:56 UTC
Created attachment 1005094 [details]
logs
This initial error seems related to SELinux; after `setenforce 0` we get a different error. For now, I am moving this bug to rhev-hypervisor.

audit.log:

```
type=AVC msg=audit(1427068608.315:2153): avc: denied { connectto } for pid=32549 comm="libvirtd" path="/var/run/sanlock/sanlock.sock" scontext=unconfined_u:system_r:svirt_t:s0:c326,c682 tcontext=system_u:system_r:ovirt_t:s0 tclass=unix_stream_socket
type=SYSCALL msg=audit(1427068608.315:2153): arch=c000003e syscall=42 success=no exit=-13 a0=3 a1=7f4c3d3e9890 a2=6e a3=0 items=0 ppid=1 pid=32549 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="libvirtd" exe="/usr/sbin/libvirtd" subj=unconfined_u:system_r:virtd_t:s0-s0:c0.c1023 key=(null)
type=ANOM_ABEND msg=audit(1427068608.315:2154): auid=0 uid=0 gid=0 ses=2 subj=unconfined_u:system_r:virtd_t:s0-s0:c0.c1023 pid=32549 comm="libvirtd" sig=11
```

```
# audit2allow -a
#============= svirt_t ==============
allow svirt_t ovirt_t:unix_stream_socket connectto;
```

However, I am now getting the error below when trying to start the virtual machine:

```
2015-03-22 23:59:50.069+0000: 16382: error : qemuProcessWaitForMonitor:1858 : internal error process exited while connecting to monitor: ((null):728): Spice-Warning **: reds.c:3269:reds_init_ssl: Could not use private key file
failed to initialize spice server
2015-03-23 00:00:21.244+0000: 16383: error : qemuMonitorOpenUnix:294 : failed to connect to monitor socket: No such process
2015-03-23 00:00:21.244+0000: 16383: error : qemuProcessWaitForMonitor:1858 : internal error process exited while connecting to monitor: ((null):1122): Spice-Warning **: reds.c:3269:reds_init_ssl: Could not use private key file
failed to initialize spice server
```

```
# pwd
/etc/pki/libvirt
[root@localhost libvirt]# ls -la -R
.:
total 3
drwxr-xr-x.  3 vdsm kvm    80 2015-03-22 22:23 .
drwxr-xr-x. 12 root root  240 2015-03-22 22:23 ..
-rw-r--r--.  1 root root 1554 2015-03-22 22:23 clientcert.pem
drwxr-xr-x.  2 vdsm kvm    60 2015-03-22 22:23 private

./private:
total 3
drwxr-xr-x. 2 vdsm kvm    60 2015-03-22 22:23 .
drwxr-xr-x. 3 vdsm kvm    80 2015-03-22 22:23 ..
-r--r-----. 1 root root 1679 2015-03-22 22:23 clientkey.pem
```

@Jiri, could you please review the above error from libvirt/spice? Any ideas? Should we open a different bug? Thanks!

Well, unless qemu is a member of the root group, it can't read clientkey.pem. I don't see any bug in libvirt here.

This looks similar to bug 1188255.

Running `chown -R vdsm:kvm /etc/pki/vdsm` fixes the issue for me.

Dan, can you tell what the correct owner of the files in /etc/pki/vdsm should be? And who is taking care of setting the correct permissions?

/etc/pki/vdsm is installed vdsm:kvm by vdsm.rpm. Nothing should have changed that, if I recall correctly.

(In reply to Fabian Deutsch from comment #8)
> Running chown -R vdsm:kvm /etc/pki/vdsm fixes the issue for me.

Before that call, the permissions were as follows:

```
/etc/pki/vdsm/keys
  root:root vdsmkey.pem
/etc/pki/vdsm/libvirt-spice
  root:root ca-cert.pem
  root:root server-cert.pem
  root:root server-key.pem
```

And for the other PKI-related files it was the same (root:root).

Created attachment 1005374 [details]
pki file permissions after vdsm-reg but before approval
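The spice failure is consistent with the directory listing in the description: clientkey.pem is mode 440 and owned root:root, so a qemu process that runs as a non-root user outside the root group falls into the "other" class and gets EACCES. A minimal sketch of that mode check (`mode_allows` is a hypothetical helper written for illustration, not part of libvirt or vdsm):

```shell
#!/bin/sh
# mode_allows MODE -- prints OK when the "other" read bit is set in a
# 3-digit octal mode, DENIED otherwise. A qemu process that is neither
# the owner (root) nor in the owning group (root) is classed as "other",
# which is why mode 440 on clientkey.pem breaks spice initialization.
mode_allows() {
    other=$(( 0$1 & 07 ))               # keep only the "other" permission bits
    if [ $(( other & 4 )) -ne 0 ]; then # test the read bit
        echo OK
    else
        echo DENIED
    fi
}

mode_allows 440    # clientkey.pem from the report
mode_allows 644    # clientcert.pem from the report
```

Running it prints `DENIED` for the 440 key and `OK` for the 644 cert, matching which of the two files qemu could actually open.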
Created attachment 1005376 [details]
pki file permissions after approval and node up
Created attachment 1005377 [details]
/etc and /var/logs from a run with incorrect permissions
Created attachment 1005388 [details]
host-deploy logs from the run with the wrong permissions
The last few attachments include node- and engine-side logs from a run where the final permissions of the files in /etc/pki/vdsm (and possibly /etc/pki/libvirt as well) were incorrect.
The steps to reproduce were:
1. Install latest vt engine
2. Install the latest 6.6-based RHEV-H in TUI mode
3. Register RHEV-H to Engine using the TUI (vdsm-reg)
4. Approve node in Engine
5. Configure node with local storage domain
6. Create a PXE booting VM and launch it
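After step 4 (approval), the broken state can be spotted before ever launching a VM. Here is a non-destructive sketch that assumes GNU `stat` and the vdsm:kvm ownership expectation from the discussion above; `check_owner` is a hypothetical helper, and the script only prints the chown it would run rather than changing anything:

```shell
#!/bin/sh
# check_owner FILE EXPECTED -- prints OK when FILE's user:group matches
# EXPECTED (e.g. "vdsm:kvm"), FIX otherwise. Uses GNU stat, which is
# what RHEL-based hosts ship.
check_owner() {
    actual=$(stat -c '%U:%G' "$1")
    if [ "$actual" = "$2" ]; then echo OK; else echo FIX; fi
}

# Walk the PKI trees named in the report and show what a repair would do.
for d in /etc/pki/vdsm /etc/pki/libvirt; do
    [ -d "$d" ] || continue
    find "$d" -type f | while IFS= read -r f; do
        if [ "$(check_owner "$f" vdsm:kvm)" = FIX ]; then
            echo "would run: chown vdsm:kvm $f"
        fi
    done
done
```

This mirrors the `chown -R vdsm:kvm /etc/pki/vdsm` workaround mentioned later in the comments, but reports the offending files instead of modifying them.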
Created attachment 1005437 [details]
Logs from a failed run, including permissions before and after
Created attachment 1005438 [details]
host-deploy logs from a failed run
I could reproduce this bug with the following versions:

```
rhev-hypervisor6-6.6-20150319.42.iso
ovirt-node-3.2.1-11.el6.noarch
vdsm-reg-4.16.12.1-3.el6ev.noarch
ovirt-node-plugin-vdsm-0.2.0-20.el6ev.noarch
vdsm-4.16.12.1-3.el6ev.x86_64
Red Hat Enterprise Virtualization Manager Version: 3.5.1-0.2.el6ev
```

Test steps:
1. Clean install the latest vt14.1 engine
2. Install rhev-hypervisor6-6.6-20150319.42.iso in TUI mode
3. Register RHEV-H to Engine using the TUI (vdsm-reg)
4. Approve the node in Engine
5. Configure the node with a local storage domain
6. Create a VM and launch it

Test result: after step 6, the VM is down with the following error:

```
2015-Mar-24, 14:13 Failed to run VM vm1 (User: admin@internal).
2015-Mar-24, 14:13 Failed to run VM vm1 on Host dhcp-8-165.nay.redhat.com.
2015-Mar-24, 14:13 VM vm1 is down with error. Exit message: Child quit during startup handshake: Input/output error.
```

This issue does not occur on the RHEV-H 6.6 build for RHEV 3.5 GA:

```
rhev-hypervisor6-6.6-20150128.0.el6ev.noarch.rpm
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.34.el6ev
```

so this is a regression.

See comment 20: this bug has already been cloned to the 3.5.z bug #1206537, so the 3.5.1 tracker bug 1193058 has been removed from this bug.

According to comment 17, this bug was verified with the following steps on build rhev-hypervisor7-7.2-20151104.0.

Version:

```
rhev-hypervisor7-7.2-20151104.0.iso
ovirt-node-3.6.0-0.20.20151103git3d3779a.el7ev.noarch
```

Steps:
1. Install rhev-hypervisor7-7.2-20151104.0.iso in TUI mode and configure the network successfully
2. Register the RHEV-H host to rhevm3.6.0.3 from the RHEV-M page
3. Approve the RHEV-H host in RHEV-M
4. Configure the host with NFS storage
5. Create a VM and launch it

Result: after step 5, the VM installs and launches successfully. This bug has been fixed, so the status is changed to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0378.html