Bug 1373588 - [Doc] [Docs Audit] Verify content in A.18. Common libvirt errors and troubleshooting
Summary: [Doc] [Docs Audit] Verify content in A.18. Common libvirt errors and troubles...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: doc-Virtualization_Deployment_and_Administration_Guide
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jiri Herrmann
QA Contact: jiyan
Docs Contact: Jiri Herrmann
URL:
Whiteboard:
Depends On: 1195617
Blocks: 1201058
 
Reported: 2016-09-06 16:18 UTC by Jiri Herrmann
Modified: 2019-06-11 08:48 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1195617
Environment:
Last Closed: 2019-06-11 08:48:05 UTC
Target Upstream Version:
Embargoed:



Comment 3 Jaroslav Suchanek 2017-09-19 14:13:45 UTC
A.20.1. libvirtd failed to start - http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Enterprise_Linux-7-Virtualization_Deployment_and_Administration_Guide%20(html-single)/lastStableBuild/artifact/tmp/en-US/html-single/index.html#sect-libvirtd_failed_to_start

In the 'Symptom' part:
- systemctl should be used instead, e.g.:
- systemctl start libvirtd.service

In the 'Investigation' part:
- the error example is fine, but could be updated with something like the following:

<example>
$ sudo systemctl restart libvirtd
Job for libvirtd.service failed because the control process exited with
error code. See "systemctl status libvirtd.service" and "journalctl -xe"
for details.

Sep 19 16:06:02 jsrh libvirtd[30708]: 2017-09-19 14:06:02.097+0000: 30708: info : libvirt version: 3.7.0, package: 1.el7 (Unknown, 2017-09-06-09:01:55, js
Sep 19 16:06:02 jsrh libvirtd[30708]: 2017-09-19 14:06:02.097+0000: 30708: info : hostname: jsrh
Sep 19 16:06:02 jsrh libvirtd[30708]: 2017-09-19 14:06:02.097+0000: 30708: error : daemonSetupNetworking:502 : unsupported configuration: No server certif
Sep 19 16:06:02 jsrh systemd[1]: libvirtd.service: main process exited, code=exited, status=6/NOTCONFIGURED
Sep 19 16:06:02 jsrh systemd[1]: Failed to start Virtualization daemon.
-- Subject: Unit libvirtd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit libvirtd.service has failed.
-- 
-- The result is failed.
</example>

Other than that, all is still valid.

Comment 5 Jaroslav Suchanek 2017-09-19 14:42:46 UTC
A.20.2.2. Failed to connect socket ... : Permission denied

Missing 'Solution' part.

polkit should be RHEL's default authentication agent. I recommend adding a related troubleshooting section for the authentication agent in use, or just leaving this part out.
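For reference, a hedged sketch of what such a polkit troubleshooting entry could show: a rule that lets members of a group manage libvirt without root. The file name, group name, and rule are illustrative, not from the original doc.

```shell
# Hypothetical example: grant members of the 'libvirt' group management
# access via polkit, so non-root 'virsh -c qemu:///system' connections
# succeed instead of failing with "Permission denied".
# cat /etc/polkit-1/rules.d/50-libvirt.rules
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.isInGroup("libvirt")) {
        return polkit.Result.YES;
    }
});
```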

Comment 6 Jaroslav Suchanek 2017-09-19 14:44:36 UTC
libvirt-3.7.0-1.el7.x86_64

A.20.2.3. Other Connectivity Errors

This is fine.

Comment 7 Jaroslav Suchanek 2017-09-20 06:49:05 UTC
libvirt-3.7.0-1.el7.x86_64

A.20.3. The Guest Virtual Machine Cannot be Started: internal error guest CPU is not compatible with host CPU

Leave out the whole section. It is an unrealistic situation which should not be handled this way. Starting with rhel-7.4, libvirt has a new cpu driver and handles it differently. We can add a new section if there is a common misuse. (consulted with Jiri Denemark)

Comment 9 jiyan 2018-11-15 07:43:26 UTC
Hi Jiri

==> ⁠A.19.13. Migration Fails with Unable to allow access for disk path: No such file or directory

As for this question, another solution could also be provided: using the '--copy-storage-all' option together with the 'virsh' cmd.
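For illustration, a sketch of that alternative invocation (guest and host names are placeholders; this assumes the destination has no shared storage, which is why the disk images are copied during migration):

```shell
# Live-migrate and copy the full disk images to the destination host,
# instead of requiring the disk path to already exist there.
# 'guest1' and 'desthost' are example names.
virsh migrate --live --copy-storage-all guest1 qemu+ssh://desthost/system
```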

Comment 10 jiyan 2018-11-16 08:26:30 UTC
Hi Jiri

==> A.19.5. Guest Virtual Machine Booting Stalls with Error: No boot device

In the "Investigation" part, the following "<emphasis role="bold">" source code is displayed literally instead of emphasizing the 'target' element:
<emphasis role="bold"><target dev='hda' bus='ide'/></emphasis>

Comment 12 jiyan 2018-11-28 01:41:13 UTC
Hi Jirka

These 2 changes solve the problems in "comment 9" and "comment 10".

And I will do some further review this week and update here if there are any further improvements.

Comment 13 jiyan 2018-11-28 07:35:36 UTC
Hi Jirka

There is another issue.

Q1:
In the 19.2 part:
A.19.2. The URI Failed to Connect to the Hypervisor
        A.19.2.1. Cannot read CA certificate
        A.19.2.2. Other Connectivity Errors
              (e.g.) Unable to connect to server at server:port: Connection refused

In 19.15 part:
⁠A.19.15. unable to connect to server at 'host:16509': Connection refused ... error: failed to connect to the hypervisor

Actually, I think the issue in "19.15" is exactly the example in "19.2.2", so it may be much better to merge "19.15" into "19.2".

Q2:
Besides, there are some other failures related to "Connectivity Errors".
Eg-1: Authentication failed
      # virsh -c qemu+tcp://localhost/system
      error: failed to connect to the hypervisor
      error: authentication failed: authentication failed

Solution: Configuration error (SASL authentication is not configured).
Modify the following conf files, restart the libvirtd daemon, and set a SASL username/password:

      # cat /etc/libvirt/libvirtd.conf
      auth_tcp = "sasl"
      # cat /etc/sasl2/libvirt.conf
      mech_list: digest-md5
      sasldb_path: /etc/libvirt/passwd.db
      # yum install cyrus-sasl-md5 -y
      # systemctl restart libvirtd
      # saslpasswd2 -a libvirt 1

Eg-2: Permission denied for non-root user
      $ virsh -c qemu:///system list
      error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
      error: failed to connect to the hypervisor

Solution: Configuration error (restrictions on the permissions and group ownership of the UNIX socket).
Modify the conf as follows and restart the libvirtd daemon:

      # cat /etc/libvirt/libvirtd.conf |grep unix_sock
      #unix_sock_group = "libvirt"
      #unix_sock_ro_perms = "0777"
      #unix_sock_rw_perms = "0770"

      # systemctl restart libvirtd
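A minimal sketch of the uncommented settings (these values are the defaults shown above; the 'libvirt' group and 'testuser' account are illustrative and assumed to exist):

```shell
# In /etc/libvirt/libvirtd.conf, uncomment these so group members get
# read-write access to the socket:
unix_sock_group = "libvirt"
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"

# Then add the user to the group and restart the daemon (names are examples):
# usermod -aG libvirt testuser
# systemctl restart libvirtd
```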

Would you like to take them into consideration and update them to the doc?

Comment 15 jiyan 2018-12-03 03:35:13 UTC
⁠A.19.4. internal error cannot find character device (null)

As for this issue, I indeed cannot reproduce it.

# virsh domstate test1
shut off

# virsh dumpxml test1 |grep "<serial" -A10
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>

# virsh start test1
Domain test1 started


# virsh console test1
Connected to domain test1
Escape character is ^]

Red Hat Enterprise Linux Server 7.6 (Maipo)
Kernel 3.10.0-957.el7.x86_64 on an x86_64

localhost login: root
Password: 
Last login: Mon Dec  3 11:22:10 on ttyS0

** Delete 'console' related info in "/etc/default/grub" **

# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_CMDLINE_LINUX="reboot=pci biosdevname=0 crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

# grub2-mkconfig -o /boot/grub2/grub.cfg

[root@localhost ~]# reboot
dracut Warning: Killing all remaining processes
Rebooting.
[  139.282007] Restarting system.

Red Hat Enterprise Linux Server 7.6 (Maipo)
Kernel 3.10.0-957.el7.x86_64 on an x86_64

localhost login: root
Password: 
Last failed login: Mon Dec  3 11:32:10 CST 2018 on ttyS0
There was 1 failed login attempt since the last successful login.
Last login: Mon Dec  3 11:29:47 on ttyS0

# cat /etc/default/grub 
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_CMDLINE_LINUX="reboot=pci biosdevname=0 crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

# cat /proc/cmdline 
BOOT_IMAGE=/vmlinuz-3.10.0-957.el7.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0,115200 reboot=pci biosdevname=0 crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet LANG=en_US.UTF-8


It seems that there is no need to configure "console" info on the guest kernel command line. Could you please help check this issue? Thx. :D
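The check above can be scripted; a minimal sketch using a sample cmdline string (on a real guest you would read /proc/cmdline instead — the variable value here just mirrors the output shown above):

```shell
# Verify whether a serial console is requested on the kernel command line.
cmdline='BOOT_IMAGE=/vmlinuz-3.10.0-957.el7.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0,115200 rhgb quiet'
if printf '%s\n' "$cmdline" | grep -q 'console=ttyS'; then
    echo "serial console configured"
else
    echo "serial console missing"
fi
```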

Comment 16 jiyan 2018-12-03 03:46:11 UTC
Attempted to reproduce the issue in the comment above on the following components.
Version:
kernel-3.10.0-957.el7.x86_64
qemu-kvm-rhev-2.12.0-19.el7_6.2.x86_64
libvirt-4.5.0-10.virtcov.el7_6.3.x86_64

A.19.5. Guest Virtual Machine Booting Stalls with Error: No boot device

As for this issue, it cannot be reproduced either.

Version:
kernel-3.10.0-957.el7.x86_64
qemu-kvm-rhev-2.12.0-19.el7_6.2.x86_64
libvirt-4.5.0-10.virtcov.el7_6.3.x86_64

Steps:
1. Install a VM named 'd' and choose 'virtio' bus for the virtual disk. **
# virsh domstate d
shut off

# virsh dumpxml d |grep "<disk" -A8
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/d.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

2. Create the VM named 'diskimport' through the image above. **

# virt-install --connect qemu:///system --ram 2048 -n diskimport --os-type=linux --os-variant=rhel7 --disk  path=/var/lib/libvirt/images/d.qcow2,device=disk,format=qcow2 --vcpus=2 --graphics spice --noautoconsole --import --check path_in_use=off
WARNING  Disk /var/lib/libvirt/images/d.qcow2 is already in use by other guests ['d'].

Starting install...
Domain creation completed.

# virsh domstate diskimport
running

# ps -ef |grep diskimport |sed 's/-device/\n-device/g'
qemu     29624     1  8 22:39 ?        00:00:16 /usr/libexec/qemu-kvm -name guest=diskimport,debug-threads=on 
...
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/var/lib/libvirt/images/d.qcow2,format=qcow2,**if=none**,id=drive-virtio-disk0 
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 

# virsh dumpxml diskimport |grep "<disk" -A7
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/d.qcow2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>

So, referring to the steps above, I cannot reproduce this issue.
But I found another related bug:
Bug 1127151 - rhel 7 guest installed on virtio-blk device cannot boot from scsi-hd (won't fix)

Comment 17 jiyan 2018-12-03 06:19:14 UTC
A.19.11. Guest is Unable to Start with Error: warning: could not open /dev/net/tun

As for this issue, it seems that it only occurs before libvirt-2.0.0-6.el7.x86_64.

I tried the following steps, and it works well.

Version:
libvirt-4.5.0-10.virtcov.el7_6.3.x86_64
qemu-kvm-rhev-2.12.0-19.el7_6.2.x86_64
kernel-3.10.0-957.el7.x86_64

Steps:
1. Prepare the following script
# cat /etc/qemu-ifup 
#!/bin/sh 
# script to bring up the tun device in QEMU in bridged mode 
# first parameter is name of tap device (e.g. tap0) 
ETH0IPADDR=10.73.72.148 
GATEWAY=10.73.75.254
BROADCAST=10.73.75.255

/sbin/ifconfig eno24 down
/sbin/ifconfig eno24 0.0.0.0 promisc up 
/sbin/ifconfig $1 0.0.0.0 promisc up 

/usr/sbin/brctl addbr br0 
/usr/sbin/brctl addif br0 eno24
/usr/sbin/brctl addif br0 $1 

/usr/sbin/brctl stp br0 off 

/sbin/ifconfig br0 $ETH0IPADDR netmask 255.255.252.0 broadcast $BROADCAST 
/sbin/route add default gw $GATEWAY
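Since brctl and ifconfig are deprecated on RHEL 7 in favor of iproute2, here is a hypothetical equivalent of the script above (same addresses, the /22 mask matching 255.255.252.0, and the same example interface names). It is written to a temp file and syntax-checked only, since actually running it needs root and real interfaces:

```shell
# Write an iproute2-based variant of /etc/qemu-ifup to a temp file.
# Interface names and addresses are the examples from the script above.
cat > /tmp/qemu-ifup.iproute2 <<'EOF'
#!/bin/sh
# $1 is the tap device name passed by QEMU (e.g. tap0)
ETH0IPADDR=10.73.72.148
GATEWAY=10.73.75.254

ip link add name br0 type bridge
ip link set eno24 up
ip link set eno24 master br0
ip link set "$1" up
ip link set "$1" master br0
ip addr add "$ETH0IPADDR/22" dev br0
ip link set br0 up
ip route add default via "$GATEWAY"
EOF

# Parse-check the script without executing it.
sh -n /tmp/qemu-ifup.iproute2 && echo "syntax OK"
```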

2. Configure VM as following and start VM
# virsh domstate test1
shut off

# virsh dumpxml test1 |grep "<interface" -A10
    <interface type='ethernet'>
      <mac address='52:54:00:30:84:6d'/>
      <script path='/etc/qemu-ifup'/>
      <model type='rtl8139'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

# virsh start test1
Domain test1 started

If so, please note in the doc the specific libvirt versions affected by this issue.

Comment 19 jiyan 2018-12-04 06:44:50 UTC
** ⁠A.19.3. Guest Starting Fails with Error: monitor socket did not show up

	As the section has pointed out, this situation only happens on libvirt versions prior to 0.9.5, so maybe it is not necessary, either.

** ⁠A.19.6. Virtual network default has not been started
	As for this subject, I found the following link, which points out that this issue happens on RHEL-6.
	https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/app_virt_net_not_starting
	
	Besides, in RHEL-7, in this conf file, the following 3 lines are commented out with "#".
	# cat /etc/redhat-release 
	Red Hat Enterprise Linux Server release 7.6 (Maipo)

	# cat /etc/dnsmasq.conf 
	#bind-interfaces
	#interface=
	#listen-address=
	
	So maybe this issue applies to RHEL-6 only; it seems I cannot hit this issue in RHEL-7 after trying several times.

** A.19.7. PXE Boot (or DHCP) on Guest Failed
	There is another method that can make the virtual interface of the VM acquire an IP address: when the 'ifcfg-interface' conf file is right, running the 'dhclient' cmd in the VM will work. Do you think this method should be added?
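A sketch of that method (the interface name 'eth0' and file contents are illustrative, following the doc's `# cat` convention):

```shell
# Inside the guest: ensure the interface is configured for DHCP...
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes

# ...then request a lease manually:
# dhclient eth0
```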



Hi Jiri, those are all my comments after finishing the review of this doc.
You can modify the 3 items above in the doc according to your own judgement. Thanks as always for your timely replies. :-)

Comment 21 jiyan 2018-12-12 03:03:37 UTC
Hi Jiri
It is okay for me now.
BTW, another mistake:
Since you have deleted some titles, such as A.19.5 (the original A.19.6 should now be A.19.5), please pay attention to the numbering order of the titles.

