Bug 1591732

Summary: libvirt 4.3 virDevMapperGetTargets fails
Product: [Community] Virtualization Tools
Component: libvirt
Version: unspecified
Hardware: x86_64
OS: Linux
Status: CLOSED NEXTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Krastyu <kr_karakolev>
Assignee: Michal Privoznik <mprivozn>
CC: berrange, crobinso, gscrivan, jajacomp, kr_karakolev, libvirt-maint, mprivozn, rbalakri, redhat
Keywords: Upstream
Fixed In Version: libvirt-4.6.0
Type: Bug
Last Closed: 2018-07-13 14:22:13 UTC

Description Krastyu 2018-06-15 11:58:25 UTC
Description of problem: virt-manager cannot find my physical disks.


Actual results:
error: Unable to get devmapper targets for /dev/disk/by-uuid/04BAEE282ED5E6B1: No such file or directory



Comment 1 Cole Robinson 2018-06-19 18:11:16 UTC
Please provide 'virt-manager --debug' output when reproducing

Comment 2 Krastyu 2018-06-19 18:23:13 UTC
(In reply to Cole Robinson from comment #1)
> Please provide 'virt-manager --debug' output when reproducing

virt-manager --debug
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (cli:265) Launched with command line: /usr/share/virt-manager/virt-manager --debug
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (virt-manager:185) virt-manager version: 1.5.1
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (virt-manager:186) virtManager import: <module 'virtManager' from '/usr/share/virt-manager/virtManager/__init__.pyc'>
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (virt-manager:216) PyGObject version: 3.24.1
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (virt-manager:220) GTK version: 3.22.29

(virt-manager:25149): GLib-GIO-CRITICAL **: g_dbus_proxy_new_sync: assertion 'G_IS_DBUS_CONNECTION (connection)' failed
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (engine:500) libguestfs inspection support: False
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (systray:156) Showing systray: False
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (engine:1038) processing cli command uri= show_window= domain=
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (engine:1040) No cli action requested, launching default window

(virt-manager:25149): dconf-WARNING **: failed to commit changes to dconf: The connection is closed

(virt-manager:25149): dconf-WARNING **: failed to commit changes to dconf: The connection is closed
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (manager:207) Showing manager
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (engine:405) window counter incremented to 1
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (engine:162) Loading stored URIs:
qemu:///system
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (engine:141) Initial gtkapplication activated
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:602) conn=qemu:///system changed to state=Connecting
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1019) Scheduling background open thread for qemu:///system
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1071) libvirt version=4003000
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1073) daemon version=4003000
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1074) conn version=2012000
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1076) qemu:///system capabilities:
<capabilities>

  <host>
    <uuid>c02fb620-c021-11d3-9c18-3497f6db726f</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Haswell-noTSX</model>
      <vendor>Intel</vendor>
      <microcode version="56"/>
      <topology sockets="1" cores="8" threads="2"/>
      <feature name="vme"/>
      <feature name="ds"/>
      <feature name="acpi"/>
      <feature name="ss"/>
      <feature name="ht"/>
      <feature name="tm"/>
      <feature name="pbe"/>
      <feature name="dtes64"/>
      <feature name="monitor"/>
      <feature name="ds_cpl"/>
      <feature name="vmx"/>
      <feature name="est"/>
      <feature name="tm2"/>
      <feature name="xtpr"/>
      <feature name="pdcm"/>
      <feature name="dca"/>
      <feature name="osxsave"/>
      <feature name="f16c"/>
      <feature name="rdrand"/>
      <feature name="arat"/>
      <feature name="tsc_adjust"/>
      <feature name="cmt"/>
      <feature name="xsaveopt"/>
      <feature name="pdpe1gb"/>
      <feature name="abm"/>
      <feature name="invtsc"/>
      <pages unit="KiB" size="4"/>
    </cpu>
    <power_management/>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num="1">
        <cell id="0">
          <memory unit="KiB">16367640</memory>
          <cpus num="16">
            <cpu id="0" socket_id="0" core_id="0" siblings="0"/>
            <cpu id="1" socket_id="0" core_id="0" siblings="1"/>
            <cpu id="2" socket_id="0" core_id="1" siblings="2"/>
            <cpu id="3" socket_id="0" core_id="1" siblings="3"/>
            <cpu id="4" socket_id="0" core_id="2" siblings="4"/>
            <cpu id="5" socket_id="0" core_id="2" siblings="5"/>
            <cpu id="6" socket_id="0" core_id="3" siblings="6"/>
            <cpu id="7" socket_id="0" core_id="3" siblings="7"/>
            <cpu id="8" socket_id="0" core_id="4" siblings="8"/>
            <cpu id="9" socket_id="0" core_id="4" siblings="9"/>
            <cpu id="10" socket_id="0" core_id="5" siblings="10"/>
            <cpu id="11" socket_id="0" core_id="5" siblings="11"/>
            <cpu id="12" socket_id="0" core_id="6" siblings="12"/>
            <cpu id="13" socket_id="0" core_id="6" siblings="13"/>
            <cpu id="14" socket_id="0" core_id="7" siblings="14"/>
            <cpu id="15" socket_id="0" core_id="7" siblings="15"/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <cache>
      <bank id="0" level="3" type="both" size="20" unit="MiB" cpus="0-15"/>
    </cache>
    <secmodel>
      <model>none</model>
      <doi>0</doi>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
      <baselabel type="kvm">+77:+77</baselabel>
      <baselabel type="qemu">+77:+77</baselabel>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name="armv7l">
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu-system-arm</emulator>
      <machine maxCpus="1">integratorcp</machine>
      <machine maxCpus="2">nuri</machine>
      <machine maxCpus="1">mps2-an511</machine>
      <machine maxCpus="1">verdex</machine>
      <machine maxCpus="1">mps2-an505</machine>
      <machine maxCpus="1">ast2500-evb</machine>
      <machine maxCpus="2">smdkc210</machine>
      <machine maxCpus="1">collie</machine>
      <machine maxCpus="1">imx25-pdk</machine>
      <machine maxCpus="1">spitz</machine>
      <machine maxCpus="4">realview-pbx-a9</machine>
      <machine maxCpus="1">realview-eb</machine>
      <machine maxCpus="1">realview-pb-a8</machine>
      <machine maxCpus="1">versatilepb</machine>
      <machine maxCpus="1">emcraft-sf2</machine>
      <machine maxCpus="255">virt-2.9</machine>
      <machine maxCpus="1">musicpal</machine>
      <machine maxCpus="1">z2</machine>
      <machine maxCpus="1">akita</machine>
      <machine maxCpus="255">virt-2.7</machine>
      <machine maxCpus="1">kzm</machine>
      <machine maxCpus="255">virt-2.8</machine>
      <machine maxCpus="4">realview-eb-mpcore</machine>
      <machine maxCpus="2">mcimx7d-sabre</machine>
      <machine maxCpus="1">sx1</machine>
      <machine maxCpus="1">sx1-v1</machine>
      <machine maxCpus="255">virt-2.6</machine>
      <machine maxCpus="1">cubieboard</machine>
      <machine maxCpus="4">highbank</machine>
      <machine maxCpus="4">raspi2</machine>
      <machine maxCpus="1">netduino2</machine>
      <machine maxCpus="1">terrier</machine>
      <machine maxCpus="1">n810</machine>
      <machine maxCpus="1">mainstone</machine>
      <machine maxCpus="1">palmetto-bmc</machine>
      <machine maxCpus="4">sabrelite</machine>
      <machine maxCpus="4">midway</machine>
      <machine maxCpus="1">romulus-bmc</machine>
      <machine maxCpus="1">cheetah</machine>
      <machine maxCpus="1">tosa</machine>
      <machine maxCpus="1">borzoi</machine>
      <machine maxCpus="1">versatileab</machine>
      <machine maxCpus="1">lm3s6965evb</machine>
      <machine maxCpus="1">n800</machine>
      <machine maxCpus="255">virt-2.10</machine>
      <machine maxCpus="255">virt-2.11</machine>
      <machine maxCpus="1">connex</machine>
      <machine maxCpus="255">virt-2.12</machine>
      <machine canonical="virt-2.12" maxCpus="255">virt</machine>
      <machine maxCpus="1">xilinx-zynq-a9</machine>
      <machine maxCpus="1">mps2-an385</machine>
      <machine maxCpus="4">vexpress-a9</machine>
      <machine maxCpus="4">vexpress-a15</machine>
      <machine maxCpus="1">canon-a1100</machine>
      <machine maxCpus="1">lm3s811evb</machine>
      <domain type="qemu">
        <emulator>/usr/bin/qemu-system-arm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default="on" toggle="no"/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name="i686">
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine maxCpus="255">pc-i440fx-2.12</machine>
      <machine canonical="pc-i440fx-2.12" maxCpus="255">pc</machine>
      <machine maxCpus="1">isapc</machine>
      <machine maxCpus="255">pc-1.1</machine>
      <machine maxCpus="255">pc-1.2</machine>
      <machine maxCpus="255">pc-1.3</machine>
      <machine maxCpus="255">pc-i440fx-2.8</machine>
      <machine maxCpus="255">pc-1.0</machine>
      <machine maxCpus="255">pc-i440fx-2.9</machine>
      <machine maxCpus="255">pc-i440fx-2.6</machine>
      <machine maxCpus="255">pc-i440fx-2.7</machine>
      <machine maxCpus="255">pc-i440fx-2.3</machine>
      <machine maxCpus="255">pc-i440fx-2.4</machine>
      <machine maxCpus="255">pc-i440fx-2.5</machine>
      <machine maxCpus="255">pc-i440fx-2.1</machine>
      <machine maxCpus="255">pc-i440fx-2.2</machine>
      <machine maxCpus="255">pc-i440fx-2.0</machine>
      <machine maxCpus="288">pc-q35-2.11</machine>
      <machine maxCpus="288">pc-q35-2.12</machine>
      <machine canonical="pc-q35-2.12" maxCpus="288">q35</machine>
      <machine maxCpus="288">pc-q35-2.10</machine>
      <machine maxCpus="255">pc-i440fx-1.7</machine>
      <machine maxCpus="288">pc-q35-2.9</machine>
      <machine maxCpus="255">pc-0.15</machine>
      <machine maxCpus="255">pc-i440fx-1.5</machine>
      <machine maxCpus="255">pc-q35-2.7</machine>
      <machine maxCpus="255">pc-i440fx-1.6</machine>
      <machine maxCpus="255">pc-i440fx-2.11</machine>
      <machine maxCpus="288">pc-q35-2.8</machine>
      <machine maxCpus="255">pc-0.13</machine>
      <machine maxCpus="255">pc-0.14</machine>
      <machine maxCpus="255">pc-q35-2.4</machine>
      <machine maxCpus="255">pc-q35-2.5</machine>
      <machine maxCpus="255">pc-q35-2.6</machine>
      <machine maxCpus="255">pc-i440fx-1.4</machine>
      <machine maxCpus="255">pc-i440fx-2.10</machine>
      <machine maxCpus="255">pc-0.11</machine>
      <machine maxCpus="255">pc-0.12</machine>
      <machine maxCpus="255">pc-0.10</machine>
      <domain type="qemu">
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
      </domain>
      <domain type="kvm">
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default="on" toggle="no"/>
      <acpi default="on" toggle="yes"/>
      <apic default="on" toggle="no"/>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name="sparc">
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu-system-sparc</emulator>
      <machine maxCpus="1">SS-5</machine>
      <machine maxCpus="4">SS-10</machine>
      <machine maxCpus="1">LX</machine>
      <machine maxCpus="4">SS-20</machine>
      <machine maxCpus="1">SPARCClassic</machine>
      <machine maxCpus="4">SS-600MP</machine>
      <machine maxCpus="1">Voyager</machine>
      <machine maxCpus="1">SPARCbook</machine>
      <machine maxCpus="1">SS-4</machine>
      <machine maxCpus="1">leon3_generic</machine>
      <domain type="qemu">
        <emulator>/usr/bin/qemu-system-sparc</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <disksnapshot default="on" toggle="no"/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name="x86_64">
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine maxCpus="255">pc-i440fx-2.12</machine>
      <machine canonical="pc-i440fx-2.12" maxCpus="255">pc</machine>
      <machine maxCpus="1">isapc</machine>
      <machine maxCpus="255">pc-1.1</machine>
      <machine maxCpus="255">pc-1.2</machine>
      <machine maxCpus="255">pc-1.3</machine>
      <machine maxCpus="255">pc-i440fx-2.8</machine>
      <machine maxCpus="255">pc-1.0</machine>
      <machine maxCpus="255">pc-i440fx-2.9</machine>
      <machine maxCpus="255">pc-i440fx-2.6</machine>
      <machine maxCpus="255">pc-i440fx-2.7</machine>
      <machine maxCpus="255">pc-i440fx-2.3</machine>
      <machine maxCpus="255">pc-i440fx-2.4</machine>
      <machine maxCpus="255">pc-i440fx-2.5</machine>
      <machine maxCpus="255">pc-i440fx-2.1</machine>
      <machine maxCpus="255">pc-i440fx-2.2</machine>
      <machine maxCpus="255">pc-i440fx-2.0</machine>
      <machine maxCpus="288">pc-q35-2.11</machine>
      <machine maxCpus="288">pc-q35-2.12</machine>
      <machine canonical="pc-q35-2.12" maxCpus="288">q35</machine>
      <machine maxCpus="288">pc-q35-2.10</machine>
      <machine maxCpus="255">pc-i440fx-1.7</machine>
      <machine maxCpus="288">pc-q35-2.9</machine>
      <machine maxCpus="255">pc-0.15</machine>
      <machine maxCpus="255">pc-i440fx-1.5</machine>
      <machine maxCpus="255">pc-q35-2.7</machine>
      <machine maxCpus="255">pc-i440fx-1.6</machine>
      <machine maxCpus="255">pc-i440fx-2.11</machine>
      <machine maxCpus="288">pc-q35-2.8</machine>
      <machine maxCpus="255">pc-0.13</machine>
      <machine maxCpus="255">pc-0.14</machine>
      <machine maxCpus="255">pc-q35-2.4</machine>
      <machine maxCpus="255">pc-q35-2.5</machine>
      <machine maxCpus="255">pc-q35-2.6</machine>
      <machine maxCpus="255">pc-i440fx-1.4</machine>
      <machine maxCpus="255">pc-i440fx-2.10</machine>
      <machine maxCpus="255">pc-0.11</machine>
      <machine maxCpus="255">pc-0.12</machine>
      <machine maxCpus="255">pc-0.10</machine>
      <domain type="qemu">
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
      </domain>
      <domain type="kvm">
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default="on" toggle="no"/>
      <acpi default="on" toggle="yes"/>
      <apic default="on" toggle="no"/>
    </features>
  </guest>

</capabilities>

[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:876) Using domain events
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:916) Error registering network events: this function is not supported by the connection driver: virConnectNetworkEventRegisterAny
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:933) Using storage pool events
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:952) Using node device events
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:495) Connection doesn't seem to support network APIs. Skipping all network polling.
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1198) interface=eno1 status=Active added
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=tux
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1198) interface=wlp0s20u9 status=Inactive added
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1198) interface=lo status=Inactive added
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (connection:1198) domain=WIN10 status=Shutoff added
Entity: line 2: parser error : Input is not proper UTF-8, indicate encoding !
Bytes: 0xCD 0xEE 0xE2 0x20
  <name>.��� ������ ����.un~</name>
         ^
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (xmlbuilder:733) Error parsing xml=
<volume type='file'>
  <name>.��� ������ ����.un~</name>
  <key>/home/tux/.��� ������ ����.un~</key>
  <source>
  </source>
  <capacity unit='bytes'>523</capacity>
  <allocation unit='bytes'>4096</allocation>
  <physical unit='bytes'>523</physical>
  <target>
    <path>/home/tux/.��� ������ ����.un~</path>
    <format type='raw'/>
    <permissions>
      <mode>0777</mode>
      <owner>1000</owner>
      <group>100</group>
    </permissions>
    <timestamps>
      <atime>1479649349.100008552</atime>
      <mtime>1462028615.118988610</mtime>
      <ctime>1509884430.248049729</ctime>
    </timestamps>
  </target>
</volume>

[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (libvirtobject:201) Error initializing libvirt state for <vmmStorageVolume name=.��� ������ ����.un~ id=0x7fc4cc0691e0>
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 198, in init_libvirt_state
    self._init_libvirt_state()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 62, in _init_libvirt_state
    self.ensure_latest_xml()
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 300, in ensure_latest_xml
    self.__force_refresh_xml(nosignal=nosignal)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 317, in __force_refresh_xml
    parsexml=active_xml)
  File "/usr/share/virt-manager/virtinst/storage.py", line 601, in __init__
    _StorageObject.__init__(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtinst/xmlbuilder.py", line 842, in __init__
    relative_object_xpath)
  File "/usr/share/virt-manager/virtinst/xmlbuilder.py", line 713, in __init__
    self._parse(parsexml, parentxmlstate)
  File "/usr/share/virt-manager/virtinst/xmlbuilder.py", line 731, in _parse
    doc = libxml2.parseDoc(parsexml)
  File "/usr/lib64/python2.7/site-packages/libxml2.py", line 1327, in parseDoc
    if ret is None:raise parserError('xmlParseDoc() failed')
parserError: xmlParseDoc() failed
Entity: line 2: parser error : Input is not proper UTF-8, indicate encoding !
Bytes: 0xCD 0xEE 0xE2 0x20
  <name>��� ������ ����~</name>
        ^
[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (xmlbuilder:733) Error parsing xml=
<volume type='file'>
  <name>��� ������ ����~</name>
  <key>/home/tux/��� ������ ����~</key>
  <source>
  </source>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <target>
    <path>/home/tux/��� ������ ����~</path>
    <format type='raw'/>
    <permissions>
      <mode>0777</mode>
      <owner>1000</owner>
      <group>100</group>
    </permissions>
    <timestamps>
      <atime>1462028584.138991476</atime>
      <mtime>1462028582.671991612</mtime>
      <ctime>1509884430.475049732</ctime>
    </timestamps>
  </target>
</volume>

[Tue, 19 Jun 2018 21:22:22 virt-manager 25149] DEBUG (libvirtobject:201) Error initializing libvirt state for <vmmStorageVolume name=��� ������ ����~ id=0x7fc4cc069780>
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 198, in init_libvirt_state
    self._init_libvirt_state()
  File "/usr/share/virt-manager/virtManager/storagepool.py", line 62, in _init_libvirt_state
    self.ensure_latest_xml()
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 300, in ensure_latest_xml
    self.__force_refresh_xml(nosignal=nosignal)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 317, in __force_refresh_xml
    parsexml=active_xml)
  File "/usr/share/virt-manager/virtinst/storage.py", line 601, in __init__
    _StorageObject.__init__(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtinst/xmlbuilder.py", line 842, in __init__
    relative_object_xpath)
  File "/usr/share/virt-manager/virtinst/xmlbuilder.py", line 713, in __init__
    self._parse(parsexml, parentxmlstate)
  File "/usr/share/virt-manager/virtinst/xmlbuilder.py", line 731, in _parse
    doc = libxml2.parseDoc(parsexml)
  File "/usr/lib64/python2.7/site-packages/libxml2.py", line 1327, in parseDoc
    if ret is None:raise parserError('xmlParseDoc() failed')
parserError: xmlParseDoc() failed
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=tux status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=tmp
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=tmp status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=Windows_8.1_AIO_20in1_x64_Pre-Activated
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=Windows_8.1_AIO_20in1_x64_Pre-Activated status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=Microsoft.Windows.XP.Professional.SP3.x86.Integrated.January.2015 status=Inactive added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=default
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=Downloads
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=default status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=Downloads status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=Desktop
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=Desktop status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=WINVM
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=WINVM status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=nvram
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=nvram status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=desk
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=desk status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:831) storage pool refresh event: pool=windows_home
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:1198) pool=windows_home status=Active added
[Tue, 19 Jun 2018 21:22:23 virt-manager 25149] DEBUG (connection:602) conn=qemu:///system changed to state=Active
[Tue, 19 Jun 2018 21:22:25 virt-manager 25149] DEBUG (serialcon:37) Using VTE API 2.91

(virt-manager:25149): dconf-WARNING **: failed to commit changes to dconf: The connection is closed
[Tue, 19 Jun 2018 21:22:25 virt-manager 25149] DEBUG (details:646) Showing VM details: <vmmDomain name=WIN10 id=0x7fc4ccc69500>
[Tue, 19 Jun 2018 21:22:25 virt-manager 25149] DEBUG (engine:405) window counter incremented to 2

(virt-manager:25149): dconf-WARNING **: failed to commit changes to dconf: The connection is closed
Entity: line 2: parser error : Input is not proper UTF-8, indicate encoding !
Bytes: 0xCD 0xEE 0xE2 0x20
  <name>.��� ������ ����.un~</name>
         ^
[Tue, 19 Jun 2018 21:22:28 virt-manager 25149] DEBUG (xmlbuilder:733) Error parsing xml=
<volume type='file'>
  <name>.��� ������ ����.un~</name>
  <key>/home/tux/.��� ������ ����.un~</key>
  <source>
  </source>
  <capacity unit='bytes'>523</capacity>
  <allocation unit='bytes'>4096</allocation>
  <physical unit='bytes'>523</physical>
  <target>
    <path>/home/tux/.��� ������ ����.un~</path>
    <format type='raw'/>
    <permissions>
      <mode>0777</mode>
      <owner>1000</owner>
      <group>100</group>
    </permissions>
    <timestamps>
      <atime>1479649349.100008552</atime>
      <mtime>1462028615.118988610</mtime>
      <ctime>1509884430.248049729</ctime>
    </timestamps>
  </target>
</volume>

[Tue, 19 Jun 2018 21:22:28 virt-manager 25149] DEBUG (connection:590) Error looking up volume from path=/dev/disk/by-uuid/04BAEE282ED5E6B1: xmlParseDoc() failed
Entity: line 2: parser error : Input is not proper UTF-8, indicate encoding !
Bytes: 0xCD 0xEE 0xE2 0x20
  <name>��� ������ ����~</name>
        ^
[Tue, 19 Jun 2018 21:22:28 virt-manager 25149] DEBUG (xmlbuilder:733) Error parsing xml=
<volume type='file'>
  <name>��� ������ ����~</name>
  <key>/home/tux/��� ������ ����~</key>
  <source>
  </source>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <target>
    <path>/home/tux/��� ������ ����~</path>
    <format type='raw'/>
    <permissions>
      <mode>0777</mode>
      <owner>1000</owner>
      <group>100</group>
    </permissions>
    <timestamps>
      <atime>1462028584.138991476</atime>
      <mtime>1462028582.671991612</mtime>
      <ctime>1509884430.475049732</ctime>
    </timestamps>
  </target>
</volume>

[Tue, 19 Jun 2018 21:22:28 virt-manager 25149] DEBUG (connection:590) Error looking up volume from path=/dev/disk/by-uuid/04BAEE282ED5E6B1: xmlParseDoc() failed
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (engine:1164) Starting vm 'WIN10'
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (connection:701) There are 1 node devices with vendorId: 0x090c, productId: 0x1000
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (connection:701) There are 1 node devices with vendorId: 0x2357, productId: 0x0107
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (connection:701) There are 1 node devices with vendorId: 0x04d9, productId: 0xa073
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (connection:701) There are 1 node devices with vendorId: 0x1c4f, productId: 0x0002
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (connection:847) node device lifecycle event: device=net_macvtap0_52_54_00_8a_15_bd event=0 reason=0
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (connection:847) node device lifecycle event: device=net_macvtap0_52_54_00_8a_15_bd event=1 reason=0
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (connection:1159) nodedev=net_macvtap0_52_54_00_8a_15_bd removed
[Tue, 19 Jun 2018 21:22:31 virt-manager 25149] DEBUG (error:99) error dialog message:
summary=Error starting domain: Unable to get devmapper targets for /dev/disk/by-uuid/04BAEE282ED5E6B1: No such device
details=Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 125, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 82, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1508, in startup
    self._backend.create()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: Unable to get devmapper targets for /dev/disk/by-uuid/04BAEE282ED5E6B1: No such device

[Tue, 19 Jun 2018 21:22:37 virt-manager 25149] DEBUG (details:682) Closing VM details: <vmmDomain name=WIN10 id=0x7fc4ccc69500>
[Tue, 19 Jun 2018 21:22:37 virt-manager 25149] DEBUG (engine:409) window counter decremented to 1
[Tue, 19 Jun 2018 21:22:39 virt-manager 25149] DEBUG (manager:218) Closing manager
[Tue, 19 Jun 2018 21:22:39 virt-manager 25149] DEBUG (engine:409) window counter decremented to 0
[Tue, 19 Jun 2018 21:22:39 virt-manager 25149] DEBUG (engine:471) No windows found, requesting app exit

(virt-manager:25149): dconf-WARNING **: failed to commit changes to dconf: The connection is closed

(virt-manager:25149): dconf-WARNING **: failed to commit changes to dconf: The connection is closed

(virt-manager:25149): dconf-WARNING **: failed to commit changes to dconf: The connection is closed
[Tue, 19 Jun 2018 21:22:39 virt-manager 25149] DEBUG (connection:968) conn.close() uri=qemu:///system
[Tue, 19 Jun 2018 21:22:39 virt-manager 25149] DEBUG (connection:602) conn=qemu:///system changed to state=Disconnected
[Tue, 19 Jun 2018 21:22:39 virt-manager 25149] DEBUG (engine:495) Exiting app normally.

Comment 3 Krastyu 2018-06-19 18:28:18 UTC
blkid 
/dev/sdc1: UUID="be9b8595-f3c1-4b7d-b1d7-bbb6153a2714" TYPE="ext4" PARTUUID="7a2deb37-01"
/dev/sda1: UUID="051B7B403515BAA2" TYPE="ntfs" PARTUUID="e920de7e-01"
/dev/sda3: UUID="5d137a53-d182-4fb6-9c71-acd817092749" TYPE="ext4" PARTUUID="e920de7e-03"
/dev/sdb1: UUID="de2bde00-49dd-4517-9e73-118bf4c3b1f0" TYPE="swap" PARTUUID="5bb72942-01"
/dev/sdb2: UUID="77a975ce-53aa-46f5-8ecd-14076bbc2006" TYPE="ext4" PARTUUID="5bb72942-02"
/dev/sdb3: LABEL="WINHOME" UUID="04BAEE282ED5E6B1" TYPE="ntfs" PARTUUID="5bb72942-03"
/dev/sdd1: LABEL="GSP1RMCULFRER_BG_DVD" UUID="D6566F48566F2907" TYPE="ntfs" PARTUUID="0005552c-01"

Comment 4 Cole Robinson 2018-06-19 19:02:51 UTC
Can you start the WIN10 VM with 'sudo virsh start WIN10'?
What does 'ls -l /dev/disk/by-uuid' show?

Comment 5 Krastyu 2018-06-19 19:09:34 UTC
ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 10 Jun 16 18:19 04BAEE282ED5E6B1 -> ../../sdb3
lrwxrwxrwx 1 root root 10 Jun 16 21:19 051B7B403515BAA2 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 16 18:19 5d137a53-d182-4fb6-9c71-acd817092749 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 16 18:19 77a975ce-53aa-46f5-8ecd-14076bbc2006 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Jun 16 18:19 be9b8595-f3c1-4b7d-b1d7-bbb6153a2714 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jun 19 21:17 D6566F48566F2907 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Jun 16 18:19 de2bde00-49dd-4517-9e73-118bf4c3b1f0 -> ../../sdb1


No, I can't start it.
 virsh start WIN10
error: Failed to start domain WIN10
error: Unable to get devmapper targets for /dev/disk/by-uuid/04BAEE282ED5E6B1: No such file or directory

Comment 6 Krastyu 2018-06-19 19:11:17 UTC
Also, this setup was working; then suddenly, while I was rendering my 3ds Max scene, my PC powered off. When I try to start my virtual machine, I get stuck with this error.

Comment 7 Cole Robinson 2018-06-19 19:15:52 UTC
Can you manually mount /dev/sdb3? Maybe something's wrong with that drive?
What's the XML for that VM? ('sudo virsh dumpxml WIN10')

Comment 8 Krastyu 2018-06-19 19:29:15 UTC
That HDD is mounted right now. I can't attach any HDD to the VM. I tried creating a new VM and attaching an external HDD, and virt-manager throws the same error.


<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>WIN10</name>
  <uuid>35f581d3-f5b6-4105-b1d8-f209eaaeb4f7</uuid>
  <memory unit='KiB'>12582912</memory>
  <currentMemory unit='KiB'>12582912</currentMemory>
  <vcpu placement='static'>12</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
    <vcpupin vcpu='8' cpuset='8'/>
    <vcpupin vcpu='9' cpuset='9'/>
    <vcpupin vcpu='10' cpuset='10'/>
    <vcpupin vcpu='11' cpuset='11'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.10'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2-ovmf/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/Win10_Nvidia_VARS.fd</nvram>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='6' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-uuid/04BAEE282ED5E6B1'/>
      <target dev='sdb' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/tux/desk/virtio-win-0.1.149.iso'/>
      <target dev='sdl' bus='sata'/>
      <readonly/>
      <boot order='4'/>
      <address type='drive' controller='1' bus='0' target='0' unit='5'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/tux/Win10_1803_EnBg_x64.iso'/>
      <target dev='sdm' bus='sata'/>
      <readonly/>
      <boot order='3'/>
      <address type='drive' controller='2' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' io='threads' discard='unmap'/>
      <source file='/windows_boot/WIN10.qcow2'/>
      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' io='threads'/>
      <source file='/home/tux/WINVM/guest.qcow2'/>
      <target dev='sdn' bus='scsi'/>
      <address type='drive' controller='1' bus='0' target='0' unit='6'/>
    </disk>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='sata' index='1'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>
    <controller type='sata' index='2'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='10' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='10' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='11' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='11' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='12' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='12' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='13' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='13' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='14' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='14' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='15' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='15' port='0xe'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
    </controller>
    <controller type='scsi' index='1' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/>
    </controller>
    <controller type='scsi' index='2' model='lsilogic'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </controller>
    <interface type='direct'>
      <mac address='52:54:00:8a:15:bd'/>
      <source dev='eno1' mode='bridge'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </sound>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x090c'/>
        <product id='0x1000'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x2357'/>
        <product id='0x0107'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x04d9'/>
        <product id='0xa073'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1c4f'/>
        <product id='0x0002'/>
      </source>
      <address type='usb' bus='0' port='4'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,hv_time,kvm=off,hv_vendor_id=null'/>
  </qemu:commandline>
</domain>

Comment 9 Sergey 2018-07-02 13:21:25 UTC
I can confirm the problem after upgrading from version 4.1 to 4.3 or 4.4.
OS = Gentoo
Error:
# virsh start win7_3
"Unable to get devmapper targets for /dev/sdd1"
# ls -la /dev/sdd1
brw-rw---- 1 root disk 8, 49 Jul  2 16:14 /dev/sdd1

If I downgrade to version 4.1, all is fine.

Comment 10 Sergey 2018-07-02 13:23:59 UTC
<disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/sdd1'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>

Comment 11 Sergey 2018-07-02 20:49:08 UTC
 # virsh start win7_3
error: Failed to start domain win7_3
error: Unable to get devmapper targets for /dev/sdd1: No such file or directory

Comment 12 Krastyu 2018-07-02 20:53:30 UTC
(In reply to Sergey from comment #9)
> I can confirm the problem after upgrading from version 4.1 to 4.3 or 4.4.
> OS = Gentoo
> Error:
> # virsh start win7_3
> "Unable to get devmapper targets for /dev/sdd1"
> # ls -la /dev/sdd1
> brw-rw---- 1 root disk 8, 49 Jul  2 16:14 /dev/sdd1
> 
> If I downgrade to version 4.1, all is fine.

That didn't solve my problem.

Comment 13 Krastyu 2018-07-02 21:00:47 UTC
(In reply to Sergey from comment #10)
> <disk type='block' device='disk'>
>       <driver name='qemu' type='raw' cache='none' io='native'/>
>       <source dev='/dev/sdd1'/>
>       <target dev='vda' bus='virtio'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
> function='0x0'/>
>     </disk>

How did you emerge that older version? I'm also using Gentoo.

Comment 14 Sergey 2018-07-02 21:35:12 UTC
(In reply to Krastyu from comment #13)
> (In reply to Sergey from comment #10)
> > <disk type='block' device='disk'>
> >       <driver name='qemu' type='raw' cache='none' io='native'/>
> >       <source dev='/dev/sdd1'/>
> >       <target dev='vda' bus='virtio'/>
> >       <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
> > function='0x0'/>
> >     </disk>
> 
> How did you emerge that older version? I'm also using Gentoo.

To downgrade to version 4.2, use the following files:

app-emulation/libvirt
https://gitweb.gentoo.org/repo/gentoo.git/plain/app-emulation/libvirt/libvirt-4.2.0.ebuild?id=b020d282c47dde0b954824565fd92da72c793397

dev-python/libvirt-python
https://gitweb.gentoo.org/repo/gentoo.git/plain/dev-python/libvirt-python/libvirt-python-4.2.0-r1.ebuild?id=cb340501bf2b04d923bc1e58147274facdf62fcc

Manual:
https://wiki.gentoo.org/wiki/Custom_repository

I just checked it; all is fine.
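
For anyone repeating this downgrade, the setup is roughly as follows. This is only a sketch based on the Custom_repository wiki page above; the overlay path is illustrative, the overlay must be registered with Portage first (per that page), and any patches the ebuild references under its files/ directory must be copied into the overlay too (see the next comment):

# Sketch: a minimal local overlay holding the old ebuild.
mkdir -p /usr/local/portage/app-emulation/libvirt
cd /usr/local/portage/app-emulation/libvirt
# Save the ebuild from the first gitweb link above as libvirt-4.2.0.ebuild,
# and fetch anything it references under files/ into ./files/ as well.
ebuild libvirt-4.2.0.ebuild manifest     # regenerate the Manifest
emerge -av =app-emulation/libvirt-4.2.0
# Repeat for dev-python/libvirt-python with the second ebuild.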

Comment 15 Krastyu 2018-07-03 10:47:08 UTC
(In reply to Sergey from comment #14)
> (In reply to Krastyu from comment #13)
> > (In reply to Sergey from comment #10)
> > > <disk type='block' device='disk'>
> > >       <driver name='qemu' type='raw' cache='none' io='native'/>
> > >       <source dev='/dev/sdd1'/>
> > >       <target dev='vda' bus='virtio'/>
> > >       <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
> > > function='0x0'/>
> > >     </disk>
> > 
> > How did you emerge that older version? I'm also using Gentoo.
> 
> To downgrade to version 4.2, use the following files:
> 
> app-emulation/libvirt
> https://gitweb.gentoo.org/repo/gentoo.git/plain/app-emulation/libvirt/
> libvirt-4.2.0.ebuild?id=b020d282c47dde0b954824565fd92da72c793397
> 
> dev-python/libvirt-python
> https://gitweb.gentoo.org/repo/gentoo.git/plain/dev-python/libvirt-python/
> libvirt-python-4.2.0-r1.ebuild?id=cb340501bf2b04d923bc1e58147274facdf62fcc
> 
> Manual:
> https://wiki.gentoo.org/wiki/Custom_repository
> 
> I just checked it; all is fine.

OK, but this didn't download the files that are in the files/ directory, and I'm unable to emerge this. Did I miss something?

Comment 16 Sergey 2018-07-03 12:41:25 UTC
The links wrap across two lines. Did you use both lines?

Comment 17 Krastyu 2018-07-03 19:05:32 UTC
Thank you. Downgrading libvirt resolved my problem.

Comment 18 Cole Robinson 2018-07-03 19:26:12 UTC
Hmm, the only libvirt changes in src/storage/ between 4.1 and 4.3 involve the split-out storage driver... I wonder if that's triggering the issue somehow? Unfortunately, I don't know enough about those changes to say offhand.

Comment 19 Sergey 2018-07-03 19:35:01 UTC
(In reply to Cole Robinson from comment #18)
> Hmm, the only libvirt changes in src/storage/ between 4.1 and 4.3 involve
> the split-out storage driver... I wonder if that's triggering the issue
> somehow? Unfortunately, I don't know enough about those changes to say
> offhand.

This problem appeared between versions 4.2 and 4.3.

Comment 20 Cole Robinson 2018-07-12 20:24:37 UTC
Now that I look at the error message more closely, it's probably due to this change:

commit fd9d1e686db64fa9481b9eab4dabafa46713e2cf
Author: Michal Privoznik <mprivozn>
Date:   Mon Mar 26 14:48:07 2018 +0200

    util: Introduce virDevMapperGetTargets

Something is going wrong calling that function. Maybe Michal has ideas.

Comment 21 Michal Privoznik 2018-07-13 07:24:18 UTC
Guys, this really looks like a problem caused by the commit Cole is referring to. However, the code is written so that ENOENT is ignored, so I don't understand how you can get "No such file or directory" in the error message.

https://libvirt.org/git/?p=libvirt.git;a=blob;f=src/util/virdevmapper.c;hb=HEAD#l94

Also, I'm unable to reproduce this. Can you try to attach a debugger and get the exact location where virDevMapperGetTargetsImpl() fails? And what is your kernel version?
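
One way to capture the failure location Michal asks for, sketched with standard gdb commands (this assumes debug symbols for libvirtd are installed; the breakpoint name is the function from the error path above):

# Attach to the running daemon and break on the suspect function:
sudo gdb -p "$(pidof libvirtd)" \
    -ex 'break virDevMapperGetTargetsImpl' \
    -ex 'continue'
# In a second terminal, trigger the failure: sudo virsh start WIN10
# Then step in gdb ('next') to the failing call and note errno.
uname -r    # the kernel version, as also requested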

Comment 22 Sergey 2018-07-13 12:25:25 UTC
Thanks to Michal for the help. To solve the problem, you just need to enable the kernel option CONFIG_BLK_DEV_DM.
Problem solved.
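
To check whether the running kernel actually has device-mapper support, commands along these lines work (the /proc/config.gz route requires CONFIG_IKCONFIG_PROC; the /dev/mapper/control check follows from the commit messages quoted in comment 26):

# Either of these shows the config option, depending on the system:
zgrep CONFIG_BLK_DEV_DM /proc/config.gz
grep CONFIG_BLK_DEV_DM "/boot/config-$(uname -r)"
# When DM support is present, the control node exists:
ls -l /dev/mapper/control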

Comment 23 Michal Privoznik 2018-07-13 13:49:43 UTC
It turns out the problem is that libvirt expected device-mapper (DM) support to be compiled into the kernel. If that's not the case, this problem occurs. Patches posted upstream:

https://www.redhat.com/archives/libvir-list/2018-July/msg00870.html

Comment 24 Krastyu 2018-07-13 13:55:48 UTC
(In reply to Sergey from comment #22)
> Thanks to Michal for the help. To solve the problem, you just need to enable
> the kernel option CONFIG_BLK_DEV_DM.
> Problem solved.

This didn't solve my problem.


grep CONFIG_BLK_DEV_DM config-4.16.16-ck-1 
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=y

Comment 25 Krastyu 2018-07-13 14:04:48 UTC
(In reply to Sergey from comment #22)
> Thanks to Michal for the help. To solve the problem, you just need to enable
> the kernel option CONFIG_BLK_DEV_DM.
> Problem solved.

What virtual motherboard are you using? I'm on q35; I'll try with i440fx.

Comment 26 Michal Privoznik 2018-07-13 14:22:13 UTC
And I've just pushed the patches upstream:

commit 8d2a9f0994b301f847f9d2084195e4c15da5e76b
Author:     Michal Privoznik <mprivozn>
AuthorDate: Fri Jul 13 14:34:28 2018 +0200
Commit:     Michal Privoznik <mprivozn>
CommitDate: Fri Jul 13 16:01:16 2018 +0200

    qemu_cgroup: Allow/disallow devmapper control iff available
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1591732
    
    On kernels without device mapper support there won't be
    /dev/mapper/control. Therefore it doesn't make much sense to
    put it into devices CGroup.
    
    Signed-off-by: Michal Privoznik <mprivozn>
    Reviewed-by: Ján Tomko <jtomko>

commit 170d1e31df064108d064910c77f6316eb6726985
Author:     Michal Privoznik <mprivozn>
AuthorDate: Fri Jul 13 14:31:16 2018 +0200
Commit:     Michal Privoznik <mprivozn>
CommitDate: Fri Jul 13 16:01:05 2018 +0200

    virDevMapperGetTargetsImpl: Be tolerant to kernels without DM support
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1591732
    
    If kernel is compiled without CONFIG_BLK_DEV_DM enabled, there is
    no /dev/mapper/control device and since dm_task_create() actually
    does some ioctl() over it creating a task may fail.
    To cope with this handle ENOENT and ENODEV gracefully.
    
    Signed-off-by: Michal Privoznik <mprivozn>
    Reviewed-by: Ján Tomko <jtomko>

v4.5.0-118-g8d2a9f0994
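
For anyone checking whether a given build already carries these fixes: the describe string above places them right after v4.5.0, and the bug header records the first release containing them as libvirt-4.6.0. A quick check, with illustrative commands:

# Installed version (4.6.0 or newer carries the fix):
virsh version
# In a git checkout, find the first tag containing a given commit:
git describe --contains 8d2a9f0994b3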

Comment 27 Michal Privoznik 2018-07-13 14:23:44 UTC
(In reply to Krastyu from comment #24)
> (In reply to Sergey from comment #22)
> > Thanks to Michal for the help. To solve the problem, you just need to
> > enable the kernel option CONFIG_BLK_DEV_DM.
> > Problem solved.
> 
> This didn't solve my problem.
> 
> 
> grep CONFIG_BLK_DEV_DM config-4.16.16-ck-1 
> CONFIG_BLK_DEV_DM_BUILTIN=y
> CONFIG_BLK_DEV_DM=y

Try my patches anyway. They should fix the problem for you.

Comment 28 Krastyu 2018-07-13 19:30:36 UTC
I can't apply the second patch.

Comment 29 Krastyu 2018-07-13 19:32:06 UTC
patching file src/qemu/qemu_cgroup.c
Hunk #1 FAILED at 129.
Hunk #2 FAILED at 163.
2 out of 2 hunks FAILED -- saving rejects to file src/qemu/qemu_cgroup.c.rej

Comment 30 Michal Privoznik 2018-07-14 06:30:06 UTC
(In reply to Krastyu from comment #28)
> I can't apply the second patch.

So if you're running an older libvirt, then yeah, applying those patches might be a problem for you (I'd let your distro maintainers do their job, backport the patches, and ship a fixed package for you). Meanwhile, you can clone the libvirt repo (or, if you already have one, just pull so that you have the latest HEAD), build it, and run libvirtd from there.

https://libvirt.org/compiling.html#building
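
In outline, that looks something like the following; a sketch based on the compiling guide linked above (the clone URL and configure options are illustrative and may vary):

git clone https://libvirt.org/git/libvirt.git
cd libvirt
./autogen.sh --system      # configure with distro-like install paths
make -j"$(nproc)"
# Then stop the system libvirtd and run the freshly built daemon,
# e.g. via the tree's ./run wrapper, as described in the guide.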

Comment 31 Krastyu 2018-07-14 07:31:46 UTC
Thank you.

Comment 32 Krastyu 2018-07-14 08:03:58 UTC
On version 4.5.0 the patches applied perfectly, and now I'm able to use my system again. Thank you all. Great support, great community.

Comment 33 Michael Jones 2018-07-19 00:53:27 UTC
(In reply to Michal Privoznik from comment #26)
> And I've just pushed the patches upstream:

I was also experiencing this problem. In my case, I was using /dev/disk/by-partlabel/* paths.

These patches fix the issue for me as well.