Bug 1195882

Summary: Can't create VM with 7.1 hosts because of "QEMU doesn't support virtio scsi controller" error
Product: Red Hat Enterprise Linux 7
Reporter: Bill Sanford <bsanford>
Component: libvirt
Assignee: John Ferlan <jferlan>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Docs Contact:
Priority: high
Version: 7.1
CC: agedosier, amureini, berrange, bmcclain, bsanford, clalancette, derez, dyuan, ecohen, fdeutsch, gklein, iheim, itamar, jdenemar, jferlan, jforbes, jkurik, jsuchane, jtomko, laine, libvirt-maint, lpeer, lsurette, michal.skrivanek, mzhan, pvine, rbalakri, Rhev-m-bugs, shyu, snagar, tnisan, tpelka, ukalifon, veillard, vipatel, virt-maint, whayutin, xuzhang, yeylon, yisun, ylavi
Target Milestone: rc
Keywords: Regression, TestBlocker, ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: storage
Fixed In Version: libvirt-1.2.16-1.el7
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Cloned As: 1279965
Environment:
Last Closed: 2015-11-19 06:16:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1193659, 1205796, 1241099, 1279965
Attachments:
  VDSM log of the added host (Flags: none)
  Engine log file (Flags: none)

Description Bill Sanford 2015-02-24 19:07:35 UTC
Created attachment 994804 [details]
VDSM log of the added host

Description of problem: 
I just installed RHEV-M 3.5 (vt13.11/el6) and added a RHEL 7.1 host.

I first installed RHEL 6.6 and 7.1 and did a yum update. I then ran yum install rhevm and then rhevm-setup; during this I added iSCSI sendtargets to my host. I then ran rhevm-manage-domains, restarted ovirt-engine, and added everything within the admin portal: host, iSCSI storage, and ISO storage. Everything was working. I then tried to add a VM and got an error message right away:

Event Details

ID 119
Time 2015-Feb-24, 13:39
Message VM win7x64 is down with error. Exit message: unsupported configuration: This QEMU doesn't support virtio scsi controller.


Version-Release number of selected component (if applicable):
RHEL 6.6 GA - Engine
RHEL-7.1-20150219.1 - Host
Windows 7x64 - Guest

How reproducible:
100%

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

The libvirt log wasn't there:

[root@salusa qemu]# pwd
/var/log/libvirt/qemu
[root@salusa qemu]# ls
[root@salusa qemu]#

Comment 1 Bill Sanford 2015-02-24 19:08:59 UTC
Created attachment 994805 [details]
Engine log file

Comment 2 Allon Mureinik 2015-02-25 13:18:59 UTC
Daniel, is there anything here from our side, or is this purely a qemu issue?

Comment 3 Daniel Erez 2015-02-25 13:56:09 UTC
It could be a duplicate of bug 998692. @Bill - which version of qemu is running on the host?

Comment 4 Bill Sanford 2015-02-25 14:17:20 UTC
[root@salusa ~]# rpm -qa | grep qemu
qemu-img-rhev-2.1.2-23.el7_1.1.x86_64
libvirt-daemon-driver-qemu-1.2.8-16.el7.x86_64
qemu-kvm-tools-rhev-2.1.2-23.el7_1.1.x86_64
qemu-guest-agent-2.1.0-4.el7.x86_64
ipxe-roms-qemu-20130517-6.gitc4bce43.el7.noarch
qemu-kvm-rhev-2.1.2-23.el7_1.1.x86_64
qemu-kvm-common-rhev-2.1.2-23.el7_1.1.x86_64
[root@salusa ~]#

Comment 6 John Ferlan 2015-02-27 01:47:36 UTC
I need libvirt-relevant data rather than vdsm/rhev data. I don't view things from the vdsm/rhev viewpoint - I'm lower in the call stack, and it's easier for me to debug if I have libvirt-specific information.

For starters:

virsh version --daemon
virsh -V
virsh capabilities

virsh dumpxml $dom


The message "This QEMU doesn't support virtio scsi controller." is displayed when libvirt cannot find the "virtio-scsi-pci" in the output of a search of the devices supported by a specific emulator.

To find out which emulator is being used, take the "<emulator>" entry from the domain XML and query it as follows (for example, assuming /usr/bin/qemu-kvm is the emulator value):

/usr/bin/qemu-kvm -device ? 2>&1 | grep virtio-scsi-pci

If that shows something, then perhaps more detailed debugging will be necessary. If it doesn't, then the emulator being used doesn't support the device, and why that is would require some more qemu debugging.
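
For reference, on a host whose emulator does support it, that grep returns a line roughly like the following (just a sketch - the exact formatting varies by QEMU version):

 name "virtio-scsi-pci", bus PCI, ...

No output at all means the device isn't in that emulator's device list.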

To get more libvirt details if the device does exist, libvirtd debug information would help. I add the following lines to my /etc/libvirt/libvirtd.conf:

 log_level = 1
 log_filters="3:remote 4:event 3:json 3:rpc"
 log_outputs="1:file:/var/log/libvirt/libvirtd.log"

and restart libvirt (service libvirtd restart),

then rerun whatever command caused the problem.  The resulting /var/log/libvirt/libvirtd.log should give me a bit more data. It may not lead to a simple resolution, but it should help.

Comment 7 John Ferlan 2015-02-27 13:07:14 UTC
First, as an aside/observation - prior to passing this on to libvirt, it would have been better for my triage if someone from vdsm and ovirt had triaged those log files; I'm not sure what is relevant in each...

After reading the bz from comment 3 and its related/duplicate bz - perhaps whatever was installed on this system has a similar issue, but with some other library not being present.  Running qemu-kvm as shown above, without the 2>&1 and the pipe through grep, would help determine that; or, as pointed out in the other bz, running "qemu-kvm -M none" and checking whether there's an error...  or, even more directly, "<emulator> -M none".

Based purely on the history of seeing similar errors - there's some sort of mismatched or missing shared library as a result of something going wrong during installation.
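
A quick way to check that theory (a sketch only; the emulator path assumes the standard RHEL location that shows up later in this bug):

 ldd /usr/libexec/qemu-kvm | grep "not found"
 /usr/libexec/qemu-kvm -M none -device help > /dev/null

Any "not found" lines from ldd, or an error from the second command, would point at a broken installation rather than at libvirt.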

Since the keyword "Regression" was added, I have to assume this worked at some point in time, but I don't see a reference to when it worked or on what versions. All I'm told is that some new version was installed and things didn't work. When adding Regression there has to be something concrete: a point where someone could start and validate that it works, and then, by adding/changing/upgrading something, the failure occurs. Without that, this is very difficult to triage, and quite frankly it shouldn't have that keyword. Of course adding Regression caused the bot to add the Blocker flag, but since it's not clear what we're regressing from, I'm not sure what's blocked.

Comment 8 wes hayutin 2015-02-27 14:16:25 UTC
Possible workaround
rm -Rf  /var/cache/libvirt/qemu/capabilities/
service libvirtd restart

Works; possibly related bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1193659
https://bugzilla.redhat.com/show_bug.cgi?id=1177245

Comment 9 wes hayutin 2015-02-27 14:38:56 UTC
*** Bug 1193659 has been marked as a duplicate of this bug. ***

Comment 10 wes hayutin 2015-02-27 14:38:59 UTC
*** Bug 1177245 has been marked as a duplicate of this bug. ***

Comment 11 Bill Sanford 2015-02-27 15:16:27 UTC
virsh version --daemon
virsh -V
virsh capabilities

http://pastebin.test.redhat.com/266114

Comment 14 John Ferlan 2015-02-27 16:30:46 UTC
With respect to the recently discovered cache question - hopefully you haven't deleted anything yet...

Although I'll have to dig into that code a bit to understand more of what it does, I think if you run

# grep x86_64 /var/cache/libvirt/qemu/capabilities/*.xml | cut -f 1 -d ":" | xargs grep virtio-scsi-pci

you'll see 'cached' copies of capabilities that may be being used instead of a perhaps "more recent" copy.  I found 3 .xml files with x86_64 in them, each with some slight differences - mostly differences in "new" flags in the more recent cache file data, but also a difference in the "version" of qemu they were sourced from.  I change qemu versions relatively infrequently.

Whether changes have occurred in the cache code since 1.2.8 (e.g. what's in RHEL 7.1) will take some research to determine.

Additionally I'll check whether it's possible to have a cached copy without any flags in it...  That is - if the qemu-kvm device help returns a failure, what do we do and how does that affect the cached copy...

Comment 15 John Ferlan 2015-02-27 16:41:48 UTC
Given the pastebin output - I have a feeling this may be a cache issue... Obviously, omitting the '| xargs grep virtio-scsi-pci' will show you how many cache files you have... I changed my xargs to 'ls -al' and then diff'd the various files.

If the most recent x86_64 file doesn't have virtio-scsi-pci, then it's a cache problem.  If it's empty, then it's some sort of cache write-out issue. I think we're narrowing things down, but it's certainly not what I expected when I read this last night.
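
A compact way to do that same comparison (a sketch - the hash-based cache file names differ per host):

 grep -l x86_64 /var/cache/libvirt/qemu/capabilities/*.xml | xargs ls -lt
 grep -L "virtio-scsi-pci" /var/cache/libvirt/qemu/capabilities/*.xml

The second command lists any cache file that does not contain the virtio-scsi-pci flag.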

Comment 22 Jaroslav Suchanek 2015-03-31 12:31:12 UTC
This has not yet been properly discussed upstream. Setting CondNAK Design for now.

Comment 23 Michal Skrivanek 2015-05-04 07:44:59 UTC
Strangely, this has not been reproduced by RHEV-M QE nor by continuous integration tests on the same version.

Comment 24 John Ferlan 2015-05-20 14:15:55 UTC
w/r/t: Comment 23

Again, the key is a change in the date of libvirtd (or of the QEMU binary). So if you're not changing versions of libvirtd or QEMU, the cache won't be updated. If my theory is correct, it will happen primarily around release end games, or in testing environments where someone first checks a backported release for a fix/patch and then applies the main/primary release afterwards.  If the process is: create the main/primary release, then backport patches and create a maintenance libvirtd, it's possible the backport libvirtd's date is later than the main one's. If that one is installed first, the current logic won't update the cache. Unfortunately the investigation of that theory stalled back around comment 18/19.
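
One way to sanity-check that on an affected host is to compare the dates the current logic keys off of (a sketch; the cache files are the hash-named ones from the earlier comments):

 stat -c '%z  %n' /usr/sbin/libvirtd /usr/libexec/qemu-kvm /var/cache/libvirt/qemu/capabilities/*.xml

Per the above, if neither the libvirtd nor the QEMU binary date has changed since the cache files were written, the pre-fix logic has no reason to refresh them.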

I posted a couple of RFC patches upstream to get some feedback on my thoughts, see:

http://www.redhat.com/archives/libvir-list/2015-May/msg00655.html

Comment 25 John Ferlan 2015-05-26 15:56:44 UTC
I've made a couple of adjustments to the cache algorithm which should force a cache rebuild when the libvirtd image date changes and also when the libvirt version changes, just in case someone plays around with the system time or does some other strangeness.

commit a14eff38477e4f63d8ac6e6410c409569681b03b
Author: John Ferlan <jferlan>
Date:   Sat May 23 10:19:20 2015 -0400

    qemu: Add libvirt version check to refresh capabilities algorithm
    
    Rather than an algorithm based solely on libvirtd ctime to refresh the
    capabilities add the element of the libvirt build version into the equation.
    Since that version wouldn't be there prior to this code being run - don't
    fail on reading the capabilities if not found. In this case, the cache
    will always be rebuilt when a new libvirt version is installed.


git describe a14eff38477e4f63d8ac6e6410c409569681b03b
v1.2.15-156-ga14eff3


If this persists in the future, the details of exactly the process used to reproduce it will be most helpful: what version you started with, what process was used, dates/times on files, versions, etc.  (qemu and libvirt, that is - as that's all that's checked)
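
For anyone working from source, a quick way to check whether a given libvirt tree already contains this change (a sketch; assumes an upstream git clone):

 git merge-base --is-ancestor a14eff38477e4f63d8ac6e6410c409569681b03b HEAD && echo "fix present"

On installed builds, the Fixed In Version above (libvirt-1.2.16-1.el7) is the simpler reference point.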

Comment 30 yisun 2015-07-23 11:21:14 UTC
Hi John, 
Following are my test steps; please help review whether they cover the fix. Thanks in advance.


Verified on libvirt-1.2.17-1 and 1.2.17-2


Test scenario 1:
1. install libvirt-1.2.17-1
2. delete the <flag name='virtio-scsi-pci'/> line in /var/cache/libvirt/qemu/capabilities/xxx.xml
3. upgrade libvirt to 1.2.17-2
4. check that the cache file is updated and <flag name='virtio-scsi-pci'/> shows up.


Test scenario 2:
1. install libvirt-1.2.17-2
2. delete the <flag name='virtio-scsi-pci'/> line in /var/cache/libvirt/qemu/capabilities/xxx.xml
3. downgrade libvirt to 1.2.17-1
4. check that the cache file is updated and <flag name='virtio-scsi-pci'/> shows up.

=============================================

Test steps:
=======
Test scenario 1 - upgrade libvirt:
1. # rpm -qa | grep libvirt-1
libvirt-1.2.17-1.el7.x86_64

2. # pwd
/var/cache/libvirt/qemu/capabilities

3. # cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>


4. # sed -i "/<flag name='virtio-scsi-pci'\/>/d" 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml 
   # cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep virtio-scsi-pci
   <==== empty output

5. #yum update libvirt -y
   # rpm -qa | grep libvirt-1
   libvirt-1.2.17-2.el7.x86_64   <=== upgraded to libvirt-1.2.17-2

6. # cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>    <=== cache file updated as expected. 


=======
Test scenario 2 - downgrade libvirt
Steps: 
1. download all related libvirt-1.2.17-1 rpms 
   #  pwd
/root/libivrt

   # ll
total 12776
-rw-r--r--. 1 root root  101396 Jul  2 15:42 libvirt-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 4520544 Jul  2 15:42 libvirt-client-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  576080 Jul  2 15:42 libvirt-daemon-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  102420 Jul  2 15:42 libvirt-daemon-config-network-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  105028 Jul  2 15:42 libvirt-daemon-config-nwfilter-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  145844 Jul  2 15:42 libvirt-daemon-driver-interface-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  738672 Jul  2 15:42 libvirt-daemon-driver-lxc-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  287052 Jul  2 15:42 libvirt-daemon-driver-network-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  145060 Jul  2 15:42 libvirt-daemon-driver-nodedev-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  169608 Jul  2 15:42 libvirt-daemon-driver-nwfilter-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  558424 Jul  2 15:42 libvirt-daemon-driver-qemu-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  138856 Jul  2 15:42 libvirt-daemon-driver-secret-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  313824 Jul  2 15:42 libvirt-daemon-driver-storage-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  100656 Jul  2 15:42 libvirt-daemon-kvm-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  100632 Jul  2 15:42 libvirt-daemon-lxc-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 1058388 Jul  2 15:42 libvirt-devel-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 3335748 Jul  2 15:42 libvirt-docs-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  150640 Jul  2 15:42 libvirt-lock-sanlock-1.2.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  391456 Jul  2 15:42 libvirt-login-shell-1.2.17-1.el7.x86_64.rpm


2. # createrepo .
Spawning worker 0 with 3 pkgs
Spawning worker 1 with 3 pkgs
Spawning worker 2 with 3 pkgs
Spawning worker 3 with 2 pkgs
Spawning worker 4 with 2 pkgs
Spawning worker 5 with 2 pkgs
Spawning worker 6 with 2 pkgs
Spawning worker 7 with 2 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

3. # cat /etc/yum.repos.d/local.repo 
[local]
name=local repo
baseurl=file:///root/libivrt/
enabled=1
gpgcheck=0
gpgkey=file:///root/libivrt/RPM-GPG-KEY-redhat-release

4. # cd /var/cache/libvirt/qemu/capabilities
   # cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>  

5. # sed -i "/<flag name='virtio-scsi-pci'\/>/d" 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml 
6. #yum downgrade libvirt* -y
7. # rpm -qa | grep libvirt-1
libvirt-1.2.17-1.el7.x86_64

8. # service libvirtd start
Redirecting to /bin/systemctl start  libvirtd.service

9. # cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>   <==== it's updated as expected.

Comment 31 John Ferlan 2015-07-23 17:58:06 UTC
Good question - not sure how to answer it, since it wasn't overly clear what sequence of commands triggers the issue.

You'll know the "fix" is in place by the <selfvers> tag shows up in the xml.

Does the same sequence you describe fail before this patch is part of libvirt? The assumption I was working under was that, prior to this change, there was some sort of ordering issue between a built 7.0.z and 7.0 rpm, but I think that may have been disproved at some point. The change is to also check the libvirt version.

What matters is which rpms you've used - if their versions differ, then the code to regenerate the cache xml has been run for that reason.  There are other reasons to regenerate; the libvirt version is just one of them.

Comment 32 yisun 2015-07-26 08:07:23 UTC
Tried libvirt 1.2.8-16.el7 and 1.2.8-16.el7_1.3; the issue is not reproducible.


[root@hp-dl320eg8-05 yum.repos.d]# rpm -qa |grep libvirt-1
libvirt-1.2.8-16.el7.x86_64

[root@hp-dl320eg8-05 capabilities]# cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>

[root@hp-dl320eg8-05 capabilities]# sed -i "/<flag name='virtio-scsi-pci'\/>/d" 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml 
[root@hp-dl320eg8-05 capabilities]# yum update libvirt -y

[root@hp-dl320eg8-05 capabilities]# cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>

[root@hp-dl320eg8-05 capabilities]# rpm -qa | grep libvirt-1
libvirt-1.2.8-16.el7_1.3.x86_64

Next, I will try to upgrade to libvirt 1.2.13-1.el7, which was released prior to libvirt 1.2.8-16.el7_1.3.

Comment 33 yisun 2015-07-26 08:17:18 UTC
Updated libvirt-1.2.8-16.el7_1.3.x86_64 to libvirt-1.2.13-1.el7.x86_64.
Still not reproducible. Will consider some other scenarios.

[root@hp-dl320eg8-05 capabilities]# rpm -qa | grep libvirt-1
libvirt-1.2.8-16.el7_1.3.x86_64

[root@hp-dl320eg8-05 capabilities]# cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml  | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>

[root@hp-dl320eg8-05 capabilities]#  sed -i "/<flag name='virtio-scsi-pci'\/>/d" 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml 


[root@hp-dl320eg8-05 capabilities]# yum update libvirt -y


[root@hp-dl320eg8-05 capabilities]# rpm -qa | grep libvirt-1
libvirt-1.2.13-1.el7.x86_64

[root@hp-dl320eg8-05 capabilities]# cat 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml  | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>

Comment 34 yisun 2015-07-27 07:00:52 UTC
Hi Bill,
It seems the issue is not reproducible in my pure-libvirt environment.
Could you please try with libvirt 1.2.16 or higher and see if it's still reproducible in your environment?

Comment 36 yisun 2015-09-14 11:21:33 UTC
Verified with:
libvirt-1.2.17-8.el7.x86_64


Refer to the verification steps of https://bugzilla.redhat.com/show_bug.cgi?id=1252363#c7

1. reproduce the reporter's bug with the following libvirt & qemu-kvm-rhev versions
# rpm -qa | grep libvirt-1
libvirt-1.2.8-16.el7_1.4.x86_64
# rpm -qa | grep qemu-kvm-rhev
qemu-kvm-rhev-2.1.2-23.el7_1.9.x86_64


2. back up the original qemu-kvm and create a shell script to replace it, as follows:

#mv /usr/libexec/qemu-kvm /usr/libexec/qemu-kvm-bak

# cat /usr/libexec/qemu-kvm
#!/bin/bash
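# Test wrapper: libvirtd's startup capability probe passes
# /var/lib/libvirt/qemu/capabilities.pidfile as an argument; exit with an
# error for that invocation (so the probe fails and the cached capabilities
# end up without virtio-scsi-pci), and pass every other invocation through
# to the real binary saved as qemu-kvm-bak.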
for i in "$@"
do
    if [[ "/var/lib/libvirt/qemu/capabilities.pidfile" == "$i" ]]
    then
        exit 1
    fi
done

exec /usr/libexec/qemu-kvm-bak "$@"


# chmod a+x /usr/libexec/qemu-kvm


3. prepare a vm's xml
#cat demo1.xml
<domain type='qemu' id='5'>
  <name>demo1</name>
  <uuid>a11aea7a-4606-4998-93e3-449bf7929975</uuid>
  <memory unit='KiB'>220160</memory>
  <currentMemory unit='KiB'>219200</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.1.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/sdc'/>
      <backingStore/>
      <target dev='sda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
   
    <controller type='scsi' index='0' model='virtio-scsi'>
    </controller>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_tcg_t:s0:c636,c949</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c636,c949</imagelabel>
  </seclabel>
</domain>

4. stop libvirtd 
#service libvirtd stop


5. remove the current capabilities
#rm -f /var/cache/libvirt/qemu/capabilities/*

6. start libvirtd 
# service libvirtd start

7. check if the capabilities contain virtio-scsi-pci
# cat /var/cache/libvirt/qemu/capabilities/3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml  | grep virtio-scsi-pci
<==== nothing output

8. use virsh define to reproduce the issue. 
# virsh define demo1.xml 
error: Failed to define domain from demo1.xml
error: unsupported configuration: This QEMU doesn't support virtio scsi controller

9. update libvirt to latest version
#yum update libvirt -y 

# rpm -qa | grep libvirt-1
libvirt-1.2.17-8.el7.x86_64


# ll /var/cache/libvirt/qemu/capabilities/
total 0

10. define the vm again; it will fail due to the absence of capabilities.
# virsh define demo1.xml 
error: Failed to define domain from demo1.xml
error: invalid argument: could not find capabilities for arch=x86_64 


11. restore the original qemu-kvm
#  mv /usr/libexec/qemu-kvm /usr/libexec/testkvm
# mv /usr/libexec/qemu-kvm-bak /usr/libexec/qemu-kvm

12. define the vm again. successful
# virsh define demo1.xml 
Domain demo1 defined from demo1.xml

13. check the capabilities; they correctly contain virtio-scsi-pci
# cat /var/cache/libvirt/qemu/capabilities/3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml  | grep virtio-scsi-pci
  <flag name='virtio-scsi-pci'/>

Comment 47 errata-xmlrpc 2015-11-19 06:16:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html