Bug 1164528 - VM with a storage volume that contains an RBD volume in the backing chain fails to start
Summary: VM with a storage volume that contains an RBD volume in the backing chain fails to start
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-16 08:12 UTC by Peter Krempa
Modified: 2015-03-05 07:47 UTC
CC List: 8 users

Fixed In Version: libvirt-1.2.8-8.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 07:47:35 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0323 0 normal SHIPPED_LIVE Low: libvirt security, bug fix, and enhancement update 2015-03-05 12:10:54 UTC

Description Peter Krempa 2014-11-16 08:12:41 UTC
Description of problem:
A VM backed by a disk that has an RBD volume in its backing chain fails to start because libvirt doesn't implement a parser for the RBD backing store specification string.

Version-Release number of selected component (if applicable):
libvirt-1.2.8-1.el7 and newer

Steps to Reproduce:
1. Create a local qcow2 overlay on top of a raw RBD volume:
$ qemu-img create -f qcow2 -b rbd:rbdpool/disk/instance-00000002.0.disk:conf=/etc/ceph/ceph.conf:id=admin -F raw /path/to/image

2. use the image as a disk for a VM
3. try to start the VM

Actual results:
VM startup fails with: "internal error: backing store parser is not implemented for protocol rbd"

Expected results:
VM starts

Additional info:
The qemu-img command line in reproducer step 1 is the one used when cloning an OpenStack Nova instance.
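
For reference, the backing store string that libvirt has to parse here has roughly this shape (generalizing from reproducer step 1 and the examples in the comments below; the option list varies by deployment):

    rbd:<pool>/<image>[@<snapshot>][:<option>=<value>[:<option>=<value>]...]

    e.g. rbd:rbdpool/disk/instance-00000002.0.disk:conf=/etc/ceph/ceph.conf:id=admin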

Comment 1 Peter Krempa 2014-11-16 08:14:04 UTC
Fix posted for upstream review:

http://www.redhat.com/archives/libvir-list/2014-November/msg00385.html

Comment 2 Peter Krempa 2014-11-21 13:51:48 UTC
Fixed upstream by:

commit b7d1bee2b9a8d7ed76456447b090702223da39f5
Author: Peter Krempa <pkrempa>
Date:   Tue Nov 11 17:31:24 2014 +0100

    storage: rbd: Implement support for passing config file option
    
    To be able to express some use cases of the RBD backing with libvirt, we
    need to be able to specify a config file for the RBD client to qemu as
    that is one of the commonly used options.
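
For reference, a minimal sketch of the disk <source> XML this enables (pool/image name and config path taken from the verification in comment 7 below):

    <source protocol='rbd' name='libvirt-pool/rbd1.img'>
      <host name='$ip' port='6789'/>
      <config file='/etc/ceph/ceph.conf'/>
    </source>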

commit 02556606584ef6c065eae6e36c311250fa7a24f4
Author: Peter Krempa <pkrempa>
Date:   Tue Nov 11 11:35:25 2014 +0100

    storage: rbd: qemu: Add support for specifying internal RBD snapshots
    
    Some storage systems have internal support for snapshots. Libvirt should
    be able to select a correct snapshot when starting a VM.
    
    This patch adds an XML element to select a storage source snapshot for
    the RBD protocol which supports this feature.
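
For reference, a minimal sketch of the <source> XML this adds (host, pool/image, and snapshot names taken from the test in comment 5 below):

    <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
      <host name='10.66.85.215' port='6789'/>
      <snapshot name='sn1'/>
    </source>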

commit 930b77598b4a8481ad98c786e77e372dc6b803cc
Author: Peter Krempa <pkrempa>
Date:   Tue Nov 4 17:35:16 2014 +0100

    storage: Allow parsing of RBD backing strings when building backing chain
    
    As we now have a common function to parse backing store string for RBD
    backing store we can reuse it in the backing store walker so that we
    don't fail on files backed by RBD storage.
    
    This patch also adds a few tests to verify that the parsing works as
    expected.

commit b327df87befd2870e5e9dc5a35dd1210ee8f3291
Author: Peter Krempa <pkrempa>
Date:   Tue Nov 4 14:07:53 2014 +0100

    util: storagefile: Split out parsing of NBD string into a separate func
    
    Split out the code so that the function looks homogenous after adding
    more protocol specific parsers.

commit 5604c056bfa967683b8445349dd5218e531497d4
Author: Peter Krempa <pkrempa>
Date:   Fri Oct 31 17:49:56 2014 +0100

    util: split out qemuParseRBDString into a common helper
    
    To allow reuse this non-trivial parser code in the backing store parser
    this part of the command line parser needs to be split out into a
    separate function.

commit 162e1ac6face347d9e93715a5224653cd284a0ef
Author: Peter Krempa <pkrempa>
Date:   Tue Nov 11 10:37:27 2014 +0100

    tests: Reflow the expected output from RBD disk test
    
    Addition of tested cases to the test will be more obvious.

commit dc0175f5359d995db955974457f3a00c92ffde35
Author: Peter Krempa <pkrempa>
Date:   Mon Nov 10 17:55:26 2014 +0100

    qemu: Refactor qemuBuildNetworkDriveURI to take a virStorageSourcePtr
    
    Instead of splitting out various fields, pass the complete structure and
    let the function pick various things of it.
    
    As one of the callers isn't using virStorageSourcePtr to store the data,
    this patch adds glue code that fills the data into a dummy
    virStorageSourcePtr before calling the func.
    
    This change will help when adding new fields that need output processing
    in the future.

commit c264ea58e9a34b7202d8041687621dfa68ad8750
Author: Peter Krempa <pkrempa>
Date:   Thu Oct 30 11:52:17 2014 +0100

    util: storage: Copy hosts of a storage file only if they exist
    
    If there are no hosts for a storage source virStorageSourceCopy and
    virStorageSourceNewFromBackingRelative would try to copy them anyways.
    As the success of virStorageNetHostDefCopy is determined by returning
    a pointer and malloc of 0 elements might return NULL according to the
    implementation, the result of the copy function may vary.
    
    Fix this by copying the hosts array only if there are hosts defined.

commit ceb3e59530a5007240049ad2613e20b86aa7afd5
Author: Peter Krempa <pkrempa>
Date:   Thu Oct 30 11:42:55 2014 +0100

    util: storage: Add notice for extension of struct virStorageSource
    
    As we now have a deep copy function for struct virStorageSource add a
    notice that extensions of the structure require also appropriate changes
    to the virStorageSourceCopy func.

commit 7be41e787de5dc101f1fa6d21f3030aedbc1e12c
Author: Peter Krempa <pkrempa>
Date:   Tue Nov 11 17:23:49 2014 +0100

    util: buffer: Clarify scope of the escape operation in virBufferEscape
    
    The escaping is applied only to the string, not the format argument.
    State this fact in the docs.

commit e650f30b93cb7318406fb0b88a621d8898289800
Author: Peter Krempa <pkrempa>
Date:   Tue Nov 4 14:26:45 2014 +0100

    test: virstoragetest: Add testing of network disk details
    
    To be able to fully test parsing of networked storage strings we need to
    add a few fields for: hostname, protocol and auth string.

commit 33b282eadc9a9bd69f2f0d12360d3be0a772eafe
Author: Peter Krempa <pkrempa>
Date:   Wed Nov 12 13:54:10 2014 +0100

    docs: domain: Move docs for storage hosts under the <source> element
    
    The docs describing the <host> element that are under the <source>
    element in the XML document were incorrectly placed under the <disk>
    element. Move them to the correct place.

Comment 5 Yang Yang 2014-12-01 10:24:10 UTC
Hi Peter,
The VM can NOT boot when an RBD disk with an internal snapshot is used as the first bootable disk. And if an RBD disk with an internal snapshot is used as the second disk in the VM, an I/O error occurs when mounting it. However, if <snapshot../> is NOT specified in the XML, the guest boots and works well.

Steps to reproduce the issue are as follows:
1. # rbd ls libvirt-pool
new-libvirt-image
test1.img

2. create an internal snapshot for a rbd disk
 # rbd snap create libvirt-pool/new-libvirt-image@sn1

3. # rbd snap ls libvirt-pool/new-libvirt-image
SNAPID NAME    SIZE 
     4 sn1  8192 MB

4. start a vm specifying <snapshot../>
<disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
        <host name='10.66.85.215' port='6789'/>
        <snapshot name='sn1'/>    --> Important
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>

# virsh start vm2
Domain vm2 started

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 35    vm2                            running


5. # ps -ef|grep qemu
-drive file=rbd:libvirt-pool/new-libvirt-image@sn1:auth_supported=none:mon_host=10.66.85.215\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none

6. I/O errors appear in the guest

7. If the RBD disk with an internal snapshot is used as the second disk in the VM, an I/O error occurs when mounting

dmesg | tail
[   16.434760] end_request: I/O error, dev vda, sector 0
[   16.435757] Buffer I/O error on device vda, logical block 0
[   16.435757] lost page write due to I/O error on vda
[   16.435757] end_request: I/O error, dev vda, sector 1288
[   16.435757] Buffer I/O error on device vda, logical block 161
[   16.435757] lost page write due to I/O error on vda
[   16.437994] JBD2: recovery failed
[   16.437997] EXT4-fs (vda): error loading journal

8. If <snapshot../> is NOT specified in the XML, the guest boots and works well.
<disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
        <host name='10.66.85.215' port='6789'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>
# virsh start vm1
Domain vm1 started

# virsh list --all
36    vm1                            running

# ps -ef|grep vm1
-drive file=rbd:libvirt-pool/new-libvirt-image:auth_supported=none:mon_host=10.66.85.215\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none

guest works well.

Comment 6 Yang Yang 2014-12-03 07:38:02 UTC
Hi Peter,
After debugging the issue described in comment #5: the VM works well when using the qemu-kvm command line directly and specifying the format as rbd, but it can NOT boot when the format is specified as raw. So it seems that only the rbd format works with internal RBD snapshots.

However, libvirt currently does not support the rbd format. Is it worth filing an RFE bug to support the rbd format?

e.g.
# qemu-img info rbd:libvirt-pool/rbd1.img:mon_host=$ip
image: rbd:libvirt-pool/rbd1.img:mon_host=$ip
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: unavailable
cluster_size: 4194304
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
sn1       sn1                    8.0G 1970-01-01 08:00:00   00:00:00.000

Start a vm with qemu-kvm
# /usr/libexec/qemu-kvm -drive file=rbd:libvirt-pool/rbd1.img@sn1:auth_supported=none:mon_host=$ip,if=none,id=drive-virtio-disk1,rerror=stop,format=rbd,werror=stop,snapshot=on -device virtio-blk-pci,drive=drive-virtio-disk1,id=sys-img -monitor stdio -spice port=5931,disable-ticketing -boot menu=on -m 2G

vm works well

Thanks
Yang

Comment 7 Yang Yang 2014-12-04 03:46:01 UTC
Hi Peter,

I hit another issue and would like your help.

Regarding the following patch: the VM failed to start when <config../> was specified in the domain configuration file, unless I copied the ceph configuration file 'ceph.conf' from the ceph server to the client. Does the ceph configuration file have to exist on the client node? In other words, is the conf file specified in the domain configuration file a local file on the client or a remote file on the ceph server?

 storage: rbd: Implement support for passing config file option

Steps to reproduce it are as follows:

libvirt-1.2.8-10.el7.x86_64
qemu-kvm-rhev-2.1.2-14.el7.x86_64
3.10.0-212.el7.x86_64

1. Attempt to start the VM with the following XML. Note that the ceph conf file 'ceph.conf' does not exist on the client node.

#virsh dumpxml vm1 | grep disk -a10
<disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='libvirt'>
        <secret type='ceph' usage='client.libvirt secret'/>
      </auth>
      <source protocol='rbd' name='libvirt-pool/rbd1.img'>
        <host name='$ip' port='6789'/>
        <config file='/etc/ceph/ceph.conf'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>

# virsh start vm1
error: Failed to start domain vm1
error: internal error: process exited while connecting to monitor: 2014-12-03T10:39:19.747734Z qemu-kvm: -drive file=rbd:libvirt-pool/rbd1.img:id=libvirt:key=AQCI335UkAgXHhAA90By4w5NR6zb63LbbM0MGg==:auth_supported=cephx\;none:mon_host=10.66.85.215\:6789:conf=/etc/ceph/ceph.conf,if=none,id=drive-virtio-disk0,format=raw,cache=none: could not open disk image rbd:libvirt-pool/rbd1.img:id=libvirt:key=AQCI335UkAgXHhAA90By4w5NR6zb63LbbM0MGg==:auth_supported=cephx\;none:mon_host=10.66.85.215\:6789:conf=/etc/ceph/ceph.conf: error reading conf file /etc/ceph/ceph.conf

2. Start the VM after copying 'ceph.conf' to the client
# virsh start vm1
Domain vm1 started

# ps -ef| grep qemu
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=rbd:libvirt-pool/rbd1.img:id=libvirt:key=AQCI335UkAgXHhAA90By4w5NR6zb63LbbM0MGg==:auth_supported=cephx\;none:mon_host=10.66.85.215\:6789:conf=/etc/ceph/ceph.conf,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1

vm works well

Comment 8 Yang Yang 2014-12-05 09:52:53 UTC
Hi Peter,

Pardon my incorrect comment #6; please ignore it.

Also regarding the patch "storage: rbd: qemu: Add support for specifying internal RBD snapshots"

Per ceph.com, a snapshot is a read-only copy of the state of an image at a particular point in time. Thus, a VM can NOT boot and work using rbd:pool/image@snapshot as its bootable disk. It can only be used as a data disk (not an OS disk) in the VM, right? However, mounting it produces I/O errors because it is read-only. So how can I test and verify the patch? Just start a VM with an RBD internal snapshot and check the corresponding qemu command line?
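
For what it's worth, one way to check just the generated command line without depending on the guest booting, following step 5 of comment 5 (the VM and image names are from that setup):

# virsh start vm2
# ps -ef | grep qemu | grep -o 'file=rbd:[^,]*'
file=rbd:libvirt-pool/new-libvirt-image@sn1:auth_supported=none:mon_host=10.66.85.215\:6789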

ceph snapshot doc:
http://docs.ceph.com/docs/master/rbd/rbd-snapshot/

Thanks
Yang

Comment 9 Peter Krempa 2014-12-05 10:57:07 UTC
Thanks for checking all the possible options. I think the semantics of the <snapshot> tag for RBD will need to be updated according to your testing, although that should be a separate issue from using an RBD volume as a backing chain element of a different image, which caused the original problem.

I'll look into fixing the semantics of the snapshot element, while this bug should still track using RBD volumes as backing for e.g. file volumes.

Comment 10 Yang Yang 2014-12-08 06:15:48 UTC
Hi Peter,
The <auth../> element disappeared after an external disk snapshot was created. See scenario 3 below for details. The issue also reproduces when using iSCSI as the backing file. I filed a separate bug to track it.

Bug 1171569 - <auth>..</auth> element is gone after creating external disk snapshots for a rbd disk

Verify using rbd as backing file

libvirt-1.2.8-10.el7.x86_64
qemu-kvm-rhev-2.1.2-14.el7.x86_64
kernel-3.10.0-212.el7.x86_64

Scenario 1:
Test using RBD as backing file when specifying <host../>
1. define/start a vm with the following xml
# qemu-img create -f raw rbd:libvirt-pool/rbd1.img:mon_host=$ip 8G
Formatting 'rbd:libvirt-pool/rbd1.img:mon_host=$ip', fmt=raw size=8589934592

<disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='libvirt-pool/rbd1.img'>
        <host name='$ip' port='6789'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>

2. Install os
3. Create external snapshot and put the snapshot file in local dir
# virsh snapshot-create-as vm1 s1 --disk-only --diskspec vda,file=/tmp/rbd.s1
Domain snapshot s1 created

4. # qemu-img info /tmp/rbd.s1 --backing-chain
image: /tmp/rbd.s1
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 196K
cluster_size: 65536
backing file: rbd:libvirt-pool/rbd1.img:auth_supported=none:mon_host=$ip\:6789
backing file format: raw
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: rbd:libvirt-pool/rbd1.img:auth_supported=none:mon_host=$ip\:6789
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: unavailable
cluster_size: 4194304

5. # virsh dumpxml vm1 | grep disk -a6

 <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/rbd.s1'/>
      <backingStore type='network' index='1'>
        <format type='raw'/>
        <source protocol='rbd' name='libvirt-pool/rbd1.img'>
          <host name='$ip' port='6789'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </disk>

6. vm works well after creating external snapshot


Scenario 2: 
Test using RBD as backing file when specifying <config../>
1. start vm with the following xml
<disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='libvirt-pool/rbd1.img'>
        <config file='/etc/ceph/ceph.conf'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>

2. create external disk snapshot
# virsh snapshot-create-as rbd s1 --disk-only --diskspec vda,file=/tmp/rbd.s1
Domain snapshot s1 created

# virsh snapshot-list rbd
 Name                 Creation Time             State
------------------------------------------------------------
 s1                   2014-12-08 11:49:39 +0800 disk-snapshot

3. check domain conf xml
# virsh dumpxml rbd | grep disk -a6
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/rbd.s1'/>
      <backingStore type='network' index='1'>
        <format type='raw'/>
        <source protocol='rbd' name='libvirt-pool/rbd1.img'>
          <config file='/etc/ceph/ceph.conf'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>

4. check image backing chain
# qemu-img info /tmp/rbd.s1 --backing-chain
image: /tmp/rbd.s1
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 644K
cluster_size: 65536
backing file: rbd:libvirt-pool/rbd1.img:auth_supported=none\;none:conf=/etc/ceph/ceph.conf
backing file format: raw
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: rbd:libvirt-pool/rbd1.img:auth_supported=none\;none:conf=/etc/ceph/ceph.conf
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: unavailable
cluster_size: 4194304

5. vm works well 


Scenario 3: 
Test using RBD as backing file when specifying <auth../>
1. start vm with the following xml
<disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='libvirt'>
        <secret type='ceph' usage='client.libvirt secret'/>
      </auth>
      <source protocol='rbd' name='libvirt-pool/rbd1.img'>
        <config file='/etc/ceph/ceph.conf'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>

2. create external disk-only snapshot
# virsh snapshot-create-as rbd s1 --disk-only --diskspec vda,file=/tmp/rbd.s1
Domain snapshot s1 created
# virsh snapshot-list rbd
 Name                 Creation Time             State
------------------------------------------------------------
 s1                   2014-12-08 12:00:17 +0800 disk-snapshot


3. check domain xml
# virsh dumpxml rbd | grep disk -a6
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/tmp/rbd.s1'/>
      <backingStore type='network' index='1'>
        <format type='raw'/>            --------> <auth../> disappeared
        <source protocol='rbd' name='libvirt-pool/rbd1.img'>
          <config file='/etc/ceph/ceph.conf'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>

4. check image backing chain
# qemu-img info /tmp/rbd.s1 --backing-chain
image: /tmp/rbd.s1
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 1.7M
cluster_size: 65536
backing file: rbd:libvirt-pool/rbd1.img:id=libvirt:key=AQCI335UkAgXHhAA90By4w5NR6zb63LbbM0MGg==:auth_supported=cephx\;none:conf=/etc/ceph/ceph.conf
backing file format: raw
Format specific information:
    compat: 1.1
    lazy refcounts: false

image: rbd:libvirt-pool/rbd1.img:id=libvirt:key=AQCI335UkAgXHhAA90By4w5NR6zb63LbbM0MGg==:auth_supported=cephx\;none:conf=/etc/ceph/ceph.conf
file format: raw
virtual size: 8.0G (8589934592 bytes)
disk size: unavailable
cluster_size: 4194304

Comment 11 Yang Yang 2014-12-08 06:26:16 UTC
Peter,
I found that the VM fails to start when the specified conf file is NOT in /etc/ceph/. Does the conf file have to exist in /etc/ceph/?
e.g.
try to start vm with the following xml
<disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <auth username='libvirt'>
        <secret type='ceph' usage='client.libvirt secret'/>
      </auth>
      <source protocol='rbd' name='libvirt-pool/rbd-rhel-raw.img'>
        <config file='/yy/ceph.conf'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>

# virsh start rbd
error: Failed to start domain rbd
error: internal error: process exited while connecting to monitor: 2014-12-08T03:23:52.208968Z qemu-kvm: -drive file=rbd:libvirt-pool/rbd-rhel-raw.img:id=libvirt:key=AQCI335UkAgXHhAA90By4w5NR6zb63LbbM0MGg==:auth_supported=cephx\;none:conf=/yy/ceph.conf,if=none,id=drive-virtio-disk0,format=raw,cache=none: could not open disk image rbd:libvirt-pool/rbd-rhel-raw.img:id=libvirt:key=AQCI335UkAgXHhAA90By4w5NR6zb63LbbM0MGg==:auth_supported=cephx\;none:conf=/yy/ceph.conf: error reading conf file /yy/ceph.conf

# ll /yy/ceph.conf 
-rw-r--r--. 1 root root 244 Dec  8 10:07 /yy/ceph.conf

# cat /yy/ceph.conf 
[global]
fsid = 092b1b7f-58ca-4edc-a840-eda831c25773
mon_initial_members = intel-5504-12-1
mon_host = $ip:6789
auth_cluster_required = cephx 
auth_service_required = cephx 
auth_client_required = cephx 
filestore_xattr_use_omap = true

Comment 12 Peter Krempa 2015-01-08 10:17:21 UTC
(In reply to yangyang from comment #11)

After discussing this offline: the issue is that the file is not labelled correctly for access by qemu. As the file is considered shared, the user should make sure it is accessible to the VM process.
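
A minimal sketch of checking and fixing the labelling, assuming SELinux is what is denying qemu access (copying the context from the known-working /etc/ceph/ceph.conf avoids guessing the right type):

# ls -Z /etc/ceph/ceph.conf /yy/ceph.conf          <- compare the SELinux contexts
# chcon --reference=/etc/ceph/ceph.conf /yy/ceph.conf

Note that a chcon label does not survive a filesystem relabel; it is enough to confirm the diagnosis, after which a persistent file-context rule (or keeping the file under /etc/ceph/) would be the durable fix.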

Comment 13 Yang Yang 2015-01-08 10:21:46 UTC
Thanks Peter.

Per comments #10, #11, and #12, marking this as verified.

Comment 15 errata-xmlrpc 2015-03-05 07:47:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

