Bug 847222 - [rbd] Error when defining an RBD pool
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assigned To: Osier Yang
QA Contact: Virtualization Bugs
Reported: 2012-08-10 03:36 EDT by zhe peng
Modified: 2012-08-16 10:25 EDT
CC: 8 users

Doc Type: Bug Fix
Last Closed: 2012-08-16 10:25:10 EDT
Type: Bug

Attachments: None
Description zhe peng 2012-08-10 03:36:23 EDT
Description of problem:

An error is always reported when defining an RBD-backed storage pool.

Version:
libvirt-0.10.0-0rc0.el6.x86_64
upstream qemu


How reproducible:
100%

Steps to Reproduce:


1. Prepare the Ceph environment and check that it is working:
#ceph -s
   health HEALTH_WARN 208 pgs degraded; 208 pgs stuck unclean; recovery 30/60 degraded (50.000%)
   monmap e1: 1 mons at {0=192.168.122.252:6789/0}, election epoch 0, quorum 0 0
   osdmap e7: 1 osds: 1 up, 1 in
    pgmap v49: 208 pgs: 208 active+degraded; 16660 bytes data, 101 MB used, 771 MB / 1000 MB avail; 30/60 degraded (50.000%)
   mdsmap e4: 1/1/1 up {0=0=up:active}

2. Create a RADOS pool:
#rados mkpool rbdpool
#rados lspools
data
metadata
rbdpool

3. Make sure the pool works by creating an RBD volume named foo:
#qemu-img create -f rbd rbd:rbdpool/foo 1G

# qemu-img info rbd:rbdpool/foo
image: rbd:rbdpool/foo
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: unavailable

4. Create an RBD pool XML definition (the secret referenced below must already exist; a sketch of defining it follows step 5):

<pool type="rbd">
  <name>myrbdpool</name>
  <source>
    <name>rbdpool</name>
    <host name='192.168.122.252' port='6789'/>
    <auth username='admin' type='ceph'>
      <secret uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/>
    </auth>
  </source>
</pool>

5. Define the pool:
#virsh pool-define rbd.xml
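
The secret UUID referenced in step 4 must name a libvirt secret that exists before the pool is defined. A minimal sketch of that prerequisite, assuming the usual ceph/libvirt secret workflow (the file name ceph-secret.xml and the key-retrieval command are illustrative):

#cat ceph-secret.xml
<secret ephemeral='no' private='no'>
  <uuid>2ec115d7-3a88-3ceb-bc12-0ac909a6fd87</uuid>
  <usage type='ceph'>
    <name>client.admin secret</name>
  </usage>
</secret>

#virsh secret-define ceph-secret.xml
#virsh secret-set-value 2ec115d7-3a88-3ceb-bc12-0ac909a6fd87 $(ceph auth get-key client.admin)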

Actual results:
The pool definition always fails:
error: Failed to define pool from rbd.xml
error: internal error missing backend for pool type 8

# tail -f /var/log/libvirt/libvirtd.log
2012-08-10 21:31:27.611+0000: 29060: error : virStorageBackendForType:1002 : internal error missing backend for pool type 8

Expected results:
The RBD pool can be defined successfully.
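
For comparison, on a libvirt build compiled with the RBD backend the same command would be expected to succeed, roughly like this (a sketch, not verified on this build):

#virsh pool-define rbd.xml
Pool myrbdpool defined from rbd.xml
#virsh pool-start myrbdpool
#virsh vol-list myrbdpool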


Additional info:
Adding the XML below to the guest XML works well; in the guest, vdb can be used (a hot-plug sketch follows the snippet).
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbdpool/foo'>
        <host name='192.168.122.252' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
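
For completeness, the snippet above can also be hot-plugged instead of edited into the persistent guest XML; a sketch, where the guest name myguest and the file name rbd-disk.xml are illustrative:

#virsh attach-device myguest rbd-disk.xml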
Comment 6 Jiri Denemark 2012-08-16 10:25:10 EDT
RBD is not supported in libvirt's build in RHEL, and running upstream qemu is not supported either.
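
For context, pool type 8 in the error is the RBD entry of libvirt's virStoragePoolType enum; the storage backend for it is simply not compiled into the RHEL libvirt package. Assuming this build's virsh supports the verbose version flag, the compiled-in pool backends can be checked; if RBD is missing from the "Storage:" line of the output, the backend was not built in:

#virsh -V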
