Bug 1052114

Summary: Guest fails to start with a permission denied error when using a gluster volume
Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.0
Reporter: Peter Krempa <pkrempa>
Assignee: Peter Krempa <pkrempa>
QA Contact: Virtualization Bugs <virt-bugs>
CC: dyuan, eblake, eharney, mazhang, mzhan, rbalakri, shyu, xuzhang, yanyang, zhwang, zpeng
Severity: medium
Priority: medium
Status: CLOSED ERRATA
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Fixed In Version: libvirt-1.2.7-1.el7
Doc Type: Bug Fix
Clone Of: 1052014
Bug Blocks: 1052014
Last Closed: 2015-03-05 07:29:31 UTC

Description Peter Krempa 2014-01-13 10:54:16 UTC
+++ This bug was initially created as a clone of Bug #1052014 +++

Description of problem:

Guest fails to start with a permission denied error when using a gluster volume.

Version-Release number of selected component (if applicable):
qemu-kvm-1.5.3-35.el7
libvirt-1.1.1-18.el7

How reproducible:
100%

Steps to Reproduce:
1. Prepare a gluster server and the volume gluster-vol1:

[gluster server]# gluster volume info

Volume Name: gluster-vol1
Type: Distribute
Volume ID: 32fe2d1d-7a2a-45d9-8a96-faab68253edd
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.66.106.20:/mnt/gluster-volume1
Brick2: 10.66.106.22:/mnt/gluster-volume1
Options Reconfigured:
server.allow-insecure: on
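
For reference, a volume with this layout could have been created roughly as follows; a sketch that reuses the brick paths and addresses from the output above, while the peer probe is an assumption about how the pool was formed:

[gluster server]# gluster peer probe 10.66.106.22
[gluster server]# gluster volume create gluster-vol1 \
      10.66.106.20:/mnt/gluster-volume1 10.66.106.22:/mnt/gluster-volume1
[gluster server]# gluster volume set gluster-vol1 server.allow-insecure on
[gluster server]# gluster volume start gluster-vol1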

2. Create an image with qemu-img on the gluster client:

[gluster client~]#qemu-img create -f qcow2 gluster://gluster server/gluster-vol1/rhel6.4-qcow2.img 10G

[gluster client~]#qemu-img info gluster://gluster server/gluster-vol1/rhel6.4-qcow2.img 
image: gluster://gluster server/gluster-vol1/rhel6.4-qcow2.img
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 136K
cluster_size: 65536

3. Define a guest with the disk created in step 2:

[gluster client~]#virsh dumpxml rhel6.4-qcow2|grep disk -A 6

<disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='gluster-vol1/rhel6.4-qcow2.img'>
        <host name='gluster server' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>

4. Start the guest:

[gluster client~]#virsh start rhel6.4-qcow2

error: Failed to start domain rhel6.4-qcow2
error: internal error Process exited while reading console log output: char device redirected to /dev/pts/3
qemu-kvm: -drive file=gluster+tcp://10.66.106.20:24007/gluster-vol1/rhel6.4-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2: could not open disk image gluster+tcp://10.66.106.20:24007/gluster-vol1/rhel6.4-qcow2.img: Permission denied

Actual results:

The guest fails to start; qemu-kvm reports "Permission denied" when opening the gluster disk image (see the error above).

Expected results:

The guest should start successfully in step 4.


Additional info:

Even though I set storage.owner-uid/gid on the gluster server, it doesn't work.
Error log:
2014-01-13 06:14:06.953+0000: 2622: error : qemuProcessWaitForMonitor:1801 : internal error process exited while connecting to monitor: char device redirected to /dev/pts/3
qemu-kvm: -drive file=gluster+tcp://10.66.106.20:24007/gluster-vol1/rhel6.4-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2: could not open disk image gluster+tcp://10.66.106.20:24007/gluster-vol1/rhel6.4-qcow2.img: Permission denied
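
For reference, the storage.owner options mentioned above are set per volume on the server; a minimal sketch, assuming 107 is the qemu uid/gid on the hypervisor (the actual ids vary by installation):

[gluster server]# gluster volume set gluster-vol1 storage.owner-uid 107
[gluster server]# gluster volume set gluster-vol1 storage.owner-gid 107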

--- Additional comment from Shanzhi Yu on 2014-01-13 09:19:59 CET ---

There are three workarounds to make it work (sketched as commands below):
1) Log in to the gluster server and change the owner and group of the disk (created from the gluster client, here rhel6.4-qcow2.img) to qemu:qemu.
2) Log in to the gluster server and change the permissions of the disk to 666 (the default is 600).
3) Configure user = "root" and group = "root" in /etc/libvirt/qemu.conf on the gluster client and restart libvirtd; the guest will then start without any error.
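
A sketch of the workarounds as commands, assuming the brick path from the volume info above and the default RHEL qemu user/group (on a Distribute volume the file lives on only one of the bricks, so run 1 or 2 on the server that holds it):

[gluster server]# chown qemu:qemu /mnt/gluster-volume1/rhel6.4-qcow2.img   # workaround 1
[gluster server]# chmod 666 /mnt/gluster-volume1/rhel6.4-qcow2.img         # workaround 2

[gluster client]# # workaround 3: in /etc/libvirt/qemu.conf set user = "root" and group = "root"
[gluster client]# systemctl restart libvirtd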

Comment 7 Peter Krempa 2014-07-24 08:07:22 UTC
Fixed in v1.2.6-246-ga2a67ef:

commit a2a67eff18ac6e279bdd32f5feddcc0528d16245
Author: Peter Krempa <pkrempa>
Date:   Mon Jun 30 15:05:07 2014 +0200

    storage: Implement virStorageFileCreate for local and gluster files
    
    Add backends for this frontend function so that we can use it in the
    snapshot creation code.

commit 24e5cafba6dbc2722e05f92dc0ae31b0f938f9f0
Author: Peter Krempa <pkrempa>
Date:   Thu Jul 10 15:46:01 2014 +0200

    qemu: Implement DAC driver chown callback to co-operate with storage drv
    
    Use the storage driver to chown remote images.

commit 0a515a3ba329bcb134ffcb47048c59aa623bdc4f
Author: Peter Krempa <pkrempa>
Date:   Thu Jul 10 16:05:07 2014 +0200

    security: DAC: Plumb usage of chown callback
    
    Use the callback to set disk and storage image labels by modifying the
    existing functions and adding wrappers to avoid refactoring a lot of the
    code.

commit 7490a6d272486f15c21aa10435f5c0e8bf66ee18
Author: Peter Krempa <pkrempa>
Date:   Thu Jul 10 14:17:24 2014 +0200

    security: DAC: Introduce callback to perform image chown
    
    To integrate the security driver with the storage driver we need to
    pass a callback for a function that will chown storage volumes.
    
    Introduce and document the callback prototype.

commit 9f28599d5140ce38a5600870e28aaf1b9e6bfe93
Author: Peter Krempa <pkrempa>
Date:   Thu Jul 10 15:20:24 2014 +0200

    security: DAC: Remove superfluous link resolution
    
    When restoring security labels in the dac driver the code would resolve
    the file path and use the resolved one to be chown-ed. The setting code
    doesn't do that. Remove the unnecessary code.

commit 222860cd364184300c42f800df661793e0ea2210
Author: Peter Krempa <pkrempa>
Date:   Wed Jul 9 16:52:06 2014 +0200

    storage: Add witness for checking storage volume use in security driver
    
    With my intended use of storage driver assist to chown files on remote
    storage we will need a witness that will tell us whether the given
    storage volume supports operations needed by the storage driver.

commit 50f09651dff39d6d097eec8e16824a76867ec6c7
Author: Peter Krempa <pkrempa>
Date:   Wed Jul 9 16:42:10 2014 +0200

    storage: Implement storage driver helper to chown disk images
    
    Gluster storage works on a similar principle to NFS where it takes the
    uid and gid of the actual process and uses it to access the storage
    volume on the remote server. This introduces a need to chown storage
    files on gluster via native API.
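
In effect, the series teaches libvirt to do over the gluster native API (libgfapi, e.g. glfs_chown()) what an admin previously had to do by hand on the server; the manual equivalent through a FUSE mount would be roughly (a sketch reusing the addresses from this bug):

# mount -t glusterfs 10.66.106.20:/gluster-vol1 /mnt
# chown qemu:qemu /mnt/rhel6.4-qcow2.img
# umount /mnt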

Comment 9 Yang Yang 2014-12-18 05:09:11 UTC
I reproduced the bug with libvirt-1.2.6-1.el7.x86_64 and verified the fix with libvirt-1.2.8-10.el7.x86_64.

Steps to verify are as follows:
> Define/start a VM using a gluster image as its disk and create an external snapshot

1. Create an image on the client:
# qemu-img create -f qcow2 gluster://10.66.4.164/gluster-vol1/test.qcow2 5G
Formatting 'gluster://10.66.4.164/gluster-vol1/test.qcow2', fmt=qcow2 size=5368709120 encryption=off cluster_size=65536 lazy_refcounts=off

# qemu-img info gluster://10.66.4.164/gluster-vol1/test.qcow2
image: gluster://10.66.4.164/gluster-vol1/test.qcow2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 193K
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

2. Check the image uid/gid on the gluster server:

# ll /br1/
total 200
-rw-------. 2 root root 197120 Dec 18 10:46 test.qcow2

3. Start a VM using the gluster image as its disk:
# virsh dumpxml gluster | grep disk -a6
<disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='gluster-vol1/test.qcow2'>
        <host name='$ip' port='24007'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>

# virsh start gluster
Domain gluster started

4. Check the image uid/gid on the server (now qemu:qemu):
# ll /br1/
total 200
-rw-------. 2 qemu qemu 197120 Dec 18 10:46 test.qcow2

5. Check that the VM works well.

6. Create an external snapshot:
# cat s1.xml 
<domainsnapshot>
<name>snap1-gluster</name>
<disks>
<disk name='vda' type='network'>
<driver type='qcow2'/>
<source protocol='gluster' name='gluster-vol1/r7g-snap1.img'>
<host name='10.66.4.164'/>
</source>
</disk>
</disks>
</domainsnapshot>

# virsh snapshot-create gluster s1.xml --disk-only
Domain snapshot snap1-gluster created from 's1.xml'

#  virsh snapshot-list gluster
 Name                 Creation Time             State
------------------------------------------------------------
 snap1-gluster        2014-12-18 11:12:58 +0800 disk-snapshot

7. Check the snapshot file uid/gid on the server:
# ll /br1/
total 1242836
-rw-------. 2 qemu qemu     393216 Dec 18 11:13 r7g-snap1.img
-rw-------. 2 qemu qemu 1272381440 Dec 18 11:12 test.qcow2

> Define/start a VM using a gluster symlink file as its disk, then create an external snapshot

1. Create an image on the client:
# qemu-img create gluster://$ip/gluster-vol1/test.raw 5G
Formatting 'gluster://$ip/gluster-vol1/test.raw', fmt=raw size=5368709120 

2. Create a symlink to the image via a glusterfs FUSE mount:

# mount -t glusterfs $ip:/gluster-vol1 /mnt
# cd /mnt
# ln -s test.raw test.raw.link

# ll
total 1246813
-rw-------. 1 root root    4456448 Dec 18 11:17 r7g-snap1.img
-rw-------. 1 qemu qemu 1272381440 Dec 18 11:12 test.qcow2
-rw-------. 1 root root 5368709120 Dec 18 11:32 test.raw
lrwxrwxrwx. 1 root root          8 Dec 18 11:41 test.raw.link -> test.raw

# umount /mnt

3. Start the VM using the symlink file as its disk:

# virsh dumpxml gluster | grep disk -a6
<disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='gluster' name='gluster-vol1/test.raw.link'>
        <host name='10.66.4.164'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>

4. Check the uid/gid of the underlying file:
# ll
total 1246816
-rw-------. 2 root root    4456448 Dec 18 11:17 r7g-snap1.img
-rw-------. 2 qemu qemu 1272381440 Dec 18 11:12 test.qcow2
-rw-------. 2 qemu qemu 5368709120 Dec 18 11:32 test.raw
lrwxrwxrwx. 2 root root          8 Dec 18 11:41 test.raw.link -> test.raw

5. Create an external snapshot:
# cat s1.xml 
<domainsnapshot>
<name>snap1-gluster</name>
<disks>
<disk name='vda' type='network'>
<driver type='qcow2'/>
<source protocol='gluster' name='gluster-vol1/r7g-snap2.img'>
<host name='10.66.4.164'/>
</source>
</disk>
</disks>
</domainsnapshot>

# virsh snapshot-create gluster s1.xml --disk-only
Domain snapshot snap1-gluster created from 's1.xml'

# virsh dumpxml gluster | grep disk -a10
<disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='gluster-vol1/r7g-snap2.img'>
        <host name='10.66.4.164'/>
      </source>
      <backingStore type='network' index='1'>
        <format type='raw'/>
        <source protocol='gluster' name='gluster-vol1/test.raw.link'>
          <host name='10.66.4.164'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <boot order='2'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>

6. Check the uid/gid of the snapshot file:
# ll
total 2558788
-rw-------. 2 root root    4456448 Dec 18 11:17 r7g-snap1.img
-rw-------. 2 qemu qemu     524288 Dec 18 12:34 r7g-snap2.img
-rw-------. 2 qemu qemu 1272381440 Dec 18 11:12 test.qcow2
-rw-------. 2 qemu qemu 5368709120 Dec 18 12:33 test.raw
lrwxrwxrwx. 2 root root          8 Dec 18 11:41 test.raw.link -> test.raw

As all steps produced the expected results, moving the bug to VERIFIED.

Comment 10 Yang Yang 2014-12-18 05:42:08 UTC
Verified with both qemu-kvm-rhev-2.1.2-16.el7 and qemu-kvm-1.5.3-84.el7.x86_64.

Comment 12 errata-xmlrpc 2015-03-05 07:29:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html