Bug 1136534 - glusterfs backend does not support discard
Summary: glusterfs backend does not support discard
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jeff Cody
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1037503 1055487
Blocks: GlusterThinProvisioning 1103845
 
Reported: 2014-09-02 19:30 UTC by Jeff Cody
Modified: 2016-06-22 14:42 UTC
CC List: 15 users

Fixed In Version: qemu-kvm-1.5.3-70.el7
Doc Type: Enhancement
Doc Text:
Clone Of: 1055487
Environment:
Last Closed: 2016-06-22 14:41:34 UTC
Target Upstream Version:



Comment 1 Miroslav Rezanina 2014-09-12 12:55:28 UTC
Fix included in qemu-kvm-1.5.3-70.el7

Comment 2 Sibiao Luo 2014-10-11 09:50:09 UTC
Verified this issue on qemu-kvm-1.5.3-75.el7.x86_64.

host info:
# uname -r && rpm -q qemu-kvm
3.10.0-123.9.2.el7.x86_64
qemu-kvm-1.5.3-75.el7.x86_64

Steps:
1. Create a 1G raw/qcow2 image on the gluster volume (whose bricks sit on an XFS/ext4 file system).
# qemu-img create -f raw gluster://10.66.106.35/volume_sluo/test.raw 1G
Formatting 'gluster://10.66.106.35/volume_sluo/test.raw', fmt=raw size=1073741824
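
A side check, not in the original steps: the freshly created image can be queried over the same gluster:// URL, no mount needed, to confirm qemu can reach the volume:
# qemu-img info gluster://10.66.106.35/volume_sluo/test.raw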

2. Start qemu with a command-line like the following:
e.g. /usr/libexec/qemu-kvm ... \
    -device virtio-scsi-pci,id=scsi2,indirect_desc=off,event_idx=off,bus=pci.0,addr=0x8 \
    -drive file=gluster://10.66.106.35/volume_sluo/test.raw,if=none,id=drive-hd-disk,media=disk,format=raw,cache=none,werror=stop,rerror=stop,discard=on \
    -device scsi-hd,drive=drive-hd-disk,id=scsi_disk
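
As an extra check (not part of the original steps), the guest can confirm the virtual disk actually advertises discard before any fstrim is attempted; the device name /dev/sdb is assumed, matching step 4:
# lsblk -D /dev/sdb
# cat /sys/block/sdb/queue/discard_max_bytes
Non-zero DISC-GRAN/DISC-MAX columns (or a non-zero discard_max_bytes) mean the guest kernel can send UNMAP down to this disk.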

3. Mount the volume on the host and check the image's block count.
# mount -t glusterfs 10.66.106.35:volume_sluo /mnt/
# stat /mnt/test.raw 

4. Create a file system on the disk in the guest.
# mkfs.ext4 /dev/sdb

5. Check the block count on the host again.
# stat /mnt/test.raw

6. Mount the file system and write data in the guest.
# mount /dev/sdb /mnt/test
# dd if=/dev/zero of=/mnt/test/file bs=1M count=500

7. Check the block count on the host after writing data.
# stat /mnt/test.raw 

8. Remove the file and run fstrim in the guest.
# rm /mnt/test/file
# fstrim /mnt/test
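
A hedged aside, not in the original steps: fstrim -v reports how many bytes were submitted for discard, and 0 bytes trimmed usually means the request never reached the backend:
# fstrim -v /mnt/test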

9. Check the block count on the host after the fstrim.
# stat /mnt/test.raw 

Results:
1. After step 3:
# stat /mnt/test.raw 
  File: ‘/mnt/test.raw’
  Size: 1073741824	Blocks: 8          IO Block: 131072 regular file

2. After step 5:
# stat /mnt/test.raw 
  File: ‘/mnt/test.raw’
  Size: 1073741824	Blocks: 66888      IO Block: 131072 regular file

3. After step 7:
# stat /mnt/test.raw 
  File: ‘/mnt/test.raw’
  Size: 1073741824	Blocks: 605008     IO Block: 131072 regular file

4. After step 9, check whether the block count rolls back:
# stat /mnt/test.raw 
  File: ‘/mnt/test.raw’
  Size: 1073741824	Blocks: 1123664    IO Block: 131072 regular file

Based on the above, the block count fails to roll back correctly (Blocks: 605008 ----> 1123664), so I considered this bug verified.
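
For reference, a minimal sketch of the same before/after comparison on the host (assumes the volume is mounted at /mnt and the guest runs step 8 in between):
# before=$(stat -c %b /mnt/test.raw)
(run "rm /mnt/test/file && fstrim /mnt/test" in the guest)
# after=$(stat -c %b /mnt/test.raw)
# echo "blocks: $before -> $after"
A drop in the second number means discard reclaimed space in the backing file; no drop reproduces the result above.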

Best Regards,
sluo

Comment 5 Paolo Bonzini 2014-11-12 09:43:28 UTC
Hmm, glusterfs _should_ support discard with the gluster POSIX backend...
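
A hedged way to narrow this down: gluster's POSIX backend implements discard as hole punching on the brick file, so the brick filesystem must support FALLOC_FL_PUNCH_HOLE. On the gluster server (the brick path below is an assumption; adjust to the real volume layout):
# dd if=/dev/zero of=/bricks/volume_sluo/punch-test bs=1M count=1
# stat -c %b /bricks/volume_sluo/punch-test
# fallocate --punch-hole --keep-size --offset 0 --length 1MiB /bricks/volume_sluo/punch-test
# stat -c %b /bricks/volume_sluo/punch-test
The second stat should report fewer blocks if punch hole works; XFS and ext4 both support it on RHEL 7 kernels.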

Comment 6 Sibiao Luo 2015-01-16 07:20:43 UTC
(In reply to Paolo Bonzini from comment #5)
> Hmm, glusterfs _should_ support discard with the gluster POSIX backend...
Hi Paolo Bonzini,

    Why did you set bug 1136534#c5 to FailedQA and move it back to ASSIGNED, and how should bug 1055487 be handled? Does it also fail QA? Maybe I misread this bug's title: *should* we add discard support to the GlusterFS block driver? Thanks.

Best Regards,
sluo

Comment 7 Paolo Bonzini 2015-01-19 12:04:10 UTC
Yes, this bug is that glusterfs should add discard support.

Comment 8 Paolo Bonzini 2015-01-20 11:02:29 UTC
Please test again with 1.5.3-85.el7.  I noticed that qemu-kvm in -70 and -75 didn't have the qemu_gluster_aio_discard symbol, but -85 has it.
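
For anyone reproducing that check: qemu_gluster_aio_discard lives in block/gluster.c and is not an exported symbol, so it is only visible with debug symbols. A rough sketch, assuming the matching debuginfo package is available:
# debuginfo-install -y qemu-kvm
# gdb -batch -ex 'info functions qemu_gluster_aio_discard' /usr/libexec/qemu-kvm
If the function is listed, the build carries the gluster discard path.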

Comment 9 Sibiao Luo 2015-01-21 07:13:52 UTC
(In reply to Paolo Bonzini from comment #8)
> Please test again with 1.5.3-85.el7.  I noticed that qemu-kvm in -70 and -75
> didn't have the qemu_gluster_aio_discard symbol, but -85 has it.

Still hitting it: the block count does not roll back (Blocks: 2511416 ----> 2646584) with the same testing as comment #2.

host info:
# uname -r && rpm -q qemu-kvm
3.10.0-222.el7.x86_64
qemu-kvm-1.5.3-85.el7.x86_64
guest info:
# uname -r
3.10.0-222.el7.x86_64

Results:
1. After step 3:
# stat /mnt/my-data-disk.raw 
  File: ‘/mnt/my-data-disk.raw’
  Size: 10737418240	Blocks: 0          IO Block: 131072 regular file

2. After step 5:
# stat /mnt/my-data-disk.raw 
  File: ‘/mnt/my-data-disk.raw’
  Size: 10737418240	Blocks: 270872     IO Block: 131072 regular file

3. After step 7:
# stat /mnt/my-data-disk.raw 
  File: ‘/mnt/my-data-disk.raw’
  Size: 10737418240	Blocks: 2511416    IO Block: 131072 regular file

4. After step 9, check whether the block count rolls back:
# stat /mnt/my-data-disk.raw 
  File: ‘/mnt/my-data-disk.raw’
  Size: 10737418240	Blocks: 2646584    IO Block: 131072 regular file

Best Regards,
sluo

Comment 11 Sibiao Luo 2015-01-26 06:35:51 UTC
(In reply to Sibiao Luo from comment #9)
> (In reply to Paolo Bonzini from comment #8)
> > Please test again with 1.5.3-85.el7.  I noticed that qemu-kvm in -70 and -75
> > didn't have the qemu_gluster_aio_discard symbol, but -85 has it.
> 
> Still hit it, I did not see any sectors roll-back correctly(Blocks:
> 2511416---->2646584) with the same testing as comment #2.
> 
> host info:
> # uname -r && rpm -q qemu-kvm
> 3.10.0-222.el7.x86_64
> qemu-kvm-1.5.3-85.el7.x86_64
> guest info:
> # uname -r
> 3.10.0-222.el7.x86_64
> 
Forgot to include the glusterfs version; I used the latest packages from brewweb:
# rpm -qa | grep gluster
glusterfs-cli-3.6.0.42-1.el7rhs.x86_64
glusterfs-libs-3.6.0.42-1.el7rhs.x86_64
glusterfs-api-devel-3.6.0.42-1.el7rhs.x86_64
glusterfs-devel-3.6.0.42-1.el7rhs.x86_64
glusterfs-3.6.0.42-1.el7rhs.x86_64
glusterfs-server-3.6.0.42-1.el7rhs.x86_64
glusterfs-debuginfo-3.6.0.42-1.el7rhs.x86_64
glusterfs-api-3.6.0.42-1.el7rhs.x86_64
glusterfs-rdma-3.6.0.42-1.el7rhs.x86_64
glusterfs-fuse-3.6.0.42-1.el7rhs.x86_64
glusterfs-geo-replication-3.6.0.42-1.el7rhs.x86_64

Comment 15 Ademar Reis 2016-06-22 14:42:34 UTC
We're working on it upstream and in the qemu-kvm-rhev flavor. See bug 1055487.

