Bug 1055487
| Field | Value |
|---|---|
| Summary | glusterfs backend does not support discard |
| Product | Red Hat Enterprise Linux 7 |
| Component | qemu-kvm-rhev |
| Version | 7.0 |
| Status | CLOSED CURRENTRELEASE |
| Reporter | Paolo Bonzini <pbonzini> |
| Assignee | Jeff Cody <jcody> |
| QA Contact | FuXiangChun <xfu> |
| Severity | unspecified |
| Priority | unspecified |
| CC | areis, chayang, dyuan, hhuang, jcody, jinzhao, juzhang, knoel, lmen, mazhang, michen, mrezanin, mzhan, pbonzini, rbalakri, rcyriac, sherold, ssaha, virt-maint, weliao, yanyang |
| Target Milestone | rc |
| Keywords | FutureFeature |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | qemu-kvm-rhev-2.1.2-1.el7.x86_64 |
| Doc Type | Enhancement |
| Clone Of | 1037503 |
| Blocks (view as bug list) | GlusterThinProvisioning 1136534 |
| Last Closed | 2016-08-04 07:26:28 UTC |
| Type | Bug |
| Bug Depends On | 1037503 |
| Bug Blocks | 1059976, 1103845, 1136534 |
Comment 1
Tushar Katarki
2014-01-28 19:01:50 UTC
Hi Hai and Jeff,

If this bz is only fixed in the qemu-kvm-rhev build, could you update the bz component to qemu-kvm-rhev?

Best Regards,
Junyi

--------------------------------------------------------------

Verified this issue on qemu-kvm-rhev-2.1.2-1.el7.x86_64.

Host info:
# uname -r && rpm -q qemu-kvm
3.10.0-123.9.2.el7.x86_64
qemu-kvm-rhev-2.1.2-1.el7.x86_64

Steps:
1. Create a 1G raw/qcow2 image on an XFS/ext4 file system.
# qemu-img create -f raw gluster://10.66.106.35/volume_sluo/sluo.raw 1G
Formatting 'gluster://10.66.106.35/volume_sluo/sluo.raw', fmt=raw size=1073741824
2. Start qemu with a command line like the following:
e.g. /usr/libexec/qemu-kvm ... -device virtio-scsi-pci,id=scsi2,indirect_desc=off,event_idx=off,bus=pci.0,addr=0x8 -drive file=gluster://10.66.106.35/volume_sluo/sluo.raw,if=none,id=drive-hd-disk,media=disk,format=raw,cache=none,werror=stop,rerror=stop,discard=on -device scsi-hd,drive=drive-hd-disk,id=scsi_disk
3. Count the allocated blocks on the host.
# mount -t glusterfs 10.66.106.35:volume_sluo /mnt/
# stat /mnt/sluo.raw
4. Make a file system on the disk in the guest.
# mkfs.ext4 /dev/sdb
5. On the host:
# stat /mnt/sluo.raw
6. On the guest:
# mount /dev/sdb /mnt/test
# dd if=/dev/zero of=test/file bs=1M count=500
7. Check the block count on the host.
# stat /mnt/sluo.raw
8. Remove the file and fstrim it in the guest.
# rm /mnt/test/file
# fstrim ./test
9. Count the allocated blocks on the host.
# stat /mnt/sluo.raw

Results:
1. After step 3:
# stat /mnt/sluo.raw
File: ‘/mnt/sluo.raw’ Size: 1073741824 Blocks: 8 IO Block: 131072 regular file
2. After step 5:
# stat /mnt/sluo.raw
File: ‘/mnt/sluo.raw’ Size: 1073741824 Blocks: 66888 IO Block: 131072 regular file
3. After step 7:
# stat /mnt/sluo.raw
File: ‘/mnt/sluo.raw’ Size: 1073741824 Blocks: 971032 IO Block: 131072 regular file
4. After step 9, check whether the sectors roll back:
# stat /mnt/sluo.raw
File: ‘/mnt/sluo.raw’ Size: 1073741824 Blocks: 1123656 IO Block: 131072 regular file

Based on the above, the sectors fail to roll back (Blocks: 971032 ----> 1123656), so this bug has not been fixed.
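The pass/fail criterion in the steps above is whether the image's allocated block count drops back after the guest-side rm + fstrim, i.e. whether the discard reaches the backing file as a hole punch. As a hedged local sketch (no gluster or qemu involved; the temp file and sizes are arbitrary choices of ours, and it assumes a filesystem that supports hole punching, such as ext4, XFS, or tmpfs), the same check can be demonstrated directly with fallocate:

```shell
# Local sketch only: shows the "blocks roll back" criterion used in the
# test steps, without gluster/qemu. fallocate --punch-hole is the kind of
# operation a discard ultimately turns into on the backing file.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
before=$(stat -c %b "$img")        # allocated 512-byte blocks before the punch
fallocate --punch-hole --offset 0 --length 8M "$img"
after=$(stat -c %b "$img")         # should be lower if the punch took effect
echo "Blocks: $before ----> $after"
if [ "$after" -lt "$before" ]; then
    echo "blocks rolled back"
fi
rm -f "$img"
```

On a filesystem without hole-punch support the fallocate call fails and the block count stays put, which is analogous to the failed runs recorded in this bug.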
Best Regards,
sluo

--------------------------------------------------------------

Hi Paolo,
According to https://bugzilla.redhat.com/show_bug.cgi?id=1136534#c5, it seems we need to move this bz back to ASSIGNED status, right?
Best Regards,
Junyi

--------------------------------------------------------------

glusterfs _should_ support discard with the gluster POSIX backend, so marking as FailedQA.

--------------------------------------------------------------

Please try qemu-kvm-2.1.2-20.el7

--------------------------------------------------------------

(In reply to Paolo Bonzini from comment #12)
> Please try qemu-kvm-2.1.2-20.el7

Still hitting this issue: the sectors do not roll back (Blocks: 2597432 ----> 2646584) with the same testing as comment #5.

Host info:
# uname -r && rpm -q qemu-kvm-rhev
3.10.0-222.el7.x86_64
qemu-kvm-rhev-2.1.2-20.el7.x86_64
Guest info:
# uname -r
3.10.0-222.el7.x86_64

Results:
1. After step 3:
# stat /mnt/my-data-disk.raw
File: ‘/mnt/my-data-disk.raw’ Size: 10737418240 Blocks: 0 IO Block: 131072 regular file
2. After step 5:
# stat /mnt/my-data-disk.raw
File: ‘/mnt/my-data-disk.raw’ Size: 10737418240 Blocks: 270872 IO Block: 131072 regular file
3. After step 7:
# stat /mnt/my-data-disk.raw
File: ‘/mnt/my-data-disk.raw’ Size: 10737418240 Blocks: 2597432 IO Block: 131072 regular file
4. After step 9, check whether the sectors roll back:
# stat /mnt/my-data-disk.raw
File: ‘/mnt/my-data-disk.raw’ Size: 10737418240 Blocks: 2646584 IO Block: 131072 regular file

Best Regards,
sluo

--------------------------------------------------------------

(In reply to Sibiao Luo from comment #13)
> Still hitting this issue: the sectors do not roll back (Blocks: 2597432 ----> 2646584)
> with the same testing as comment #5.

Forgot to include the glusterfs version; I used the latest package from brewweb.
# rpm -qa | grep gluster
glusterfs-cli-3.6.0.42-1.el7rhs.x86_64
glusterfs-libs-3.6.0.42-1.el7rhs.x86_64
glusterfs-api-devel-3.6.0.42-1.el7rhs.x86_64
glusterfs-devel-3.6.0.42-1.el7rhs.x86_64
glusterfs-3.6.0.42-1.el7rhs.x86_64
glusterfs-server-3.6.0.42-1.el7rhs.x86_64
glusterfs-debuginfo-3.6.0.42-1.el7rhs.x86_64
glusterfs-api-3.6.0.42-1.el7rhs.x86_64
glusterfs-rdma-3.6.0.42-1.el7rhs.x86_64
glusterfs-fuse-3.6.0.42-1.el7rhs.x86_64
glusterfs-geo-replication-3.6.0.42-1.el7rhs.x86_64

--------------------------------------------------------------

I've tested this on RHEL-7.2, with the following package versions:

glusterfs.x86_64 3.7.1-16.el7
glusterfs-api.x86_64 3.7.1-16.el7
glusterfs-api-devel.x86_64 3.7.1-16.el7
glusterfs-client-xlators.x86_64 3.7.1-16.el7
glusterfs-devel.x86_64 3.7.1-16.el7
glusterfs-fuse.x86_64 3.7.1-16.el7
glusterfs-libs.x86_64 3.7.1-16.el7
qemu-img-rhev.x86_64 10:2.3.0-31.el7_2.16
qemu-kvm-common-rhev.x86_64 10:2.3.0-31.el7_2.16
qemu-kvm-rhev.x86_64 10:2.3.0-31.el7_2.16

Discard is working fine in my testing. When testing, I recommend a "sync" in step 8, after removing the file and before the fstrim.

Test results:

Prior to creating the test file in the guest:
$ stat test.raw
File: ‘test.raw’ Size: 10737418240 Blocks: 595288 IO Block: 131072 regular file
$ du -sh test.raw
291M test.raw

After 'dd if=/dev/zero of=/mnt/test/junk.bin bs=1M count=128' in the guest:
$ stat test.raw
File: ‘test.raw’ Size: 10737418240 Blocks: 857432 IO Block: 131072 regular file
$ du -sh test.raw
419M test.raw

After 'rm -f /mnt/test/junk.bin; sync; fstrim -v /mnt' in the guest:
$ stat test.raw
File: ‘test.raw’ Size: 10737418240 Blocks: 595288 IO Block: 131072 regular file
$ du -sh test.raw
291M test.raw

Moving to MODIFIED, so that it still gets tested by QE, but this should be closed.

--------------------------------------------------------------

QE reproduced this issue with the following versions:

Host: qemu-kvm-rhev-1.5.3-60.el7_0.12.x86_64, 3.10.0-229.el7.x86_64, glusterfs-3.7.9-2.el7rhgs.x86_64
Guest: 3.10.0-456.el7.x86_64

Steps:
1. Create a 1G raw/qcow2 image on an XFS/ext4 file system.
# qemu-img create -f raw gluster://10.66.9.230/test-volume/weliao.raw 1G
2. Start qemu with a command line like the following:
# /usr/libexec/qemu-kvm -name rhel7.3 -M pc -cpu SandyBridge -m 4096 -realtime mlock=off -nodefaults -smp 4 -drive file=/home/RHEL-Server-7.3-64-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:22:33:44:55,bus=pci.0,addr=0x3 -vga qxl -spice port=5900,disable-ticketing -monitor stdio -boot menu=on -qmp tcp:0:4444,server,nowait -device virtio-scsi-pci,id=scsi2,indirect_desc=off,event_idx=off,bus=pci.0,addr=0x8 -drive file=gluster://10.66.9.230/test-volume/weliao.raw,if=none,id=drive-virtio-disk1,format=raw,media=disk,cache=none,werror=stop,rerror=stop,discard=on -device scsi-hd,bus=scsi2.0,drive=drive-virtio-disk1,id=virtio-disk1
3. Count the allocated blocks on the host.
# mount -t glusterfs 10.66.9.230:test-volume /mnt/
# stat /mnt/weliao.raw
4. Make a file system on the disk in the guest.
# mkfs.ext4 /dev/sdb
5. On the host:
# stat /mnt/weliao.raw
6. On the guest:
# mount /dev/sdb /mnt/test
# dd if=/dev/zero of=test/file bs=1M count=500
7. Check the block count on the host.
# stat /mnt/weliao.raw
8. Remove the file and fstrim it in the guest.
# rm /mnt/test/file
# fstrim ./test
9. Count the allocated blocks on the host.
# stat /mnt/weliao.raw

Results:
1. After step 3:
# stat weliao.raw
File: ‘weliao.raw’ Size: 1073741824 Blocks: 8 IO Block: 131072 regular file
2. After step 5:
# stat weliao.raw
File: ‘weliao.raw’ Size: 1073741824 Blocks: 66888 IO Block: 131072 regular file
3. After step 7:
# stat weliao.raw
File: ‘weliao.raw’ Size: 1073741824 Blocks: 525520 IO Block: 131072 regular file
4. After step 9, check whether the sectors roll back:
# stat weliao.raw
File: ‘weliao.raw’ Size: 1073741824 Blocks: 1123536 IO Block: 131072 regular file

So the issue can be reproduced.
--------------------------------------------------------------

Verified with the following versions:

Host: qemu-kvm-rhev-2.6.0-17.el7.x86_64, 3.10.0-478.el7.x86_64, glusterfs-3.7.9-2.el7rhgs.x86_64
Guest: 3.10.0-456.el7.x86_64

Same test steps.

Results:
1. After step 3:
# stat weliao.raw
File: ‘weliao.raw’ Size: 1073741824 Blocks: 8 IO Block: 131072 regular file
2. After step 5:
# stat weliao.raw
File: ‘weliao.raw’ Size: 1073741824 Blocks: 66888 IO Block: 131072 regular file
3. After step 7:
# stat weliao.raw
File: ‘weliao.raw’ Size: 1073741824 Blocks: 525632 IO Block: 131072 regular file
4. After step 9:
# stat weliao.raw
File: ‘weliao.raw’ Size: 1073741824 Blocks: 99528 IO Block: 131072 regular file

The sectors roll back correctly (Blocks: 525632 ----> 99528), so this bug is fixed.
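Every run in this bug applies the same pass/fail test: did the block count after rm + fstrim drop back below the block count after dd? As a small illustrative helper (the function name is ours, not from the report; the numbers are taken from the runs above):

```shell
# Hypothetical helper: succeeds when the block count after rm+fstrim ($2)
# dropped below the block count after dd ($1), i.e. the guest's discard
# actually freed space in the backing image on the gluster volume.
blocks_rolled_back() {
    [ "$2" -lt "$1" ]
}

# Numbers from the runs recorded in this bug:
blocks_rolled_back 525632  99528   && echo "verified run: pass"
blocks_rolled_back 971032  1123656 || echo "FailedQA run: fail"
```

The slight growth in the failing runs (rather than no change) is expected: without a working discard path, the fstrim's own metadata writes allocate additional blocks in the image.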