Description of problem:
-----------------------
The size of the VM image file as reported from the FUSE mount is incorrect. For a file of size 1 TB, the size of the file on disk is reported as 1 ZB.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHHI-V 1.6 - RHV 4.2.8 & RHGS 3.4.3 ( glusterfs-3.12.2-38.el7rhgs )

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. On the Gluster storage domain, create a preallocated disk image of size 1 TB
2. Check the size of the file after its creation has succeeded

Actual results:
---------------
The size of the file on disk is reported as 1 ZB, though the size of the file is 1 TB.

Expected results:
-----------------
The size of the file should be the same as the size created by the user.

Additional info:
----------------
The volume in question is replica 3 sharded.

[root@rhsqa-grafton10 ~]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 7eb49e90-e2b6-4f8f-856e-7108212dbb72
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: rhsqa-grafton10.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick2: rhsqa-grafton11.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick3: rhsqa-grafton12.lab.eng.blr.redhat.com:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable
Size of the file as reported from the FUSE mount:

[root@ ~]# ls -lsah /rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com\:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b
8.0Z -rw-rw----. 1 vdsm kvm 1.1T Jan 21 17:14 /rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b

[root@ ~]# du -shc /rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com\:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b
16E     /rhev/data-center/mnt/glusterSD/rhsqa-grafton10.lab.eng.blr.redhat.com:_data/bbeee86f-f174-4ec7-9ea3-a0df28709e64/images/0206953c-4850-4969-9dad-15140579d354/eaa5e81d-103c-4ce6-947e-8946806cca1b
16E     total

The disk image is preallocated with 1072 GB of space.
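Note that `ls -l` shows the correct logical size (1.1T) while the leading `ls -s` column and `du` show the bogus 8.0Z/16E figures. That is because the two sets of tools read different stat fields: `ls -l` reports st_size, whereas `ls -s` and `du` report st_blocks * 512, i.e. allocated bytes. A minimal Python sketch (the path argument is hypothetical) showing the distinction, for anyone triaging similar reports:

```python
import os

def report_sizes(path):
    """Show the two size notions that `ls` and `du` report.

    `ls -l` prints st_size (the logical length); `ls -s` and `du`
    print st_blocks * 512 (bytes actually allocated on disk).
    A corrupt st_blocks value returned by the filesystem therefore
    shows up only in the `ls -s`/`du` columns, as seen above.
    """
    st = os.stat(path)
    return {
        "logical_bytes": st.st_size,             # what `ls -l` shows
        "allocated_bytes": st.st_blocks * 512,   # what `du`/`ls -s` show
    }
```

On a healthy volume the allocated size of a preallocated image should be approximately equal to (never orders of magnitude larger than) the logical size.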
Are you able to recreate this directly on the gluster fuse mount using a simple qemu-img command?

If so, can you share the exact command?

-Krutika
(In reply to Krutika Dhananjay from comment #2)
> Are you able to recreate this directly on the gluster fuse-mount using a
> simple qemu-img command?
>
> If so, can you share the exact command?
>
> -Krutika

Ignore this. I see the requested information is already provided in the clone.
The dependent RHGS bug is already ON_QA, so moving this bug to ON_QA as well.
Tested with RHVH 4.3.5 based on RHEL 7.7 with an interim RHGS 3.5.0 build ( glusterfs-6.0-7 ) with the following scenarios:

1. Created a preallocated image file of size 1 TB or more
---------------------------------------------------------
Observed that the size of the image file is consistent now.

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm2.img 1T
Formatting 'vm2.img', fmt=raw size=1099511627776 preallocation=falloc
[root@ ]# ls -lsah vm2.img
1.0T -rw-r--r--. 1 root root 1.0T Jul  3 21:26 vm2.img

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm3.img 1.5T
Formatting 'vm3.img', fmt=raw size=1649267441664 preallocation=falloc
[root@ ]# ls -lsah vm3.img
1.5T -rw-r--r--. 1 root root 1.5T Jul  3 21:26 vm3.img

2. Created a preallocated image file with the same name
-------------------------------------------------------
[root@]# qemu-img create -f raw -o preallocation=falloc vm1.img 10G
Formatting 'vm1.img', fmt=raw size=10737418240 preallocation=falloc
[root@ ]# ls -lsah vm1.img
10G -rw-r--r--. 1 root root 10G Jul  3 21:25 vm1.img

[root@ ]# qemu-img create -f raw -o preallocation=falloc vm1.img 10G
Formatting 'vm1.img', fmt=raw size=10737418240 preallocation=falloc
[root@ ]# ls -lsah vm1.img
10G -rw-r--r--. 1 root root 10G Jul  3 21:26 vm1.img

In this case too, the size of the image file is consistent.
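The manual verification above could be scripted as a quick regression check. A hedged sketch (the file name and the 2x allocation tolerance are my choices, not from this report) that preallocates a file the way `qemu-img create -o preallocation=falloc` does, via posix_fallocate, and verifies both size views stay sane:

```python
import os

def prealloc_and_check(path, size_bytes):
    """Preallocate `path` to `size_bytes` (analogous to qemu-img's
    preallocation=falloc) and verify both size views are sane."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        os.posix_fallocate(fd, 0, size_bytes)
    finally:
        os.close(fd)
    st = os.stat(path)
    logical_ok = st.st_size == size_bytes
    # Allocated size should be close to the request, never orders of
    # magnitude larger (the 8.0Z / 16E symptom this bug fixed).
    allocated_ok = st.st_blocks * 512 <= size_bytes * 2
    return logical_ok and allocated_ok
```

Running this against a file on the gluster FUSE mount before the fix would have failed the allocation check; with glusterfs-6.0-7 both checks pass, matching the `ls -lsah` output above.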
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0508