Description of problem:
In sequential FIO tests executed on RHHI with and without VDO, with deduplication levels of 0, 25, 50 and 100 percent and iodepths of 1, 4, 8 and 16, sequential throughput on SSD and RAID 6 bricks is noticeably lower with VDO:
- RAID 6 throughput dropped to approximately 70% of the non-VDO RHHI throughput.
- SSD throughput dropped to approximately 80% of the non-VDO RHHI throughput.

FIO command used:
fio --name=sequentialread --ioengine=sync --rw=read --bs=128k --directory="{{ fio_test_directory }}" --filename_format=f.\$jobnum.\$filenum --filesize=16g --size=16g --numjobs=4

Version-Release number of selected component (if applicable):
gluster version:
glusterfs-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-api-devel-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-devel-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-events-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-ganesha-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-rdma-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-resource-agents-3.8.4-54.15.el7rhgs.noarch.rpm
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64.rpm

rhv version: rhv-4.2.5

How reproducible:

Steps to Reproduce:
1. Set up RHHI with and without VDO.
2. Provision a VM and add a (RAID 6/SSD) storage disk to the VM (virtio-blk, thin-provisioned).
3. Create an XFS filesystem over the block device added to the VM and mount it.
4. Install fio, replace "{{ fio_test_directory }}" in the command above with the mount point, and run the FIO test (see the sketch below).

Actual results:

Expected results:

Additional info:
Result sheet: https://docs.google.com/spreadsheets/d/1OjQ5gaYj9Vo2SwV0gmZN_SmdBanNCqQmxVwpLkuurdg/edit?usp=sharing
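
A minimal reproduction sketch for steps 2-4 inside the VM, assuming the added disk appears as /dev/vdb and is mounted at /mnt/fio_test (both names are placeholders, not taken from the original report; adjust to your setup):

# Assumed disk and mount point; substitute your own values.
mkfs.xfs /dev/vdb                 # create an XFS filesystem on the disk added to the VM
mkdir -p /mnt/fio_test
mount /dev/vdb /mnt/fio_test      # mount it
yum install -y fio                # install fio
# Same fio invocation as in the report, with the placeholder directory filled in:
fio --name=sequentialread --ioengine=sync --rw=read --bs=128k \
    --directory=/mnt/fio_test --filename_format=f.\$jobnum.\$filenum \
    --filesize=16g --size=16g --numjobs=4

Repeat the run on the VDO-backed and non-VDO-backed setups and compare the reported sequential read bandwidth.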
Nikhil, with the updated RHGS 3.4.1 version, do you still observe the same results?
Ritesh, assigning this to you to check the performance difference with a VDO layer, given the 4K changes and RHGS 3.5.
We know that VDO does not deliver significant performance in RHHI-V. Right now we are focusing only on high-priority customer bug fixes, and we have no current plans for VDO fixes. Closing this bug for now; we will reopen it if a customer reports the same issue.