Bug 1613389

Summary: Low sequential write throughput with VDO in RHHI 2.0
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nikhil Chawla <nichawla>
Component: rhhi
Assignee: Ritesh Chikatwar <rchikatw>
Status: CLOSED WONTFIX
QA Contact: SATHEESARAN <sasundar>
Severity: medium
Docs Contact:
Priority: medium
Version: rhhiv-1.5
CC: dkeefe, godas, guillaume.pavese, pasik, psuriset, rhs-bugs, rsussman, sabose
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1613425 (view as bug list)
Environment:
Last Closed: 2020-12-07 07:07:30 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1613425
Bug Blocks:

Description Nikhil Chawla 2018-08-07 13:25:53 UTC
Description of problem:

In sequential FIO tests run on RHHI both with and without VDO, across deduplication levels of 0, 25, 50, and 100 percent and iodepths of 1, 4, 8, and 16, the sequential throughput on SSD and RAID 6 backends is noticeably lower with VDO.

RAID 6 throughput dropped to approximately 70% of the non-VDO RHHI result.
SSD throughput dropped to approximately 80% of the non-VDO RHHI result.


FIO Command used:
fio --name=sequentialread --ioengine=sync --rw=read  --bs=128k --directory="{{ fio_test_directory }}" --filename_format=f.\$jobnum.\$filenum --filesize=16g --size=16g --numjobs=4
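
For reference, the command above runs 4 jobs of 128 KiB sequential reads with the synchronous ioengine against a 16 GiB file per job. The sequential write case referred to in the summary is not recorded in this bug; presumably it is the same invocation with --rw=write (the job name below is illustrative):
fio --name=sequentialwrite --ioengine=sync --rw=write --bs=128k --directory="{{ fio_test_directory }}" --filename_format=f.\$jobnum.\$filenum --filesize=16g --size=16g --numjobs=4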

Version-Release number of selected component (if applicable):

gluster version:
glusterfs-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-api-devel-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-devel-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-events-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-ganesha-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-rdma-3.8.4-54.15.el7rhgs.x86_64.rpm
glusterfs-resource-agents-3.8.4-54.15.el7rhgs.noarch.rpm
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64.rpm


rhv version:
rhv-4.2.5

How reproducible:


Steps to Reproduce:
1. Set up RHHI with and without VDO.
2. Provision a VM and add a RAID 6 or SSD-backed storage disk to it (virtio-blk, thin-provisioned).
3. Create an XFS filesystem on the block device added to the VM and mount it.
4. Install fio, replace "{{ fio_test_directory }}" in the command above with the mountpoint, and run the fio test (see the sketch below).
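
A minimal shell sketch of steps 3 and 4, assuming the virtio-blk disk appears in the guest as /dev/vdb and is mounted at /mnt/fiotest (both names are illustrative, not taken from this report):

# Create and mount an XFS filesystem on the disk attached to the VM
mkfs.xfs /dev/vdb
mkdir -p /mnt/fiotest
mount /dev/vdb /mnt/fiotest

# Run the sequential read test from the description against the mountpoint
fio --name=sequentialread --ioengine=sync --rw=read --bs=128k \
    --directory=/mnt/fiotest --filename_format=f.\$jobnum.\$filenum \
    --filesize=16g --size=16g --numjobs=4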

Actual results:


Expected results:


Additional info:



Result sheet:
https://docs.google.com/spreadsheets/d/1OjQ5gaYj9Vo2SwV0gmZN_SmdBanNCqQmxVwpLkuurdg/edit?usp=sharing

Comment 2 Sahina Bose 2018-11-06 09:29:52 UTC
Nikhil, with the updated RHGS 3.4.1 version, do you still observe the same results?

Comment 5 Sahina Bose 2019-11-20 11:48:55 UTC
Ritesh, assigning to you to check the performance difference with a VDO layer, given the 4K changes and RHGS 3.5.

Comment 6 Gobinda Das 2020-12-07 07:07:30 UTC
We know that VDO does not perform well in RHHI-V. Right now we are only looking at fixes for high-priority customer bugs, and we have no plans for VDO fixes at this time. Closing this bug for now; we will reopen it if a customer reports the same issue.