Bug 1613389 - Low sequential write throughput with VDO in RHHI 2.0
Summary: Low sequential write throughput with VDO in RHHI 2.0
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Ritesh Chikatwar
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1613425
Blocks:
 
Reported: 2018-08-07 13:25 UTC by Nikhil Chawla
Modified: 2020-12-07 07:07 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1613425 (view as bug list)
Environment:
Last Closed: 2020-12-07 07:07:30 UTC
Embargoed:



Description Nikhil Chawla 2018-08-07 13:25:53 UTC
Description of problem:

In sequential FIO tests run on RHHI with and without VDO, across deduplication levels of 0, 25, 50 and 100 and iodepths of 1, 4, 8 and 16, the sequential throughput on SSDs and on RAID 6 is noticeably lower with VDO.

RAID 6 throughput dropped to approximately 70% of the non-VDO RHHI throughput.
SSD throughput dropped to approximately 80% of the non-VDO RHHI throughput.


FIO Command used:
fio --name=sequentialread --ioengine=sync --rw=read  --bs=128k --directory="{{ fio_test_directory }}" --filename_format=f.\$jobnum.\$filenum --filesize=16g --size=16g --numjobs=4

Version-Release number of selected component (if applicable):

gluster version:
glusterfs-3.8.4-54.15.el7rhgs.x86_64.rpm                                           
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64.rpm                                          
glusterfs-api-devel-3.8.4-54.15.el7rhgs.x86_64.rpm                                  
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64.rpm                                        
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64.rpm                             
glusterfs-devel-3.8.4-54.15.el7rhgs.x86_64.rpm                                      
glusterfs-events-3.8.4-54.15.el7rhgs.x86_64.rpm                                     
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64.rpm                                       
glusterfs-ganesha-3.8.4-54.15.el7rhgs.x86_64.rpm                                      
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64.rpm                              
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64.rpm                                         
glusterfs-rdma-3.8.4-54.15.el7rhgs.x86_64.rpm                                         
glusterfs-resource-agents-3.8.4-54.15.el7rhgs.noarch.rpm
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64.rpm


rhv version:
rhv-4.2.5

How reproducible:


Steps to Reproduce:
1. Set up RHHI with and without VDO.
2. Provision a VM and add a RAID 6- or SSD-backed storage disk to it (virtio-blk, thin-provisioned).
3. Create an XFS filesystem on the block device added to the VM and mount it.
4. Install fio and run the command above, replacing "{{ fio_test_directory }}" with the mount point (see the sketch after this list).
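
A minimal sketch of steps 3 and 4 inside the guest, assuming the added disk appears as /dev/vdb and is mounted at /mnt/fiotest (both names are illustrative assumptions, not taken from the test setup):

# create and mount an XFS filesystem on the newly added virtio disk
mkfs.xfs /dev/vdb
mkdir -p /mnt/fiotest
mount /dev/vdb /mnt/fiotest

# install fio and run the sequential test against the mount point
yum install -y fio
fio --name=sequentialread --ioengine=sync --rw=read --bs=128k \
    --directory=/mnt/fiotest --filename_format=f.\$jobnum.\$filenum \
    --filesize=16g --size=16g --numjobs=4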

Actual results:


Expected results:


Additional info:



Result sheet:
https://docs.google.com/spreadsheets/d/1OjQ5gaYj9Vo2SwV0gmZN_SmdBanNCqQmxVwpLkuurdg/edit?usp=sharing

Comment 2 Sahina Bose 2018-11-06 09:29:52 UTC
Nikhil, with the updated RHGS 3.4.1 version, do you still observe the same results?

Comment 5 Sahina Bose 2019-11-20 11:48:55 UTC
Ritesh, assigning to you to check the difference in performance with a VDO layer with 4K changes and RHGS 3.5

Comment 6 Gobinda Das 2020-12-07 07:07:30 UTC
We know that VDO does not deliver significant performance in RHHI-V. Right now we are focusing only on high-priority customer bug fixes, and we have no current plans for VDO fixes. Closing this bug for now; we will reopen it if a customer reports the same issue.

