Bug 1590967

Summary: [RFE] Display space savings when a VDO volume is used.
Product: [oVirt] ovirt-engine
Reporter: Sahina Bose <sabose>
Component: BLL.Gluster
Assignee: Denis Chaplygin <dchaplyg>
Status: CLOSED CURRENTRELEASE
QA Contact: bipin <bshetty>
Severity: medium
Docs Contact:
Priority: high
Version: 4.2.3.2
CC: apinnick, bshetty, bugs, dchaplyg, dkeefe, lveyde, sabose, sasundar, ylavi
Target Milestone: ovirt-4.2.7
Keywords: FutureFeature
Target Release: ---
Flags: rule-engine: ovirt-4.2+
       rule-engine: blocker+
       ylavi: planning_ack+
       rule-engine: devel_ack+
       bshetty: testing_ack+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ovirt-engine-4.2.7.3
Doc Type: Enhancement
Doc Text:
The current release has a 'VDO Savings' field that displays the savings percentage for the Gluster Storage Domain, Volume, and Brick views.
Story Points: ---
Clone Of:
: 1613855 (view as bug list)
Environment:
Last Closed: 2018-11-02 14:28:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1617977
Bug Blocks: 1613855
Attachments:
Description                               Flags
RHHI Cockpit VDO Savings Picture          none
vdo-savings                               none
VDO_savings_Screenshot                    none
vdsm1.log                                 none
Supervdsm.log                             none
vdsm2.log                                 none
Engine.log                                none
Verified_UI_Screenshot_Space_Savings      none

Description Sahina Bose 2018-06-13 19:54:59 UTC
Description of problem:

When a VDO volume is used as a storage domain, the user should be presented with the space savings being achieved.

Version-Release number of selected component (if applicable):
4.2

How reproducible:
NA

Additional info:
Space savings should be presented at the brick level as well as the gluster volume/storage domain level

Comment 1 Denis Chaplygin 2018-06-14 08:30:17 UTC
What should we do in case the same VDO volume is used for several storage domains?

What information should we provide in case of (multiple) vdo-thinp layers?

Comment 2 Sahina Bose 2018-06-15 18:22:46 UTC
(In reply to Denis Chaplygin from comment #1)
> What should we do in case the same VDO volume is used for several storage
> domains?
> 

I think the same space savings can be reported.

> What information should we provide in case of (multiple) vdo-thinp layers?

Adding Dennis for inputs.

Comment 3 Dennis Keefe 2018-06-15 18:39:51 UTC
Created attachment 1452039 [details]
RHHI Cockpit VDO Savings Picture

Comment 4 Dennis Keefe 2018-06-15 22:18:41 UTC
Savings for 2.0
Cockpit will display the VDO savings per node. See attachment.

Savings for 2.0+
Storage Domain

Each storage domain should display the storage savings by averaging the VDO
volume savings of each of the Gluster volume's bricks.

example:
the brick for vmstore on host-rhhi1 (/dev/sdb)
has space savings of 49% (vdo command: vdostats --human-readable)

the brick for vmstore on host-rhhi2 (/dev/sdb)
has space savings of 57%

the brick for vmstore on host-rhhi3 (/dev/sdb)
has space savings of 87%

(49+57+87)/3 ≈ 64% space savings

64% space savings should be displayed for the storage domain after "Guaranteed Free Space" and before "Description" 
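
For illustration only (not the actual engine code; the function name is made up), a minimal Python sketch of that per-domain averaging:

def domain_savings_percent(brick_savings):
    # brick_savings: per-brick VDO savings percentages, e.g. [49, 57, 87]
    if not brick_savings:
        return 0
    return round(sum(brick_savings) / len(brick_savings))

# Bricks on host-rhhi1/2/3 from the example above:
print(domain_savings_percent([49, 57, 87]))  # -> 64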
--------------------------------

Savings for 2.0+
Dashboard

The Dashboard could show the savings either in the storage "circle" as part of the ring, or just as text in the center of the ring. I would normally say that the savings should be green, but that color is already used, so blue might work. The savings here should report the savings for the whole cluster, not just per storage domain.

The stats can come either from VDO manager, using "vdostats --verbose":
Command: vdostats --verbose | egrep "logical blocks used|data blocks used"

  data blocks used                    : 17275885
  logical blocks used                 : 33712693

Or from sysfs

Sysfs does not hold a savings percentage, which means you have to calculate it yourself.

1. capture the output of /sys/kvdo/<vdo volume name>/statistics/{logical_blocks_used,data_blocks_used}
2. add the data (physical) blocks used for all volumes in the cluster together
3. add the logical blocks used for all volumes in the cluster together
4. subtract the total data blocks used from the total logical blocks used (this is the total number of blocks saved by VDO)
5. divide the total saved blocks by the total logical blocks used, then multiply by 100 to get the savings percentage


ls /sys/kvdo/vdo_sdb/statistics/ | egrep "logical_blocks_used|data_blocks_used"
logical_blocks_used
data_blocks_used

cat $(ls -d /sys/kvdo/*/statistics/*|egrep "logical_blocks_used|data_blocks_used")
33712680
17275871

Math
33712680 - 17275871 = 16436809  (saved blocks)
16436809 / 33712680 ≈ 0.4876
0.4876 * 100 ≈ 49% (savings percentage)
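
As a rough illustration only (not the actual VDSM code; the helper names are made up and the sysfs paths follow the example above), the per-host part of this calculation could look like:

import glob

def _read_counter(path):
    # each kvdo sysfs statistics file holds a single integer block count
    with open(path) as f:
        return int(f.read().strip())

def host_savings_percent():
    # sum the counters over every VDO volume visible on this host; a
    # cluster-wide figure would aggregate these sums across all hosts
    logical = sum(_read_counter(p) for p in
                  glob.glob("/sys/kvdo/*/statistics/logical_blocks_used"))
    data = sum(_read_counter(p) for p in
               glob.glob("/sys/kvdo/*/statistics/data_blocks_used"))
    if logical == 0:
        return 0
    saved = logical - data  # blocks saved by VDO
    return round(saved * 100 / logical)

# With the numbers above: round((33712680 - 17275871) * 100 / 33712680) == 49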

Comment 5 Sahina Bose 2018-08-08 12:29:53 UTC
Is there a separate bug tracking the vdsm changes?

Comment 6 Denis Chaplygin 2018-08-08 12:37:14 UTC
No

Comment 7 Sahina Bose 2018-08-08 13:48:24 UTC
(In reply to Denis Chaplygin from comment #6)
> No

Can this bz be moved to MODIFIED without the vdsm patches? Either add those patches here, or have another bug tracking the vdsm changes on which this bug depends.

Comment 8 Denis Chaplygin 2018-08-08 14:14:10 UTC
It will not work without VDSM patches

Comment 9 Sahina Bose 2018-08-09 05:22:14 UTC
(In reply to Denis Chaplygin from comment #8)
> It will not work without VDSM patches

Denis, please move the bug state in that case, and add the required patches here.

Comment 10 Sandro Bonazzola 2018-08-24 08:18:38 UTC
This bug has not been marked as a blocker for 4.2.6 and we are now in the blockers-only phase. Please consider re-targeting this bug to the next release, or propose it as a blocker for the 4.2.6 release.

Comment 11 Sandro Bonazzola 2018-08-28 10:12:36 UTC
Moving back to POST since 2 referenced patches are still unmerged.

Comment 12 Sahina Bose 2018-08-29 12:09:33 UTC
Space savings are displayed on bricks

Comment 13 SATHEESARAN 2018-09-04 11:33:54 UTC
Advanced brick details always displays the savings % as 0:
"Deduplication/Compression savings (%)" is shown as 0.

Comment 14 SATHEESARAN 2018-09-04 11:37:53 UTC
VDO space savings reported from CLI

On Node1
---------
[root@ ~]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_sdb      18.2T     41.2G     18.1T   0%           73%

On Node2
---------
[root@ ~]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_sdb      18.2T     41.2G     18.1T   0%           73%

On Node3
---------
[root@ ~]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_sdb      18.2T     41.1G     18.1T   0%           68%

Comment 15 Sahina Bose 2018-09-04 15:15:05 UTC
(In reply to SATHEESARAN from comment #14)
> VDO space savings reported from CLI
> 
> On Node1
> ---------
> [root@ ~]# vdostats --human-readable
> Device                    Size      Used Available Use% Space saving%
> /dev/mapper/vdo_sdb      18.2T     41.2G     18.1T   0%           73%
> 
> On Node2
> ---------
> [root@ ~]# vdostats --human-readable
> Device                    Size      Used Available Use% Space saving%
> /dev/mapper/vdo_sdb      18.2T     41.2G     18.1T   0%           73%
> 
> On Node3
> ---------
> [root@ ~]# vdostats --human-readable
> Device                    Size      Used Available Use% Space saving%
> /dev/mapper/vdo_sdb      18.2T     41.1G     18.1T   0%           68%

Can you attach the vdsm.log/supervdsm.log from one of the nodes?

Comment 16 Sahina Bose 2018-09-04 15:18:08 UTC
Created attachment 1480826 [details]
vdo-savings

This is from a 4.2.5 deployment, so logs will help to analyze why the savings are not displayed

Comment 17 bipin 2018-09-24 04:45:02 UTC
Tested this bug and could see "Deduplication/Compression savings (%)" as 0, though the CLI "saving percent" was 50.
The savings % in the CLI was varying while writing data to the volume, but it wasn't reflected in the UI. Attaching the relevant logs for further debugging.

[root@rhsqa-grafton7 ~]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_sdb      20.0T     31.9G     20.0T   0%           62%

[root@rhsqa-grafton7 ~]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_sdb      20.0T     39.3G     20.0T   0%           59%


[root@rhsqa-grafton7 ~]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_sdb      20.0T   1019.4G     19.0T   4%           50%


[root@rhsqa-grafton7 ~]# vdo status | grep "saving percent"
saving percent: 50

Comment 18 bipin 2018-09-24 04:46:10 UTC
Created attachment 1486267 [details]
VDO_savings_Screenshot

Comment 19 bipin 2018-09-24 04:47:51 UTC
Created attachment 1486268 [details]
vdsm1.log

Comment 20 bipin 2018-09-24 04:48:34 UTC
Created attachment 1486269 [details]
Supervdsm.log

Comment 21 bipin 2018-09-24 04:49:09 UTC
Created attachment 1486270 [details]
vdsm2.log

Comment 22 bipin 2018-09-24 04:49:47 UTC
Created attachment 1486271 [details]
Engine.log

Comment 23 bipin 2018-10-22 11:49:44 UTC
Tested the bug on rhvm-4.2.7.4. The fix seems to be working as expected.
The space savings shown in RHV-M correspond to the CLI output.


[root@rhsqa-abc ~]# vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo_sdc     223.1G     60.0G    163.0G  26%           62%
/dev/mapper/vdo_sdd     931.0G      4.2G    926.8G   0%           77%


Attaching the screenshot from RHV-M.

Comment 24 bipin 2018-10-22 11:50:19 UTC
Created attachment 1496372 [details]
Verified_UI_Screenshot_Space_Savings

Comment 25 bipin 2018-10-22 11:51:27 UTC
Canceling the needinfo on the assignee, since it is no longer required.

Comment 26 Sandro Bonazzola 2018-11-02 14:28:46 UTC
This bugzilla is included in the oVirt 4.2.7 release, published on November 2nd 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.