Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1624395

Summary: vdo guaranteed space monitoring does not show correct values
Product: [oVirt] ovirt-engine
Reporter: Sahina Bose <sabose>
Component: BLL.Gluster
Assignee: Denis Chaplygin <dchaplyg>
Status: CLOSED NOTABUG
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: high
Version: 4.2.6
CC: bugs
Target Milestone: ovirt-4.2.7
Flags: sabose: ovirt-4.2?
       sabose: planning_ack?
       sabose: devel_ack?
       sabose: testing_ack?
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-27 10:31:33 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  vdostats (flags: none)
  monitoring screenshot (flags: none)

Description Sahina Bose 2018-08-31 12:51:08 UTC
Created attachment 1480089 [details]
vdostats

Description of problem:

One of the bricks in a 3-node gluster volume ran out of space, causing the brick to go down. However, the guaranteed space monitoring did not raise an alert. Digging into the values shows that it does not display them correctly either.
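The check that was expected to fire can be sketched as follows. This is an illustrative Python sketch, not ovirt-engine code; the device name and figures are hypothetical, but the column layout follows the default `vdostats` output (Device, 1K-blocks, Used, Available, Use%, Space saving%):

```python
# Illustrative sketch of the expected monitoring behaviour; NOT the actual
# ovirt-engine implementation. It parses vdostats-style columns and flags
# devices whose physical usage crosses an alert threshold.

SAMPLE_VDOSTATS = """\
Device               1K-blocks    Used         Available    Use%  Space saving%
/dev/mapper/vdo_sdb  104857600    104857600    0            100%  70%
"""

def parse_vdostats(output):
    """Return one dict per VDO device line (header skipped)."""
    rows = []
    for line in output.strip().splitlines()[1:]:
        dev, total, used, avail, use_pct, _saving = line.split()
        rows.append({
            "device": dev,
            "total_kb": int(total),
            "used_kb": int(used),
            "available_kb": int(avail),
            "use_percent": int(use_pct.rstrip("%")),
        })
    return rows

def alerts(rows, threshold_percent=90):
    """Devices whose physical usage is at or above the threshold."""
    return [r["device"] for r in rows if r["use_percent"] >= threshold_percent]
```

With the brick's VDO device fully used (Available = 0, Use% = 100), such a check would have reported it well before the brick went down.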



Version-Release number of selected component (if applicable):
4.2.6

How reproducible:
Always

Steps to Reproduce:
1. Create a gluster volume with dedupe/compression enabled (in this case backed by thinp)
2. Create disk images until the brick is out of space


Actual results:
The copydata storage domain (data) volume reports guaranteed free space of ~4000 GiB even though no space is available

Expected results:
Alerts should have been raised when threshold levels are reached

Additional info:
There were 3 nodes in the cluster - rhsdev-grafton2, rhsdev-grafton3, rhsdev-grafton4
Attached a txt file with lsblk and vdo status output from all nodes

Comment 1 Sahina Bose 2018-08-31 12:53:03 UTC
Created attachment 1480090 [details]
monitoring screenshot

Comment 2 Sahina Bose 2018-09-27 10:30:55 UTC
As requested:
engine=#  select connection,gluster_volume_id from storage_server_connections;
                   connection                    |          gluster_volume_id           
-------------------------------------------------+--------------------------------------
 10.70.37.28:/engine                             | 
 rhsdev-grafton2.lab.eng.blr.redhat.com:/vmstore | 826718b3-b509-4ff6-abdb-d500d66e7a18
 10.70.37.28:/data                               | 
 10.70.37.28:/ssd                                | 5d4234b6-6f85-485a-902a-cfe35088c5da
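The query output above already points at the cause: capacity monitoring can only cover connections that carry a gluster_volume_id, and the data connection has none. An illustrative Python sketch (not actual ovirt-engine logic) of that filtering, mirroring the rows above:

```python
# Rows mirroring the storage_server_connections query output above;
# None stands for an empty gluster_volume_id. This is an illustrative
# sketch, not ovirt-engine code.

connections = [
    ("10.70.37.28:/engine", None),
    ("rhsdev-grafton2.lab.eng.blr.redhat.com:/vmstore",
     "826718b3-b509-4ff6-abdb-d500d66e7a18"),
    ("10.70.37.28:/data", None),
    ("10.70.37.28:/ssd", "5d4234b6-6f85-485a-902a-cfe35088c5da"),
]

def monitorable(conns):
    """Connections that monitoring can match to a gluster volume."""
    return [c for c, vol_id in conns if vol_id is not None]
```

Only the vmstore and ssd connections are matchable; the data volume falls outside monitoring entirely.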

Comment 3 Sahina Bose 2018-09-27 10:31:33 UTC
Closing this bug, as the data volume did not have a glusterVolumeId associated with its storage server connection.