Bug 852421 - Volume top read-perf/write-perf always showing 0MBps
Status: CLOSED NOTABUG
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.3.0
Hardware/OS: Unspecified Linux
Priority: unspecified
Severity: medium
Assigned To: Kaushal
Depends On:
Blocks:
Reported: 2012-08-28 09:01 EDT by Filip Pytloun
Modified: 2012-08-28 13:02 EDT (History)
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-08-28 13:02:22 EDT
Type: Bug
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Filip Pytloun 2012-08-28 09:01:14 EDT
Description of problem:
When I use command gluster volume top Staging read-perf list-cnt 1, I always get 0MBps traffic even on highly loaded Gluster cluster.
Output below:

Brick: dfs01:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ...hmU2oqe86e_dataset.labels/dataset.labels.csv 2012-08-28 14:55:20.481924
Brick: dfs05:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ...d3b0235-28fe-4a44-a48c-e3d9480b9fde/data.csv 2012-08-28 14:55:21.923247
Brick: dfs03:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ...lX6kx6JMjtbyTevC_dataset.sms/dataset.sms.csv 2012-08-28 14:55:21.637640
Brick: dfs07:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ...56f0a1f-5fe5-4a32-8776-95ca6a652c7f/data.csv 2012-08-28 14:55:21.915933
Brick: dfs09:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ...AphzNgYnlpzIqucEV_dataset.stories/upload.zip 2012-08-28 14:55:19.873379
Brick: dfs04:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ...55ed3d6-29f1-4071-882b-aac8502275cd/data.csv 2012-08-28 14:55:21.951723
Brick: dfs10:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ...0a_dataset.zendesktags/d_zendesktags_tag.csv 2012-08-28 14:55:20.897776
Brick: dfs02:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ....labelstostories/dataset.labelstostories.csv 2012-08-28 14:55:21.355501
Brick: dfs08:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ...e26e86e-44ef-4d35-b371-2a03ac7359f1/data.csv 2012-08-28 14:55:21.950289
Brick: dfs06:/mnt/gluster/Staging
Throughput 0.00 MBps time 0.0000 secs
MBps Filename                                        Time                      
==== ========                                        ====                      
   0 ..._dataset.stories/d_stories_iteration.csv.log 2012-08-28 14:55:19.457255

Version-Release number of selected component (if applicable):
Vanilla glusterfs-3.3.0

How reproducible:
Run gluster volume top Staging read-perf or write-perf.

Steps to Reproduce:
1. Run gluster volume top Staging read-perf or write-perf
  
Actual results:
0MBps traffic

Expected results:
Actual throughput values reflecting the real traffic.

Additional info:
Gluster setup of 10 nodes with replica 2. Clients use the same glusterfs version and mount the volume via glusterfs (FUSE).

I can provide more debug information; however, I am not sure what would help.
Comment 1 M S Vishwanath Bhat 2012-08-28 09:17:56 EDT
Have you turned latency-measurement on? If it is not set, MBps for individual files will not be displayed.

gluster volume set Staging latency-measurement on

Also, can you please run read-perf/write-perf with a block size and count? For example:

gluster volume top Staging read-perf bs 1024 count 100
or
gluster volume top Staging write-perf bs 1024 count 500 list-cnt 5

Currently, the throughput of the whole brick is displayed as zero until you specify the block size (bs) and count.

Can you please try the above commands and get back to us?
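Put together, the sequence suggested above looks like this (volume name Staging taken from the report; the bs/count values are just the examples given, not tuned recommendations):

```shell
# Enable latency measurement so per-file MBps values are populated
gluster volume set Staging latency-measurement on

# read-perf/write-perf run an actual test read/write when given a
# block size and count; without them the brick throughput shows 0
gluster volume top Staging read-perf bs 1024 count 100
gluster volume top Staging write-perf bs 1024 count 500 list-cnt 5
```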
Comment 2 Filip Pytloun 2012-08-28 09:36:21 EDT
Sorry, I probably misunderstood the purpose of this command; with your parameters it works.
However, I haven't found a command that shows the current traffic on a whole gluster volume (or per brick). The volume profile command is fine, but too low-level for overall statistics.

Thank you for your answer.
Comment 3 M S Vishwanath Bhat 2012-08-28 10:41:56 EDT
We don't have any gluster commands that capture traffic statistics for the overall gluster volume.

The gluster profile/top commands were introduced to get statistics at the brick level. You might want to try (if you haven't already) some of the other gluster top subcommands, such as open, read, write, opendir, readdir, write-perf and read-perf. Run gluster volume help for the usage of those options.
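The per-brick statistics mentioned above can be gathered roughly like this (a sketch against glusterfs 3.3; exact output format may vary by release):

```shell
# Per-brick I/O statistics via the profile commands
gluster volume profile Staging start
gluster volume profile Staging info    # cumulative and interval stats per brick
gluster volume profile Staging stop

# Other top subcommands for per-brick activity
gluster volume top Staging open list-cnt 10    # most-opened files
gluster volume top Staging read list-cnt 10    # files with most read calls
gluster volume top Staging write list-cnt 10   # files with most write calls
```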
Comment 4 Filip Pytloun 2012-08-28 13:02:22 EDT
Ok, thank you.

I am closing the issue as invalid.
