Bug 1248720 - horizon Hypervisors tab's "Disk Usage" is off by order of magnitude for Ceph storage
Keywords:
Status: CLOSED DUPLICATE of bug 1236473
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 6.0 (Juno)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 8.0 (Liberty)
Assignee: Eoghan Glynn
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-30 16:52 UTC by Ben England
Modified: 2019-09-09 16:00 UTC
CC: 18 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-06 15:20:00 UTC
Target Upstream Version:
Embargoed:



Description Ben England 2015-07-30 16:52:50 UTC
Description of problem:

In the Hypervisors tab, I see "Disk Usage" followed by "Used 10 TB of 784 GB". Very efficient storage! ;-) It appears that the 784 GB is the sum of the ephemeral storage available on the hypervisors, while the 10 TB has something to do with the Ceph-backed Cinder storage space used, so the two numbers are not comparable. But seriously, shouldn't RHEL OSP understand that it's using Ceph, or at least say something more intelligent? Does it need separate buckets for ephemeral storage and Cinder volumes?

The "nova hypervisor-stats" command is equally brain dead, listing "-9660" in "free_disk_gb" field.  

The word "disk" is also the wrong word to use here, because many sites are moving to partly- or all-SSD configurations.

Version-Release number of selected component (if applicable):

RHEL OSP 6 
python-django-horizon-2014.2.3-6.el7ost.noarch
python-novaclient-2.20.0-1.el7ost.noarch

How reproducible:

Every time on my site.

Steps to Reproduce:
1. login to horizon
2. click on "hypervisors" link at top
3. look at "Disk Usage" pie chart

Actual results:

http://perf1.perf.lab.eng.bos.redhat.com/bengland/public/ceph/openstack-horizon-cinder-ceph.jpeg

[root@gprfc041 ~(keystone_admin)]# nova hypervisor-stats
+----------------------+--------+
| Property             | Value  |
+----------------------+--------+
| count                | 16     |
| current_workload     | 0      |
| disk_available_least | -9660  |
| free_disk_gb         | -9496  |
| free_ram_mb          | 233424 |
| local_gb             | 784    |
| local_gb_used        | 10280  |
| memory_mb            | 770000 |
| memory_mb_used       | 536576 |
| running_vms          | 513    |
| vcpus                | 384    |
| vcpus_used           | 514    |
+----------------------+--------+
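Horizon's pie chart and the CLI both render values from the same os-hypervisors statistics API, so the bad numbers originate in what nova-compute reports, not in the presentation layer. A minimal python-novaclient sketch that dumps the relevant fields (assuming the usual keystonerc_admin variables are exported; field names match the table above):

import os

from novaclient import client

# Sketch only: fetch the same hypervisor statistics that Horizon's
# "Disk Usage" chart and "nova hypervisor-stats" display. Assumes
# OS_USERNAME/OS_PASSWORD/OS_TENANT_NAME/OS_AUTH_URL are set, e.g.
# via "source keystonerc_admin".
nova = client.Client("2",
                     os.environ["OS_USERNAME"],
                     os.environ["OS_PASSWORD"],
                     os.environ["OS_TENANT_NAME"],
                     auth_url=os.environ["OS_AUTH_URL"])

stats = nova.hypervisors.statistics()
for field in ("local_gb", "local_gb_used",
              "free_disk_gb", "disk_available_least"):
    print "%-22s %s" % (field, getattr(stats, field))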

Expected results:

Nova (and therefore Horizon) should report how much space is actually available for Ceph-backed storage and how much of it is actually in use.

Additional info:

[root@gprfc041 ~(keystone_admin)]# ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED 
    10949G     3253G        7695G         70.28 
POOLS:
    NAME        ID     USED      %USED     MAX AVAIL     OBJECTS 
    rbd         0          0         0          434G           0 
    volumes     15     2560G     23.38          434G      656385 

All volumes are 5 GB in size and there are 512 of them (512 x 5 GB = 2560 GB), so the "ceph df" USED column for the volumes pool looks right.
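The same totals can be cross-checked programmatically with python-rados/python-rbd; a minimal sketch (assuming /etc/ceph/ceph.conf and a readable client.admin keyring, as on this controller):

import rados
import rbd

# Sketch only: read the same GLOBAL totals that "ceph df" prints, and
# count the rbd images backing the Cinder volumes.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    stats = cluster.get_cluster_stats()  # keys: kb, kb_used, kb_avail, ...
    kb_per_gb = 1024 * 1024
    print "SIZE %dG  AVAIL %dG  RAW USED %dG" % (
        stats["kb"] / kb_per_gb,
        stats["kb_avail"] / kb_per_gb,
        stats["kb_used"] / kb_per_gb)

    ioctx = cluster.open_ioctx("volumes")
    try:
        # Expect 512 images, one per 5 GB Cinder volume.
        print "volumes pool holds %d rbd images" % len(rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()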

/etc/cinder/cinder.conf

[root@gprfc041 ~(keystone_admin)]# grep -v '^#' /etc/cinder/cinder.conf | grep -v '^$' | more
[DEFAULT]
amqp_durable_queues=False
rabbit_host=10.16.154.120
rabbit_port=5672
rabbit_hosts=10.16.154.120:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=False
notification_driver=cinder.openstack.common.notifier.rpc_notifier
rpc_backend=cinder.openstack.common.rpc.impl_kombu
control_exchange=openstack
osapi_volume_listen=0.0.0.0
osapi_volume_workers=24
api_paste_config=/etc/cinder/api-paste.ini
glance_host=10.16.154.120
glance_api_version=2
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
storage_availability_zone=nova
default_availability_zone=nova
auth_strategy=keystone
debug=False
verbose=True
log_dir=/var/log/cinder
use_syslog=False
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_secret_uuid=575b15f2-b2b1-48d0-9df9-29dea74333e8
rbd_max_clone_depth=5
rbd_store_chunk_size=4
rados_connect_timeout=-1
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[database]
connection=mysql://cinder:admin@10.16.154.120/cinder
idle_timeout=3600
min_pool_size=1
max_retries=10
retry_interval=10
[fc-zone-manager]
[keymgr]
[keystone_authtoken]
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[profiler]
[ssl]

Comment 3 Matthias Runge 2015-08-06 07:46:25 UTC
Horizon can only show what is reported by the underlying services.

I would argue that, from Horizon's POV, it should be backend agnostic. I have been told that not all storage backends even report their free space at all.

Comment 5 Stephen Gordon 2015-11-06 15:20:00 UTC

*** This bug has been marked as a duplicate of bug 1236473 ***

