Bug 1032070 - [packstack] to match the hostname used by nova, ceilometer compute agent config should allow host option to default
Status: CLOSED ERRATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-packstack
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 4.0
Assigned To: Francesco Vollero
QA Contact: Kevin Whitney
Keywords: Triaged
Depends On:
Blocks:
Reported: 2013-11-19 08:51 EST by Eoghan Glynn
Modified: 2016-04-26 20:37 EDT
CC: 8 users

Doc Type: Bug Fix
Last Closed: 2013-12-19 19:37:22 EST
Type: Bug



External Trackers:
  OpenStack gerrit 60249

Description Eoghan Glynn 2013-11-19 08:51:07 EST
Description of problem:

The DEFAULT.host config option is explicitly set by packstack to the unqualified hostname:

  https://github.com/stackforge/packstack/blob/master/packstack/puppet/templates/nova_ceilometer.pp#L16

whereas in the nova config, this is allowed to fall back to the default value for this option, i.e. socket.gethostname().

The result may be a mismatch between the hostnames used by nova and the ceilometer compute agent (qualified versus unqualified, respectively).
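
A minimal sketch of the mismatch, for illustration only (the hostnames here are hypothetical):

   import socket

   nova_host = socket.gethostname()        # nova's fallback, e.g. "compute1.example.com"
   ceilo_host = nova_host.split('.')[0]    # the unqualified name packstack writes, e.g. "compute1"
   print(nova_host == ceilo_host)          # False whenever gethostname() returns a qualified name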

This is a problem as the ceilometer compute agent queries the nova-api to discover the instances running on the local host, which will yield no data if the host constraint on the query doesn't match the hostname used by nova.

The result is that detailed metrics (CPU util % etc.) are not gathered for these instances.
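
The query the agent performs is roughly equivalent to the sketch below (using the python-novaclient bindings of this era; the credentials and variable names are illustrative assumptions, not the actual ceilometer code path):

   from novaclient.v1_1 import client

   conf_host = 'compute1'   # whatever ceilometer uses for DEFAULT.host
   nova = client.Client('ceilometer', 'SERVICE_PASSWORD', 'services',
                        'http://192.168.7.225:35357/v2.0')
   instances = nova.servers.list(search_opts={'host': conf_host,
                                              'all_tenants': True})
   # an empty list here means no per-instance meters (cpu_util etc.) get polled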

DEFAULT.host should instead be allowed to fall back to the default in ceilometer as well, since the default values match:

https://github.com/openstack/ceilometer/blob/master/ceilometer/service.py#L34
https://github.com/openstack/nova/blob/master/nova/netconf.py#L53

(it's the match that's important, not whether the hostname used is qualified or unqualified).
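
For reference, both projects declare the option roughly like this (paraphrased from the files linked above; the exact help text may differ):

   import socket
   from oslo.config import cfg

   host_opt = cfg.StrOpt('host',
                         default=socket.gethostname(),
                         help='Name of this node.')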

This requires that the following logic is deleted:

  https://github.com/stackforge/packstack/blob/master/packstack/puppet/templates/nova_ceilometer.pp#L16


Version-Release number of selected component (if applicable):

openstack-packstack-2013.2.1-0.9.dev840.el6ost.noarch


How reproducible:

100%


Steps to Reproduce:
1. Run packstack --allinone

2. Spin up at least one instance:

   nova boot --flavor 1 --image $IMAGE_ID test_instance

3. Check that nova returns this instance for a query constrained by the hostname returned by socket.gethostname():

   nova list --all-tenants --host $(python -c "import socket ; print socket.gethostname()") 

4. Check that ceilometer is explicitly configured to use the unqualified host:

   openstack-config --get /etc/ceilometer/ceilometer.conf DEFAULT host

5. Check whether ceilometer is gathering cpu_util for that instance:

  ceilometer statistics -m cpu_util -q "resource_id=$INSTANCE_ID"


Actual results:

cpu_util is not gathered when the hostname qualification doesn't match.


Expected results:

cpu_util should always be gathered.


Additional info:

Workaround is to explicitly set the host in ceilometer.conf to the correct value:

   openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT host $(python -c "import socket ; print socket.gethostname()")
   service openstack-ceilometer-compute restart
Comment 2 Francesco Vollero 2013-12-05 09:31:50 EST
As a result of your suggestion to remove the host flag, this is the result you were expecting.

[DEFAULT]
glance_control_exchange=glance
debug=False
verbose=True
log_dir=/var/log/ceilometer
notification_topics=notifications,glance_notifications
rpc_backend=ceilometer.openstack.common.rpc.impl_qpid
qpid_hostname=192.168.7.225
qpid_port=5672
qpid_username=guest
qpid_password=guest
qpid_heartbeat=60
qpid_protocol=tcp
qpid_tcp_nodelay=True
qpid_reconnect=True
qpid_reconnect_interval_max=0
qpid_reconnect_interval_min=0
qpid_reconnect_timeout=0
qpid_reconnect_interval=0
os_auth_url=http://192.168.7.225:35357/v2.0
metering_secret=3e06f38a68534859
os_tenant_name=services
qpid_reconnect_limit=0
os_username=ceilometer
os_password=ddcc216b65f646ae
os_auth_region=RegionOne
Comment 6 Ami Jeain 2013-12-16 07:20:43 EST
verified:
made sure that the host param doesn't exist in the /etc/ceilometer/ceilometer.conf file, and ran:
# ceilometer statistics -m cpu_util -q "resource_id=$INSTANCE_ID"

# nova list
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                        |
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
| 6ac8e887-2e8e-4caa-a2ba-a10e2bef243d | my_instance | ACTIVE | None       | Running     | int=192.168.32.2, 192.168.122.3 |
+--------------------------------------+-------------+--------+------------+-------------+---------------------------------+
[root@cougar13 cougar13-2013121613451387194353(keystone_admin)]# ceilometer statistics -m cpu_util -q "resource_id=6ac8e887-2e8e-4caa-a2ba-a10e2bef243d"
+--------+---------------------+---------------------+-------+----------------+------+---------------+----------------+----------+---------------------+---------------------+
| Period | Period Start        | Period End          | Count | Min            | Max  | Sum           | Avg            | Duration | Duration Start      | Duration End        |
+--------+---------------------+---------------------+-------+----------------+------+---------------+----------------+----------+---------------------+---------------------+
| 0      | 2013-12-15T11:11:57 | 2013-12-15T11:11:57 | 151   | 0.148333333333 | 0.77 | 24.9634986995 | 0.165321183441 | 90215.0  | 2013-12-15T11:11:57 | 2013-12-16T12:15:32 |
+--------+---------------------+---------------------+-------+----------------+------+---------------+----------------+----------+---------------------+---------------------+
Comment 8 errata-xmlrpc 2013-12-19 19:37:22 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2013-1859.html
