Bug 1392866 - Upgrade to 4.1 without collectd installed
Summary: Upgrade to 4.1 without collectd installed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ovirt-4.1.0-alpha
Target Release: 4.19.2
Assignee: Yaniv Bronhaim
QA Contact: Jiri Belka
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-08 11:53 UTC by Yaniv Bronhaim
Modified: 2017-02-15 15:02 UTC (History)
10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-15 15:02:17 UTC
oVirt Team: Infra
rule-engine: ovirt-4.1+
bmcclain: priority_rfe_tracking+
ybronhei: devel_ack+
pstehlik: testing_ack+



Description Yaniv Bronhaim 2016-11-08 11:53:12 UTC
Description of problem:
As part of the metrics effort, vdsm metrics are enabled by default. collectd is installed by Ansible, which can happen later, after the upgrade. Once collectd is installed, it should start receiving vdsm metrics without a vdsm restart.
Also, vdsm should not report any errors if collectd is not installed.
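The expected behavior follows from how statsd-style metrics travel: UDP is connectionless, so a send with no listener simply vanishes and the sender sees no error. A minimal sketch of that property (the metric name is taken from the capture in comment 5; send_metric is a hypothetical helper, not vdsm code):

```python
import socket

def send_metric(name, value, host="127.0.0.1", port=8125):
    """Emit one statsd-style gauge over UDP, fire-and-forget.

    If nothing listens on the port (e.g. collectd is not installed
    yet), sendto() still succeeds and no error surfaces.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = "%s:%s|g" % (name, value)  # statsd gauge format
    sock.sendto(payload.encode("ascii"), (host, port))
    sock.close()
    return payload

# Succeeds even with no collectd listening on UDP 8125.
print(send_metric("hosts.cpu.sys", 0.02))
```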

Comment 1 Yaniv Kaul 2016-11-08 12:25:41 UTC
(In reply to Yaniv Bronhaim from comment #0)
> Description of problem:
> As part of the metrics effort we enable vdsm metrics by default. collectd is
> installed by ansible and can be done later after the upgrade. Once collectd
> is installed it should get vdsm metrics without restarting vdsm. 
> Also, vdsm should not report any errors if collectd installation is missing.

Why not install collectd by default alongside VDSM?
Configuring it is something different. In fact, I'd argue that it can also be configured - it's just the fluentd part that needs some Ansible-based specific configuration, no?

Comment 2 Yaniv Bronhaim 2016-11-09 14:40:44 UTC
Both collectd.conf and fluent.conf need to be set. We could require collectd as part of the vdsm installation, but why? It doesn't sound reasonable.
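For illustration only, the collectd side of this would be the statsd plugin listening on the default UDP port, roughly as sketched below; the exact configuration that Ansible deploys may differ:

```
# /etc/collectd.conf fragment (illustrative sketch, not the Ansible-deployed config)
LoadPlugin statsd
<Plugin statsd>
  Host "127.0.0.1"
  Port "8125"
</Plugin>
```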

Comment 3 Yaniv Bronhaim 2016-11-21 08:19:47 UTC
Works as expected - vdsm sends the metrics to localhost. If collectd is not installed, no errors appear. Once collectd is set up, even while vdsmd is already running, you can see that metrics arrive without touching vdsmd or other services.
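The "no restart needed" part can be mimicked with any process that binds the statsd port: as soon as a listener appears on UDP 8125, the datagrams vdsm was already emitting start arriving. A hypothetical stand-in for collectd's listener (not collectd code):

```python
import socket

def receive_one_metric(port=8125, timeout=5.0):
    """Bind the statsd UDP port and return the first metric received.

    Stand-in for collectd's statsd listener: binding the port is all
    it takes to start receiving; the sender never had to restart.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(("127.0.0.1", port))
    data, _addr = sock.recvfrom(4096)
    sock.close()
    return data.decode("ascii")
```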

Comment 4 Sandro Bonazzola 2016-12-12 14:01:19 UTC
The fix for this issue should be included in oVirt 4.1.0 beta 1, released on December 1st. If it is not included, please move the bug back to MODIFIED.

Comment 5 Jiri Belka 2017-02-02 10:06:55 UTC
OK, vdsm updated to 4.1 without collectd installed. Metrics are enabled and sent, and there is no error if nothing is "listening" for the metrics.

[root@dell-r210ii-13 ~]# grep vdsm-4 /var/log/yum.log 
Feb 01 17:41:49 Installed: vdsm-4.18.21-1.el7ev.x86_64
Feb 01 17:51:04 Updated: vdsm-4.18.21.1-1.el7ev.x86_64
Feb 02 10:58:21 Updated: vdsm-4.19.4-1.el7ev.x86_64
[root@dell-r210ii-13 ~]# sed -n '/metrics/,/^$/p' /usr/lib/python2.7/site-packages/vdsm/config.py
    # Section: [metrics]
    ('metrics', [
        ('enabled', 'true',
            'Enable metrics collection (default true)'),

            'Number of metrics messages to queue if collector is not'
            ' responsive. When the queue is full, oldest messages are'
            ' dropped. Used only by hawkular-client collector (default 100)'),
    ]),

[root@dell-r210ii-13 ~]# tcpdump -i lo -n -ttt -A -c 5 port 8125
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
00:00:00.000000 IP 127.0.0.1.33195 > 127.0.0.1.8125: UDP, length 20
E..0..@.@.n&.............../hosts.cpu.sys:0.02|g
00:00:00.000024 IP 127.0.0.1.33195 > 127.0.0.1.8125: UDP, length 25
E..5..@.@.n .............!.4hosts.nic.lo.speed:1000|g
00:00:00.000012 IP 127.0.0.1.33195 > 127.0.0.1.8125: UDP, length 26
E..6..@.@.n..............".5hosts.swap.total_mb:2047|g
00:00:00.000011 IP 127.0.0.1.33195 > 127.0.0.1.8125: UDP, length 30
E..:..@.@.n..............&.9hosts.memory.usage_percent:7|g
00:00:00.000010 IP 127.0.0.1.33195 > 127.0.0.1.8125: UDP, length 22
E..2..@.@.n ...............1hosts.cpu.idle:99.94|g
5 packets captured
22 packets received by filter
0 packets dropped by kernel
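The captured payloads are plain statsd gauges in name:value|g form. A small parser for such lines, using metric names from the capture above (illustrative, not vdsm code):

```python
def parse_gauge(line):
    """Parse a statsd gauge line of the form "name:value|g".

    Only gauges appear in the capture above, so any other metric
    type is rejected.
    """
    name, rest = line.split(":", 1)
    value, mtype = rest.rsplit("|", 1)
    assert mtype == "g", "only gauges appear in this capture"
    return name, float(value)

print(parse_gauge("hosts.cpu.idle:99.94|g"))
```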

[root@dell-r210ii-13 ~]# yum history | head -n4 | tail -n1
     6 | root <root>              | 2017-02-02 10:58 | Update         |   11 EE
[root@dell-r210ii-13 ~]# yum history info 6
Loaded plugins: product-id, search-disabled-repos
Transaction ID : 6
Begin time     : Thu Feb  2 10:58:16 2017
Begin rpmdb    : 624:012dea92dd7677e635f5fdf45143e25887f26ca9
End time       :            10:58:42 2017 (26 seconds)
End rpmdb      : 624:5493f50a9f3e7acfb74f0086c86fde921443c090
User           : root <root>
Return-Code    : Success
Command Line   : update
Transaction performed with:
    Installed     rpm-4.11.3-21.el7.x86_64 @rhel73-brq/7.3
    Installed     yum-3.4.3-150.el7.noarch @rhel73-brq/7.3
Packages Altered:
    Updated ovirt-imageio-common-0.3.0-0.el7ev.noarch    @rhel-7-server-rhv-4-mgmt-agent-rpms-pulp
    Update                       1.0.0-0.el7ev.noarch    @rhv-4.1.0-11
    Updated ovirt-imageio-daemon-0.4.0-0.el7ev.noarch    @rhel-7-server-rhv-4-mgmt-agent-rpms-pulp
    Update                       1.0.0-0.el7ev.noarch    @rhv-4.1.0-11
    Updated python-cpopen-1.4-0.el7ev.x86_64             @rhel-7-server-rhv-4-mgmt-agent-rpms-pulp
    Update                1.5-1.el7ev.x86_64             @rhv-4.1.0-11
    Updated vdsm-4.18.21.1-1.el7ev.x86_64                @4.0.6-9
    Update       4.19.4-1.el7ev.x86_64                   @rhv-4.1.0-11
    Updated vdsm-api-4.18.21.1-1.el7ev.noarch            @4.0.6-9
    Update           4.19.4-1.el7ev.noarch               @rhv-4.1.0-11
    Updated vdsm-cli-4.18.21.1-1.el7ev.noarch            @4.0.6-9
    Update           4.19.4-1.el7ev.noarch               @rhv-4.1.0-11
    Updated vdsm-hook-vmfex-dev-4.18.21.1-1.el7ev.noarch @4.0.6-9
    Update                      4.19.4-1.el7ev.noarch    @rhv-4.1.0-11
    Updated vdsm-jsonrpc-4.18.21.1-1.el7ev.noarch        @4.0.6-9
    Update               4.19.4-1.el7ev.noarch           @rhv-4.1.0-11
    Updated vdsm-python-4.18.21.1-1.el7ev.noarch         @4.0.6-9
    Update              4.19.4-1.el7ev.noarch            @rhv-4.1.0-11
    Updated vdsm-xmlrpc-4.18.21.1-1.el7ev.noarch         @4.0.6-9
    Update              4.19.4-1.el7ev.noarch            @rhv-4.1.0-11
    Updated vdsm-yajsonrpc-4.18.21.1-1.el7ev.noarch      @4.0.6-9
    Update                 4.19.4-1.el7ev.noarch         @rhv-4.1.0-11
Scriptlet output:
   1 warning: /etc/vdsm/vdsm.conf created as /etc/vdsm/vdsm.conf.rpmnew
history info

