Bug 1576831 - Creation of DB disk for manageiq exceeds timeout
Summary: Creation of DB disk for manageiq exceeds timeout
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-ansible-collection
Classification: oVirt
Component: manageiq
Version: 1.1.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.2.4
: ---
Assignee: Ondra Machacek
QA Contact: Petr Kubica
URL:
Whiteboard:
Depends On: 1588415
Blocks:
 
Reported: 2018-05-10 13:39 UTC by Petr Kubica
Modified: 2018-06-26 08:43 UTC (History)
3 users

Fixed In Version: ovirt-ansible-manageiq-1.1.9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-26 08:43:21 UTC
oVirt Team: Infra
rule-engine: ovirt-4.2+




Links
System: GitHub oVirt/ovirt-ansible-manageiq pull 54 — Last Updated: 2018-05-31 07:45:04 UTC

Description Petr Kubica 2018-05-10 13:39:46 UTC
Description of problem:
On my setup with plain NFS storage, I measured that creating a 50 GiB raw disk takes almost 14 minutes, but the timeout is hardcoded in the role at 180 seconds (the default of the ovirt_disk module).
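For context, the failing task in the role is roughly of this shape (a sketch, not the exact init_cfme.yml source; the `miq_disks` loop variable and item fields here are illustrative). Because the task sets no explicit `timeout`, the ovirt_disk module falls back to its 180-second default:

```yaml
# Sketch of the role's disk task. With no "timeout" key, the
# ovirt_disk module waits at most its default 180 s for the disk
# to become ready, which a 50 GiB raw disk on NFS can exceed.
- name: Hotplug database disk for CFME
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    name: "{{ item.name }}"
    size: "{{ item.size }}"
    format: "{{ item.format }}"
    interface: "{{ item.interface }}"
    vm_name: "{{ miq_vm_name }}"
    state: present
  with_items: "{{ miq_disks }}"
```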

TASK [ovirt-manageiq : Hotplug database disk for CFME] ************************************************************************************************************************************************************
task path: /usr/share/ansible/roles/oVirt.manageiq/tasks/init_cfme.yml:24
Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/ovirt/ovirt_disk.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1525957651.81-99232286325386 `" && echo ansible-tmp-1525957651.81-99232286325386="` echo /root/.ansible/tmp/ansible-tmp-1525957651.81-99232286325386 `" ) && sleep 0'
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-53872EmkltT/tmpVwpHai TO /root/.ansible/tmp/ansible-tmp-1525957651.81-99232286325386/ovirt_disk.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1525957651.81-99232286325386/ /root/.ansible/tmp/ansible-tmp-1525957651.81-99232286325386/ovirt_disk.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1525957651.81-99232286325386/ovirt_disk.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1525957651.81-99232286325386/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_cFhH8s/ansible_module_ovirt_disk.py", line 619, in main
    fail_condition=lambda d: d.status == otypes.DiskStatus.ILLEGAL if lun is None else False,
  File "/tmp/ansible_cFhH8s/ansible_modlib.zip/ansible/module_utils/ovirt.py", line 601, in create
    poll_interval=self._module.params['poll_interval'],
  File "/tmp/ansible_cFhH8s/ansible_modlib.zip/ansible/module_utils/ovirt.py", line 336, in wait
    raise Exception("Timeout exceed while waiting on result state of the entity.")
Exception: Timeout exceed while waiting on result state of the entity.

failed: [localhost] (item={u'interface': u'virtio', u'format': u'raw', u'name': u'miq_db_disk', u'size': u'50GiB'}) => {
    "changed": false, 
    "invocation": {
        "module_args": {
            "auth": {
                "ca_file": null, 
                "compress": true, 
                "headers": null, 
                "insecure": true, 
                "kerberos": false, 
                "timeout": 0, 
                "token": "TkF3m1gJMiTxBSupgRHMhB4XMxdSAaZzroWhMHXVldFu2LY05qsWJo9BNSL5zYpEC9VUvq47lEB3MzxbL75Rww", 
                "url": "https://engine.example.com/ovirt-engine/api"
            }, 
            "bootable": null, 
            "description": null, 
            "download_image_path": null, 
            "fetch_nested": false, 
            "force": false, 
            "format": "raw", 
            "id": null, 
            "image_provider": null, 
            "interface": "virtio", 
            "logical_unit": null, 
            "name": "miq_db_disk", 
            "nested_attributes": [], 
            "openstack_volume_type": null, 
            "poll_interval": 3, 
            "profile": null, 
            "quota_id": null, 
            "shareable": null, 
            "size": "50GiB", 
            "sparse": null, 
            "sparsify": null, 
            "state": "present", 
            "storage_domain": "1_storage", 
            "storage_domains": null, 
-->>        "timeout": 180, 
            "upload_image_path": null, 
            "vm_id": null, 
            "vm_name": "manageiq_g2", 
            "wait": true
        }
    }, 
    "item": {
        "format": "raw", 
        "interface": "virtio", 
        "name": "miq_db_disk", 
        "size": "50GiB"
    }, 
    "msg": "Timeout exceed while waiting on result state of the entity."


Version-Release number of selected component (if applicable):
ansible-2.5.2-1.el7ae.noarch
ovirt-ansible-manageiq-1.1.8-1.el7ev.noarch

How reproducible:
always

Steps to Reproduce:
1. Run the role as described in the example

Actual results:
timeout exceeded on creation of the DB disk

Comment 1 Petr Kubica 2018-05-10 13:49:57 UTC
used playbook:
- hosts: localhost
  gather_facts: false
  remote_user: root

  vars:
    engine_fqdn: engine.example.com
    engine_user: admin@internal
    engine_password: 123456
    miq_vm_name: manageiq_g2
    miq_qcow_url: http://releases.manageiq.org/manageiq-ovirt-gaprindashvili-2.qc2

  roles:
      - ovirt-manageiq

Comment 3 Martin Perina 2018-05-11 12:44:21 UTC
Ondro, can we specify a timeout for adding a disk as a role parameter?

Comment 4 Ondra Machacek 2018-05-11 13:51:00 UTC
Yes, I will add it.

Comment 5 Yaniv Kaul 2018-05-13 07:53:59 UTC
(In reply to Martin Perina from comment #3)
> Ondro, can we specify a timeout for adding a disk as a role parameter?

I would not do it - we might have a storage performance issue - let's verify it first (there's a different BZ for it currently being investigated).

Where's the 180s coming from?

Comment 6 Ondra Machacek 2018-05-23 09:55:21 UTC
(In reply to Yaniv Kaul from comment #5)
> I would not do it - we might have a storage performance issue - let's verify
> it first (there's a different BZ for it currently being investigated).
> 
> Where's the 180s coming from?

We won't do anything wrong if we add it as configurable. 180s is the default value of the ovirt_disk module for creating a disk.
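A minimal sketch of how a configurable timeout could look on both sides, assuming the role exposes it as a variable (the name `miq_disk_timeout` is illustrative; check the role's defaults for the actual variable introduced in ovirt-ansible-manageiq-1.1.9):

```yaml
# In the role task, forward a configurable timeout to ovirt_disk,
# keeping the module's 180 s default when the caller sets nothing:
- name: Hotplug database disk for CFME
  ovirt_disk:
    name: "{{ item.name }}"
    size: "{{ item.size }}"
    vm_name: "{{ miq_vm_name }}"
    state: present
    timeout: "{{ miq_disk_timeout | default(180) }}"
  with_items: "{{ miq_disks }}"

# In the caller's playbook, raise it for slow storage:
- hosts: localhost
  gather_facts: false
  vars:
    miq_vm_name: manageiq_g2
    miq_disk_timeout: 1200   # seconds; 50 GiB raw took ~14 min here
  roles:
    - ovirt-manageiq
```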

Comment 7 Petr Kubica 2018-06-21 11:59:12 UTC
Verified in
ovirt-ansible-manageiq-1.1.10-1.el7ev.noarch

Comment 8 Sandro Bonazzola 2018-06-26 08:43:21 UTC
This bugzilla is included in oVirt 4.2.4 release, published on June 26th 2018.

Since the problem described in this bug report should be
resolved in oVirt 4.2.4 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

