Bug 1627026
| Summary: | ManageIQ/CFME stuck on boot when using cloud-init | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-ansible-collection | Reporter: | Petr Kubica <pkubica> |
| Component: | manageiq | Assignee: | Ondra Machacek <omachace> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Petr Kubica <pkubica> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 1.1.10 | CC: | mperina, omachace, pkubica |
| Target Milestone: | ovirt-4.2.7 | Keywords: | Regression |
| Target Release: | --- | Flags: | rule-engine: ovirt-4.2+, rule-engine: blocker+, lsvaty: testing_ack+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-ansible-manageiq-1.1.13 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-11-02 14:33:08 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Infra | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Both latest versions, cfme-rhevm-5.9.4.7-1.x86_64.qcow2 and manageiq-ovirt-gaprindashvili-5.qc2, worked OK for me. Can you please re-check?

Created attachment 1483034 [details]: logs

I checked; it is mostly unpredictable when it fails, and it does not fail in 100% of cases. The prerequisite is using cloud-init to initialize the ManageIQ/CFME appliance. I am adding logs from a working and a non-working scenario.

Usually the first VM with the appliance starts successfully, while an additional VM with another appliance that also uses cloud-init usually fails, but that is not a rule. In rare cases the first deployed VM failed for me as well; the issue is not 100% reproducible.

This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

The default is still virtio for the database disk: https://github.com/oVirt/ovirt-ansible-manageiq/blob/d135b5841693fb236ec9dd9aaf331b798117dac0/defaults/main.yml#L59

```yaml
# Default one is database disk.
miq_vm_disks:
  database:
    name: "{{ miq_vm_name }}_database"
    size: 50GiB
    interface: virtio
    format: raw
    timeout: 900
```

Should it be virtio_scsi?

Failed based on comment #4 in version ovirt-ansible-manageiq-1.1.13-0.1.master.20180927100210.el7.noarch

Created pull request https://github.com/oVirt/ovirt-ansible-manageiq/pull/70

Verified in ovirt-ansible-manageiq-1.1.13-0.1.master.20181008152002.el7.noarch

This bugzilla is included in the oVirt 4.2.7 release, published on November 2nd 2018. Since the problem described in this bug report should be resolved in the oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
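For reference, a sketch of what the database-disk default would look like with the interface switched away from virtio, as the question above suggests. This is an assumption based on the discussion here, not a verbatim copy of the merged fix (see the pull request for the actual change):

```yaml
# Hypothetical corrected default (assumption: interface changed to virtio_scsi).
miq_vm_disks:
  database:
    name: "{{ miq_vm_name }}_database"
    size: 50GiB
    interface: virtio_scsi
    format: raw
    timeout: 900
```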
Description of problem:

ManageIQ (and also CFME) gets stuck booting from HDD when miq_vm_cloud_init is used:

```yaml
miq_vm_cloud_init:
  host_name: "{{ miq_vm_name }}"
```

Without miq_vm_cloud_init the VM boots normally.

Tried with:
- http://releases.manageiq.org/manageiq-ovirt-gaprindashvili-3.qc2
- cfme-rhevm-5.9.4.7-1

Version-Release number of selected component (if applicable):
ovirt-ansible-manageiq-1.1.12-0.1.master.20180903123359.el7.noarch

How reproducible:
always

Steps to Reproduce:
1. use cloud-init in the role to set the hostname
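The reproduction step above can be sketched as a minimal playbook. This is an illustrative assumption, not a confirmed invocation from this report: only miq_vm_name and miq_vm_cloud_init appear in the bug itself, and the remaining variable names are hypothetical placeholders.

```yaml
# Hypothetical minimal reproducer; variable names other than miq_vm_name
# and miq_vm_cloud_init are illustrative assumptions, not confirmed role API.
- hosts: localhost
  roles:
    - role: oVirt.manageiq
      vars:
        miq_vm_name: manageiq_vm
        # Appliance image from the report.
        miq_qcow_url: "http://releases.manageiq.org/manageiq-ovirt-gaprindashvili-3.qc2"
        # Setting cloud-init here is what triggers the boot hang described above.
        miq_vm_cloud_init:
          host_name: "{{ miq_vm_name }}"
```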