Bug 1481315
| Summary: | Cloud-init integration with ovirt supports just a subset of cloud-init keywords | | |
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Ivan Necas <inecas> |
| Component: | Compute Resources - RHEV | Assignee: | Ivan Necas <inecas> |
| Status: | CLOSED ERRATA | QA Contact: | Sanket Jagtap <sjagtap> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.3.0 | CC: | inecas, orabin, pcreech, sjagtap |
| Target Milestone: | 6.5.0 | Keywords: | Triaged |
| Target Release: | Unused | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-05-14 12:36:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Ivan Necas
2017-08-14 14:53:16 UTC
Connecting redmine issue http://projects.theforeman.org/issues/20590 from this bug.

Moving this bug to POST for triage into Satellite 6, since the upstream issue http://projects.theforeman.org/issues/20590 has been resolved.

Added an additional issue regarding support of the shell-script format for cloud-init.

Moving this bug to POST for triage into Satellite 6, since the upstream issue http://projects.theforeman.org/issues/24217 has been resolved.

Build: Satellite 6.5 snap 9

I am getting a safe mode error when trying to use the Kickstart default user data template shipped with Satellite. If I am right, this should work with safe mode too; or is there another template I should use for cloud-init? I also tried the User data default template, with the same result. Am I missing something?

After disabling safe mode (see the sketch below), I am able to verify this issue and cloud-init works successfully.

Build: Satellite 6.5 snap 18

Tested on API v3, no issues; it looks like I was hitting a v4 bug.
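Safe mode rendering can be toggled from the CLI as well as the web UI. A minimal sketch, assuming the setting is named `safemode_render` (the usual Foreman name; verify it in your Satellite version):

```shell
# Sketch only: inspect and disable safe-mode template rendering via hammer.
# The setting name "safemode_render" is an assumption; confirm with `hammer settings list`.
hammer settings list --search 'name = safemode_render'
hammer settings set --name safemode_render --value false
# Re-enable once verification is done, since safe mode restricts what rendered templates can execute:
hammer settings set --name safemode_render --value true
```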
Cloud-config:

```yaml
#cloud-config
hostname: earl-lopata
fqdn: earl-lopata
manage_etc_hosts: true
ssh_pwauth: true
groups:
  - admin
users:
  - default
  - name: admin
    primary-group: admin
    groups: users
    shell: /bin/bash
    sudo: ['ALL=(ALL) ALL']
    lock-passwd: false
    passwd:
yum_repos:
  zoo:
    baseurl: https://inecas.fedorapeople.org/fakerepos/new_cds/content/zoo/1.0/x86_64/rpms/
    name: zoo
    enabled: true
    gpgcheck: false
```
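To confirm the config was applied on the provisioned guest, a minimal verification sketch (commands assumed to be available on a RHEL 7 guest; `cloud-init status` exists only in newer cloud-init builds):

```shell
# Hypothetical post-provisioning checks on the guest (not taken from the report):
cloud-init status --long                 # overall cloud-init result (newer cloud-init only)
cat /etc/yum.repos.d/zoo.repo            # repo file written by the cc_yum_add_repo module
id admin && sudo -l -U admin             # user and sudo rule from the users/groups directives
hostnamectl status | grep earl-lopata    # hostname/fqdn applied from the config
```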
Output log cloud_init.log:

```
2019-03-06 09:03:19,011 - helpers.py[DEBUG]: Running config-rh_subscription using lock (<FileLock using file '/var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_rh_subscription'>)
2019-03-06 09:03:19,011 - cc_rh_subscription.py[DEBUG]: rh_subscription: module not configured.
2019-03-06 09:03:19,011 - handlers.py[DEBUG]: finish: modules-config/config-rh_subscription: SUCCESS: config-rh_subscription ran successfully
2019-03-06 09:03:19,011 - stages.py[DEBUG]: Running module yum-add-repo (<module 'cloudinit.config.cc_yum_add_repo' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_yum_add_repo.pyc'>) with frequency once-per-instance
2019-03-06 09:03:19,012 - handlers.py[DEBUG]: start: modules-config/config-yum-add-repo: running config-yum-add-repo with frequency once-per-instance
2019-03-06 09:03:19,012 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_yum_add_repo - wb: [644] 20 bytes
2019-03-06 09:03:19,013 - util.py[DEBUG]: Restoring selinux mode for /var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_yum_add_repo (recursive=False)
2019-03-06 09:03:19,013 - util.py[DEBUG]: Restoring selinux mode for /var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_yum_add_repo (recursive=False)
2019-03-06 09:03:19,014 - helpers.py[DEBUG]: Running config-yum-add-repo using lock (<FileLock using file '/var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_yum_add_repo'>)
2019-03-06 09:03:19,015 - util.py[DEBUG]: Writing to /etc/yum.repos.d/zoo.repo - wb: [644] 191 bytes
2019-03-06 09:03:19,016 - util.py[DEBUG]: Restoring selinux mode for /etc/yum.repos.d/zoo.repo (recursive=False)
2019-03-06 09:03:19,017 - util.py[DEBUG]: Restoring selinux mode for /etc/yum.repos.d/zoo.repo (recursive=False)
2019-03-06 09:03:19,017 - handlers.py[DEBUG]: finish: modules-config/config-yum-add-repo: SUCCESS: config-yum-add-repo ran successfully
```
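To spot at a glance which modules succeeded or failed across the whole run, a quick filter over the same log can help; a sketch, assuming the default log path `/var/log/cloud-init.log`:

```shell
# Hypothetical summary of module outcomes, matching the "finish: ..." lines shown above:
grep -E 'finish: modules-(config|final)/.*: (SUCCESS|FAIL)' /var/log/cloud-init.log
```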
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2019:1222