Bug 1481315

Summary: Cloud-init integration with ovirt supports just a subset of cloud-init keywords
Product: Red Hat Satellite Reporter: Ivan Necas <inecas>
Component: Compute Resources - RHEV    Assignee: Ivan Necas <inecas>
Status: CLOSED ERRATA QA Contact: Sanket Jagtap <sjagtap>
Severity: medium Docs Contact:
Priority: medium    
Version: 6.3.0    CC: inecas, orabin, pcreech, sjagtap
Target Milestone: 6.5.0    Keywords: Triaged
Target Release: Unused   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:    Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-05-14 12:36:36 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Ivan Necas 2017-08-14 14:53:16 UTC
Description of problem:
As of https://bugzilla.redhat.com/show_bug.cgi?id=1189813, the cloud-init
integration with oVirt supports just a subset of cloud-init keys.

The following template seems to be the maximal set of keys we
can currently pass successfully to cloud-init via user_data:

```
#cloud-config
hostname: test.example.com
ssh_authorized_keys:
- ssh-rsa test123123123 test.example.com
runcmd:
- ls /
phone_home:
  url: satellite.example.com
  post: []
```

Cloud-init supports a much wider range of modules and configuration options,
see https://cloudinit.readthedocs.io/en/latest/topics/modules.html#. Currently
we don't support those, and these options get ignored.
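
For illustration, a cloud-config like the following mixes supported and unsupported keys; with the current behaviour, only `hostname` and `runcmd` would survive, while `packages` and `write_files` (both standard cloud-init modules, used here with placeholder values) would be silently dropped:

```
#cloud-config
hostname: test.example.com
runcmd:
- ls /
packages:
- vim
write_files:
- path: /etc/motd
  content: |
    Provisioned via Satellite user_data
```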

Version-Release number of selected component (if applicable):
6.3.0

How reproducible:
Always

Steps to Reproduce:
1. Add a user_data image with cloud-init installed and configured.
2. Use one of the valid cloud-init keys that is NOT one of `hostname`, `ssh_authorized_keys`, `runcmd`, `phone_home`;
for example:
```
#cloud-config
yum_repos:
    zoo:
        baseurl: https://inecas.fedorapeople.org/fakerepos/new_cds/content/zoo/1.0/x86_64/rpms/
        name: zoo
        enabled: true
```
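
After provisioning a host with this user_data, one way to tell whether the key was honored is to look for the repo file and the corresponding module in the cloud-init log on the guest (a sketch assuming the `zoo` example above):

```
# If yum_repos was passed through, cloud-init writes the repo definition:
cat /etc/yum.repos.d/zoo.repo

# The cloud-init log records which config modules actually ran:
grep yum-add-repo /var/log/cloud-init.log
```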



Actual results:
Many cloud-init module options are ignored (including the `yum_repos` one in the example)

Expected results:
All cloud-init module options are passed through to cloud-init (including the `yum_repos` one in the example)

Additional info:

Comment 2 Ivan Necas 2017-08-14 15:12:02 UTC
Connecting redmine issue http://projects.theforeman.org/issues/20590 from this bug

Comment 4 Satellite Program 2018-04-25 12:19:31 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/20590 has been resolved.

Comment 5 Ivan Necas 2018-07-10 10:31:23 UTC
Added an additional issue regarding support of the shell-script format for cloud-init
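
For context, the shell-script format means user_data that starts with a shebang instead of `#cloud-config`; cloud-init then executes the payload as a script on first boot. A minimal illustrative example (the command is a placeholder):

```
#!/bin/sh
# user_data in shell-script format: executed once by cloud-init at first boot
echo "provisioned at $(date)" > /root/provisioned.txt
```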

Comment 6 Satellite Program 2018-07-13 10:21:10 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/24217 has been resolved.

Comment 9 Sanket Jagtap 2018-12-31 07:34:20 UTC
Build: Satellite 6.5 snap 9 

I am getting a safe mode error when trying to use the Kickstart default user data template shipped with Satellite.
If I am right, this should work with safe mode too; or is there another template I should use for cloud-init? I also tried the User data default template, with the same result.
Am I missing something?

After disabling safe mode, I am able to verify this issue and cloud-init works successfully.

Comment 15 Sanket Jagtap 2019-03-06 09:12:27 UTC
Build: Satellite 6.5 snap 18

Tested with API v3; no issues. It looks like I was hitting a v4 bug.

Cloud-config:

```
#cloud-config
hostname: earl-lopata
fqdn: earl-lopata
manage_etc_hosts: true
ssh_pwauth: true
groups:
- admin
users:
- default
- name: admin
  primary-group: admin
  groups: users
  shell: /bin/bash
  sudo: ['ALL=(ALL) ALL']
  lock-passwd: false
  passwd:

yum_repos:
  zoo:
    baseurl: https://inecas.fedorapeople.org/fakerepos/new_cds/content/zoo/1.0/x86_64/rpms/
    name: zoo
    enabled: true
    gpgcheck: false
```

Output log cloud-init.log:

```
2019-03-06 09:03:19,011 - helpers.py[DEBUG]: Running config-rh_subscription using lock (<FileLock using file '/var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_rh_subscription'>)
2019-03-06 09:03:19,011 - cc_rh_subscription.py[DEBUG]: rh_subscription: module not configured.
2019-03-06 09:03:19,011 - handlers.py[DEBUG]: finish: modules-config/config-rh_subscription: SUCCESS: config-rh_subscription ran successfully
2019-03-06 09:03:19,011 - stages.py[DEBUG]: Running module yum-add-repo (<module 'cloudinit.config.cc_yum_add_repo' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_yum_add_repo.pyc'>) with frequency once-per-instance
2019-03-06 09:03:19,012 - handlers.py[DEBUG]: start: modules-config/config-yum-add-repo: running config-yum-add-repo with frequency once-per-instance
2019-03-06 09:03:19,012 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_yum_add_repo - wb: [644] 20 bytes
2019-03-06 09:03:19,013 - util.py[DEBUG]: Restoring selinux mode for /var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_yum_add_repo (recursive=False)
2019-03-06 09:03:19,013 - util.py[DEBUG]: Restoring selinux mode for /var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_yum_add_repo (recursive=False)
2019-03-06 09:03:19,014 - helpers.py[DEBUG]: Running config-yum-add-repo using lock (<FileLock using file '/var/lib/cloud/instances/b8b10f2b-dbf0-4d2b-b0b7-a896d1365a67/sem/config_yum_add_repo'>)
2019-03-06 09:03:19,015 - util.py[DEBUG]: Writing to /etc/yum.repos.d/zoo.repo - wb: [644] 191 bytes
2019-03-06 09:03:19,016 - util.py[DEBUG]: Restoring selinux mode for /etc/yum.repos.d/zoo.repo (recursive=False)
2019-03-06 09:03:19,017 - util.py[DEBUG]: Restoring selinux mode for /etc/yum.repos.d/zoo.repo (recursive=False)
2019-03-06 09:03:19,017 - handlers.py[DEBUG]: finish: modules-config/config-yum-add-repo: SUCCESS: config-yum-add-repo ran successfully
```

Comment 19 errata-xmlrpc 2019-05-14 12:36:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:1222