Bug 1699976

Summary: [UPDATE] rolling_update.yml fails in ceph-update-run.sh
Product: Red Hat OpenStack
Component: openstack-tripleo-heat-templates
Version: 14.0 (Rocky)
Target Release: 14.0 (Rocky)
Target Milestone: ---
Status: CLOSED ERRATA
Keywords: ZStream
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Type: Bug
Reporter: Raviv Bar-Tal <rbartal>
Assignee: John Fulton <johfulto>
QA Contact: Raviv Bar-Tal <rbartal>
CC: gfidente, jjoyce, johfulto, jschluet, lbezdick, mburns, shdunne, slinaber, tangvald, tvignaud
Fixed In Version: openstack-tripleo-heat-templates-9.3.1-0.20190314162759.d0a6cb1.el7ost
Last Closed: 2019-04-30 17:51:32 UTC

Attachments: cep-update-run.log

Description Raviv Bar-Tal 2019-04-15 14:00:33 UTC
Created attachment 1555258 [details]
cep-update-run.log

Description of problem:
When running ceph-update-run.sh, it fails with this error:
"2019-04-10 14:46:26 |  u'TASK [print ceph-ansible output in case of failure] ****************************',
2019-04-10 14:46:26 |  u'Wednesday 10 April 2019  14:46:24 -0400 (0:00:16.289)       0:01:15.572 ******* ',
2019-04-10 14:46:26 |  u'fatal: [undercloud]: FAILED! => {',
2019-04-10 14:46:26 |  u'    "failed_when_result": true, ',
2019-04-10 14:46:26 |  u'    "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [',
2019-04-10 14:46:26 |  u'        "Running ceph-ansible playbook /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml", ',
2019-04-10 14:46:26 |  u'        "ansible-playbook 2.6.11", ',
2019-04-10 14:46:26 |  u'        "  config file = /usr/share/ceph-ansible/ansible.cfg", ',
2019-04-10 14:46:26 |  u'        "  configured module search path = [u\'/usr/share/ceph-ansible/library\']", ',
2019-04-10 14:46:26 |  u'        "  ansible python module location = /usr/lib/python2.7/site-packages/ansible", ',
2019-04-10 14:46:26 |  u'        "  executable location = /usr/bin/ansible-playbook", ',
2019-04-10 14:46:26 |  u'        "  python version = 2.7.5 (default, Sep 12 2018, 05:31:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]", ',
2019-04-10 14:46:26 |  u'        "Using /usr/share/ceph-ansible/ansible.cfg as config file", ',
2019-04-10 14:46:26 |  u'        "statically imported: /usr/share/ceph-ansible/roles/ceph-handler/tasks/check_running_cluster.yml", ',
2019-04-10 14:46:26 |  u'        "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", ',
2019-04-10 14:46:26 |  u'        "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", ',
2019-04-10 14:46:26 |  u'        "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", ',
2019-04-10 14:46:26 |  u'        "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", ',
2019-04-10 14:46:26 |  u'        "", ',
2019-04-10 14:46:26 |  u'        "PLAYBOOK: rolling_update.yml ***************************************************", ',
2019-04-10 14:46:26 |  u'        "13 plays in /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml", ',
2019-04-10 14:46:26 |  u'        "PLAY [confirm whether user really meant to upgrade the cluster] ****************", ',
2019-04-10 14:46:26 |  u'        "TASK [Gathering Facts] *********************************************************", ',
2019-04-10 14:46:26 |  u'        "task path: /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml:17", ',
2019-04-10 14:46:26 |  u'        "Wednesday 10 April 2019  14:46:12 -0400 (0:00:00.050)       0:00:00.050 ******* ", ',
2019-04-10 14:46:26 |  u'        "An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TimeoutError: Timer expired after 10 seconds", ',
2019-04-10 14:46:26 |  u'        "fatal: [localhost]: FAILED! => {\\"changed\\": false, \\"cmd\\": \\"/sbin/udevadm info --query property --name /dev/vda1\\", \\"msg\\": \\"Timer expired after 10 seconds\\", \\"rc\\": 257}", ',
2019-04-10 14:46:26 |  u'        "PLAY RECAP *********************************************************************", ',
2019-04-10 14:46:26 |  u'        "localhost                  : ok=0    changed=0    unreachable=0    failed=1   ", ',
2019-04-10 14:46:26 |  u'        "Wednesday 10 April 2019  14:46:24 -0400 (0:00:12.198)       0:00:12.249 ******* ", ',
2019-04-10 14:46:26 |  u'        "=============================================================================== "',
2019-04-10 14:46:26 |  u'    ]',
2019-04-10 14:46:26 |  u'}',
"

Version-Release number of selected component (if applicable):
ceph-ansible-3.2.8-1.

How reproducible:


Steps to Reproduce:
1. Install OSP 14 GA.
2. Run the update procedure.
3. ceph-update-run.sh fails.

Actual results:


Expected results:


Additional info:
more logs can be found on Jenkins:
http://staging-jenkins2-qe-playground.usersys.redhat.com/view/DFG/view/upgrades/view/update/job/DFG-upgrades-updates-14-from-GA-HA-ipv4-titan70-poc/1/artifact/

Comment 2 John Fulton 2019-04-17 12:16:43 UTC
(In reply to Raviv Bar-Tal from comment #0)
> Created attachment 1555258 [details]
> cep-update-run.log
> 
> Description of problem:
...
> 2019-04-10 14:46:26 |  u'        "task path:
> /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml:17", ',
> 2019-04-10 14:46:26 |  u'        "Wednesday 10 April 2019  14:46:12 -0400
> (0:00:00.050)       0:00:00.050 ******* ", ',
> 2019-04-10 14:46:26 |  u'        "An exception occurred during task
> execution. To see the full traceback, use -vvv. The error was: TimeoutError:
> Timer expired after 10 seconds", ',
> 2019-04-10 14:46:26 |  u'        "fatal: [localhost]: FAILED! =>
> {\\"changed\\": false, \\"cmd\\": \\"/sbin/udevadm info --query property
> --name /dev/vda1\\", \\"msg\\": \\"Timer expired after 10 seconds\\",
> \\"rc\\": 257}", ',

This udevadm output looks like an environmental issue. 

However, line 17 of rolling_update.yml is just the confirmation prompt:
 https://github.com/ceph/ceph-ansible/blob/v3.2.8/infrastructure-playbooks/rolling_update.yml#L17

This is confusing. Please reproduce it and contact me so I can get a live environment to work in. Also, before you reproduce, please increase the verbosity with something like this:

parameter_defaults:
  CephAnsiblePlaybookVerbosity: 3

The above should be a permanent addition to CI. 
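
For reference, one way to pass this is via an extra environment file included at the update prepare step; a minimal sketch (the file name is illustrative, and "..." stands for your usual arguments):

  cat > ceph-verbosity.yaml <<'EOF'
  parameter_defaults:
    CephAnsiblePlaybookVerbosity: 3
  EOF
  openstack overcloud update prepare ... -e ceph-verbosity.yaml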

> Steps to Reproduce:
> 1. install osp14 GA 
> 2. run the update procedure.
> 3. ceph-update-run.sh fails 

Is this for a minor ceph update? 
 
> Additional info:
> more logs can be found on Jenkins:
> http://staging-jenkins2-qe-playground.usersys.redhat.com/view/DFG/view/
> upgrades/view/update/job/DFG-upgrades-updates-14-from-GA-HA-ipv4-titan70-poc/
> 1/artifact/

This 404s.

Comment 3 Raviv Bar-Tal 2019-04-17 12:26:12 UTC
Hey John,
I have run the job on my machine; you can monitor it here:
http://staging-jenkins2-qe-playground.usersys.redhat.com/view/DFG/view/upgrades/view/update/job/DFG-upgrades-updates-14-from-GA-HA-ipv4-titan60-poc/

Comment 4 John Fulton 2019-04-18 05:54:19 UTC
It looks like this Ansible 2.6 issue: https://github.com/ansible/ansible/issues/43884 

Before ceph-ansible's rolling_update playbook can run its first task, it fails on fact gathering.

To gather facts, Ansible runs "/sbin/udevadm info", which fails. This isn't anything in the playbook that the ceph-ansible team can change; it's in Ansible's fact gathering itself.
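
One way to exercise that same code path in isolation (not from the original report; gather_subset and gather_timeout are parameters of Ansible's setup module):

  # run hardware fact gathering by itself, with the default timeout
  ansible localhost -m setup -a "gather_subset=hardware"
  # same, but with a longer timeout, to test whether it is just slow
  ansible localhost -m setup -a "gather_subset=hardware gather_timeout=60"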

As per the logs [1], you're using Ansible 2.6.11.

Can you modify the test to use Ansible 2.7.x to confirm this? 

  John

[1] 
(undercloud) [stack@undercloud-0 ae48c718-43d5-402e-81b4-6b7019658462]$ cat ansible-errors.json  | jq .
{
  "undercloud": [
    [
      "print ceph-ansible output in case of failure",
      {
        "_ansible_verbose_always": true,
        "failed_when_result": true,
        "changed": false,
        "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [
          "Running ceph-ansible playbook /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml",
          "ansible-playbook 2.6.11",
          "  config file = /usr/share/ceph-ansible/ansible.cfg",
          "  configured module search path = [u'/usr/share/ceph-ansible/library']",
          "  ansible python module location = /usr/lib/python2.7/site-packages/ansible",
          "  executable location = /usr/bin/ansible-playbook",
          "  python version = 2.7.5 (default, Sep 12 2018, 05:31:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]",
          "Using /usr/share/ceph-ansible/ansible.cfg as config file",
          "statically imported: /usr/share/ceph-ansible/roles/ceph-handler/tasks/check_running_cluster.yml",
          "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml",
          "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml",
          "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml",
          "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml",
          "",
          "PLAYBOOK: rolling_update.yml ***************************************************",
          "13 plays in /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml",
          "PLAY [confirm whether user really meant to upgrade the cluster] ****************",
          "TASK [Gathering Facts] *********************************************************",
          "task path: /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml:17",
          "Wednesday 17 April 2019  14:35:50 -0400 (0:00:00.048)       0:00:00.048 ******* ",
          "An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TimeoutError: Timer expired after 10 seconds",
          "fatal: [localhost]: FAILED! => {\"changed\": false, \"cmd\": \"/sbin/udevadm info --query property --name /dev/vda1\", \"msg\": \"Timer expired after 10 seconds\", \"rc\": 257}",
          "PLAY RECAP *********************************************************************",
          "localhost                  : ok=0    changed=0    unreachable=0    failed=1   ",
          "Wednesday 17 April 2019  14:36:03 -0400 (0:00:12.193)       0:00:12.242 ******* ",
          "=============================================================================== "
        ],
        "_ansible_no_log": false
      }
    ]
  ]
}
(undercloud) [stack@undercloud-0 ae48c718-43d5-402e-81b4-6b7019658462]$ pwd
/var/lib/mistral/ae48c718-43d5-402e-81b4-6b7019658462
(undercloud) [stack@undercloud-0 ae48c718-43d5-402e-81b4-6b7019658462]$

Comment 5 Raviv Bar-Tal 2019-04-18 10:25:28 UTC
I don't see a problem with the udevadm command.
Manually running the same command, "/sbin/udevadm info --query property --name /dev/vda1",
from the undercloud without changing anything, using ansible 2.6.11, works fine.
(undercloud) [stack@undercloud-0 ~]$ ansible --version
ansible 2.6.11
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/stack/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
(undercloud) [stack@undercloud-0 ~]$ ansible -i new-inventory.yaml  all -a "/sbin/udevadm info --query property --name /dev/vda1"
undercloud | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/b2db9996-33ae-4ee8-b0eb-212b6d9a4394
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_TYPE=xfs
ID_FS_USAGE=filesystem
ID_FS_UUID=b2db9996-33ae-4ee8-b0eb-212b6d9a4394
ID_FS_UUID_ENC=b2db9996-33ae-4ee8-b0eb-212b6d9a4394
ID_PART_ENTRY_DISK=253:0
ID_PART_ENTRY_FLAGS=0x80
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=115339008
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=253
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=81770

compute-1 | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/2019-04-17-09-42-00-00
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
ID_FS_LABEL=config-2
ID_FS_LABEL_ENC=config-2
ID_FS_PUBLISHER_ID=OpenStack\x20Compute\x2018.0.3-0.20181011032838.d1243fe.el7ost
ID_FS_SYSTEM_ID=LINUX
ID_FS_TYPE=iso9660
ID_FS_USAGE=filesystem
ID_FS_UUID=2019-04-17-09-42-00-00
ID_FS_UUID_ENC=2019-04-17-09-42-00-00
ID_FS_VERSION=Joliet Extension
ID_PART_ENTRY_DISK=252:0
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=2048
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=252
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=85570

controller-2 | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/2019-04-17-09-37-48-00
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
ID_FS_LABEL=config-2
ID_FS_LABEL_ENC=config-2
ID_FS_PUBLISHER_ID=OpenStack\x20Compute\x2018.0.3-0.20181011032838.d1243fe.el7ost
ID_FS_SYSTEM_ID=LINUX
ID_FS_TYPE=iso9660
ID_FS_USAGE=filesystem
ID_FS_UUID=2019-04-17-09-37-48-00
ID_FS_UUID_ENC=2019-04-17-09-37-48-00
ID_FS_VERSION=Joliet Extension
ID_PART_ENTRY_DISK=252:0
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=2048
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=252
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=54115

controller-1 | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/2019-04-17-09-37-44-00
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
ID_FS_LABEL=config-2
ID_FS_LABEL_ENC=config-2
ID_FS_PUBLISHER_ID=OpenStack\x20Compute\x2018.0.3-0.20181011032838.d1243fe.el7ost
ID_FS_SYSTEM_ID=LINUX
ID_FS_TYPE=iso9660
ID_FS_USAGE=filesystem
ID_FS_UUID=2019-04-17-09-37-44-00
ID_FS_UUID_ENC=2019-04-17-09-37-44-00
ID_FS_VERSION=Joliet Extension
ID_PART_ENTRY_DISK=252:0
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=2048
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=252
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=8550

controller-0 | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/2019-04-17-09-37-46-00
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
ID_FS_LABEL=config-2
ID_FS_LABEL_ENC=config-2
ID_FS_PUBLISHER_ID=OpenStack\x20Compute\x2018.0.3-0.20181011032838.d1243fe.el7ost
ID_FS_SYSTEM_ID=LINUX
ID_FS_TYPE=iso9660
ID_FS_USAGE=filesystem
ID_FS_UUID=2019-04-17-09-37-46-00
ID_FS_UUID_ENC=2019-04-17-09-37-46-00
ID_FS_VERSION=Joliet Extension
ID_PART_ENTRY_DISK=252:0
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=2048
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=252
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=78619

compute-0 | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/2019-04-17-09-37-44-00
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
ID_FS_LABEL=config-2
ID_FS_LABEL_ENC=config-2
ID_FS_PUBLISHER_ID=OpenStack\x20Compute\x2018.0.3-0.20181011032838.d1243fe.el7ost
ID_FS_SYSTEM_ID=LINUX
ID_FS_TYPE=iso9660
ID_FS_USAGE=filesystem
ID_FS_UUID=2019-04-17-09-37-44-00
ID_FS_UUID_ENC=2019-04-17-09-37-44-00
ID_FS_VERSION=Joliet Extension
ID_PART_ENTRY_DISK=252:0
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=2048
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=252
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=11544

ceph-2 | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/2019-04-17-09-42-10-00
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
ID_FS_LABEL=config-2
ID_FS_LABEL_ENC=config-2
ID_FS_PUBLISHER_ID=OpenStack\x20Compute\x2018.0.3-0.20181011032838.d1243fe.el7ost
ID_FS_SYSTEM_ID=LINUX
ID_FS_TYPE=iso9660
ID_FS_USAGE=filesystem
ID_FS_UUID=2019-04-17-09-42-10-00
ID_FS_UUID_ENC=2019-04-17-09-42-10-00
ID_FS_VERSION=Joliet Extension
ID_PART_ENTRY_DISK=252:0
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=2048
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=252
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=88253

ceph-0 | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/2019-04-17-09-37-44-00
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
ID_FS_LABEL=config-2
ID_FS_LABEL_ENC=config-2
ID_FS_PUBLISHER_ID=OpenStack\x20Compute\x2018.0.3-0.20181011032838.d1243fe.el7ost
ID_FS_SYSTEM_ID=LINUX
ID_FS_TYPE=iso9660
ID_FS_USAGE=filesystem
ID_FS_UUID=2019-04-17-09-37-44-00
ID_FS_UUID_ENC=2019-04-17-09-37-44-00
ID_FS_VERSION=Joliet Extension
ID_PART_ENTRY_DISK=252:0
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=2048
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=252
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=59864

ceph-1 | SUCCESS | rc=0 >>
DEVLINKS=/dev/disk/by-label/config-2 /dev/disk/by-path/pci-0000:00:08.0-part1 /dev/disk/by-path/virtio-pci-0000:00:08.0-part1 /dev/disk/by-uuid/2019-04-17-09-41-51-00
DEVNAME=/dev/vda1
DEVPATH=/devices/pci0000:00/0000:00:08.0/virtio4/block/vda/vda1
DEVTYPE=partition
ID_FS_APPLICATION_ID=GENISOIMAGE\x20ISO\x209660\x2fHFS\x20FILESYSTEM\x20CREATOR\x20\x28C\x29\x201993\x20E.YOUNGDALE\x20\x28C\x29\x201997-2006\x20J.PEARSON\x2fJ.SCHILLING\x20\x28C\x29\x202006-2007\x20CDRKIT\x20TEAM
ID_FS_LABEL=config-2
ID_FS_LABEL_ENC=config-2
ID_FS_PUBLISHER_ID=OpenStack\x20Compute\x2018.0.3-0.20181011032838.d1243fe.el7ost
ID_FS_SYSTEM_ID=LINUX
ID_FS_TYPE=iso9660
ID_FS_USAGE=filesystem
ID_FS_UUID=2019-04-17-09-41-51-00
ID_FS_UUID_ENC=2019-04-17-09-41-51-00
ID_FS_VERSION=Joliet Extension
ID_PART_ENTRY_DISK=252:0
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=2048
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
ID_PATH=pci-0000:00:08.0
ID_PATH_TAG=pci-0000_00_08_0
MAJOR=252
MINOR=1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=51597


@John: I think we should continue to debug this failure.

Comment 6 Lukas Bezdicka 2019-04-18 12:18:17 UTC
ANSIBLE_GATHER_TIMEOUT=60 solves the issue; it looks like fact gathering on the undercloud is just slow because of the nested SSHing.
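
For anyone hitting this before a fix lands, the timeout can be raised per invocation or persistently; a minimal sketch of both (run with your usual inventory and arguments):

  # per invocation:
  ANSIBLE_GATHER_TIMEOUT=60 ansible-playbook /usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml
  # or persistently, in /usr/share/ceph-ansible/ansible.cfg:
  [defaults]
  gather_timeout = 60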

Comment 7 John Fulton 2019-04-18 12:40:19 UTC
I will update the THT (tripleo-heat-templates) code that sets up the ceph-ansible command to include this new environment variable.
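
(Once the fixed templates land, one way to confirm the variable is wired in, assuming the fix surfaces as an environment variable in the templates:

  grep -r ANSIBLE_GATHER_TIMEOUT /usr/share/openstack-tripleo-heat-templates/
)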

Comment 18 errata-xmlrpc 2019-04-30 17:51:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0878