Bug 2102557 - Upgrade [OSP16.2 -> OSP17.1] Ceph adoption failed due to lack of permissions on target directory
Summary: Upgrade [OSP16.2 -> OSP17.1] Ceph adoption failed due to lack of permissions...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: tripleo-ansible
Version: 17.1 (Wallaby)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 17.1
Assignee: Francesco Pantano
QA Contact: Juan Badia Payno
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-30 08:19 UTC by Juan Badia Payno
Modified: 2023-08-16 01:11 UTC
CC List: 5 users

Fixed In Version: tripleo-ansible-3.3.1-1.20220820222621.66d7edc.el9ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-16 01:11:21 UTC
Target Upstream Version:
Embargoed:




Links
- OpenStack gerrit 849278 (MERGED): [wallaby-only] Ability to skip create_ceph_ansible_remote_tmp command (last updated 2022-08-22 05:55:35 UTC)
- Red Hat Issue Tracker OSP-16164 (last updated 2022-06-30 08:27:52 UTC)
- Red Hat Product Errata RHEA-2023:4577 (last updated 2023-08-16 01:11:53 UTC)

Description Juan Badia Payno 2022-06-30 08:19:36 UTC
Once the undercloud was upgraded to OSP17, the overcloud upgrade prepare command was executed.
As can be seen below, the inventory uses the heat-admin user instead of tripleo-admin.
The Ceph upgrade step then fails due to lack of permissions on /tmp/ceph_ansible_tmp.
I tried several tests, and the only one that worked was modifying the tag at line 60 of the
create_ceph_ansible_remote_tmp.yml file and skipping that tag in the external-upgrade run.
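
To illustrate the failure mode (a minimal sketch, not taken from the run itself; the second mkdir mirrors the "Failed command" Ansible reports in the logs below, and the output lines are what one would expect given the ownership shown at the end of this report):

$ ssh heat-admin@controller-0.ctlplane 'ls -ld /tmp/ceph_ansible_tmp'
drwx------. 2 root root 6 Jun 29 08:41 /tmp/ceph_ansible_tmp
$ ssh heat-admin@controller-0.ctlplane '( umask 77 && mkdir -p /tmp/ceph_ansible_tmp && mkdir /tmp/ceph_ansible_tmp/ansible-tmp-test )'
# mkdir -p succeeds because the directory already exists, but the second mkdir fails:
mkdir: cannot create directory '/tmp/ceph_ansible_tmp/ansible-tmp-test': Permission denied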



## Execution to prepare the overcloud 
openstack overcloud upgrade prepare \
--timeout 240 \
--templates /usr/share/openstack-tripleo-heat-templates \
--environment-file /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml \
--stack qe-Cloud-0 \
--libvirt-type kvm \
--ntp-server clock1.rdu2.redhat.com \
-e /home/stack/tmp/baremetal_deployment.yaml \
-e /home/stack/tmp/generated-networks-deployed.yaml \
-e /home/stack/tmp/generated-vip-deployed.yaml \
-e /home/stack/virt/internal.yaml \
--networks-file /home/stack/virt/network/network_data_v2.yaml \
-e /home/stack/virt/network/network-environment_v2.yaml \
-e /home/stack/virt/enable-tls.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/public_vip.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \
-e /home/stack/virt/hostnames.yml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/config_heat.yaml \
-e /home/stack/virt/nodes_data.yaml \
-e /home/stack/virt/firstboot.yaml \
-e ~/containers-prepare-parameter.yaml \
-e /home/stack/virt/performance.yaml \
-e /home/stack/virt/l3_fip_qos.yaml \
-e /home/stack/virt/ovn-extras.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /home/stack/tmp/ipaservices-baremetal-ansible.yaml \
-e /home/stack/tmp/cephadm_fqdn.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
-e /home/stack/virt/cloud-names.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml \
-e /home/stack/overcloud-deploy/qe-Cloud-0/tripleo-qe-Cloud-0-passwords.yaml \
-e /home/stack/overcloud_images.yaml \
--log-file overcloud_deployment_89.log

## This is part of the inventory; note that the SSH user to be used is heat-admin
(undercloud) [stack@undercloud-0 ~]$ cat /home/stack/overcloud-deploy/qe-Cloud-0/tripleo-ansible-inventory.yaml
 
...
Controller:
  hosts:
    controller-0:
      ansible_host: 192.168.24.48
      canonical_hostname: controller-0.redhat.local
      ctlplane_hostname: controller-0.ctlplane.redhat.local
      ctlplane_ip: 192.168.24.48
      deploy_server_id: 1264e43f-7f2a-432f-8804-8a6e26f07fd0
      external_hostname: controller-0.external.redhat.local
      external_ip: 10.0.0.141
      internal_api_hostname: controller-0.internalapi.redhat.local
      internal_api_ip: 172.17.1.114
      storage_hostname: controller-0.storage.redhat.local
      storage_ip: 172.17.3.31
      storage_mgmt_hostname: controller-0.storagemgmt.redhat.local
      storage_mgmt_ip: 172.17.4.34
      tenant_hostname: controller-0.tenant.redhat.local
      tenant_ip: 172.17.2.54
    controller-1:
      ansible_host: 192.168.24.11
      canonical_hostname: controller-1.redhat.local
      ctlplane_hostname: controller-1.ctlplane.redhat.local
      ctlplane_ip: 192.168.24.11
      deploy_server_id: 0d92d2e9-c924-47a0-9c6b-eec306f08a64
      external_hostname: controller-1.external.redhat.local
      external_ip: 10.0.0.116
      internal_api_hostname: controller-1.internalapi.redhat.local
      internal_api_ip: 172.17.1.96
      storage_hostname: controller-1.storage.redhat.local
      storage_ip: 172.17.3.71
      storage_mgmt_hostname: controller-1.storagemgmt.redhat.local
      storage_mgmt_ip: 172.17.4.108
      tenant_hostname: controller-1.tenant.redhat.local
      tenant_ip: 172.17.2.61
    controller-2:
      ansible_host: 192.168.24.27
      canonical_hostname: controller-2.redhat.local
      ctlplane_hostname: controller-2.ctlplane.redhat.local
      ctlplane_ip: 192.168.24.27
      deploy_server_id: 4890f2a6-e52a-4d18-ad33-90bfb4daad4d
      external_hostname: controller-2.external.redhat.local
      external_ip: 10.0.0.110
      internal_api_hostname: controller-2.internalapi.redhat.local
      internal_api_ip: 172.17.1.102
      storage_hostname: controller-2.storage.redhat.local
      storage_ip: 172.17.3.46
      storage_mgmt_hostname: controller-2.storagemgmt.redhat.local
      storage_mgmt_ip: 172.17.4.87
      tenant_hostname: controller-2.tenant.redhat.local
      tenant_ip: 172.17.2.92
  vars:
    ansible_ssh_user: heat-admin
    bootstrap_server_id: 1264e43f-7f2a-432f-8804-8a6e26f07fd0
    ctlplane_cidr: '24'
    ctlplane_dns_nameservers: &id001
    - 10.0.0.36
    ctlplane_gateway_ip: 192.168.24.1
    ctlplane_host_routes: []
    ctlplane_mtu: 1500
    ctlplane_subnet_cidr: '24'
    ctlplane_vlan_id: '1'
    external_cidr: '24'
    external_dns_nameservers: []
    external_gateway_ip: 10.0.0.1
    external_host_routes:
    - default: true
      nexthop: 10.0.0.1
    external_mtu: 1500
    external_vlan_id: '10'
    internal_api_cidr: '24'
    internal_api_dns_nameservers: &id002 []
    internal_api_gateway_ip: null
    internal_api_host_routes: []
    internal_api_mtu: 1500
    internal_api_vlan_id: '20'
    networks_all: &id003
    - Storage
    - StorageMgmt
    - InternalApi
    - Tenant
    - External
...
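
A quick way to confirm which remote user the generated inventory will use (a plain grep over the file quoted above):

(undercloud) [stack@undercloud-0 ~]$ grep -m1 ansible_ssh_user /home/stack/overcloud-deploy/qe-Cloud-0/tripleo-ansible-inventory.yaml
    ansible_ssh_user: heat-admin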


## The execution of openstack overcloud external-upgrade to adopt Ceph, which failed.

(undercloud) [stack@undercloud-0 ~]$ openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph,facts
...
        "TASK [ceph-infra : add logrotate configuration] ********************************",
        "task path: /usr/share/ceph-ansible/roles/ceph-infra/tasks/main.yml:40",
        "Tuesday 28 June 2022  13:17:07 +0000 (0:00:01.484)       0:00:33.112 ********** ",
        "fatal: [controller-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.0189223-782939-58772081411051 `\\\" && echo ansible-tmp-1656422228.0189223-782939-58772081411051=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.0189223-782939-58772081411051 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [controller-1]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.0500329-782940-254378687854924 `\\\" && echo ansible-tmp-1656422228.0500329-782940-254378687854924=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.0500329-782940-254378687854924 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [controller-2]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.0762165-782944-264171691841358 `\\\" && echo ansible-tmp-1656422228.0762165-782944-264171691841358=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.0762165-782944-264171691841358 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.100523-782949-37552519535997 `\\\" && echo ansible-tmp-1656422228.100523-782949-37552519535997=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.100523-782949-37552519535997 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-1]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.125743-782953-229886247109401 `\\\" && echo ansible-tmp-1656422228.125743-782953-229886247109401=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.125743-782953-229886247109401 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-2]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.1485791-782958-76427067622758 `\\\" && echo ansible-tmp-1656422228.1485791-782958-76427067622758=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422228.1485791-782958-76427067622758 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "NO MORE HOSTS LEFT *************************************************************",
        "PLAY RECAP *********************************************************************",
        "ceph-0                     : ok=28   changed=3    unreachable=1    failed=0    skipped=37   rescued=0    ignored=0   ",
        "ceph-1                     : ok=28   changed=3    unreachable=1    failed=0    skipped=37   rescued=0    ignored=0   ",
        "ceph-2                     : ok=28   changed=3    unreachable=1    failed=0    skipped=37   rescued=0    ignored=0   ",
        "compute-0                  : ok=26   changed=3    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0   ",
        "compute-1                  : ok=26   changed=3    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0   ",
        "controller-0               : ok=33   changed=4    unreachable=1    failed=0    skipped=46   rescued=0    ignored=0   ",
        "controller-1               : ok=27   changed=3    unreachable=1    failed=0    skipped=38   rescued=0    ignored=0   ",
        "controller-2               : ok=27   changed=3    unreachable=1    failed=0    skipped=38   rescued=0    ignored=0   ",
        "localhost                  : ok=0    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   ",
        "Tuesday 28 June 2022  13:17:08 +0000 (0:00:00.206)       0:00:33.319 ********** ",
        "=============================================================================== ",
        "ceph-infra : install chrony --------------------------------------------- 3.43s",
        "/usr/share/ceph-ansible/roles/ceph-infra/tasks/setup_ntp.yml:27 ---------------",
        "gather and delegate facts ----------------------------------------------- 2.82s",
        "/usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml:82 --------",
        "ceph-facts : get current fsid ------------------------------------------- 1.90s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:141 ------------------",
        "ceph-facts : read osd pool default crush rule --------------------------- 1.52s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:228 ------------------",
        "ceph-facts : resolve device link(s) ------------------------------------- 1.50s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/devices.yml:2 ------------------",
        "ceph-infra : enable chronyd --------------------------------------------- 1.48s",
        "/usr/share/ceph-ansible/roles/ceph-infra/tasks/setup_ntp.yml:58 ---------------",
        "gather facts ------------------------------------------------------------ 1.45s",
        "/usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml:74 --------",
        "ceph-facts : find a running mon container ------------------------------- 1.24s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:59 -------------------",
        "ceph-infra : disable time sync using timesyncd if we are not using it --- 1.20s",
        "/usr/share/ceph-ansible/roles/ceph-infra/tasks/setup_ntp.yml:44 ---------------",
        "ceph-facts : check if the ceph conf exists ------------------------------ 0.88s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:217 ------------------",
        "ceph-facts : check if it is atomic host --------------------------------- 0.87s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:2 --------------------",
        "ceph-facts : check if podman binary is present -------------------------- 0.61s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/container_binary.yml:2 ---------",
        "ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.61s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/set_monitor_address.yml:2 ------",
        "ceph-facts : set_fact rgw_instances with rgw multisite ------------------ 0.40s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/set_radosgw_address.yml:80 -----",
        "ceph-facts : include facts.yml ------------------------------------------ 0.39s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/main.yml:6 ---------------------",
        "ceph-infra : include_tasks setup_ntp.yml -------------------------------- 0.36s",
        "/usr/share/ceph-ansible/roles/ceph-infra/tasks/main.yml:17 --------------------",
        "ceph-facts : read osd pool default crush rule --------------------------- 0.35s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:244 ------------------",
        "ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.31s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/set_radosgw_address.yml:62 -----",
        "ceph-facts : set_fact devices generate device list when osd_auto_discovery --- 0.31s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/devices.yml:77 -----------------",
        "ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli --- 0.30s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:259 ------------------"
    ],
    "failed_when_result": true
}
2022-06-28 13:17:08.841260 | 525400cf-ccf8-0e5e-163b-000000001937 |     TIMING | tripleo_ceph_run_ansible : print ceph-ansible output in case of failure | undercloud | 0:01:10.398349 | 0.13s

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
ceph-0                     : ok=7    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
ceph-1                     : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
ceph-2                     : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
compute-0                  : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
compute-1                  : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-0               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-1               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-2               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
undercloud                 : ok=84   changed=16   unreachable=0    failed=1    skipped=98   rescued=0    ignored=2   
2022-06-28 13:17:08.856049 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:17:08.856437 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 186        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:17:08.856802 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Elapsed Time: 0:01:10.413910 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:17:08.857138 |                                 UUID |       Info |       Host |   Task Name |   Run Time
2022-06-28 13:17:08.857466 | 525400cf-ccf8-0e5e-163b-000000001935 |    SUMMARY | undercloud | tripleo_ceph_run_ansible : run ceph-ansible | 36.57s
2022-06-28 13:17:08.857830 | 525400cf-ccf8-0e5e-163b-000000001683 |    SUMMARY | undercloud | tripleo_ceph_uuid : run nodes-uuid command | 5.60s
2022-06-28 13:17:08.858197 | 525400cf-ccf8-0e5e-163b-0000000008bd |    SUMMARY | undercloud | Get ceph-ansible repository | 2.80s
2022-06-28 13:17:08.858594 | 525400cf-ccf8-0e5e-163b-000000001927 |    SUMMARY | undercloud | tripleo_ceph_run_ansible : run create_ceph_ansible_remote_tmp command | 1.47s
2022-06-28 13:17:08.858979 | 525400cf-ccf8-0e5e-163b-000000000445 |    SUMMARY | undercloud | ceph : Gather the package facts | 1.37s
2022-06-28 13:17:08.859338 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY |  compute-0 | Gathering Facts | 1.02s
2022-06-28 13:17:08.859668 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY |     ceph-1 | Gathering Facts | 0.98s
2022-06-28 13:17:08.860070 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY | controller-0 | Gathering Facts | 0.96s
2022-06-28 13:17:08.860463 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY |  compute-1 | Gathering Facts | 0.93s
2022-06-28 13:17:08.860813 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY | controller-2 | Gathering Facts | 0.93s
2022-06-28 13:17:08.861141 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY |     ceph-0 | Gathering Facts | 0.91s
2022-06-28 13:17:08.861426 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY | controller-1 | Gathering Facts | 0.88s
2022-06-28 13:17:08.861708 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY |     ceph-2 | Gathering Facts | 0.83s
2022-06-28 13:17:08.861988 | 525400cf-ccf8-0e5e-163b-00000000106a |    SUMMARY | undercloud | tripleo_ceph_work_dir : create ceph-ansible temp dirs | 0.83s
2022-06-28 13:17:08.862339 | 525400cf-ccf8-0e5e-163b-000000000a32 |    SUMMARY | undercloud | tripleo_ceph_work_dir : create ceph-ansible temp dirs | 0.81s
2022-06-28 13:17:08.862805 | 525400cf-ccf8-0e5e-163b-000000000726 |    SUMMARY | undercloud | Gathering Facts | 0.78s
2022-06-28 13:17:08.863211 | 525400cf-ccf8-0e5e-163b-000000000a48 |    SUMMARY | undercloud | tripleo_ceph_work_dir : generate ceph-ansible group vars all | 0.59s
2022-06-28 13:17:08.863581 | 525400cf-ccf8-0e5e-163b-00000000049a |    SUMMARY | undercloud | generate ceph-ansible group vars mgrs | 0.50s
2022-06-28 13:17:08.864016 | 525400cf-ccf8-0e5e-163b-000000001080 |    SUMMARY | undercloud | tripleo_ceph_work_dir : generate ceph-ansible group vars all | 0.40s
2022-06-28 13:17:08.864444 | 525400cf-ccf8-0e5e-163b-0000000004c6 |    SUMMARY | undercloud | generate ceph-ansible group vars osds | 0.40s
2022-06-28 13:17:08.864795 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ End Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:17:08.865179 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ State Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:17:08.865501 | ~~~~~~~~~~~~~~~~~~ Number of nodes which did not deploy successfully: 1 ~~~~~~~~~~~~~~~~~
2022-06-28 13:17:08.867831 |  The following node(s) had failures: undercloud
2022-06-28 13:17:08.868337 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:17:09.212 778255 INFO tripleoclient.utils.utils [-] Temporary directory [ /tmp/tripleokn05u6jo ] cleaned up
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.utils.utils [-] Ansible execution failed. playbook: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/tripleo-multi-playbook.yaml, Run Status: failed, Return Code: 2, To rerun the failed command manually execute the following script: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/ansible-playbook-command.sh
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun [-] Exception occured while running the command: RuntimeError: Ansible execution failed. playbook: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/tripleo-multi-playbook.yaml, Run Status: failed, Return Code: 2, To rerun the failed command manually execute the following script: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/ansible-playbook-command.sh
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun Traceback (most recent call last):
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun   File "/usr/lib/python3.6/site-packages/tripleoclient/command.py", line 32, in run
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun     super(Command, self).run(parsed_args)
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun   File "/usr/lib/python3.6/site-packages/osc_lib/command/command.py", line 39, in run
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun     return super(Command, self).run(parsed_args)
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun   File "/usr/lib/python3.6/site-packages/cliff/command.py", line 186, in run
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun     return_code = self.take_action(parsed_args) or 0
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun   File "/usr/lib/python3.6/site-packages/tripleoclient/v1/overcloud_external_upgrade.py", line 153, in take_action
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun     reproduce_command=True
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun   File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 736, in run_ansible_playbook
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun     raise RuntimeError(err_msg)
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun RuntimeError: Ansible execution failed. playbook: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/tripleo-multi-playbook.yaml, Run Status: failed, Return Code: 2, To rerun the failed command manually execute the following script: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/ansible-playbook-command.sh
2022-06-28 13:17:09.212 778255 ERROR tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun 
2022-06-28 13:17:09.214 778255 ERROR openstack [-] Ansible execution failed. playbook: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/tripleo-multi-playbook.yaml, Run Status: failed, Return Code: 2, To rerun the failed command manually execute the following script: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/ansible-playbook-command.sh: RuntimeError: Ansible execution failed. playbook: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/tripleo-multi-playbook.yaml, Run Status: failed, Return Code: 2, To rerun the failed command manually execute the following script: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/ansible-playbook-command.sh
2022-06-28 13:17:09.214 778255 INFO osc_lib.shell [-] END return value: 1
/usr/lib/python3.6/site-packages/barbicanclient/__init__.py:61: UserWarning: The secrets module is moved to barbicanclient/v1 directory, direct import of barbicanclient.secrets will be deprecated. Please import barbicanclient.v1.secrets instead.
  % (name, name, name))


## These are the directories it complains about

(undercloud) [stack@undercloud-0 ~]$ ls -l /tmp/ | grep ceph 
drwx------. 2 root          root           4096 Jun 28 13:16 ceph_ansible_control_path
drwx------. 2 root          root              6 Jun 23 07:54 ceph_ansible_tmp
(undercloud) [stack@undercloud-0 ~]$ ssh heat-admin ls -l /tmp/ | grep ceph 
Warning: Permanently added 'controller-0.ctlplane' (ECDSA) to the list of known hosts.
drwx------. 2 root       root        6 Jun 29 08:41 ceph_ansible_tmp
(undercloud) [stack@undercloud-0 ~]$ ssh heat-admin ls -l /tmp/ | grep ceph 
Warning: Permanently added 'ceph-0.ctlplane' (ECDSA) to the list of known hosts.
drwx------. 2 root       root        6 Jun 29 08:41 ceph_ansible_tmp
(undercloud) [stack@undercloud-0 ~]$ ssh heat-admin ls -l /tmp/ | grep ceph 
Warning: Permanently added 'compute-0.ctlplane' (ECDSA) to the list of known hosts.
drwx------. 2 root       root        6 Jun 29 08:41 ceph_ansible_tmp
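
The ownership mismatch can be checked on every node in one pass (a sketch; the host list matches this deployment and heat-admin is the user the inventory currently selects):

(undercloud) [stack@undercloud-0 ~]$ for h in controller-0 controller-1 controller-2 ceph-0 ceph-1 ceph-2; do ssh heat-admin@$h.ctlplane 'stat -c "%U:%G %a %n" /tmp/ceph_ansible_tmp'; done
# Each node reports root:root 700; the remote user therefore cannot create
# Ansible's per-run tmp directories inside it, matching the UNREACHABLE errors.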

Comment 1 Juan Badia Payno 2022-06-30 08:21:45 UTC
### Another test, as Rabi suggested, after removing the remote directories (/tmp/ceph_ansible_tmp) on all hosts

(undercloud) [stack@undercloud-0 ~]$ ANSIBLE_REMOTE_USER=tripleo-admin openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph,facts

        "task path: /usr/share/ceph-ansible/roles/ceph-infra/tasks/main.yml:40",
        "Tuesday 28 June 2022  13:28:13 +0000 (0:00:01.663)       0:00:31.696 ********** ",
        "fatal: [controller-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.3973598-803969-199512256121537 `\\\" && echo ansible-tmp-1656422893.3973598-803969-199512256121537=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.3973598-803969-199512256121537 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [controller-1]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.4302225-803970-231745331753857 `\\\" && echo ansible-tmp-1656422893.4302225-803970-231745331753857=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.4302225-803970-231745331753857 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [controller-2]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.4676113-803975-168426745654952 `\\\" && echo ansible-tmp-1656422893.4676113-803975-168426745654952=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.4676113-803975-168426745654952 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.4976327-803980-104164280060313 `\\\" && echo ansible-tmp-1656422893.4976327-803980-104164280060313=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.4976327-803980-104164280060313 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-1]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.5311334-803984-89881189582370 `\\\" && echo ansible-tmp-1656422893.5311334-803984-89881189582370=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.5311334-803984-89881189582370 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-2]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.5590832-803989-122640184691967 `\\\" && echo ansible-tmp-1656422893.5590832-803989-122640184691967=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656422893.5590832-803989-122640184691967 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "NO MORE HOSTS LEFT *************************************************************",
        "PLAY RECAP *********************************************************************",
        "ceph-0                     : ok=28   changed=3    unreachable=1    failed=0    skipped=37   rescued=0    ignored=0   ",
        "ceph-1                     : ok=28   changed=3    unreachable=1    failed=0    skipped=37   rescued=0    ignored=0   ",
        "ceph-2                     : ok=28   changed=3    unreachable=1    failed=0    skipped=37   rescued=0    ignored=0   ",
        "compute-0                  : ok=26   changed=3    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0   ",
        "compute-1                  : ok=26   changed=3    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0   ",
        "controller-0               : ok=33   changed=4    unreachable=1    failed=0    skipped=46   rescued=0    ignored=0   ",
        "controller-1               : ok=27   changed=3    unreachable=1    failed=0    skipped=38   rescued=0    ignored=0   ",
        "controller-2               : ok=27   changed=3    unreachable=1    failed=0    skipped=38   rescued=0    ignored=0   ",
        "localhost                  : ok=0    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   ",
        "Tuesday 28 June 2022  13:28:13 +0000 (0:00:00.247)       0:00:31.944 ********** ",
        "=============================================================================== ",
        "gather and delegate facts ----------------------------------------------- 2.78s",
        "/usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml:82 --------",
        "ceph-facts : resolve device link(s) ------------------------------------- 2.22s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/devices.yml:2 ------------------",
        "ceph-infra : install chrony --------------------------------------------- 1.85s",
        "/usr/share/ceph-ansible/roles/ceph-infra/tasks/setup_ntp.yml:27 ---------------",
        "ceph-infra : enable chronyd --------------------------------------------- 1.66s",
        "/usr/share/ceph-ansible/roles/ceph-infra/tasks/setup_ntp.yml:58 ---------------",
        "ceph-facts : get current fsid ------------------------------------------- 1.60s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:141 ------------------",
        "gather facts ------------------------------------------------------------ 1.57s",
        "/usr/share/ceph-ansible/infrastructure-playbooks/rolling_update.yml:74 --------",
        "ceph-facts : find a running mon container ------------------------------- 1.53s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:59 -------------------",
        "ceph-infra : disable time sync using timesyncd if we are not using it --- 1.13s",
        "/usr/share/ceph-ansible/roles/ceph-infra/tasks/setup_ntp.yml:44 ---------------",
        "ceph-facts : check if it is atomic host --------------------------------- 0.78s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:2 --------------------",
        "ceph-facts : check if the ceph conf exists ------------------------------ 0.72s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:217 ------------------",
        "ceph-facts : check if podman binary is present -------------------------- 0.61s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/container_binary.yml:2 ---------",
        "ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.58s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/set_monitor_address.yml:2 ------",
        "ceph-facts : set_fact rgw_instances with rgw multisite ------------------ 0.58s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/set_radosgw_address.yml:80 -----",
        "ceph-facts : read osd pool default crush rule --------------------------- 0.47s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:228 ------------------",
        "ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.44s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/set_radosgw_address.yml:62 -----",
        "ceph-facts : include facts.yml ------------------------------------------ 0.43s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/main.yml:6 ---------------------",
        "ceph-facts : set_fact devices generate device list when osd_auto_discovery --- 0.40s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/devices.yml:77 -----------------",
        "ceph-infra : include_tasks setup_ntp.yml -------------------------------- 0.33s",
        "/usr/share/ceph-ansible/roles/ceph-infra/tasks/main.yml:17 --------------------",
        "ceph-facts : read osd pool default crush rule --------------------------- 0.32s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:244 ------------------",
        "ceph-facts : set_fact container_exec_cmd -------------------------------- 0.30s",
        "/usr/share/ceph-ansible/roles/ceph-facts/tasks/facts.yml:53 -------------------"
    ],
    "failed_when_result": true
}
2022-06-28 13:28:14.196911 | 525400cf-ccf8-2c00-06c7-000000001937 |     TIMING | tripleo_ceph_run_ansible : print ceph-ansible output in case of failure | undercloud | 0:01:10.454347 | 0.08s

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
ceph-0                     : ok=7    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
ceph-1                     : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
ceph-2                     : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
compute-0                  : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
compute-1                  : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-0               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-1               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-2               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
undercloud                 : ok=84   changed=6    unreachable=0    failed=1    skipped=98   rescued=0    ignored=2   
2022-06-28 13:28:14.208359 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:28:14.208700 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 186        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:28:14.209015 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Elapsed Time: 0:01:10.466472 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 13:28:14.209324 |                                 UUID |       Info |       Host |   Task Name |   Run Time
2022-06-28 13:28:14.209704 | 525400cf-ccf8-2c00-06c7-000000001935 |    SUMMARY | undercloud | tripleo_ceph_run_ansible : run ceph-ansible | 34.93s
2022-06-28 13:28:14.210107 | 525400cf-ccf8-2c00-06c7-000000001683 |    SUMMARY | undercloud | tripleo_ceph_uuid : run nodes-uuid command | 7.21s
2022-06-28 13:28:14.210526 | 525400cf-ccf8-2c00-06c7-0000000008bd |    SUMMARY | undercloud | Get ceph-ansible repository | 3.02s
2022-06-28 13:28:14.210975 | 525400cf-ccf8-2c00-06c7-000000001927 |    SUMMARY | undercloud | tripleo_ceph_run_ansible : run create_ceph_ansible_remote_tmp command | 1.99s
2022-06-28 13:28:14.211404 | 525400cf-ccf8-2c00-06c7-000000000445 |    SUMMARY | undercloud | ceph : Gather the package facts | 1.21s
2022-06-28 13:28:14.211862 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY | controller-1 | Gathering Facts | 0.90s
2022-06-28 13:28:14.212278 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY |  compute-0 | Gathering Facts | 0.87s
2022-06-28 13:28:14.212803 | 525400cf-ccf8-2c00-06c7-000000000a32 |    SUMMARY | undercloud | tripleo_ceph_work_dir : create ceph-ansible temp dirs | 0.87s
2022-06-28 13:28:14.213238 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY |     ceph-0 | Gathering Facts | 0.86s
2022-06-28 13:28:14.213590 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY |     ceph-2 | Gathering Facts | 0.84s
2022-06-28 13:28:14.213925 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY | controller-0 | Gathering Facts | 0.83s
2022-06-28 13:28:14.214281 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY |     ceph-1 | Gathering Facts | 0.82s
2022-06-28 13:28:14.214622 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY |  compute-1 | Gathering Facts | 0.81s
2022-06-28 13:28:14.214934 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY | controller-2 | Gathering Facts | 0.81s
2022-06-28 13:28:14.215211 | 525400cf-ccf8-2c00-06c7-00000000106a |    SUMMARY | undercloud | tripleo_ceph_work_dir : create ceph-ansible temp dirs | 0.68s
2022-06-28 13:28:14.215482 | 525400cf-ccf8-2c00-06c7-000000000726 |    SUMMARY | undercloud | Gathering Facts | 0.67s
2022-06-28 13:28:14.215777 | 525400cf-ccf8-2c00-06c7-000000000a48 |    SUMMARY | undercloud | tripleo_ceph_work_dir : generate ceph-ansible group vars all | 0.59s
2022-06-28 13:28:14.216064 | 525400cf-ccf8-2c00-06c7-00000000049d |    SUMMARY | undercloud | generate ceph-ansible group vars mons | 0.46s
2022-06-28 13:28:14.216361 | 525400cf-ccf8-2c00-06c7-000000001925 |    SUMMARY | undercloud | tripleo_ceph_run_ansible : genereate create_ceph_ansible_remote_tmp playbook | 0.43s
2022-06-28 13:28:14.216668 | 525400cf-ccf8-2c00-06c7-000000000497 |    SUMMARY | undercloud | generate ceph-ansible group vars clients | 0.40s

Comment 2 Juan Badia Payno 2022-06-30 08:22:24 UTC
The workaround:


In https://github.com/openstack/tripleo-ansible/blob/stable/wallaby/tripleo_ansible/roles/tripleo_ceph_run_ansible/tasks/create_ceph_ansible_remote_tmp.yml#L60 I modified the tag to run_ceph_ansible_jbp, roughly as sketched below.
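
The shape of that edit (a sketch, not the literal file contents; the original tag name and the elided task body are assumptions):

# tripleo_ansible/roles/tripleo_ceph_run_ansible/tasks/create_ceph_ansible_remote_tmp.yml, around line 60
- name: run create_ceph_ansible_remote_tmp command
  # ... module and arguments unchanged ...
  tags:
    - run_ceph_ansible_jbp  # renamed from the stock tag so that --skip-tags can target exactly this task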

I then removed the directories, as their owner is root,

and executed:
openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph,facts --skip-tags run_ceph_ansible_jbp

2022-06-28 14:06:44.704834 | 525400cf-ccf8-64ae-5370-000000000702 |       TASK | generate ceph-ansible group vars osds
2022-06-28 14:06:44.721679 | 525400cf-ccf8-64ae-5370-000000000702 |    SKIPPED | generate ceph-ansible group vars osds | undercloud
2022-06-28 14:06:44.722674 | 525400cf-ccf8-64ae-5370-000000000702 |     TIMING | generate ceph-ansible group vars osds | undercloud | 0:12:49.066927 | 0.02s

PLAY RECAP *********************************************************************
ceph-0                     : ok=7    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
ceph-1                     : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
ceph-2                     : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
compute-0                  : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
compute-1                  : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-0               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-1               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
controller-2               : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=2   
undercloud                 : ok=108  changed=11   unreachable=0    failed=0    skipped=237  rescued=0    ignored=2   
2022-06-28 14:06:44.729158 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 14:06:44.729466 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 349        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 14:06:44.729768 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Elapsed Time: 0:12:49.074025 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 14:06:44.730095 |                                 UUID |       Info |       Host |   Task Name |   Run Time
2022-06-28 14:06:44.730388 | 525400cf-ccf8-64ae-5370-000000001935 |    SUMMARY | undercloud | tripleo_ceph_run_ansible : run ceph-ansible | 713.64s
2022-06-28 14:06:44.730678 | 525400cf-ccf8-64ae-5370-0000000020f8 |    SUMMARY | undercloud | tripleo_ceph_client : push files to the other nodes of cluster | 7.49s
2022-06-28 14:06:44.730981 | 525400cf-ccf8-64ae-5370-000000001683 |    SUMMARY | undercloud | tripleo_ceph_uuid : run nodes-uuid command | 3.00s
2022-06-28 14:06:44.731268 | 525400cf-ccf8-64ae-5370-0000000020f7 |    SUMMARY | undercloud | tripleo_ceph_client : Ensure /etc/ceph exists on all clients | 2.95s
2022-06-28 14:06:44.731530 | 525400cf-ccf8-64ae-5370-0000000008bd |    SUMMARY | undercloud | Get ceph-ansible repository | 2.42s
2022-06-28 14:06:44.731802 | 525400cf-ccf8-64ae-5370-000000001936 |    SUMMARY | undercloud | tripleo_ceph_run_ansible : search output of ceph-ansible run(s) non-zero return codes | 1.31s
2022-06-28 14:06:44.732102 | 525400cf-ccf8-64ae-5370-000000000445 |    SUMMARY | undercloud | ceph : Gather the package facts | 1.28s
2022-06-28 14:06:44.732385 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY | controller-1 | Gathering Facts | 0.98s
2022-06-28 14:06:44.732690 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY |     ceph-0 | Gathering Facts | 0.97s
2022-06-28 14:06:44.733025 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY |  compute-0 | Gathering Facts | 0.95s
2022-06-28 14:06:44.733306 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY |     ceph-1 | Gathering Facts | 0.93s
2022-06-28 14:06:44.733567 | 525400cf-ccf8-64ae-5370-000000000a32 |    SUMMARY | undercloud | tripleo_ceph_work_dir : create ceph-ansible temp dirs | 0.90s
2022-06-28 14:06:44.733835 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY | controller-0 | Gathering Facts | 0.90s
2022-06-28 14:06:44.734081 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY |  compute-1 | Gathering Facts | 0.89s
2022-06-28 14:06:44.734344 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY | controller-2 | Gathering Facts | 0.87s
2022-06-28 14:06:44.734593 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY |     ceph-2 | Gathering Facts | 0.84s
2022-06-28 14:06:44.734867 | 525400cf-ccf8-64ae-5370-000000000726 |    SUMMARY | undercloud | Gathering Facts | 0.71s
2022-06-28 14:06:44.735129 | 525400cf-ccf8-64ae-5370-000000000a48 |    SUMMARY | undercloud | tripleo_ceph_work_dir : generate ceph-ansible group vars all | 0.69s
2022-06-28 14:06:44.735431 | 525400cf-ccf8-64ae-5370-00000000106a |    SUMMARY | undercloud | tripleo_ceph_work_dir : create ceph-ansible temp dirs | 0.67s
2022-06-28 14:06:44.735708 | 525400cf-ccf8-64ae-5370-000000000dc7 |    SUMMARY | undercloud | tripleo_ceph_uuid : generate nodes-uuid data file | 0.58s
2022-06-28 14:06:44.735981 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ End Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-06-28 14:06:45.055 854945 INFO tripleoclient.utils.utils [-] Temporary directory [ /tmp/tripleob9g7r78w ] cleaned up
2022-06-28 14:06:45.056 854945 INFO tripleoclient.utils.utils [-] Ansible execution success. playbook: /home/stack/overcloud-deploy/qe-Cloud-0/config-download/tripleo-multi-playbook.yaml
2022-06-28 14:06:45.184 854945 INFO tripleoclient.v1.overcloud_external_upgrade.ExternalUpgradeRun [-] Completed Overcloud External Upgrade Run.
2022-06-28 14:06:45.184 854945 INFO osc_lib.shell [-] END return value: None
/usr/lib/python3.6/site-packages/barbicanclient/__init__.py:61: UserWarning: The secrets module is moved to barbicanclient/v1 directory, direct import of barbicanclient.secrets will be deprecated. Please import barbicanclient.v1.secrets instead.



(undercloud) [stack@undercloud-0 ~]$ ls -l /tmp/ | grep ceph
drwx------. 2 root          root              6 Jun 28 14:16 ceph_ansible_control_path
(undercloud) [stack@undercloud-0 ~]$ ssh heat-admin ls -l /tmp/ | grep ceph 
Warning: Permanently added 'controller-0.ctlplane' (ECDSA) to the list of known hosts.
drwx------. 2 heat-admin heat-admin  6 Jun 29 09:27 ceph_ansible_tmp

Comment 3 John Fulton 2022-06-30 09:51:45 UTC
(In reply to Juan Badia Payno from comment #0)
> Once the undercloud was upgraded to osp17, the overcloud upgrade prepare
> command is executed.
> As can be seen the inventory uses the heat-admin user instead of
> tripleo-admin.

Why can't it be run by tripleo-admin instead of heat-admin?


Comment 4 John Fulton 2022-06-30 10:08:24 UTC
(In reply to Juan Badia Payno from comment #1)
> ### Another test as Rabi suggested after removing the remote directories (/tmp/ceph_ansible_tmp) on all hosts
> 
> (undercloud) [stack@undercloud-0 ~]$ ANSIBLE_REMOTE_USER=tripleo-admin openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph,facts
> 
>         "task path: /usr/share/ceph-ansible/roles/ceph-infra/tasks/main.yml:40",
>         "Tuesday 28 June 2022  13:28:13 +0000 (0:00:01.663)       0:00:31.696 ********** ",
>         "fatal: [controller-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory. [...] Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\" [...] ), exited with result 1\", \"unreachable\": true}",

This is the exact same error which led to this patch being committed:

  https://github.com/openstack/tripleo-ansible/commit/1b440db7f69b1f0e69016ed5a409e84b6fab1f9d

The above patch is a workaround for this Ansible issue:

  https://github.com/ansible/ansible/issues/68218

Because Ansible wouldn't fix the bug introduced by the change above, we had to carry the workaround in the tripleo-ansible commit above.
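
For reference, the "remote tmp path in ansible.cfg" that the error message mentions is Ansible's standard remote_tmp option; a minimal sketch of where it lives (the value matches the path this run is evidently using):

[defaults]
# remote_tmp must point somewhere the remote user can create directories in;
# the failure in this bug is that /tmp/ceph_ansible_tmp already exists, owned by root.
remote_tmp = /tmp/ceph_ansible_tmp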

Ever since the dawn of ceph-ansible integration in OSP12 (it has been removed in OSP17), we have only supported tripleo running ceph-ansible with ANSIBLE_REMOTE_USER=tripleo-admin.


If the overcloud upgrade prepare command is calling ceph-ansible with ANSIBLE_REMOTE_USER=heat-admin, then it's introducing the problem. Please try calling it instead with ANSIBLE_REMOTE_USER=tripleo-admin. To clarify, don't do this:

  ANSIBLE_REMOTE_USER=tripleo-admin openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph,facts

Instead use ANSIBLE_REMOTE_USER=tripleo-admin when running the overcloud upgrade prepare command.
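
That is, something along these lines (the argument list is abbreviated here; the full set of -e files is in comment #0):

ANSIBLE_REMOTE_USER=tripleo-admin openstack overcloud upgrade prepare \
--timeout 240 \
--templates /usr/share/openstack-tripleo-heat-templates \
--stack qe-Cloud-0 \
...   # remaining --environment-file/-e arguments exactly as in comment #0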

To review:

1. Run the overcloud upgrade prepare as tripleo-admin
2. openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph,facts 

Please then let me know if you still encounter the reported bug.

Comment 5 Juan Badia Payno 2022-07-08 13:17:14 UTC
I executed:
 1.- overcloud upgrade prepare as heat-admin
 2.- overcloud upgrade prepare as tripleo-admin
 3.- openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph,facts

But the error seems the same to me.


        "fatal: [controller-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.1517313-240551-260314301557948 `\\\" && echo ansible-tmp-1656035560.1517313-240551-260314301557948=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.1517313-240551-260314301557948 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [controller-1]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.179285-240553-101656748574132 `\\\" && echo ansible-tmp-1656035560.179285-240553-101656748574132=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.179285-240553-101656748574132 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [controller-2]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.232933-240557-18518472864724 `\\\" && echo ansible-tmp-1656035560.232933-240557-18518472864724=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.232933-240557-18518472864724 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-0]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.2632713-240561-273289552445292 `\\\" && echo ansible-tmp-1656035560.2632713-240561-273289552445292=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.2632713-240561-273289552445292 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-1]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.2977533-240566-249196005274129 `\\\" && echo ansible-tmp-1656035560.2977533-240566-249196005274129=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.2977533-240566-249196005274129 `\\\" ), exited with result 1\", \"unreachable\": true}",
        "fatal: [ceph-2]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \\\"/tmp\\\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \\\"` echo /tmp/ceph_ansible_tmp `\\\"&& mkdir \\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.3225586-240571-263523010554300 `\\\" && echo ansible-tmp-1656035560.3225586-240571-263523010554300=\\\"` echo /tmp/ceph_ansible_tmp/ansible-tmp-1656035560.3225586-240571-263523010554300 `\\\" ), exited with result 1\", \"unreachable\": true}",


list of ceph directories before executing the external-upgrade 
(undercloud) [stack@undercloud-0 ~]$ ls -lhrt /tmp/ | grep ceph 
drwx------. 2 tripleo-admin tripleo-admin   6 Jun 23 07:54 ceph_ansible_tmp
drwx------. 2 root          root            6 Jun 23 08:24 ceph_ansible_control_path
(undercloud) [stack@undercloud-0 ~]$ ssh heat-admin  ls -lhrt /tmp/ 
Warning: Permanently added 'controller-0.ctlplane' (ECDSA) to the list of known hosts.
total 0
drwx------. 3 root       root       17 Jul  8 12:49 systemd-private-d964c583f0d344298480dda9d192a995-chronyd.service-vqM4zi
drwx------. 2 heat-admin heat-admin 25 Jul  8 13:02 ssh-LAuqO7iMZT

list of ceph directories after executing the external-upgrade
(undercloud) [stack@undercloud-0 ~]$ ls -lhrt /tmp/ | grep ceph 
drwx------. 2 root          root             6 Jun 23 07:54 ceph_ansible_tmp
drwx------. 2 root          root          4.0K Jun 24 03:40 ceph_ansible_control_path
(undercloud) [stack@undercloud-0 ~]$ ssh heat-admin  ls -lhrt /tmp/ | grep ceph 
Warning: Permanently added 'controller-0.ctlplane' (ECDSA) to the list of known hosts.
drwx------. 2 root          root           6 Jul  8 13:14 ceph_ansible_tmp
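
Clearing the stale root-owned directories before a retry would look roughly like this (a sketch; passwordless sudo for heat-admin is assumed, and the host list matches this deployment):

(undercloud) [stack@undercloud-0 ~]$ sudo rm -rf /tmp/ceph_ansible_tmp /tmp/ceph_ansible_control_path
(undercloud) [stack@undercloud-0 ~]$ for h in controller-0 controller-1 controller-2 ceph-0 ceph-1 ceph-2; do ssh heat-admin@$h.ctlplane 'sudo rm -rf /tmp/ceph_ansible_tmp'; done
# As the listings above show, the next run recreates the directories (again owned by
# root), so this only unblocks a single attempt; the skippable-tag change in
# gerrit 849278 is the real fix.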

Comment 23 errata-xmlrpc 2023-08-16 01:11:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 17.1 (Wallaby)), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2023:4577

