Description of problem:
Cinder backups fail when the service is running in a container under pacemaker. The cinder-backup service is essentially missing because the pacemaker bundle isn't created during deployment. The problem has already been fixed upstream. Assigning to Squad:Cinder to test the overall backup feature.

Version-Release number of selected component (if applicable):

How reproducible:
Always.

Steps to Reproduce:
1. Deploy OSP-12 with cinder-backup running in a container under pacemaker
2.
3.

Actual results:
"pcs resource show" will show the entire cinder-backup bundle is missing, and backups fail because the service is missing.

Expected results:
The cinder-backup pacemaker bundle is present, and backups work.

Additional info:
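A quick way to spot the symptom on a deployed controller (a minimal sketch; it assumes the bundle name openstack-cinder-backup, which matches the pcs output in the verification comment below):

# On any controller, list the pacemaker resources and look for the
# cinder-backup bundle; on an affected deployment the grep prints nothing.
$ sudo pcs resource show | grep -A 1 openstack-cinder-backup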
Stuck on the clone of this BZ for non-HA; once I get that needinfo/issue resolved I'll try this one.
https://bugzilla.redhat.com/show_bug.cgi?id=1539090#c5
Verified on: puppet-tripleo-7.4.8-4.el7ost.noarch

Deployed OSP12 HA with containerized Cinder.

(undercloud) [stack@undercloud-0 ~]$ cat containerized-cinder.yaml
resource_registry:
  OS::TripleO::Services::CinderApi: /usr/share/openstack-tripleo-heat-templates/docker/services/cinder-api.yaml
  OS::TripleO::Services::CinderScheduler: /usr/share/openstack-tripleo-heat-templates/docker/services/cinder-scheduler.yaml
  OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/docker/services/pacemaker/cinder-backup.yaml
  OS::TripleO::Services::CinderVolume: /usr/share/openstack-tripleo-heat-templates/docker/services/pacemaker/cinder-volume.yaml
  OS::TripleO::Services::Iscsid: /usr/share/openstack-tripleo-heat-templates/docker/services/iscsid.yaml

[root@controller-0 ~]# docker ps | grep cinder
bbfd8693c6a6  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-volume:pcmklatest      "/bin/bash /usr/lo..."  2 hours ago  Up 2 hours (healthy)  openstack-cinder-volume-docker-0
cba77126042b  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-api:2018-03-10.1        "kolla_start"           2 hours ago  Up 2 hours (healthy)  cinder_api_cron
07f01f5d9146  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-scheduler:2018-03-10.1  "kolla_start"           2 hours ago  Up 2 hours (healthy)  cinder_scheduler
d191c3716dc2  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-api:2018-03-10.1        "kolla_start"           2 hours ago  Up 2 hours            cinder_api

[root@controller-1 ~]# docker ps | grep cinder
369d7124a4ce  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-backup:pcmklatest       "/bin/bash /usr/lo..."  2 hours ago  Up 2 hours (healthy)  openstack-cinder-backup-docker-0
9ba678a3a4fa  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-api:2018-03-10.1        "kolla_start"           2 hours ago  Up 2 hours (healthy)  cinder_api_cron
d2d8891a7578  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-scheduler:2018-03-10.1  "kolla_start"           2 hours ago  Up 2 hours (healthy)  cinder_scheduler
ac8bc4a09483  docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-api:2018-03-10.1        "kolla_start"           2 hours ago  Up 2 hours            cinder_api

On one of the controllers:
$ pcs resource show
...
 Docker container: openstack-cinder-volume [docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-volume:pcmklatest]
   openstack-cinder-volume-docker-0     (ocf::heartbeat:docker):        Started controller-0
 Docker container: openstack-cinder-backup [docker-registry.engineering.redhat.com/rhosp12/openstack-cinder-backup:pcmklatest]
   openstack-cinder-backup-docker-0     (ocf::heartbeat:docker):        Started controller-1

Test volume and backup create.
$ cinder create 2
$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ee6bb1a0-b9e3-44ed-a6ee-e8851cda3ee9 | available |  -   |  2   |      -      |  false   |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

$ cinder backup-create ee6bb1a0-b9e3-44ed-a6ee-e8851cda3ee9
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | 7e821ebf-1e2b-4889-91c9-8bae2ea93635 |
|    name   |                 None                 |
| volume_id | ee6bb1a0-b9e3-44ed-a6ee-e8851cda3ee9 |
+-----------+--------------------------------------+

$ cinder backup-list
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| ID                                   | Volume ID                            | Status    | Name | Size | Object Count | Container     |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| 7e821ebf-1e2b-4889-91c9-8bae2ea93635 | ee6bb1a0-b9e3-44ed-a6ee-e8851cda3ee9 | available |  -   |  2   |      42      | volumebackups |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+

Both volume and backup are available on an HW OSP12 dockerized Cinder deployment.
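As an optional follow-up check (a sketch, not part of the verification above; it reuses the backup ID created in this run), the backup could be restored to a new volume to confirm the round trip:

# Restore the backup to a new volume, then confirm the restored volume
# shows up as "available".
$ cinder backup-restore 7e821ebf-1e2b-4889-91c9-8bae2ea93635
$ cinder list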
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0607